In mathematics and physics, many topics are named in honor of Swiss mathematician Leonhard Euler (1707–1783), who made many important discoveries and innovations. Many of these items named after Euler include their own unique function, equation, formula, identity, number (single or sequence), or other mathematical entity. Many of these entities have been given simple yet ambiguous names such as Euler's function, Euler's equation, and Euler's formula. Euler's work touched upon so many fields that he is often the earliest written reference on a given matter. In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.

== Conjectures ==
Euler's sum of powers conjecture – disproved for exponents 4 and 5 during the 20th century; unsolved for higher exponents
Euler's Graeco-Latin square conjecture – proved to be true for n = 6 and disproved otherwise, during the 20th century

== Equations ==
Usually, Euler's equation refers to one of (or a set of) differential equations (DEs). It is customary to classify them into ODEs and PDEs. Otherwise, Euler's equation may refer to a non-differential equation, as in these three cases:
Euler–Lotka equation, a characteristic equation employed in mathematical demography
Euler's pump and turbine equation
Euler transform, used to accelerate the convergence of an alternating series; it is also frequently applied to the hypergeometric series

=== Ordinary differential equations ===
Euler rotation equations, a set of first-order ODEs concerning the rotations of a rigid body.
Euler–Cauchy equation, a linear equidimensional second-order ODE with variable coefficients. Its second-order version can emerge from Laplace's equation in polar coordinates.
Euler–Bernoulli beam equation, a fourth-order ODE concerning the elasticity of structural beams.
Euler's differential equation, a first-order nonlinear ordinary differential equation

=== Partial differential equations ===
Euler conservation equations, a set of quasilinear first-order hyperbolic equations used in fluid dynamics for inviscid flows. In the (Froude) limit of no external field, they are conservation equations.
Euler–Tricomi equation – a second-order PDE emerging from the Euler conservation equations.
Euler–Poisson–Darboux equation, a second-order PDE playing an important role in solving the wave equation.
Euler–Lagrange equation, a second-order PDE emerging from minimization problems in the calculus of variations.

== Formulas ==

== Functions ==
The Euler function, a modular form that is a prototypical q-series.
Euler's totient function (or Euler phi (φ) function) in number theory, counting the positive integers up to a given integer n that are coprime to n.
Euler hypergeometric integral
Euler–Riemann zeta function

== Identities ==
Euler's identity, e^(iπ) + 1 = 0.
Euler's four-square identity, which shows that the product of two sums of four squares can itself be expressed as the sum of four squares.
Euler's identity may also refer to the pentagonal number theorem.

== Numbers ==
Euler's number, e ≈ 2.71828, the base of the natural logarithm
Euler's idoneal numbers, a set of 65 or possibly 66 or 67 integers with special properties
Euler numbers, integers occurring in the coefficients of the Taylor series of 1/cosh t
Eulerian numbers, which count certain types of permutations.
Euler number (physics), the cavitation number in fluid dynamics.
Euler number (algebraic topology) – now the Euler characteristic, classically the number of vertices minus edges plus faces of a polyhedron.
Euler number (3-manifold topology) – see Seifert fiber space
Lucky numbers of Euler
Euler's constant gamma (γ), also known as the Euler–Mascheroni constant
Eulerian integers, more commonly called Eisenstein integers, the algebraic integers of the form a + bω where ω is a primitive cube root of unity.
Euler–Gompertz constant

== Theorems ==
Euler's homogeneous function theorem – a homogeneous function is a linear combination of its partial derivatives
Euler's infinite tetration theorem – about the limit of iterated exponentiation
Euler's rotation theorem – every movement of a rigid body with a fixed point is a rotation
Euler's theorem (differential geometry) – orthogonality of the directions of the principal curvatures of a surface
Euler's theorem in geometry – on the distance between the circumcenter and incenter of a triangle
Euler's quadrilateral theorem – a relation between the sides of a convex quadrilateral and its diagonals
Euclid–Euler theorem, characterizing the even perfect numbers
Euler's theorem, on modular exponentiation
Euler's partition theorem, relating the product and series representations of the Euler function ∏(1 − x^n)
Goldbach–Euler theorem, stating that the sum of 1/(k − 1), where k ranges over the positive integers of the form m^n for m ≥ 2 and n ≥ 2, equals 1
Gram–Euler theorem

== Laws ==
Euler's first law: the sum of the external forces acting on a rigid body is equal to the rate of change of linear momentum of the body.
Euler's second law: the sum of the external moments about a point is equal to the rate of change of angular momentum about that point.

== Other things ==

== Topics by field of study ==
Selected topics from above, grouped by subject, and additional topics from the fields of music and physical systems

=== Analysis: derivatives, integrals, and logarithms ===

=== Geometry and spatial arrangement ===

=== Graph theory ===
Euler characteristic (formerly called Euler number) in algebraic topology and topological graph theory, and the corresponding Euler's formula χ(S²) = F − E + V = 2
Eulerian circuit, Euler cycle or Eulerian path – a path through a graph that takes each edge exactly once
Eulerian graph, a graph whose vertices are all spanned by an Eulerian path
Euler class
Euler diagram – popularly called "Venn diagrams", although some use this term only for a subclass of Euler diagrams.
Euler tour technique

=== Music ===
Euler–Fokker genus
Euler's tritone

=== Number theory ===
Euler's criterion – on quadratic residues modulo primes
Euler product – infinite product expansion, indexed by prime numbers, of a Dirichlet series
Euler pseudoprime
Euler–Jacobi pseudoprime
Euler's totient function (or Euler phi (φ) function) in number theory, counting the positive integers up to a given integer n that are coprime to n.
Euler system
Euler's factorization method

=== Physical systems ===

=== Polynomials ===
Euler's homogeneous function theorem, a theorem about homogeneous polynomials.
Euler polynomials
Euler spline – splines composed of arcs using Euler polynomials

== See also ==
Contributions of Leonhard Euler to mathematics

== Notes ==
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function. Metric spaces are a general setting for studying many of the concepts of mathematical analysis and geometry. The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance. Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another. Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry and analysis on metric spaces. Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces. == Definition and illustration == === Motivation === To see the utility of different notions of distance, consider the surface of the Earth as a set of points. We can measure the distance between two such points by the length of the shortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural in seismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points. The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts. Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as with Wasserstein metrics on spaces of measures) or the degree of difference between two objects (for example, the Hamming distance between two strings of characters, or the Gromov–Hausdorff distance between metric spaces themselves). 
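To make the string example from the introduction concrete, here is a minimal sketch of the Hamming distance in plain Python (the function name is ours, not a library API):

```python
def hamming(s, t):
    # Number of positions at which two equal-length strings differ.
    if len(s) != len(t):
        raise ValueError("Hamming distance requires equal-length strings")
    return sum(a != b for a, b in zip(s, t))

print(hamming("karolin", "kathrin"))  # 3: three characters must change
print(hamming("karolin", "karolin"))  # 0: distance from a point to itself
```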
=== Definition ===
Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function d : M × M → ℝ satisfying the following axioms for all points x, y, z ∈ M:
The distance from a point to itself is zero: d(x, x) = 0.
(Positivity) The distance between two distinct points is always positive: if x ≠ y, then d(x, y) > 0.
(Symmetry) The distance from x to y is always the same as the distance from y to x: d(x, y) = d(y, x).
The triangle inequality holds: d(x, z) ≤ d(x, y) + d(y, z). This is a natural property of both physical and metaphorical notions of distance: you can arrive at z from x by taking a detour through y, but this will not make your journey any shorter than the direct path.
If the metric d is unambiguous, one often refers by abuse of notation to "the metric space M". By taking all axioms except the second, one can show that distance is always non-negative: 0 = d(x, x) ≤ d(x, y) + d(y, x) = 2d(x, y). Therefore the second axiom can be weakened to "if x ≠ y, then d(x, y) ≠ 0" and combined with the first to give d(x, y) = 0 ⟺ x = y.

=== Simple examples ===

==== The real numbers ====
The real numbers with the distance function d(x, y) = |y − x| given by the absolute difference form a metric space. Many properties of metric spaces and functions between them are generalizations of concepts in real analysis and coincide with those concepts when applied to the real line.

==== Metrics on Euclidean spaces ====
The Euclidean plane ℝ² can be equipped with many different metrics. The Euclidean distance familiar from school mathematics can be defined by {\displaystyle d_{2}((x_{1},y_{1}),(x_{2},y_{2}))={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}.} The taxicab or Manhattan distance is defined by {\displaystyle d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{2}-x_{1}|+|y_{2}-y_{1}|} and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article. The maximum, L∞, or Chebyshev distance is defined by {\displaystyle d_{\infty }((x_{1},y_{1}),(x_{2},y_{2}))=\max\{|x_{2}-x_{1}|,|y_{2}-y_{1}|\}.} This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms; it can be thought of as the number of moves a king would have to make on a chessboard to travel from one point to another. In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too; the sketch below makes the comparison concrete.
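The following minimal sketch (plain Python; the function names are ours) computes the three distances on random pairs of points and checks the chain of inequalities stated just below:

```python
import math
import random

def d2(p, q):  # Euclidean distance
    return math.hypot(q[0] - p[0], q[1] - p[1])

def d1(p, q):  # taxicab / Manhattan distance
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

def d_inf(p, q):  # maximum / Chebyshev distance
    return max(abs(q[0] - p[0]), abs(q[1] - p[1]))

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    # "close in one metric implies close in the others", quantitatively:
    assert d_inf(p, q) <= d2(p, q) + 1e-12
    assert d2(p, q) <= d1(p, q) + 1e-12
    assert d1(p, q) <= 2 * d_inf(p, q) + 1e-12
print("d_inf <= d2 <= d1 <= 2*d_inf verified on 1000 random pairs")
```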
This observation can be quantified with the formula {\displaystyle d_{\infty }(p,q)\leq d_{2}(p,q)\leq d_{1}(p,q)\leq 2d_{\infty }(p,q),} which holds for every pair of points p, q ∈ ℝ².

A radically different distance can be defined by setting {\displaystyle d(p,q)={\begin{cases}0,&{\text{if }}p=q,\\1,&{\text{otherwise.}}\end{cases}}} or, using Iverson brackets, d(p, q) = [p ≠ q]. In this discrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points. All of these metrics make sense on ℝⁿ as well as ℝ².

==== Subspaces ====
Given a metric space (M, d) and a subset A ⊆ M, we can consider A to be a metric space by measuring distances the same way we would in M. Formally, the induced metric on A is a function d_A : A × A → ℝ defined by d_A(x, y) = d(x, y). For example, if we take the two-dimensional sphere S² as a subset of ℝ³, the Euclidean metric on ℝ³ induces the straight-line metric on S² described above. Two more useful examples are the open interval (0, 1) and the closed interval [0, 1] thought of as subspaces of the real line.

== History ==
Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space: his distance was given by the logarithm of a cross ratio. Any projectivity leaving the conic stable also leaves the cross ratio constant, so isometries are implicit. This method provides models for elliptic geometry and hyperbolic geometry, and Felix Klein, in several publications, established the field of non-Euclidean geometry through the use of the Cayley–Klein metric.

The idea of an abstract space with metric properties was addressed in 1906 by René Maurice Fréchet, and the term metric space was coined by Felix Hausdorff in 1914. Fréchet's work laid the foundation for understanding convergence, continuity, and other key concepts in non-geometric spaces, allowing mathematicians to study functions and sequences in a broader and more flexible way; this was important for the growing field of functional analysis. Mathematicians like Hausdorff and Stefan Banach further refined and expanded the framework of metric spaces: Hausdorff introduced topological spaces as a generalization of metric spaces, and Banach's work in functional analysis relied heavily on the metric structure. Over time, metric spaces became a central part of modern mathematics, influencing fields including topology, geometry, and applied mathematics, and they continue to play a crucial role in the study of abstract mathematical concepts.

== Basic notions ==
A distance function is enough to define notions of closeness and convergence that were first developed in real analysis. Properties that depend on the structure of a metric space are referred to as metric properties.
Every metric space is also a topological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are really topological properties.

=== The topology of a metric space ===
For any point x in a metric space M and any real number r > 0, the open ball of radius r around x is defined to be the set of points that are strictly less than distance r from x: {\displaystyle B_{r}(x)=\{y\in M:d(x,y)<r\}.} This is a natural way to define a set of points that are relatively close to x. Therefore, a set N ⊆ M is a neighborhood of x (informally, it contains all points "close enough" to x) if it contains an open ball of radius r around x for some r > 0. An open set is a set which is a neighborhood of all its points. It follows that the open balls form a base for a topology on M. In other words, the open sets of M are exactly the unions of open balls. As in any topology, closed sets are the complements of open sets. Sets may be both open and closed as well as neither open nor closed.

This topology does not carry all the information about the metric space. For example, the distances d1, d2, and d∞ defined above all induce the same topology on ℝ², although they behave differently in many respects. Similarly, ℝ with the Euclidean metric and its subspace the interval (0, 1) with the induced metric are homeomorphic but have very different metric properties.

Conversely, not every topological space can be given a metric. Topological spaces which are compatible with a metric are called metrizable and are particularly well-behaved in many ways: in particular, they are paracompact Hausdorff spaces (hence normal) and first-countable. The Nagata–Smirnov metrization theorem gives a characterization of metrizability in terms of other topological properties, without reference to metrics.

=== Convergence ===
Convergence of sequences in Euclidean space is defined as follows: a sequence (xn) converges to a point x if for every ε > 0 there is an integer N such that for all n > N, d(xn, x) < ε. Convergence of sequences in a topological space is defined as follows: a sequence (xn) converges to a point x if for every open set U containing x there is an integer N such that for all n > N, xn ∈ U. In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern for topological properties of metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis.

=== Completeness ===
Informally, a metric space is complete if it has no "missing points": every sequence that looks like it should converge to something actually converges. To make this precise: a sequence (xn) in a metric space M is Cauchy if for every ε > 0 there is an integer N such that for all m, n > N, d(xm, xn) < ε. By the triangle inequality, any convergent sequence is Cauchy: if xm and xn are both less than ε away from the limit, then they are less than 2ε away from each other. If the converse is true (every Cauchy sequence in M converges), then M is complete. Euclidean spaces are complete, as is ℝ² with the other metrics described above.
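As a numerical illustration (a minimal sketch; the construction is our own choice), the successive decimal truncations of √2 form a Cauchy sequence of rational numbers, foreshadowing the incompleteness example below:

```python
from fractions import Fraction
import math

# Successive decimal approximations of sqrt(2): 1, 1.4, 1.41, 1.414, ...
# Each term is rational, so the whole sequence lives in Q.
approx = [Fraction(math.isqrt(2 * 10 ** (2 * k)), 10 ** k) for k in range(12)]

# Cauchy behavior: the gaps between consecutive terms shrink toward 0
# (and the sequence is monotone and bounded, so later pairs are even closer).
for a, b in zip(approx, approx[1:]):
    print(float(b - a))
# Yet no rational number is the limit: in R the limit is the irrational sqrt(2).
```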
Two examples of spaces which are not complete are (0, 1) and the rationals, each with the metric induced from ℝ. One can think of (0, 1) as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it in ℝ (for example, its successive decimal approximations). These examples show that completeness is not a topological property, since ℝ is complete but the homeomorphic space (0, 1) is not.

This notion of "missing points" can be made precise. In fact, every metric space has a unique completion, which is a complete space that contains the given space as a dense subset. For example, [0, 1] is the completion of (0, 1), and the real numbers are the completion of the rationals.

Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, the p-adic numbers are defined as the completion of the rationals under a different metric. Completion is particularly common as a tool in functional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example, weak solutions to differential equations typically live in a completion (a Sobolev space) rather than the original space of nice functions for which the differential equation actually makes sense.

=== Bounded and totally bounded spaces ===
A metric space M is bounded if there is an r such that no pair of points in M is more than distance r apart. The least such r is called the diameter of M. The space M is called precompact or totally bounded if for every r > 0 there is a finite cover of M by open balls of radius r. Every totally bounded space is bounded. To see this, start with a finite cover by r-balls for some arbitrary r. Since the subset of M consisting of the centers of these balls is finite, it has finite diameter, say D. By the triangle inequality, the diameter of the whole space is at most D + 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded is ℝ² (or any other infinite set) with the discrete metric.

=== Compactness ===
Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces:
A metric space M is compact if every open cover has a finite subcover (the usual topological definition).
A metric space M is compact if every sequence has a convergent subsequence. (For general topological spaces this is called sequential compactness and is not equivalent to compactness.)
A metric space M is compact if it is complete and totally bounded. (This definition is written in terms of metric properties and does not make sense for a general topological space, but it is nevertheless topologically invariant, since it is equivalent to compactness.)
One example of a compact space is the closed interval [0, 1]. Compactness is important for similar reasons to completeness: it makes it easy to find limits.
Another important tool is Lebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover.

== Functions between metric spaces ==
Unlike in the case of topological spaces or algebraic structures such as groups or rings, there is no single "right" type of structure-preserving function between metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that (M1, d1) and (M2, d2) are two metric spaces. The words "function" and "map" are used interchangeably.

=== Isometries ===
One interpretation of a "structure-preserving" map is one that fully preserves the distance function: a function f : M1 → M2 is distance-preserving if for every pair of points x and y in M1, d2(f(x), f(y)) = d1(x, y). It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called an isometry. One perhaps non-obvious example of an isometry between spaces described in this article is the map f : (ℝ², d1) → (ℝ², d∞) defined by f(x, y) = (x + y, x − y). If there is an isometry between the spaces M1 and M2, they are said to be isometric. Metric spaces that are isometric are essentially identical.
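A quick numerical check of this map (a sketch; the helper names are ours) confirms that it carries taxicab distances to Chebyshev distances exactly:

```python
import random

def d1(p, q):  # taxicab distance on R^2
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

def d_inf(p, q):  # Chebyshev distance on R^2
    return max(abs(q[0] - p[0]), abs(q[1] - p[1]))

def f(p):  # the candidate isometry (x, y) |-> (x + y, x - y)
    return (p[0] + p[1], p[0] - p[1])

random.seed(1)
for _ in range(1000):
    p = (random.uniform(-9, 9), random.uniform(-9, 9))
    q = (random.uniform(-9, 9), random.uniform(-9, 9))
    # distance-preserving: d_inf(f(p), f(q)) == d1(p, q)
    assert abs(d_inf(f(p), f(q)) - d1(p, q)) < 1e-9
print("f is distance-preserving from (R^2, d1) to (R^2, d_inf)")
```

The check rests on the identity max(|a + b|, |a − b|) = |a| + |b|, applied to the coordinate differences.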
=== Continuous maps ===
On the other end of the spectrum, one can forget entirely about the metric structure and study continuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are:
Topological definition. A function f : M1 → M2 is continuous if for every open set U in M2, the preimage f⁻¹(U) is open.
Sequential continuity. A function f : M1 → M2 is continuous if whenever a sequence (xn) converges to a point x in M1, the sequence f(x1), f(x2), ... converges to the point f(x) in M2. (These first two definitions are not equivalent for all topological spaces.)
ε–δ definition. A function f : M1 → M2 is continuous if for every point x in M1 and every ε > 0 there exists δ > 0 such that for all y in M1 we have d1(x, y) < δ ⟹ d2(f(x), f(y)) < ε.
A homeomorphism is a continuous bijection whose inverse is also continuous; if there is a homeomorphism between M1 and M2, they are said to be homeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example, ℝ is unbounded and complete, while (0, 1) is bounded but not complete.

=== Uniformly continuous maps ===
A function f : M1 → M2 is uniformly continuous if for every real number ε > 0 there exists δ > 0 such that for all points x and y in M1 with d1(x, y) < δ, we have d2(f(x), f(y)) < ε. The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the point x. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences in M1 to Cauchy sequences in M2. In other words, uniform continuity preserves some metric properties which are not purely topological. On the other hand, the Heine–Cantor theorem states that if M1 is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces.

=== Lipschitz maps and contractions ===
A Lipschitz map is one that stretches distances by at most a bounded factor. Formally, given a real number K > 0, the map f : M1 → M2 is K-Lipschitz if d2(f(x), f(y)) ≤ K d1(x, y) for all x, y ∈ M1. Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric. For example, a curve in a metric space is rectifiable (has finite length) if and only if it has a Lipschitz reparametrization.

A 1-Lipschitz map is sometimes called a nonexpanding or metric map. Metric maps are commonly taken to be the morphisms of the category of metric spaces.

A K-Lipschitz map for K < 1 is called a contraction. The Banach fixed-point theorem states that if M is a complete metric space, then every contraction f : M → M admits a unique fixed point. If the metric space M is compact, the result holds for a slightly weaker condition on f: a map f : M → M admits a unique fixed point if d(f(x), f(y)) < d(x, y) for all x ≠ y in M.
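The proof of the Banach fixed-point theorem is constructive: iterating the contraction from any starting point converges to the fixed point. A minimal sketch (the contraction x ↦ cos x on [0, 1] is our own choice of example):

```python
import math

# x |-> cos(x) is a contraction on [0, 1]: its derivative, -sin(x), is
# bounded in absolute value by sin(1) < 1 there, so Banach's theorem applies.
def f(x):
    return math.cos(x)

x = 0.5  # any starting point in [0, 1] works
for _ in range(100):
    x = f(x)  # Picard iteration

print(x)                     # ~0.7390851332, the unique fixed point (Dottie number)
print(abs(math.cos(x) - x))  # residual ~0: numerically a fixed point
```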
=== Quasi-isometries ===
A quasi-isometry is a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example, ℝ² and its subspace ℤ² are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important in geometric group theory: the Švarc–Milnor lemma states that all spaces on which a group acts geometrically are quasi-isometric. Formally, the map f : M1 → M2 is a quasi-isometric embedding if there exist constants A ≥ 1 and B ≥ 0 such that {\displaystyle {\frac {1}{A}}d_{2}(f(x),f(y))-B\leq d_{1}(x,y)\leq Ad_{2}(f(x),f(y))+B\quad {\text{ for all }}\quad x,y\in M_{1}.} It is a quasi-isometry if in addition it is quasi-surjective, i.e. there is a constant C ≥ 0 such that every point in M2 is at distance at most C from some point in the image f(M1).

=== Notions of metric space equivalence ===
Given two metric spaces (M1, d1) and (M2, d2):
They are called homeomorphic (topologically isomorphic) if there is a homeomorphism between them (i.e., a continuous bijection with a continuous inverse). If M1 = M2 and the identity map is a homeomorphism, then d1 and d2 are said to be topologically equivalent.
They are called uniformic (uniformly isomorphic) if there is a uniform isomorphism between them (i.e., a uniformly continuous bijection with a uniformly continuous inverse).
They are called bilipschitz homeomorphic if there is a bilipschitz bijection between them (i.e., a Lipschitz bijection with a Lipschitz inverse).
They are called isometric if there is a (bijective) isometry between them. In this case, the two metric spaces are essentially identical.
They are called quasi-isometric if there is a quasi-isometry between them.

== Metric spaces with additional structure ==

=== Normed vector spaces ===
A normed vector space is a vector space equipped with a norm, which is a function that measures the length of vectors. The norm of a vector v is typically denoted by ‖v‖. Any normed vector space can be equipped with a metric in which the distance between two vectors x and y is given by d(x, y) := ‖x − y‖. The metric d is said to be induced by the norm ‖·‖. Conversely, if a metric d on a vector space X is
translation invariant: d(x, y) = d(x + a, y + a) for every x, y, and a in X; and
absolutely homogeneous: d(αx, αy) = |α| d(x, y) for every x and y in X and real number α;
then it is the metric induced by the norm ‖x‖ := d(x, 0). A similar relationship holds between seminorms and pseudometrics.

Among examples of metrics induced by a norm are the metrics d1, d2, and d∞ on ℝ², which are induced by the Manhattan norm, the Euclidean norm, and the maximum norm, respectively. More generally, the Kuratowski embedding allows one to see any metric space as a subspace of a normed vector space. Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied in functional analysis. Completeness is particularly important in this context: a complete normed vector space is known as a Banach space. An unusual property of normed vector spaces is that linear transformations between them are continuous if and only if they are Lipschitz. Such transformations are known as bounded operators.

=== Length spaces ===
A curve in a metric space (M, d) is a continuous function γ : [0, T] → M. The length of γ is measured by {\displaystyle L(\gamma )=\sup _{0=x_{0}<x_{1}<\cdots <x_{n}=T}\left\{\sum _{k=1}^{n}d(\gamma (x_{k-1}),\gamma (x_{k}))\right\}.} In general, this supremum may be infinite; a curve of finite length is called rectifiable; a numerical sketch of this length formula follows below. Suppose that the length of the curve γ is equal to the distance between its endpoints, i.e. it is the shortest possible path between its endpoints. After reparametrization by arc length, γ becomes a geodesic: a curve which is a distance-preserving function. A geodesic is a shortest possible path between any two of its points. A geodesic metric space is a metric space which admits a geodesic between any two of its points.
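To illustrate the supremum formula for length, the following sketch (our own construction; nothing here is standard API) approximates the length of a quarter of the unit circle in (ℝ², d2) by polygonal sums over finer and finer partitions. The sums increase toward the true length π/2:

```python
import math

def d2(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def gamma(t):  # quarter of the unit circle, t in [0, 1]
    return (math.cos(math.pi / 2 * t), math.sin(math.pi / 2 * t))

def polygonal_length(curve, n):
    # sum of distances over the uniform partition 0 = t_0 < ... < t_n = 1
    ts = [k / n for k in range(n + 1)]
    return sum(d2(curve(ts[k - 1]), curve(ts[k])) for k in range(1, n + 1))

for n in (2, 8, 32, 128):
    print(n, polygonal_length(gamma, n))  # increases toward pi/2 ~ 1.5708
```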
The spaces (ℝ², d1) and (ℝ², d2) are both geodesic metric spaces. In (ℝ², d2), geodesics are unique, but in (ℝ², d1), there are often infinitely many geodesics between two points, as shown in the figure at the top of the article. The space M is a length space (or the metric d is intrinsic) if the distance between any two points x and y is the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points (1, 0) and (−1, 0) can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface.

Given any metric space (M, d), one can define a new, intrinsic distance function d_intrinsic on M by setting the distance between points x and y to be the infimum of the d-lengths of paths between them. For instance, if d is the straight-line distance on the sphere, then d_intrinsic is the great-circle distance. However, in some cases d_intrinsic may have infinite values. For example, if M is the Koch snowflake with the subspace metric d induced from ℝ², then the resulting intrinsic distance is infinite for any pair of distinct points.

=== Riemannian manifolds ===
A Riemannian manifold is a space equipped with a Riemannian metric tensor, which determines lengths of tangent vectors at every point. This can be thought of as defining a notion of distance infinitesimally. In particular, a differentiable path γ : [0, T] → M in a Riemannian manifold M has length defined as the integral of the length of the tangent vector to the path: {\displaystyle L(\gamma )=\int _{0}^{T}|{\dot {\gamma }}(t)|dt.} On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such as sub-Riemannian and Finsler metrics. The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is a CAT(k) space (a synthetic condition which depends purely on the metric) if and only if its sectional curvature is bounded above by k. Thus CAT(k) spaces generalize upper curvature bounds to general metric spaces.

=== Metric measure spaces ===
Real analysis makes use of both the metric on ℝⁿ and the Lebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside in metric measure spaces: spaces that have both a measure and a metric which are compatible with each other. Formally, a metric measure space is a metric space equipped with a Borel regular measure such that every ball has positive measure.
For example, Euclidean spaces of dimension n, and more generally n-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with the Lebesgue measure. Certain fractal metric spaces such as the Sierpiński gasket can be equipped with the α-dimensional Hausdorff measure, where α is the Hausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure. One application of metric measure spaces is generalizing the notion of Ricci curvature beyond Riemannian manifolds. Just as CAT(k) and Alexandrov spaces generalize sectional curvature bounds, RCD spaces are a class of metric measure spaces which generalize lower bounds on Ricci curvature.

== Further examples and applications ==

=== Graphs and finite metric spaces ===
A metric space is discrete if its induced topology is the discrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular, finite metric spaces (those having a finite number of points) are studied in combinatorics and theoretical computer science. Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can be isometrically embedded in a Euclidean space or in Hilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.

For any undirected connected graph G, the set V of vertices of G can be turned into a metric space by defining the distance between vertices x and y to be the length of the shortest edge path connecting them. This is also called the shortest-path distance or geodesic distance. In geometric group theory this construction is applied to the Cayley graph of a (typically infinite) finitely-generated group, yielding the word metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.

=== Metric embeddings and approximations ===
An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics. A significant result in this area is that any finite metric space can be probabilistically embedded into a tree metric with an expected distortion of O(log n), where n is the number of points in the metric space. This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound of Ω(log n). The tree metrics produced in this embedding dominate the original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure. The result has significant implications for various computational problems:
Network design: improves approximation algorithms for problems like the Group Steiner tree problem (a generalization of the Steiner tree problem) and buy-at-bulk network design (a problem in network planning and design) by simplifying the metric space to a tree metric.
Clustering: enhances algorithms for clustering problems where hierarchical clustering can be performed more efficiently on tree metrics.
Online algorithms: benefits problems like the k-server problem and metrical task systems by providing better competitive ratios through simplified metrics.
The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. The O(log n) distortion bound has led to improved approximation ratios in several algorithmic problems, demonstrating the practical significance of this theoretical result.

=== Distances between mathematical objects ===
In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples:
Functions to a metric space. If X is any set and M is a metric space, then the set of all bounded functions f : X → M (i.e. those functions whose image is a bounded subset of M) can be turned into a metric space by defining the distance between two bounded functions f and g to be {\displaystyle d(f,g)=\sup _{x\in X}d(f(x),g(x)).} This metric is called the uniform metric or supremum metric. If M is complete, then this function space is complete as well; moreover, if X is also a topological space, then the subspace consisting of all bounded continuous functions from X to M is also complete. When X is a subspace of ℝⁿ, this function space is known as a classical Wiener space.
String metrics and edit distances. There are many ways of measuring distances between strings of characters, which may represent sentences in computational linguistics or code words in coding theory. Edit distances attempt to measure the number of changes necessary to get from one string to another. For example, the Hamming distance measures the minimal number of substitutions needed, while the Levenshtein distance measures the minimal number of deletions, insertions, and substitutions; both of these can be thought of as distances in an appropriate graph.
Graph edit distance is a measure of dissimilarity between two graphs, defined as the minimal number of graph edit operations required to transform one graph into another.
Wasserstein metrics measure the distance between two measures on the same metric space. The Wasserstein distance between two measures is, roughly speaking, the cost of transporting one to the other.
The set of all m by n matrices over some field is a metric space with respect to the rank distance d(A, B) = rank(B − A).
The Helly metric in game theory measures the difference between strategies in a game.

=== Hausdorff and Gromov–Hausdorff distance ===
The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves. Hausdorff and Gromov–Hausdorff distance define metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively.

Suppose (M, d) is a metric space, and let S be a subset of M. The distance from S to a point x of M is, informally, the distance from x to the closest point of S.
However, since there may not be a single closest point, it is defined via an infimum: {\displaystyle d(x,S)=\inf\{d(x,s):s\in S\}.} In particular, d(x, S) = 0 if and only if x belongs to the closure of S. Furthermore, distances between points and sets satisfy a version of the triangle inequality: d(x, S) ≤ d(x, y) + d(y, S), and therefore the map d_S : M → ℝ defined by d_S(x) = d(x, S) is continuous. Incidentally, this shows that metric spaces are completely regular.

Given two subsets S and T of M, their Hausdorff distance is {\displaystyle d_{H}(S,T)=\max\{\sup\{d(s,T):s\in S\},\sup\{d(t,S):t\in T\}\}.} Informally, two sets S and T are close to each other in the Hausdorff distance if no element of S is too far from T and vice versa. For example, if S is an open set in Euclidean space and T is an ε-net inside S, then d_H(S, T) < ε. In general, the Hausdorff distance d_H(S, T) can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets of M.

The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. The Gromov–Hausdorff distance between compact spaces X and Y is the infimum of the Hausdorff distance over all metric spaces Z that contain X and Y as subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications.
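For finite point sets, the suprema and infima in these formulas become max and min, so the Hausdorff distance is directly computable. A small sketch (function names are ours):

```python
import math

def d2(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def d_point_to_set(x, S):
    # d(x, S) = inf { d(x, s) : s in S }, an ordinary min for finite S
    return min(d2(x, s) for s in S)

def hausdorff(S, T):
    # max of the two one-sided suprema from the definition above
    return max(max(d_point_to_set(s, T) for s in S),
               max(d_point_to_set(t, S) for t in T))

S = [(0, 0), (1, 0), (0, 1)]
T = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(hausdorff(S, T))  # sqrt(41) ~ 6.40: the single outlier in T dominates
```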
=== Miscellaneous examples ===
Given a metric space (X, d) and an increasing concave function f : [0, ∞) → [0, ∞) such that f(t) = 0 if and only if t = 0, then d_f(x, y) = f(d(x, y)) is also a metric on X. If f(t) = t^α for some real number α < 1, such a metric is known as a snowflake of d.
The tight span of a metric space is another metric space which can be thought of as an abstract version of the convex hull.
The knight's move metric, the minimal number of knight's moves to reach one point in ℤ² from another, is a metric on ℤ².
The British Rail metric (also called the "post office metric" or the "French railway metric") on a normed vector space is given by d(x, y) = ‖x‖ + ‖y‖ for distinct points x and y, and d(x, x) = 0. More generally, ‖·‖ can be replaced with a function f taking an arbitrary set S to non-negative reals and taking the value 0 at most once: then the metric is defined on S by d(x, y) = f(x) + f(y) for distinct points x and y, and d(x, x) = 0. The name alludes to the tendency of railway journeys to proceed via London (or Paris) irrespective of their final destination.
The Robinson–Foulds metric, used for calculating the distance between phylogenetic trees in phylogenetics.

== Constructions ==

=== Product metric spaces ===
If (M1, d1), ..., (Mn, dn) are metric spaces, and N is the Euclidean norm on ℝⁿ, then (M1 × ⋯ × Mn, d×) is a metric space, where the product metric is defined by {\displaystyle d_{\times }{\bigl (}(x_{1},\ldots ,x_{n}),(y_{1},\ldots ,y_{n}){\bigr )}=N{\bigl (}d_{1}(x_{1},y_{1}),\ldots ,d_{n}(x_{n},y_{n}){\bigr )},} and the induced topology agrees with the product topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained if N is the taxicab norm, a p-norm, the maximum norm, or any other norm which is non-decreasing as the coordinates of a positive n-tuple increase (yielding the triangle inequality). Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metric {\displaystyle d(x,y)=\sum _{i=1}^{\infty }{\frac {1}{2^{i}}}{\frac {d_{i}(x_{i},y_{i})}{1+d_{i}(x_{i},y_{i})}}.} The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies of ℝ is not first-countable and thus is not metrizable.
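A sketch of the finite product construction (helper names are our own), combining a metric on ℝ with the discrete metric via the Euclidean norm N:

```python
import math

def euclidean_norm(v):
    # the norm N, applied to the tuple of componentwise distances
    return math.sqrt(sum(t * t for t in v))

def product_metric(metrics, norm=euclidean_norm):
    # d_x((x1..xn), (y1..yn)) = N(d1(x1,y1), ..., dn(xn,yn))
    def d(x, y):
        return norm(tuple(di(xi, yi) for di, xi, yi in zip(metrics, x, y)))
    return d

d_real = lambda a, b: abs(a - b)          # metric on R
d_disc = lambda a, b: 0 if a == b else 1  # discrete metric on any set

d = product_metric([d_real, d_disc])
print(d((0.0, "a"), (3.0, "a")))  # 3.0: only the real coordinate differs
print(d((0.0, "a"), (3.0, "b")))  # sqrt(3^2 + 1^2) ~ 3.1623
```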
=== Quotient metric spaces ===
If M is a metric space with metric d, and ~ is an equivalence relation on M, then we can endow the quotient set M/~ with a pseudometric. The distance between two equivalence classes [x] and [y] is defined as {\displaystyle d'([x],[y])=\inf\{d(p_{1},q_{1})+d(p_{2},q_{2})+\dotsb +d(p_{n},q_{n})\},} where the infimum is taken over all finite sequences (p1, p2, ..., pn) and (q1, q2, ..., qn) with p1 ~ x, qn ~ y, and qi ~ pi+1 for i = 1, 2, ..., n − 1. In general this will only define a pseudometric, i.e. d'([x], [y]) = 0 does not necessarily imply that [x] = [y]. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces), d' is a metric.

The quotient metric d' is characterized by the following universal property. If f : (M, d) → (X, δ) is a metric (i.e. 1-Lipschitz) map between metric spaces satisfying f(x) = f(y) whenever x ~ y, then the induced function {\displaystyle {\overline {f}}\,\colon {M/\sim }\to X}, given by {\displaystyle {\overline {f}}([x])=f(x)}, is a metric map from (M/~, d') to (X, δ).

The quotient metric does not always induce the quotient topology. For example, the topological quotient of the metric space ℕ × [0, 1] identifying all points of the form (n, 0) is not metrizable, since it is not first-countable, but the quotient metric is a well-defined metric on the same set which induces a coarser topology. Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient. A topological space is sequential if and only if it is a (topological) quotient of a metric space.

== Generalizations of metric spaces ==
There are several notions of spaces which have less structure than a metric space, but more than a topological space.
Uniform spaces are spaces in which distances are not defined, but uniform continuity is.
Approach spaces are spaces in which point-to-set distances are defined, instead of point-to-point distances. They have particularly good properties from the point of view of category theory.
Continuity spaces are a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains.
There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, in functional analysis pseudometrics often come from seminorms on vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term in topology.

=== Extended metrics ===
Some authors define metrics so as to allow the distance function d to attain the value ∞, i.e. distances are non-negative numbers on the extended real number line. Such a function is also called an extended metric or "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using a subadditive monotonically increasing bounded function which is zero at zero, e.g. d'(x, y) = d(x, y)/(1 + d(x, y)) or d''(x, y) = min(1, d(x, y)).

=== Metrics valued in structures other than the real numbers ===
The requirement that the metric take values in [0, ∞) can be relaxed to consider metrics with values in other structures, including:
Ordered fields, yielding the notion of a generalised metric.
More general directed sets. In the absence of an addition operation, the triangle inequality does not make sense and is replaced with an ultrametric inequality. This leads to the notion of a generalized ultrametric.
These generalizations still induce a uniform structure on the space.

=== Pseudometrics ===
A pseudometric on X is a function d : X × X → ℝ which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) only d(x, x) = 0 for all x is required. In other words, the axioms for a pseudometric are:
d(x, y) ≥ 0
d(x, x) = 0
d(x, y) = d(y, x)
d(x, z) ≤ d(x, y) + d(y, z).
In some contexts, pseudometrics are referred to as semimetrics because of their relation to seminorms.
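A minimal example of a pseudometric (our own choice): measuring points of the plane only by their first coordinate. Distinct points can sit at distance 0, which is exactly what the weakened second axiom permits.

```python
# Pseudometric on R^2: compare first coordinates only.
def d(p, q):
    return abs(p[0] - q[0])

p, q = (1.0, 2.0), (1.0, 7.0)
print(d(p, p))           # 0.0, as required by the axioms
print(d(p, q))           # 0.0 even though p != q: allowed for a pseudometric
print(d(p, (4.0, 0.0)))  # 3.0; symmetry and the triangle inequality still hold
```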
=== Quasimetrics ===
Occasionally, a quasimetric is defined as a function that satisfies all axioms for a metric with the possible exception of symmetry. The name of this generalisation is not entirely standardized. The axioms are:
d(x, y) ≥ 0
d(x, y) = 0 ⟺ x = y
d(x, z) ≤ d(x, y) + d(y, z)
Quasimetrics are common in real life. For example, given a set X of mountain villages, the typical walking times between elements of X form a quasimetric because travel uphill takes longer than travel downhill. Another example is the length of car rides in a city with one-way streets: here, a shortest path from point A to point B goes along a different set of streets than a shortest path from B to A and may have a different length.

A quasimetric on the reals can be defined by setting {\displaystyle d(x,y)={\begin{cases}x-y&{\text{if }}x\geq y,\\1&{\text{otherwise.}}\end{cases}}} The 1 may be replaced, for example, by infinity or by 1 + √(y − x) or any other subadditive function of y − x. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size by filing it down, but it is difficult or impossible to grow it.

Given a quasimetric on X, one can define an R-ball around x to be the set {y ∈ X | d(x, y) ≤ R}. As in the case of a metric, such balls form a basis for a topology on X, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed) Sorgenfrey line.

=== Metametrics or partial metrics ===
In a metametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are:
d(x, y) ≥ 0
d(x, y) = 0 ⟹ x = y
d(x, y) = d(y, x)
d(x, z) ≤ d(x, y) + d(y, z).
Metametrics appear in the study of Gromov hyperbolic metric spaces and their boundaries. The visual metametric on such a space satisfies d(x, x) = 0 for points x on the boundary, but otherwise d(x, x) is approximately the distance from x to the boundary. Metametrics were first defined by Jussi Väisälä. In other work, a function satisfying these axioms is called a partial metric or a dislocated metric.

=== Semimetrics ===
A semimetric on X is a function d : X × X → ℝ that satisfies the first three axioms, but not necessarily the triangle inequality:
d(x, y) ≥ 0
d(x, y) = 0 ⟺ x = y
d(x, y) = d(y, x)
Some authors work with a weaker form of the triangle inequality, such as:
d(x, z) ≤ ρ (d(x, y) + d(y, z)) (ρ-relaxed triangle inequality)
d(x, z) ≤ ρ max{d(x, y), d(y, z)} (ρ-inframetric inequality)
The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to as quasimetrics, nearmetrics or inframetrics. The ρ-inframetric inequalities were introduced to model round-trip delay times in the internet.
The triangle inequality implies the 2-inframetric inequality, and the ultrametric inequality is exactly the 1-inframetric inequality. === Premetrics === Relaxing the last three axioms leads to the notion of a premetric, i.e. a function satisfying the following conditions: d ( x , y ) ≥ 0 {\displaystyle d(x,y)\geq 0} d ( x , x ) = 0 {\displaystyle d(x,x)=0} This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics or pseudometrics; in translations of Russian books it sometimes appears as "prametric". A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance. Any premetric gives rise to a topology as follows. For a positive real r {\displaystyle r} , the r {\displaystyle r} -ball centered at a point p {\displaystyle p} is defined as B r ( p ) = { x | d ( x , p ) < r } . {\displaystyle B_{r}(p)=\{x|d(x,p)<r\}.} A set is called open if for any point p {\displaystyle p} in the set there is an r {\displaystyle r} -ball centered at p {\displaystyle p} which is contained in the set. Every premetric space is a topological space, and in fact a sequential space. In general, the r {\displaystyle r} -balls themselves need not be open sets with respect to this topology. As for metrics, the distance between two sets A {\displaystyle A} and B {\displaystyle B} , is defined as d ( A , B ) = inf x ∈ A , y ∈ B d ( x , y ) . {\displaystyle d(A,B)={\underset {x\in A,y\in B}{\inf }}d(x,y).} This defines a premetric on the power set of a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric. Any premetric gives rise to a preclosure operator c l {\displaystyle cl} as follows: c l ( A ) = { x | d ( x , A ) = 0 } . {\displaystyle cl(A)=\{x|d(x,A)=0\}.} === Pseudoquasimetrics === The prefixes pseudo-, quasi- and semi- can also be combined, e.g., a pseudoquasimetric (sometimes called hemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the open r {\displaystyle r} -balls form a basis of open sets. A very basic example of a pseudoquasimetric space is the set { 0 , 1 } {\displaystyle \{0,1\}} with the premetric given by d ( 0 , 1 ) = 1 {\displaystyle d(0,1)=1} and d ( 1 , 0 ) = 0. {\displaystyle d(1,0)=0.} The associated topological space is the Sierpiński space. Sets equipped with an extended pseudoquasimetric were studied by William Lawvere as "generalized metric spaces". From a categorical point of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of the metric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients. Lawvere also gave an alternate definition of such spaces as enriched categories. The ordered set ( R , ≥ ) {\displaystyle (\mathbb {R} ,\geq )} can be seen as a category with one morphism a → b {\displaystyle a\to b} if a ≥ b {\displaystyle a\geq b} and none otherwise. Using + as the tensor product and 0 as the identity makes this category into a monoidal category R ∗ {\displaystyle R^{*}} . 
Every (extended pseudoquasi-)metric space ( M , d ) {\displaystyle (M,d)} can now be viewed as a category M ∗ {\displaystyle M^{*}} enriched over R ∗ {\displaystyle R^{*}} : The objects of the category are the points of M. For every pair of points x and y such that d ( x , y ) < ∞ {\displaystyle d(x,y)<\infty } , there is a single morphism which is assigned the object d ( x , y ) {\displaystyle d(x,y)} of R ∗ {\displaystyle R^{*}} . The triangle inequality and the fact that d ( x , x ) = 0 {\displaystyle d(x,x)=0} for all points x derive from the properties of composition and identity in an enriched category. Since R ∗ {\displaystyle R^{*}} is a poset, all diagrams that are required for an enriched category commute automatically. === Metrics on multisets === The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. A multiset is a generalization of the notion of a set in which an element can occur more than once. Define the multiset union U = X Y {\displaystyle U=XY} as follows: if an element x occurs m times in X and n times in Y then it occurs m + n times in U. A function d on the set of nonempty finite multisets of elements of a set M is a metric if d ( X ) = 0 {\displaystyle d(X)=0} if all elements of X are equal and d ( X ) > 0 {\displaystyle d(X)>0} otherwise (positive definiteness) d ( X ) {\displaystyle d(X)} depends only on the (unordered) multiset X (symmetry) d ( X Y ) ≤ d ( X Z ) + d ( Z Y ) {\displaystyle d(XY)\leq d(XZ)+d(ZY)} (triangle inequality) By considering the cases of axioms 1 and 2 in which the multiset X has two elements and the case of axiom 3 in which the multisets X, Y, and Z have one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements. A simple example is the set of all nonempty finite multisets X {\displaystyle X} of integers with d ( X ) = max ( X ) − min ( X ) {\displaystyle d(X)=\max(X)-\min(X)} . More complex examples are information distance in multisets; and normalized compression distance (NCD) in multisets. == See also == Acoustic metric – Tensor characterizing signal-carrying properties in a medium Complete metric space – Metric geometry Diversity (mathematics) – Generalization of metric spaces Generalized metric space Hilbert's fourth problem – Construct all metric spaces where lines resemble those on a sphere Metric tree Minkowski distance – Vector distance using pth powers Signed distance function – Distance from a point to the boundary of a set Similarity measure – Real-valued function that quantifies similarity between two objects Space (mathematics) – Mathematical set with some added structure Ultrametric space – Type of metric space == Notes == == Citations == == References == == External links == "Metric space", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Far and near—several examples of distance functions at cut-the-knot.
In linear algebra, an eigenvector ( EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector v {\displaystyle \mathbf {v} } of a linear transformation T {\displaystyle T} is scaled by a constant factor λ {\displaystyle \lambda } when the linear transformation is applied to it: T v = λ v {\displaystyle T\mathbf {v} =\lambda \mathbf {v} } . The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor λ {\displaystyle \lambda } (possibly negative). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system. == Matrices == For an n × n {\displaystyle n{\times }n} matrix A and a nonzero vector v {\displaystyle \mathbf {v} } of length n {\displaystyle n} , if multiplying A by v {\displaystyle \mathbf {v} } (denoted A v {\displaystyle A\mathbf {v} } ) simply scales v {\displaystyle \mathbf {v} } by a factor λ, where λ is a scalar, then v {\displaystyle \mathbf {v} } is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as: A v = λ v {\displaystyle A\mathbf {v} =\lambda \mathbf {v} } . Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices. == Overview == Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation T ( v ) = λ v , {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,} referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. 
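A minimal numerical sketch of this relationship, assuming NumPy; the matrix is chosen only for illustration:

```python
import numpy as np

# An illustrative 2x2 matrix (any square matrix would do).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are corresponding unit-norm eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # The defining relation: multiplying by A only scales v by lam.
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # [3. 1.] (the order is not guaranteed)
```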
For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d d x {\displaystyle {\tfrac {d}{dx}}} , in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as d d x e λ x = λ e λ x . {\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.} Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication A v = λ v , {\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue. If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis. == History == Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation. 
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices. Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability. In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later. At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today. The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961. == Eigenvalues and eigenvectors of matrices == Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices. Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications. Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors x = [ 1 − 3 4 ] and y = [ − 20 60 − 80 ] . {\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.} These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that x = λ y . {\displaystyle \mathbf {x} =\lambda \mathbf {y} .} In this case, λ = − 1 20 {\displaystyle \lambda =-{\frac {1}{20}}} . Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A, A v = w , {\displaystyle A\mathbf {v} =\mathbf {w} ,} or [ A 11 A 12 ⋯ A 1 n A 21 A 22 ⋯ A 2 n ⋮ ⋮ ⋱ ⋮ A n 1 A n 2 ⋯ A n n ] [ v 1 v 2 ⋮ v n ] = [ w 1 w 2 ⋮ w n ] {\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}} where, for each row, w i = A i 1 v 1 + A i 2 v 2 + ⋯ + A i n v n = ∑ j = 1 n A i j v j . 
{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.} If it occurs that v and w are scalar multiples, that is if A v = λ v , {\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,} (1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A. Equation (1) can be stated equivalently as ( A − λ I ) v = 0 , {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,} (2) where I is the n by n identity matrix and 0 is the zero vector. === Eigenvalues and the characteristic polynomial === Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation det ( A − λ I ) = 0. {\displaystyle \det(A-\lambda I)=0.} (3) Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always ( − 1 ) n λ n {\displaystyle (-1)^{n}\lambda ^{n}} . This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A. The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms, det ( A − λ I ) = ( λ 1 − λ ) ( λ 2 − λ ) ⋯ ( λ n − λ ) , {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )(\lambda _{2}-\lambda )\cdots (\lambda _{n}-\lambda ),} (4) where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A. As a brief example, which is described in more detail in the examples section later, consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} Taking the determinant of (A − λI), the characteristic polynomial of A is det ( A − λ I ) = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 . {\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation ( A − λ I ) v = 0 {\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} } . In this example, the eigenvectors are any nonzero scalar multiples of v λ = 1 = [ 1 − 1 ] , v λ = 3 = [ 1 1 ] . {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.} If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers. The non-real roots of a polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues.
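A sketch of the even-order case, assuming NumPy; the 90-degree rotation matrix is an arbitrary illustration:

```python
import numpy as np

# A real matrix of even order with no real eigenvalues:
# rotation of the plane by 90 degrees.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, _ = np.linalg.eig(A)
print(eigenvalues)                       # [0.+1.j 0.-1.j], a conjugate pair

# The characteristic polynomial lambda^2 + 1 has real coefficients,
# so its non-real roots occur in complex-conjugate pairs.
assert np.isclose(eigenvalues[0], np.conj(eigenvalues[1]))
```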
The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs. === Spectrum of a matrix === The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation, the set of eigenvalues with their multiplicities. An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix. === Algebraic multiplicity === Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that ( λ − λ i ) k {\displaystyle (\lambda -\lambda _{i})^{k}} evenly divides that polynomial. Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity, det ( A − λ I ) = ( λ 1 − λ ) μ A ( λ 1 ) ( λ 2 − λ ) μ A ( λ 2 ) ⋯ ( λ d − λ ) μ A ( λ d ) . {\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.} If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as 1 ≤ μ A ( λ i ) ≤ n , μ A = ∑ i = 1 d μ A ( λ i ) = n . {\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}} If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue. === Eigenspaces, geometric multiplicity, and the eigenbasis for matrices === Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2), E = { v : ( A − λ I ) v = 0 } . {\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.} On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of C n {\displaystyle \mathbb {C} ^{n}} . Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative.
As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ. The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} . Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as γ A ( λ ) = n − rank ⁡ ( A − λ I ) . {\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).} Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n. 1 ≤ γ A ( λ ) ≤ μ A ( λ ) ≤ n {\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n} To prove the inequality γ A ( λ ) ≤ μ A ( λ ) {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )} , consider how the definition of geometric multiplicity implies the existence of γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors v 1 , … , v γ A ( λ ) {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}} , such that A v k = λ v k {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}} . We can therefore find a (unitary) matrix V whose first γ A ( λ ) {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of n − γ A ( λ ) {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating D := V T A V {\displaystyle D:=V^{T}AV} , we get a matrix whose top left block is the diagonal matrix λ I γ A ( λ ) {\displaystyle \lambda I_{\gamma _{A}(\lambda )}} . This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding − ξ V {\displaystyle -\xi V} on both sides, we get ( A − ξ I ) V = V ( D − ξ I ) {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, A − ξ I {\displaystyle A-\xi I} is similar to D − ξ I {\displaystyle D-\xi I} , and det ( A − ξ I ) = det ( D − ξ I ) {\displaystyle \det(A-\xi I)=\det(D-\xi I)} . But from the definition of D, we know that det ( D − ξ I ) {\displaystyle \det(D-\xi I)} contains a factor ( ξ − λ ) γ A ( λ ) {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}} , which means that the algebraic multiplicity of λ {\displaystyle \lambda } must satisfy μ A ( λ ) ≥ γ A ( λ ) {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )} . Suppose A has d ≤ n {\displaystyle d\leq n} distinct eigenvalues λ 1 , … , λ d {\displaystyle \lambda _{1},\ldots ,\lambda _{d}} , where the geometric multiplicity of λ i {\displaystyle \lambda _{i}} is γ A ( λ i ) {\displaystyle \gamma _{A}(\lambda _{i})} . The total geometric multiplicity of A, γ A = ∑ i = 1 d γ A ( λ i ) , d ≤ γ A ≤ n , {\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}} is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. 
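The rank formula above translates directly into a short computation; a sketch assuming NumPy, with an illustrative defective matrix:

```python
import numpy as np

# Illustrative defective matrix: the eigenvalue 2 has algebraic
# multiplicity 2 but only a one-dimensional eigenspace.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
n = A.shape[0]
lam = 2.0

# gamma_A(lambda) = n - rank(A - lambda I)
gamma = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(gamma)  # 1, strictly less than the algebraic multiplicity 2
```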
If γ A = n {\displaystyle \gamma _{A}=n} , then The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space C n {\displaystyle \mathbb {C} ^{n}} . A basis of C n {\displaystyle \mathbb {C} ^{n}} can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis. Any vector in C n {\displaystyle \mathbb {C} ^{n}} can be written as a linear combination of eigenvectors of A. === Additional properties === Let A {\displaystyle A} be an arbitrary n × n {\displaystyle n\times n} matrix of complex numbers with eigenvalues λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} . Each eigenvalue appears μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where μ A ( λ i ) {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues: The trace of A {\displaystyle A} , defined as the sum of its diagonal elements, is also the sum of all eigenvalues, tr ⁡ ( A ) = ∑ i = 1 n a i i = ∑ i = 1 n λ i = λ 1 + λ 2 + ⋯ + λ n . {\displaystyle \operatorname {tr} (A)=\sum _{i=1}^{n}a_{ii}=\sum _{i=1}^{n}\lambda _{i}=\lambda _{1}+\lambda _{2}+\cdots +\lambda _{n}.} The determinant of A {\displaystyle A} is the product of all its eigenvalues, det ( A ) = ∏ i = 1 n λ i = λ 1 λ 2 ⋯ λ n . {\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.} The eigenvalues of the k {\displaystyle k} th power of A {\displaystyle A} ; i.e., the eigenvalues of A k {\displaystyle A^{k}} , for any positive integer k {\displaystyle k} , are λ 1 k , … , λ n k {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}} . The matrix A {\displaystyle A} is invertible if and only if every eigenvalue is nonzero. If A {\displaystyle A} is invertible, then the eigenvalues of A − 1 {\displaystyle A^{-1}} are 1 λ 1 , … , 1 λ n {\textstyle {\frac {1}{\lambda _{1}}},\ldots ,{\frac {1}{\lambda _{n}}}} and each eigenvalue's geometric multiplicity coincides with that of the corresponding eigenvalue of A. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity. If A {\displaystyle A} is equal to its conjugate transpose A ∗ {\displaystyle A^{*}} , or equivalently if A {\displaystyle A} is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix. If A {\displaystyle A} is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively. If A {\displaystyle A} is unitary, every eigenvalue has absolute value | λ i | = 1 {\displaystyle |\lambda _{i}|=1} . If A {\displaystyle A} is an n × n {\displaystyle n\times n} matrix and { λ 1 , … , λ k } {\displaystyle \{\lambda _{1},\ldots ,\lambda _{k}\}} are its eigenvalues, then the eigenvalues of matrix I + A {\displaystyle I+A} (where I {\displaystyle I} is the identity matrix) are { λ 1 + 1 , … , λ k + 1 } {\displaystyle \{\lambda _{1}+1,\ldots ,\lambda _{k}+1\}} . Moreover, if α ∈ C {\displaystyle \alpha \in \mathbb {C} } , the eigenvalues of α I + A {\displaystyle \alpha I+A} are { λ 1 + α , … , λ k + α } {\displaystyle \{\lambda _{1}+\alpha ,\ldots ,\lambda _{k}+\alpha \}} . More generally, for a polynomial P {\displaystyle P} the eigenvalues of matrix P ( A ) {\displaystyle P(A)} are { P ( λ 1 ) , … , P ( λ k ) } {\displaystyle \{P(\lambda _{1}),\ldots ,P(\lambda _{k})\}} .
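Several of these properties can be spot-checked numerically; a sketch assuming NumPy, on a random test matrix that is not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))          # arbitrary test matrix
eigs = np.linalg.eigvals(A)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigs.sum())
assert np.isclose(np.linalg.det(A), np.prod(eigs))

# Eigenvalues of A^k are the k-th powers of the eigenvalues of A.
k = 3
eigs_Ak = np.linalg.eigvals(np.linalg.matrix_power(A, k))
assert np.allclose(np.sort_complex(eigs_Ak), np.sort_complex(eigs**k))
```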
=== Left and right eigenvectors === Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the n × n {\displaystyle n\times n} matrix A {\displaystyle A} in the defining equation, equation (1), A v = λ v . {\displaystyle A\mathbf {v} =\lambda \mathbf {v} .} The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix A {\displaystyle A} . In this formulation, the defining equation is u A = κ u , {\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,} where κ {\displaystyle \kappa } is a scalar and u {\displaystyle u} is a 1 × n {\displaystyle 1\times n} matrix. Any row vector u {\displaystyle u} satisfying this equation is called a left eigenvector of A {\displaystyle A} and κ {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation, A T u T = κ u T . {\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.} Comparing this equation to equation (1), it follows immediately that a left eigenvector of A {\displaystyle A} is the same as the transpose of a right eigenvector of A T {\displaystyle A^{\textsf {T}}} , with the same eigenvalue. Furthermore, since the characteristic polynomial of A T {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of A {\displaystyle A} , the left and right eigenvectors of A {\displaystyle A} are associated with the same eigenvalues. === Diagonalization and the eigendecomposition === Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A, Q = [ v 1 v 2 ⋯ v n ] . {\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.} Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue, A Q = [ λ 1 v 1 λ 2 v 2 ⋯ λ n v n ] . {\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.} With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then A Q = Q Λ . {\displaystyle AQ=Q\Lambda .} Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1, A = Q Λ Q − 1 , {\displaystyle A=Q\Lambda Q^{-1},} or by instead left multiplying both sides by Q−1, Q − 1 A Q = Λ . {\displaystyle Q^{-1}AQ=\Lambda .} A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ. Conversely, suppose a matrix A is diagonalizable. 
Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both sides by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable. A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces. === Variational characterization === In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H {\displaystyle H} is the maximum value of the quadratic form x T H x / x T x {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} } . A value of x {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector. === Matrix examples === ==== Two-dimensional matrix example ==== Consider the matrix A = [ 2 1 1 2 ] . {\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.} The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues. Taking the determinant to find the characteristic polynomial of A, det ( A − λ I ) = | [ 2 1 1 2 ] − λ [ 1 0 0 1 ] | = | 2 − λ 1 1 2 − λ | = 3 − 4 λ + λ 2 = ( λ − 3 ) ( λ − 1 ) . {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}} Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. For λ=1, equation (2) becomes, ( A − I ) v λ = 1 = [ 1 1 1 1 ] [ v 1 v 2 ] = [ 0 0 ] {\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}} 1 v 1 + 1 v 2 = 0 {\displaystyle 1v_{1}+1v_{2}=0} Any nonzero vector with v1 = −v2 solves this equation. Therefore, v λ = 1 = [ v 1 − v 1 ] = [ 1 − 1 ] {\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector. For λ=3, equation (2) becomes ( A − 3 I ) v λ = 3 = [ − 1 1 1 − 1 ] [ v 1 v 2 ] = [ 0 0 ] − 1 v 1 + 1 v 2 = 0 ; 1 v 1 − 1 v 2 = 0 {\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}} Any nonzero vector with v1 = v2 solves this equation.
Therefore, v λ = 3 = [ v 1 v 1 ] = [ 1 1 ] {\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}} is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector. Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively. ==== Three-dimensional matrix example ==== Consider the matrix A = [ 2 0 0 0 3 4 0 4 9 ] . {\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = | [ 2 0 0 0 3 4 0 4 9 ] − λ [ 1 0 0 0 1 0 0 0 1 ] | = | 2 − λ 0 0 0 3 − λ 4 0 4 9 − λ | , = ( 2 − λ ) [ ( 3 − λ ) ( 9 − λ ) − 16 ] = − λ 3 + 14 λ 2 − 35 λ + 22. {\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}} The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors [ 1 0 0 ] T {\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}} , [ 0 − 2 1 ] T {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}} , and [ 0 1 2 ] T {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}} , or any nonzero multiple thereof. ==== Three-dimensional matrix example with complex eigenvalues ==== Consider the cyclic permutation matrix A = [ 0 1 0 0 0 1 1 0 0 ] . {\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.} This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are λ 1 = 1 λ 2 = − 1 2 + i 3 2 λ 3 = λ 2 ∗ = − 1 2 − i 3 2 {\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}} where i {\displaystyle i} is an imaginary unit with i 2 = − 1 {\displaystyle i^{2}=-1} . For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example, A [ 5 5 5 ] = [ 5 5 5 ] = 1 ⋅ [ 5 5 5 ] . {\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.} For the complex conjugate pair of imaginary eigenvalues, λ 2 λ 3 = 1 , λ 2 2 = λ 3 , λ 3 2 = λ 2 . {\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.} Then A [ 1 λ 2 λ 3 ] = [ λ 2 λ 3 1 ] = λ 2 ⋅ [ 1 λ 2 λ 3 ] , {\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},} and A [ 1 λ 3 λ 2 ] = [ λ 3 λ 2 1 ] = λ 3 ⋅ [ 1 λ 3 λ 2 ] . 
{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.} Therefore, the other two eigenvectors of A are complex and are v λ 2 = [ 1 λ 2 λ 3 ] T {\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}} and v λ 3 = [ 1 λ 3 λ 2 ] T {\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}} with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair, v λ 2 = v λ 3 ∗ . {\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.} ==== Diagonal matrix example ==== Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix A = [ 1 0 0 0 2 0 0 0 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 0 0 ] , v λ 2 = [ 0 1 0 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Triangular matrix example ==== A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal. Consider the lower triangular matrix, A = [ 1 0 0 1 2 0 2 3 3 ] . {\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.} The characteristic polynomial of A is det ( A − λ I ) = ( 1 − λ ) ( 2 − λ ) ( 3 − λ ) , {\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),} which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A. These eigenvalues correspond to the eigenvectors, v λ 1 = [ 1 − 1 1 2 ] , v λ 2 = [ 0 1 − 3 ] , v λ 3 = [ 0 0 1 ] , {\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},} respectively, as well as scalar multiples of these vectors. ==== Matrix with repeated eigenvalues example ==== As in the previous example, the lower triangular matrix A = [ 2 0 0 0 1 2 0 0 0 1 3 0 0 0 1 3 ] , {\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},} has a characteristic polynomial that is the product of its diagonal elements, det ( A − λ I ) = | 2 − λ 0 0 0 1 2 − λ 0 0 0 1 3 − λ 0 0 0 1 3 − λ | = ( 2 − λ ) 2 ( 3 − λ ) 2 . 
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.} The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [ 0 1 − 1 1 ] T {\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}} and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [ 0 0 0 1 ] T {\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}} . The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in an earlier section. === Eigenvector-eigenvalue identity === For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix, | v i , j | 2 = ∏ k ( λ i − λ k ( M j ) ) ∏ k ≠ i ( λ i − λ k ) , {\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},} where M j {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature. == Eigenvalues and eigenfunctions of differential operators == The definitions of eigenvalue and eigenvectors of a linear transformation T remain valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces is that of the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation D f ( t ) = λ f ( t ) {\displaystyle Df(t)=\lambda f(t)} The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions. === Derivative operator example === Consider the derivative operator d d t {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation d d t f ( t ) = λ f ( t ) . {\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).} This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function f ( t ) = f ( 0 ) e λ t , {\displaystyle f(t)=f(0)e^{\lambda t},} is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant. The main eigenfunction article gives other examples. == General definition == The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V, T : V → V .
{\displaystyle T:V\to V.} We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that T ( v ) = λ v . {\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .} This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v. === Eigenspaces, geometric multiplicity, and the eigenbasis === Given an eigenvalue λ, consider the set E = { v : T ( v ) = λ v } , {\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},} which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ. By definition of a linear transformation, T ( x + y ) = T ( x ) + T ( y ) , T ( α x ) = α T ( x ) , {\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}} for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then T ( u + v ) = λ ( u + v ) , T ( α v ) = λ ( α v ) . {\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}} So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V. If that subspace has dimension 1, it is sometimes called an eigenline. The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector. The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues. Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable. === Spectral theory === If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue. For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
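In the finite-dimensional case, the eigenspace E described above can be computed as a numerical null space; a sketch assuming SciPy, reusing the 3 × 3 matrix from an earlier example:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])
lam = 11.0

# The eigenspace for lam is the null space of (A - lam I);
# null_space returns an orthonormal basis, one column per dimension.
E = null_space(A - lam * np.eye(3))
print(E.shape[1])                # geometric multiplicity: 1

v = E[:, 0]                      # a basis eigenvector
assert np.allclose(A @ v, lam * v)
```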
=== Associative algebras and representation theory === One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory. The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively. A Hecke eigensheaf is a tensor-multiple of itself and is considered in the Langlands correspondence. == Dynamic equations == The simplest difference equations have the form x t = a 1 x t − 1 + a 2 x t − 2 + ⋯ + a k x t − k . {\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.} The solution of this equation for x in terms of t is found by using its characteristic equation λ k − a 1 λ k − 1 − a 2 λ k − 2 − ⋯ − a k − 1 λ − a k = 0 , {\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,} which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations x t − 1 = x t − 1 , … , x t − k + 1 = x t − k + 1 , {\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},} giving a k-dimensional system of the first order in the stacked variable vector [ x t ⋯ x t − k + 1 ] {\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}} in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ 1 , … , λ k , {\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},} for use in the solution equation x t = c 1 λ 1 t + ⋯ + c k λ k t . {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.} A similar procedure is used for solving a differential equation of the form d k x d t k + a k − 1 d k − 1 x d t k − 1 + ⋯ + a 1 d x d t + a 0 x = 0. {\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.} == Calculation == The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice. === Classical method === The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetic such as floating-point. ==== Eigenvalues ==== The eigenvalues of a matrix A {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for 2 × 2 {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix. In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an n × n {\displaystyle n\times n} matrix is a sum of n ! {\displaystyle n!} different products.
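A sketch of this classical route on a small, well-conditioned matrix, assuming NumPy, whose np.poly returns the coefficients of a matrix's characteristic polynomial and whose np.roots finds the roots:

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 4.0],
              [0.0, 4.0, 9.0]])

# Coefficients of the (monic) characteristic polynomial of A:
# lambda^3 - 14 lambda^2 + 35 lambda - 22.
coeffs = np.poly(A)
print(coeffs)                            # [  1. -14.  35. -22.]

# Classical method: eigenvalues as roots of the characteristic polynomial.
print(np.sort(np.roots(coeffs)))         # approximately [ 1.  2. 11.]

# What practical libraries do instead: a dedicated eigenvalue routine.
print(np.sort(np.linalg.eigvals(A)))     # approximately [ 1.  2. 11.]
```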
Explicit algebraic formulas for the roots of a polynomial exist only if the degree n {\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree n {\displaystyle n} is the characteristic polynomial of some companion matrix of order n {\displaystyle n} .) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical. ==== Eigenvectors ==== Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix A = [ 4 1 6 3 ] {\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}} we can find its eigenvectors by solving the equation A v = 6 v {\displaystyle Av=6v} , that is [ 4 1 6 3 ] [ x y ] = 6 ⋅ [ x y ] {\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}} This matrix equation is equivalent to two linear equations { 4 x + y = 6 x 6 x + 3 y = 6 y {\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.} that is { − 2 x + y = 0 6 x − 3 y = 0 {\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.} Both equations reduce to the single linear equation y = 2 x {\displaystyle y=2x} . Therefore, any vector of the form [ a 2 a ] T {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number a {\displaystyle a} , is an eigenvector of A {\displaystyle A} with eigenvalue λ = 6 {\displaystyle \lambda =6} . The matrix A {\displaystyle A} above has another eigenvalue λ = 1 {\displaystyle \lambda =1} . A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of 3 x + y = 0 {\displaystyle 3x+y=0} , that is, any vector of the form [ b − 3 b ] T {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}} , for any nonzero real number b {\displaystyle b} . === Simple iterative methods === The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by ( A − μ I ) − 1 {\displaystyle (A-\mu I)^{-1}} ; this causes it to converge to an eigenvector of the eigenvalue closest to μ ∈ C {\displaystyle \mu \in \mathbb {C} } . If v {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of A {\displaystyle A} , then the corresponding eigenvalue can be computed as λ = v ∗ A v v ∗ v {\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}} where v ∗ {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of v {\displaystyle \mathbf {v} } . 
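A compact sketch of the power method with the Rayleigh-quotient estimate just described, assuming NumPy and reusing the 2 × 2 matrix from the classical-method example:

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    # Repeated multiplication by A drives a generic starting vector
    # toward an eigenvector of the eigenvalue of largest magnitude.
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)           # keep the entries a reasonable size
    # Rayleigh quotient: eigenvalue estimate from the approximate eigenvector.
    lam = (v @ A @ v) / (v @ v)
    return lam, v

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])               # eigenvalues 6 and 1 (see above)
lam, v = power_iteration(A)
print(round(lam, 6))                     # 6.0, the dominant eigenvalue
```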
=== Modern methods === Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities. Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed. == Applications == === Geometric transformations === Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes. The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors. The characteristic equation for a rotation is a quadratic equation with discriminant D = − 4 ( sin ⁡ θ ) 2 {\displaystyle D=-4(\sin \theta )^{2}} , which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos ⁡ θ ± i sin ⁡ θ {\displaystyle \cos \theta \pm i\sin \theta } ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane. A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues. === Principal component analysis === The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data. Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling. 
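A minimal sketch of PCA along these lines, assuming NumPy and using synthetic data purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 observations of 3 correlated variables (illustrative data only).
X = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.5, 0.2, 0.1]])

C = np.cov(X, rowvar=False)          # symmetric PSD sample covariance matrix
evals, evecs = np.linalg.eigh(C)     # eigh is the routine for symmetric matrices
order = np.argsort(evals)[::-1]      # largest eigenvalue = most variance explained
evals, evecs = evals[order], evecs[:, order]

print(evals / evals.sum())           # fraction of variance per principal component
scores = (X - X.mean(axis=0)) @ evecs  # the data expressed in the PC basis
```

Keeping only the first few columns of `evecs` is exactly the dimensionality reduction described above.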
=== Graphs === In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A {\displaystyle A} , or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either D − A {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or I − D − 1 / 2 A D − 1 / 2 {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where D {\displaystyle D} is a diagonal matrix with D i i {\displaystyle D_{ii}} equal to the degree of vertex v i {\displaystyle v_{i}} , and in D − 1 / 2 {\displaystyle D^{-1/2}} , the i {\displaystyle i} th diagonal entry is 1 / deg ⁡ ( v i ) {\textstyle 1/{\sqrt {\deg(v_{i})}}} . The k {\displaystyle k} th principal eigenvector of a graph is defined as either the eigenvector corresponding to the k {\displaystyle k} th largest or k {\displaystyle k} th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector. The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering. === Markov chains === A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state. === Vibration analysis === Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by m x ¨ + k x = 0 {\displaystyle m{\ddot {x}}+kx=0} or m x ¨ = − k x {\displaystyle m{\ddot {x}}=-kx} That is, acceleration is proportional to position (i.e., we expect x {\displaystyle x} to be sinusoidal in time). In n {\displaystyle n} dimensions, m {\displaystyle m} becomes a mass matrix and k {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem k x = ω 2 m x {\displaystyle kx=\omega ^{2}mx} where ω 2 {\displaystyle \omega ^{2}} is the eigenvalue and ω {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k {\displaystyle k} alone. Furthermore, damped vibration, governed by m x ¨ + c x ˙ + k x = 0 {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0} leads to a so-called quadratic eigenvalue problem, ( ω 2 m + ω c + k ) x = 0. 
{\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.} This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system. The orthogonality properties of the eigenvectors allow decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems. === Tensor of moment of inertia === In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass. === Stress tensor === In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components. === Schrödinger equation === An example of an eigenvalue equation where the transformation T {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics: H ψ E = E ψ E {\displaystyle H\psi _{E}=E\psi _{E}\,} where H {\displaystyle H} , the Hamiltonian, is a second-order differential operator and ψ E {\displaystyle \psi _{E}} , the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E {\displaystyle E} , interpreted as its energy. However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψ E {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψ E {\displaystyle \psi _{E}} and H {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . In this notation, the Schrödinger equation is: H | Ψ E ⟩ = E | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle } where | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } is an eigenstate of H {\displaystyle H} and E {\displaystyle E} represents the eigenvalue. H {\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above H | Ψ E ⟩ {\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation H {\displaystyle H} to | Ψ E ⟩ {\displaystyle |\Psi _{E}\rangle } . === Wave transport === Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix t {\displaystyle \mathbf {t} } . 
The eigenvectors of the transmission operator t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues, τ {\displaystyle \tau } , of t † t {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} } correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is its bimodal eigenvalue distribution with τ max = 1 {\displaystyle \tau _{\max }=1} and τ min = 0 {\displaystyle \tau _{\min }=0} . Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels. === Molecular orbitals === In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations. === Geology and glaciology === In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information of a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects 3-dimensional spaces onto a two-dimensional plane. A type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered v 1 , v 2 , v 3 {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}} by their eigenvalues E 1 ≥ E 2 ≥ E 3 {\displaystyle E_{1}\geq E_{2}\geq E_{3}} ; v 1 {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of clast, v 2 {\displaystyle \mathbf {v} _{2}} is the secondary and v 3 {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E 1 {\displaystyle E_{1}} , E 2 {\displaystyle E_{2}} , and E 3 {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If E 1 = E 2 = E 3 {\displaystyle E_{1}=E_{2}=E_{3}} , the fabric is said to be isotropic. If E 1 = E 2 > E 3 {\displaystyle E_{1}=E_{2}>E_{3}} , the fabric is said to be planar. 
If E 1 > E 2 > E 3 {\displaystyle E_{1}>E_{2}>E_{3}} , the fabric is said to be linear. === Basic reproduction number === The basic reproduction number ( R 0 {\displaystyle R_{0}} ) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then R 0 {\displaystyle R_{0}} is the average number of people that one typical infectious person will infect. The generation time of an infection is the time, t G {\displaystyle t_{G}} , from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time t G {\displaystyle t_{G}} has passed. The value R 0 {\displaystyle R_{0}} is then the largest eigenvalue of the next generation matrix. === Eigenfaces === In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems for determining hand gestures. Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation. == See also == Antieigenvalue theory Eigenoperator Eigenplane Eigenmoments Eigenvalue algorithm Quantum states Jordan normal form List of numerical-analysis software Nonlinear eigenproblem Normal eigenvalue Quadratic eigenvalue problem Singular value Spectrum of a matrix == Notes == === Citations === == Sources == == Further reading == == External links == What are Eigen Values? – non-technical introduction from PhysLink.com's "Ask the Experts" Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu. Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10 – A visual explanation with 3Blue1Brown Matrix Eigenvectors Calculator from Symbolab (Click on the bottom right button of the 2×12 grid to select a matrix size. Select an n × n {\displaystyle n\times n} size (for a square matrix), then fill out the entries numerically and click on the Go button. It can accept complex numbers as well.) Wikiversity uses introductory physics to introduce Eigenvalues and eigenvectors === Theory === Computation of Eigenvalues Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
Wikipedia/Eigenvalue,_eigenvector_and_eigenspace
In geometry, the hyperboloid model, also known as the Minkowski model after Hermann Minkowski, is a model of n-dimensional hyperbolic geometry in which points are represented by points on the forward sheet S+ of a two-sheeted hyperboloid in (n+1)-dimensional Minkowski space or by the displacement vectors from the origin to those points, and m-planes are represented by the intersections of (m+1)-planes passing through the origin in Minkowski space with S+ or by wedge products of m vectors. Hyperbolic space is embedded isometrically in Minkowski space; that is, the hyperbolic distance function is inherited from Minkowski space, analogous to the way spherical distance is inherited from Euclidean distance when the n-sphere is embedded in (n+1)-dimensional Euclidean space. Other models of hyperbolic space can be thought of as map projections of S+: the Beltrami–Klein model is the projection of S+ through the origin onto a plane perpendicular to the vector from the origin to a specific point in S+, analogous to the gnomonic projection of the sphere; the Poincaré disk model is a projection of S+ through a point on the other sheet S− onto a perpendicular plane, analogous to the stereographic projection of the sphere; the Gans model is the orthogonal projection of S+ onto a plane perpendicular to the vector from the origin to a specific point in S+, analogous to the orthographic projection; the band model of the hyperbolic plane is a conformal “cylindrical” projection analogous to the Mercator projection of the sphere; Lobachevsky coordinates are a cylindrical projection analogous to the equirectangular projection (longitude, latitude) of the sphere. == Minkowski quadratic form == If (x0, x1, ..., xn) is a vector in the (n + 1)-dimensional coordinate space Rn+1, the Minkowski quadratic form is defined to be Q ( x 0 , x 1 , … , x n ) = − x 0 2 + x 1 2 + … + x n 2 . {\displaystyle Q(x_{0},x_{1},\ldots ,x_{n})=-x_{0}^{2}+x_{1}^{2}+\ldots +x_{n}^{2}.} The vectors v ∈ Rn+1 such that Q(v) = −1 form an n-dimensional hyperboloid S consisting of two connected components, or sheets: the forward, or future, sheet S+, where x0>0 and the backward, or past, sheet S−, where x0<0. The points of the n-dimensional hyperboloid model are the points on the forward sheet S+. The metric on the hyperboloid is d s 2 = Q ( d x 0 , d x 1 , … , d x n ) = − d x 0 2 + d x 1 2 + … + d x n 2 . {\displaystyle ds^{2}=Q(dx_{0},dx_{1},\ldots ,dx_{n})=-dx_{0}^{2}+dx_{1}^{2}+\ldots +dx_{n}^{2}.} The Minkowski bilinear form B is the polarization of the Minkowski quadratic form Q, B ( u , v ) = ( Q ( u + v ) − Q ( u ) − Q ( v ) ) / 2. {\displaystyle B(\mathbf {u} ,\mathbf {v} )=(Q(\mathbf {u} +\mathbf {v} )-Q(\mathbf {u} )-Q(\mathbf {v} ))/2.} (This is sometimes also written using scalar product notation u ⋅ v . {\displaystyle \mathbf {u} \cdot \mathbf {v} .} ) Explicitly, B ( ( x 0 , x 1 , … , x n ) , ( y 0 , y 1 , … , y n ) ) = − x 0 y 0 + x 1 y 1 + … + x n y n . {\displaystyle B((x_{0},x_{1},\ldots ,x_{n}),(y_{0},y_{1},\ldots ,y_{n}))=-x_{0}y_{0}+x_{1}y_{1}+\ldots +x_{n}y_{n}.} The hyperbolic distance between two points u and v of S+ is given by the formula d ( u , v ) = arcosh ⁡ ( − B ( u , v ) ) , {\displaystyle d(\mathbf {u} ,\mathbf {v} )=\operatorname {arcosh} (-B(\mathbf {u} ,\mathbf {v} )),} where arcosh is the inverse function of hyperbolic cosine. === Choice of metric signature === The bilinear form B {\displaystyle B} also functions as the metric tensor over the space. 
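Before turning to the choice of signature, here is a quick numerical check of the distance formula above (a Python sketch with NumPy; the sample points are arbitrary points of S+ in the case n = 2):

```python
import numpy as np

def B(u, v):
    # Minkowski bilinear form: -u0*v0 + u1*v1 + ... + un*vn
    return -u[0] * v[0] + u[1:] @ v[1:]

def point(a):
    # A point on the forward sheet S+ for n = 2:
    # Q(x) = -x0^2 + x1^2 + x2^2 = -1 with x0 > 0.
    return np.array([np.cosh(a), np.sinh(a), 0.0])

u, v = point(0.3), point(1.7)
print(B(u, u), B(v, v))          # both -1: the points lie on the hyperboloid
print(np.arccosh(-B(u, v)))      # 1.4 = |1.7 - 0.3|, their hyperbolic distance
```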
In n+1 dimensional Minkowski space, there are two choices for the metric with opposite signature, in the 3-dimensional case either (+, −, −) or (−, +, +). If the signature (−, +, +) is chosen, then the scalar square of chords between distinct points on the same sheet of the hyperboloid will be positive, which more closely aligns with conventional definitions and expectations in mathematics. Then n-dimensional hyperbolic space is a Riemannian space and distance or length can be defined as the square root of the scalar square. If the signature (+, −, −) is chosen, scalar square between distinct points on the hyperboloid will be negative, so various definitions of basic terms must be adjusted, which can be inconvenient. Nonetheless, the signature (+, −, −, −) is also common for describing spacetime in physics. (Cf. Sign convention#Metric signature.) == Straight lines == A straight line in hyperbolic n-space is modeled by a geodesic on the hyperboloid. A geodesic on the hyperboloid is the (non-empty) intersection of the hyperboloid with a two-dimensional linear subspace (including the origin) of the n+1-dimensional Minkowski space. If we take u and v to be basis vectors of that linear subspace with B ( u , u ) = 1 {\displaystyle B(\mathbf {u} ,\mathbf {u} )=1} B ( v , v ) = − 1 {\displaystyle B(\mathbf {v} ,\mathbf {v} )=-1} B ( u , v ) = B ( v , u ) = 0 {\displaystyle B(\mathbf {u} ,\mathbf {v} )=B(\mathbf {v} ,\mathbf {u} )=0} and use w as a real parameter for points on the geodesic, then u sinh ⁡ w + v cosh ⁡ w {\displaystyle \mathbf {u} \sinh w+\mathbf {v} \cosh w} will be a point on the geodesic. More generally, a k-dimensional "flat" in the hyperbolic n-space will be modeled by the (non-empty) intersection of the hyperboloid with a k+1-dimensional linear subspace (including the origin) of the Minkowski space. == Isometries == The indefinite orthogonal group O(1,n), also called the (n+1)-dimensional Lorentz group, is the Lie group of real (n+1)×(n+1) matrices which preserve the Minkowski bilinear form. In a different language, it is the group of linear isometries of the Minkowski space. In particular, this group preserves the hyperboloid S. Recall that indefinite orthogonal groups have four connected components, corresponding to reversing or preserving the orientation on each subspace (here 1-dimensional and n-dimensional), and form a Klein four-group. The subgroup of O(1,n) that preserves the sign of the first coordinate is the orthochronous Lorentz group, denoted O+(1,n), and has two components, corresponding to preserving or reversing the orientation of the spatial subspace. Its subgroup SO+(1,n) consisting of matrices with determinant one is a connected Lie group of dimension n(n+1)/2 which acts on S+ by linear automorphisms and preserves the hyperbolic distance. This action is transitive and the stabilizer of the vector (1,0,...,0) consists of the matrices of the form ( 1 0 … 0 0 ⋮ A 0 ) {\displaystyle {\begin{pmatrix}1&0&\ldots &0\\0&&&\\[-4mu]\vdots &&A&\\0&&&\\\end{pmatrix}}} Where A {\displaystyle A} belongs to the compact special orthogonal group SO(n) (generalizing the rotation group SO(3) for n = 3). It follows that the n-dimensional hyperbolic space can be exhibited as the homogeneous space and a Riemannian symmetric space of rank 1, H n = S O + ( 1 , n ) / S O ( n ) . {\displaystyle \mathbb {H} ^{n}=\mathrm {SO} ^{+}(1,n)/\mathrm {SO} (n).} The group SO+(1,n) is the full group of orientation-preserving isometries of the n-dimensional hyperbolic space. 
In more concrete terms, SO+(1,n) can be split into n(n−1)/2 rotations (formed with a regular Euclidean rotation matrix in the lower-right block) and n hyperbolic translations, which take the form ( cosh ⁡ α sinh ⁡ α 0 ⋯ sinh ⁡ α cosh ⁡ α 0 ⋯ 0 0 1 ⋮ ⋮ ⋱ ) {\displaystyle {\begin{pmatrix}\cosh \alpha &\sinh \alpha &0&\cdots \\[2mu]\sinh \alpha &\cosh \alpha &0&\cdots \\[2mu]0&0&1&\\[-7mu]\vdots &\vdots &&\ddots \\\end{pmatrix}}} where α {\displaystyle \alpha } is the distance translated (along the x-axis in this case), and the 2nd row/column can be exchanged with a different pair to change to a translation along a different axis. The general form of a translation in 3 dimensions along the vector ( w , x , y , z ) {\displaystyle (w,x,y,z)} is: ( w x y z x x 2 w + 1 + 1 y x w + 1 z x w + 1 y x y w + 1 y 2 w + 1 + 1 z y w + 1 z x z w + 1 y z w + 1 z 2 w + 1 + 1 ) | , {\displaystyle {\begin{pmatrix}w&x&y&z\\[2mu]x&\ {\dfrac {x^{2}}{w+1}}+1&{\dfrac {yx}{w+1}}&{\dfrac {zx}{w+1}}\\[2mu]y&{\dfrac {xy}{w+1}}&\,{\dfrac {y^{2}}{w+1}}+1&{\dfrac {zy}{w+1}}\\[2mu]z&{\dfrac {xz}{w+1}}&{\dfrac {yz}{w+1}}&{\dfrac {z^{2}}{w+1}}+1\end{pmatrix}}_{\vphantom {|}},} where ⁠ w = x 2 + y 2 + z 2 + 1 {\displaystyle \textstyle w={\sqrt {x^{2}+y^{2}+z^{2}+1}}} ⁠. This extends naturally to more dimensions, and is also the simplified version of a Lorentz boost when you remove the relativity-specific terms. === Examples of groups of isometries === The group of all isometries of the hyperboloid model is O+(1,n). Any group of isometries is a subgroup of it. ==== Reflections ==== For two points p , q ∈ H n , p ≠ q {\displaystyle \mathbf {p} ,\mathbf {q} \in \mathbb {H} ^{n},\mathbf {p} \neq \mathbf {q} } , there is a unique reflection exchanging them. Let u = p − q Q ( p − q ) {\displaystyle \mathbf {u} ={\frac {\mathbf {p} -\mathbf {q} }{\sqrt {Q(\mathbf {p} -\mathbf {q} )}}}} . Note that Q ( u ) = 1 {\displaystyle Q(\mathbf {u} )=1} , and therefore u ∉ H n {\displaystyle u\notin \mathbb {H} ^{n}} . Then x ↦ x − 2 B ( x , u ) u {\displaystyle \mathbf {x} \mapsto \mathbf {x} -2B(\mathbf {x} ,\mathbf {u} )\mathbf {u} } is a reflection that exchanges p {\displaystyle \mathbf {p} } and q {\displaystyle \mathbf {q} } . This is equivalent to the following matrix: R = I − 2 u u T ( − 1 0 0 I ) {\displaystyle R=I-2\mathbf {u} \mathbf {u} ^{\operatorname {T} }{\begin{pmatrix}-1&0\\0&I\\\end{pmatrix}}} (note the use of block matrix notation). Then { I , R } {\displaystyle \{I,R\}} is a group of isometries. All such subgroups are conjugate. ==== Rotations and reflections ==== S = { ( 1 0 0 A ) : A ∈ O ( n ) } {\displaystyle S=\left\{{\begin{pmatrix}1&0\\0&A\\\end{pmatrix}}:A\in O(n)\right\}} is the group of rotations and reflections that preserve ( 1 , 0 , … , 0 ) {\displaystyle (1,0,\dots ,0)} . The function A ↦ ( 1 0 0 A ) {\displaystyle A\mapsto {\begin{pmatrix}1&0\\0&A\\\end{pmatrix}}} is an isomorphism from O(n) to this group. For any point p {\displaystyle p} , if X {\displaystyle X} is an isometry that maps ( 1 , 0 , … , 0 ) {\displaystyle (1,0,\dots ,0)} to p {\displaystyle p} , then X S X − 1 {\displaystyle XSX^{-1}} is the group of rotations and reflections that preserve p {\displaystyle p} . 
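A numerical check of the formulas in this section (a Python sketch with NumPy; all sample values are arbitrary): the hyperbolic translation matrix preserves the bilinear form B and moves the point (1, 0, 0) by the stated distance, and the reflection formula exchanges two chosen points of S+:

```python
import numpy as np

J = np.diag([-1.0, 1.0, 1.0])             # matrix of the Minkowski form B (n = 2)

def translation(a):
    # Hyperbolic translation by distance a along the x-axis.
    return np.array([[np.cosh(a), np.sinh(a), 0.0],
                     [np.sinh(a), np.cosh(a), 0.0],
                     [0.0,        0.0,        1.0]])

T = translation(0.8)
print(np.allclose(T.T @ J @ T, J))         # True: T preserves B, so it is an isometry

o = np.array([1.0, 0.0, 0.0])              # the "origin" (1, 0, ..., 0) of S+
p = T @ o
print(np.arccosh(-(p @ J @ o)))            # 0.8, the translation distance

# Reflection exchanging two points p, q of S+, using the formula above.
def point(a, ang):
    return np.array([np.cosh(a),
                     np.sinh(a) * np.cos(ang),
                     np.sinh(a) * np.sin(ang)])

p, q = point(0.5, 0.0), point(1.2, 2.0)
u = (p - q) / np.sqrt((p - q) @ J @ (p - q))     # normalized so that Q(u) = 1
reflect = lambda x: x - 2 * (x @ J @ u) * u
print(np.allclose(reflect(p), q), np.allclose(reflect(q), p))  # True True
```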
==== Translations ==== For any real number t {\displaystyle t} , there is a translation L t = ( cosh ⁡ t sinh ⁡ t 0 sinh ⁡ t cosh ⁡ t 0 0 0 I ) = e ( 0 t 0 t 0 0 0 0 0 ) {\displaystyle L_{t}={\begin{pmatrix}\cosh t&\sinh t&0\\\sinh t&\cosh t&0\\0&0&I\\\end{pmatrix}}=e^{\begin{pmatrix}0&t&0\\t&0&0\\0&0&0\\\end{pmatrix}}} (The expression on the RHS is a matrix exponential.) This is a translation of distance t {\displaystyle t} in the positive x direction if t ≥ 0 {\displaystyle t\geq 0} or of distance − t {\displaystyle -t} in the negative x direction if t ≤ 0 {\displaystyle t\leq 0} . Any translation of distance t {\displaystyle t} is conjugate to L t {\displaystyle L_{t}} and L − t {\displaystyle L_{-t}} . The set { L t : t ∈ R } {\displaystyle \left\{L_{t}:t\in \mathbb {R} \right\}} is the group of translations through the x-axis, and a group of isometries is conjugate to it if and only if it is a group of isometries through a line. For example, let's say we want to find the group of translations through a line p q ¯ {\displaystyle {\overline {\mathbf {p} \mathbf {q} }}} . Let X {\displaystyle X} be an isometry that maps ( 1 , 0 , … , 0 ) {\displaystyle (1,0,\dots ,0)} to p {\displaystyle p} and let Y {\displaystyle Y} be an isometry that fixes p {\displaystyle p} and maps X L d ( p , q ) [ 1 , 0 , … , 0 ] T {\displaystyle XL_{d(\mathbf {p} ,\mathbf {q} )}[1,0,\dots ,0]^{\operatorname {T} }} to q {\displaystyle q} . An example of such a Y {\displaystyle Y} is a reflection exchanging X L d ( p , q ) [ 1 , 0 , … , 0 ] T {\displaystyle XL_{d(\mathbf {p} ,\mathbf {q} )}[1,0,\dots ,0]^{\operatorname {T} }} and q {\displaystyle q} (assuming they are different), because they are both the same distance from p {\displaystyle p} . Then Y X {\displaystyle YX} is an isometry mapping ( 1 , 0 , … , 0 ) {\displaystyle (1,0,\dots ,0)} to p {\displaystyle p} and a point on the positive x-axis to q {\displaystyle q} . ( Y X ) L t ( Y X ) − 1 {\displaystyle (YX)L_{t}(YX)^{-1}} is a translation through the line p q ¯ {\displaystyle {\overline {\mathbf {p} \mathbf {q} }}} of distance | t | {\displaystyle |t|} . If t ≥ 0 {\displaystyle t\geq 0} , it is in the p q → {\displaystyle {\overrightarrow {\mathbf {p} \mathbf {q} }}} direction. If t ≤ 0 {\displaystyle t\leq 0} , it is in the q p → {\displaystyle {\overrightarrow {\mathbf {q} \mathbf {p} }}} direction. { ( Y X ) L t ( Y X ) − 1 : t ∈ R } {\displaystyle \left\{(YX)L_{t}(YX)^{-1}:t\in \mathbb {R} \right\}} is the group of translations through p q ¯ {\displaystyle {\overline {\mathbf {p} \mathbf {q} }}} . ==== Symmetries of horospheres ==== Let H be some horosphere such that points of the form ( w , x , 0 , … , 0 ) {\displaystyle (w,x,0,\dots ,0)} are inside of it for arbitrarily large x. For any vector b in R n − 1 {\displaystyle \mathbb {R} ^{n-1}} ( 1 + 1 2 ‖ b ‖ 2 − 1 2 ‖ b ‖ 2 b T 1 2 ‖ b ‖ 2 1 − 1 2 ‖ b ‖ 2 b T b − b I ) = e ( 0 0 b T 0 0 b T b − b 0 ) {\displaystyle {\begin{pmatrix}1+{\tfrac {1}{2}}\|\mathbf {b} \|^{2}&-{\tfrac {1}{2}}\|\mathbf {b} \|^{2}&\mathbf {b} ^{\operatorname {T} }\\{\tfrac {1}{2}}\|\mathbf {b} \|^{2}&1-{\tfrac {1}{2}}\|\mathbf {b} \|^{2}&\mathbf {b} ^{\operatorname {T} }\\\mathbf {b} &-\mathbf {b} &I\end{pmatrix}}=e^{\begin{pmatrix}0&0&\mathbf {b} ^{\operatorname {T} }\\0&0&\mathbf {b} ^{\operatorname {T} }\\\mathbf {b} &-\mathbf {b} &0\end{pmatrix}}} is a hororotation that maps H to itself. The set of such hororotations is the group of hororotations preserving H. All hororotations are conjugate to each other. 
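The identification of the hororotation with a matrix exponential, and the fact that it is an isometry, can be verified numerically; the sketch below assumes SciPy for expm, works in n = 3, and uses an arbitrary sample vector b:

```python
import numpy as np
from scipy.linalg import expm

J = np.diag([-1.0, 1.0, 1.0, 1.0])        # Minkowski form for n = 3

b = np.array([0.4, -0.7])                  # arbitrary vector in R^(n-1)
n2 = b @ b                                 # ||b||^2

# Closed form of the hororotation given above.
H = np.block([
    [np.array([[1 + n2/2, -n2/2], [n2/2, 1 - n2/2]]), np.vstack([b, b])],
    [np.column_stack([b, -b]),                        np.eye(2)],
])

# Its generator is nilpotent, so the exponential series terminates quickly.
N = np.block([
    [np.zeros((2, 2)),          np.vstack([b, b])],
    [np.column_stack([b, -b]),  np.zeros((2, 2))],
])

print(np.allclose(H, expm(N)))             # True: closed form = matrix exponential
print(np.allclose(H.T @ J @ H, J))         # True: it preserves B, hence an isometry
```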
For any A {\displaystyle A} in O(n−1) ( 1 0 0 0 1 0 0 0 A ) {\displaystyle {\begin{pmatrix}1&0&0\\0&1&0\\0&0&A\\\end{pmatrix}}} is a rotation or reflection that preserves H and the x-axis. These hororotations, rotations, and reflections generate the group of symmetries of H. The symmetry group of any horosphere is conjugate to it. They are isomorphic to the Euclidean group E(n−1). == History == In several papers between 1878 and 1885, Wilhelm Killing used the representation he attributed to Karl Weierstrass for Lobachevskian geometry. In particular, he discussed quadratic forms such as k 2 t 2 + u 2 + v 2 + w 2 = k 2 {\displaystyle k^{2}t^{2}+u^{2}+v^{2}+w^{2}=k^{2}} or in arbitrary dimensions k 2 x 0 2 + x 1 2 + ⋯ + x n 2 = k 2 {\displaystyle k^{2}x_{0}^{2}+x_{1}^{2}+\dots +x_{n}^{2}=k^{2}} , where k {\displaystyle k} is the reciprocal measure of curvature, k 2 = ∞ {\displaystyle k^{2}=\infty } denotes Euclidean geometry, k 2 > 0 {\displaystyle k^{2}>0} elliptic geometry, and k 2 < 0 {\displaystyle k^{2}<0} hyperbolic geometry. According to Jeremy Gray (1986), Poincaré used the hyperboloid model in his personal notes in 1880. Poincaré published his results in 1881, in which he discussed the invariance of the quadratic form ξ 2 + η 2 − ζ 2 = − 1 {\displaystyle \xi ^{2}+\eta ^{2}-\zeta ^{2}=-1} . Gray shows where the hyperboloid model is implicit in later writing by Poincaré. Also Homersham Cox in 1882 used Weierstrass coordinates (without using this name) satisfying the relation z 2 − x 2 − y 2 = 1 {\displaystyle z^{2}-x^{2}-y^{2}=1} as well as w 2 − x 2 − y 2 − z 2 = 1 {\displaystyle w^{2}-x^{2}-y^{2}-z^{2}=1} . Further exposure of the model was given by Alfred Clebsch and Ferdinand Lindemann in 1891 discussing the relation x 1 2 + x 2 2 − 4 k 2 x 3 2 = − 4 k 2 {\displaystyle x_{1}^{2}+x_{2}^{2}-4k^{2}x_{3}^{2}=-4k^{2}} and x 1 2 + x 2 2 + x 3 2 − 4 k 2 x 4 2 = − 4 k 2 {\displaystyle x_{1}^{2}+x_{2}^{2}+x_{3}^{2}-4k^{2}x_{4}^{2}=-4k^{2}} . Weierstrass coordinates were also used by Gérard (1892), Felix Hausdorff (1899), Frederick S. Woods (1903), and Heinrich Liebmann (1905). The hyperboloid was explored as a metric space by Alexander Macfarlane in his Papers in Space Analysis (1894). He noted that points on the hyperboloid could be written as cosh ⁡ A + α sinh ⁡ A , {\displaystyle \cosh A+\alpha \sinh A,} where α is a basis vector orthogonal to the hyperboloid axis. For example, he obtained the hyperbolic law of cosines through use of his Algebra of Physics. H. Jansen made the hyperboloid model the explicit focus of his 1909 paper "Representation of hyperbolic geometry on a two sheeted hyperboloid". In 1993 W.F. Reynolds recounted some of the early history of the model in his article in the American Mathematical Monthly. Being a commonplace model by the twentieth century, it was identified with the Geschwindigkeitsvectoren (velocity vectors) by Hermann Minkowski in his 1907 Göttingen lecture 'The Relativity Principle'. Scott Walter, in his 1999 paper "The Non-Euclidean Style of Minkowskian Relativity", recalls Minkowski's awareness, but traces the lineage of the model to Hermann Helmholtz rather than Weierstrass and Killing. In the early years of relativity the hyperboloid model was used by Vladimir Varićak to explain the physics of velocity. In his speech to the German mathematical union in 1912 he referred to Weierstrass coordinates. == See also == Poincaré disk model Hyperbolic quaternions == Notes and references == Alekseevskij, D.V.; Vinberg, E.B.; Solodovnikov, A.S. 
(1993), Geometry of Spaces of Constant Curvature, Encyclopaedia of Mathematical Sciences, Berlin, New York: Springer-Verlag, ISBN 978-3-540-52000-9 Anderson, James (2005), Hyperbolic Geometry, Springer Undergraduate Mathematics Series (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-1-85233-934-0 Ratcliffe, John G. (1994), Foundations of hyperbolic manifolds, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94348-0, Chapter 3 Miles Reid & Balázs Szendröi (2005) Geometry and Topology, Figure 3.10, p 45, Cambridge University Press, ISBN 0-521-61325-6, MR2194744. Ryan, Patrick J. (1986), Euclidean and non-Euclidean geometry: An analytical approach, Cambridge, London, New York, New Rochelle, Melbourne, Sydney: Cambridge University Press, ISBN 978-0-521-25654-4 Parkkonen, Jouni. "HYPERBOLIC GEOMETRY" (PDF). Retrieved September 5, 2020.
Wikipedia/Hyperboloid_model
In differential geometry, the Willmore conjecture is a lower bound on the Willmore energy of a torus. It is named after the English mathematician Tom Willmore, who conjectured it in 1965. A proof by Fernando Codá Marques and André Neves was announced in 2012 and published in 2014. == Willmore energy == Let v : M → R3 be a smooth immersion of a compact, orientable surface. Giving M the Riemannian metric induced by v, let H : M → R be the mean curvature (the arithmetic mean of the principal curvatures κ1 and κ2 at each point). In this notation, the Willmore energy W(M) of M is given by W ( M ) = ∫ M H 2 d A . {\displaystyle W(M)=\int _{M}H^{2}\,dA.} It is not hard to prove that the Willmore energy satisfies W(M) ≥ 4π, with equality if and only if M is an embedded round sphere. == Statement == Calculation of W(M) for a few examples suggests that there should be a better bound than W(M) ≥ 4π for surfaces with genus g(M) > 0. In particular, calculation of W(M) for tori with various symmetries led Willmore to propose in 1965 the following conjecture, which now bears his name: for every smooth immersed torus M in R3, W(M) ≥ 2π². In 1982, Peter Wai-Kwong Li and Shing-Tung Yau proved the conjecture in the non-embedded case, showing that if f : Σ → S 3 {\displaystyle f:\Sigma \to S^{3}} is an immersion of a compact surface which is not an embedding, then its Willmore energy is at least 8π. In 2012, Fernando Codá Marques and André Neves proved the conjecture in the embedded case, using the Almgren–Pitts min-max theory of minimal surfaces. Martin Schmidt claimed a proof in 2002, but it was not accepted for publication in any peer-reviewed mathematical journal; although the paper did not contain a proof of the Willmore conjecture, it did establish some other important conjectures. Prior to the proof of Marques and Neves, the Willmore conjecture had already been proved for many special cases, such as tube tori (by Willmore himself), and for tori of revolution (by Langer & Singer). == References ==
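As a numerical illustration of the statement, one can integrate W over tori of revolution with center-circle radius R and tube radius r; the Python sketch below (NumPy assumed) uses the standard closed-form principal curvatures of such a torus and shows that the value 2π² is attained at the ratio R/r = √2:

```python
import numpy as np

def willmore_torus(R, r, n=2000):
    # Torus of revolution: center-circle radius R, tube radius r (R > r).
    # Principal curvatures are 1/r and cos(u)/(R + r*cos(u)), so
    # H = (R + 2r cos u) / (2r (R + r cos u)), and dA = r (R + r cos u) du dv.
    u = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    H = (R + 2.0 * r * np.cos(u)) / (2.0 * r * (R + r * np.cos(u)))
    dA = r * (R + r * np.cos(u))          # the v-integral just contributes 2*pi
    return 2.0 * np.pi * np.sum(H**2 * dA) * (2.0 * np.pi / n)

print(willmore_torus(np.sqrt(2.0), 1.0))  # 19.739... = 2*pi^2, the conjectured minimum
print(willmore_torus(3.0, 1.0))           # larger, as the conjecture requires
print(2.0 * np.pi**2)                     # the bound itself
```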
Wikipedia/Willmore_conjecture
In mathematics, specifically in ring theory, a torsion element is an element of a module that yields zero when multiplied by some non-zero-divisor of the ring. The torsion submodule of a module is the submodule formed by the torsion elements (in cases when this is indeed a submodule, such as when the ring is commutative). A torsion module is a module consisting entirely of torsion elements. A module is torsion-free if its only torsion element is the zero element. This terminology is more commonly used for modules over a domain, that is, when the regular elements of the ring are all its nonzero elements. This terminology applies to abelian groups (with "module" and "submodule" replaced by "group" and "subgroup"). This is just a special case of the more general situation, because abelian groups are modules over the ring of integers. (In fact, this is the origin of the terminology, which was introduced for abelian groups before being generalized to modules.) In the case of groups that are noncommutative, a torsion element is an element of finite order. Contrary to the commutative case, the torsion elements do not form a subgroup, in general. == Definition == An element m of a module M over a ring R is called a torsion element of the module if there exists a regular element r of the ring (an element that is neither a left nor a right zero divisor) that annihilates m, i.e., r m = 0. In an integral domain (a commutative ring without zero divisors), every non-zero element is regular, so a torsion element of a module over an integral domain is one annihilated by a non-zero element of the integral domain. Some authors use this as the definition of a torsion element, but this definition does not work well over more general rings. A module M over a ring R is called a torsion module if all its elements are torsion elements, and torsion-free if zero is the only torsion element. If the ring R is commutative then the set of all torsion elements forms a submodule of M, called the torsion submodule of M, sometimes denoted T(M). If R is not commutative, T(M) may or may not be a submodule. It is shown in (Lam 2007) that R is a right Ore ring if and only if T(M) is a submodule of M for all right R-modules. Since right Noetherian domains are Ore, this covers the case when R is a right Noetherian domain (which might not be commutative). More generally, let M be a module over a ring R and S be a multiplicatively closed subset of R. An element m of M is called an S-torsion element if there exists an element s in S such that s annihilates m, i.e., s m = 0. In particular, one can take for S the set of regular elements of the ring R and recover the definition above. An element g of a group G is called a torsion element of the group if it has finite order, i.e., if there is a positive integer m such that gm = e, where e denotes the identity element of the group, and gm denotes the product of m copies of g. A group is called a torsion (or periodic) group if all its elements are torsion elements, and a torsion-free group if its only torsion element is the identity element. Any abelian group may be viewed as a module over the ring Z of integers, and in this case the two notions of torsion coincide. == Examples == Let M be a free module over any ring R. Then it follows immediately from the definitions that M is torsion-free (if the ring R is not a domain then torsion is considered with respect to the set S of non-zero-divisors of R). 
In particular, any free abelian group is torsion-free and any vector space over a field K is torsion-free when viewed as a module over K. By contrast with the first example, any finite group (abelian or not) is periodic and finitely generated. Burnside's problem, conversely, asks whether a finitely generated periodic group must be finite. The answer is "no" in general, even if the period is fixed. The torsion elements of the multiplicative group of a field are its roots of unity. In the modular group Γ, obtained from the group SL(2, Z) of 2×2 integer matrices with unit determinant by factoring out its center, any nontrivial torsion element either has order two and is conjugate to the element S or has order three and is conjugate to the element ST. In this case, torsion elements do not form a subgroup, for example, S · ST = T, which has infinite order. The abelian group Q/Z, consisting of the rational numbers modulo 1, is periodic, i.e. every element has finite order. Analogously, the module K(t)/K[t] over the ring R = K[t] of polynomials in one variable is pure torsion. Both these examples can be generalized as follows: if R is an integral domain and Q is its field of fractions, then Q/R is a torsion R-module. The torsion subgroup of (R/Z, +) is (Q/Z, +) while the groups (R, +) and (Z, +) are torsion-free. The quotient of a torsion-free abelian group by a subgroup is torsion-free exactly when the subgroup is a pure subgroup. Consider a linear operator L acting on a finite-dimensional vector space V over the field K. If we view V as a K[L]-module in the natural way, then (as a result of many things, either simply by finite-dimensionality or as a consequence of the Cayley–Hamilton theorem), V is a torsion K[L]-module. == Case of a principal ideal domain == Suppose that R is a (commutative) principal ideal domain and M is a finitely generated R-module. Then the structure theorem for finitely generated modules over a principal ideal domain gives a detailed description of the module M up to isomorphism. In particular, it states that M ≃ F ⊕ T ( M ) , {\displaystyle M\simeq F\oplus \mathrm {T} (M),} where F is a free R-module of finite rank (depending only on M) and T(M) is the torsion submodule of M. As a corollary, any finitely generated torsion-free module over R is free. This corollary does not hold for more general commutative domains, even for R = K[x,y], the ring of polynomials in two variables. For non-finitely generated modules, the above direct decomposition is not true. The torsion subgroup of an abelian group may not be a direct summand of it. == Torsion and localization == Assume that R is a commutative domain and M is an R-module. Let Q be the field of fractions of the ring R. Then one can consider the Q-module M Q = M ⊗ R Q , {\displaystyle M_{Q}=M\otimes _{R}Q,} obtained from M by extension of scalars. Since Q is a field, a module over Q is a vector space, possibly infinite-dimensional. There is a canonical homomorphism of abelian groups from M to MQ, and the kernel of this homomorphism is precisely the torsion submodule T(M). More generally, if S is a multiplicatively closed subset of the ring R, then we may consider localization of the R-module M, M S = M ⊗ R R S , {\displaystyle M_{S}=M\otimes _{R}R_{S},} which is a module over the localization RS. There is a canonical map from M to MS, whose kernel is precisely the S-torsion submodule of M. Thus the torsion submodule of M can be interpreted as the set of the elements that "vanish in the localization". 
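The modular-group example above is easy to verify by direct computation; a small Python sketch with NumPy (the iteration cap of 50 is an arbitrary cutoff standing in for "infinite order", and equality in Γ means equality of matrices up to sign):

```python
import numpy as np

S = np.array([[0, -1], [1, 0]])   # S^2 = -I, so S has order 2 in Gamma
T = np.array([[1, 1], [0, 1]])    # the translation z -> z + 1

def order_in_modular_group(M, limit=50):
    # Order of M in Gamma = SL(2, Z) / {+-I}; None means order > limit.
    P = np.eye(2, dtype=int)
    for k in range(1, limit + 1):
        P = P @ M
        if (P == np.eye(2)).all() or (P == -np.eye(2)).all():
            return k
    return None

print(order_in_modular_group(S))          # 2
print(order_in_modular_group(S @ T))      # 3
print(order_in_modular_group(S @ S @ T))  # None: S.(ST) = T has infinite order,
                                          # so the torsion elements form no subgroup
```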
The same interpretation continues to hold in the non-commutative setting for rings satisfying the Ore condition, or more generally for any right denominator set S and right R-module M. == Torsion in homological algebra == The concept of torsion plays an important role in homological algebra. If M and N are two modules over a commutative domain R (for example, two abelian groups, when R = Z), Tor functors yield a family of R-modules Tori (M,N). The S-torsion of an R-module M is canonically isomorphic to TorR1(M, RS/R) by the exact sequence of TorR*: The short exact sequence 0 → R → R S → R S / R → 0 {\displaystyle 0\to R\to R_{S}\to R_{S}/R\to 0} of R-modules yields an exact sequence 0 → Tor 1 R ⁡ ( M , R S / R ) → M → M S {\displaystyle 0\to \operatorname {Tor} _{1}^{R}(M,R_{S}/R)\to M\to M_{S}} , and hence Tor 1 R ⁡ ( M , R S / R ) {\displaystyle \operatorname {Tor} _{1}^{R}(M,R_{S}/R)} is the kernel of the localisation map of M. The symbol Tor denoting the functors reflects this relation with the algebraic torsion. This same result holds for non-commutative rings as well as long as the set S is a right denominator set. == Abelian varieties == The torsion elements of an abelian variety are torsion points or, in an older terminology, division points. On elliptic curves they may be computed in terms of division polynomials. == See also == Analytic torsion Arithmetic dynamics Flat module Annihilator (ring theory) Localization of a module Rank of an abelian group Ray–Singer torsion Torsion-free abelian group Universal coefficient theorem == References == == Sources == Ernst Kunz, "Introduction to Commutative algebra and algebraic geometry", Birkhauser 1985, ISBN 0-8176-3065-1 Irving Kaplansky, "Infinite abelian groups", University of Michigan, 1954. Michiel Hazewinkel (2001) [1994], "Torsion submodule", Encyclopedia of Mathematics, EMS Press Lam, Tsit Yuen (2007), Exercises in modules and rings, Problem Books in Mathematics, New York: Springer, pp. xviii+412, doi:10.1007/978-0-387-48899-8, ISBN 978-0-387-98850-4, MR 2278849 Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, p. 446, ISBN 978-0-387-72828-5.
Wikipedia/Torsion_(algebra)
In mathematics, the differential geometry of surfaces deals with the differential geometry of smooth surfaces with various additional structures, most often, a Riemannian metric. Surfaces have been extensively studied from various perspectives: extrinsically, relating to their embedding in Euclidean space and intrinsically, reflecting their properties determined solely by the distance within the surface as measured along curves on the surface. One of the fundamental concepts investigated is the Gaussian curvature, first studied in depth by Carl Friedrich Gauss, who showed that curvature was an intrinsic property of a surface, independent of its isometric embedding in Euclidean space. Surfaces naturally arise as graphs of functions of a pair of variables, and sometimes appear in parametric form or as loci associated to space curves. An important role in their study has been played by Lie groups (in the spirit of the Erlangen program), namely the symmetry groups of the Euclidean plane, the sphere and the hyperbolic plane. These Lie groups can be used to describe surfaces of constant Gaussian curvature; they also provide an essential ingredient in the modern approach to intrinsic differential geometry through connections. On the other hand, extrinsic properties relying on an embedding of a surface in Euclidean space have also been extensively studied. This is well illustrated by the non-linear Euler–Lagrange equations in the calculus of variations: although Euler developed the one variable equations to understand geodesics, defined independently of an embedding, one of Lagrange's main applications of the two variable equations was to minimal surfaces, a concept that can only be defined in terms of an embedding. == History == The volumes of certain quadric surfaces of revolution were calculated by Archimedes. The development of calculus in the seventeenth century provided a more systematic way of computing them. Curvature of general surfaces was first studied by Euler. In 1760 he proved a formula for the curvature of a plane section of a surface and in 1771 he considered surfaces represented in a parametric form. Monge laid down the foundations of their theory in his classical memoir L'application de l'analyse à la géometrie which appeared in 1795. The defining contribution to the theory of surfaces was made by Gauss in two remarkable papers written in 1825 and 1827. This marked a new departure from tradition because for the first time Gauss considered the intrinsic geometry of a surface, the properties which are determined only by the geodesic distances between points on the surface independently of the particular way in which the surface is located in the ambient Euclidean space. The crowning result, the Theorema Egregium of Gauss, established that the Gaussian curvature is an intrinsic invariant, i.e. invariant under local isometries. This point of view was extended to higher-dimensional spaces by Riemann and led to what is known today as Riemannian geometry. The nineteenth century was the golden age for the theory of surfaces, from both the topological and the differential-geometric point of view, with most leading geometers devoting themselves to their study. Darboux collected many results in his four-volume treatise Théorie des surfaces (1887–1896). 
== Overview == It is intuitively quite familiar to say that the leaf of a plant, the surface of a glass, or the shape of a face, are curved in certain ways, and that all of these shapes, even after ignoring any distinguishing markings, have certain geometric features which distinguish one from another. The differential geometry of surfaces is concerned with a mathematical understanding of such phenomena. The study of this field, which was initiated in its modern form in the 1700s, has led to the development of higher-dimensional and abstract geometry, such as Riemannian geometry and general relativity. The essential mathematical object is that of a regular surface. Although conventions vary in their precise definition, these form a general class of subsets of three-dimensional Euclidean space (ℝ3) which capture part of the familiar notion of "surface." By analyzing the class of curves which lie on such a surface, and the degree to which the surfaces force them to curve in ℝ3, one can associate to each point of the surface two numbers, called the principal curvatures. Their average is called the mean curvature of the surface, and their product is called the Gaussian curvature. There are many classic examples of regular surfaces, including: familiar examples such as planes, cylinders, and spheres minimal surfaces, which are defined by the property that their mean curvature is zero at every point. The best-known examples are catenoids and helicoids, although many more have been discovered. Minimal surfaces can also be defined by properties to do with surface area, with the consequence that they provide a mathematical model for the shape of soap films when stretched across a wire frame ruled surfaces, which are surfaces that have at least one straight line running through every point; examples include the cylinder and the hyperboloid of one sheet. A surprising result of Carl Friedrich Gauss, known as the Theorema Egregium, showed that the Gaussian curvature of a surface, which by its definition has to do with how curves on the surface change directions in three dimensional space, can actually be measured by the lengths of curves lying on the surfaces together with the angles made when two curves on the surface intersect. Terminologically, this says that the Gaussian curvature can be calculated from the first fundamental form (also called metric tensor) of the surface. The second fundamental form, by contrast, is an object which encodes how lengths and angles of curves on the surface are distorted when the curves are pushed off of the surface. Despite measuring different aspects of length and angle, the first and second fundamental forms are not independent from one another, and they satisfy certain constraints called the Gauss–Codazzi equations. A major theorem, often called the fundamental theorem of the differential geometry of surfaces, asserts that whenever two objects satisfy the Gauss-Codazzi constraints, they will arise as the first and second fundamental forms of a regular surface. Using the first fundamental form, it is possible to define new objects on a regular surface. Geodesics are curves on the surface which satisfy a certain second-order ordinary differential equation which is specified by the first fundamental form. They are very directly connected to the study of lengths of curves; a geodesic of sufficiently short length will always be the curve of shortest length on the surface which connects its two endpoints. 
Thus, geodesics are fundamental to the optimization problem of determining the shortest path between two given points on a regular surface. One can also define parallel transport along any given curve, which gives a prescription for how to deform a tangent vector to the surface at one point of the curve to tangent vectors at all other points of the curve. The prescription is determined by a first-order ordinary differential equation which is specified by the first fundamental form. The above concepts are essentially all to do with multivariable calculus. The Gauss–Bonnet theorem is a more global result, which relates the Gaussian curvature of a surface together with its topological type. It asserts that the average value of the Gaussian curvature is completely determined by the Euler characteristic of the surface together with its surface area. Any regular surface is an example both of a Riemannian manifold and of a Riemann surface. Essentially all of the theory of regular surfaces as discussed here has a generalization in the theory of Riemannian manifolds and their submanifolds. == Regular surfaces in Euclidean space == === Definition === It is intuitively clear that a sphere is smooth, while a cone or a pyramid, due to their vertex or edges, are not. The notion of a "regular surface" is a formalization of the notion of a smooth surface. The definition utilizes the local representation of a surface via maps between Euclidean spaces. There is a standard notion of smoothness for such maps; a map between two open subsets of Euclidean space is smooth if its partial derivatives of every order exist at every point of the domain. A regular surface in Euclidean space ℝ3 is a subset S of ℝ3 such that every point of S admits a neighborhood of one of the following three equivalent kinds: the image of a local parametrization (a homeomorphism from an open subset of ℝ2 onto an open subset of S which is smooth as a map into ℝ3 and has everywhere linearly independent partial derivatives), a Monge patch (the graph of a smooth function over one of the three coordinate planes), or the zero set of a local defining function (a smooth real-valued function with nowhere-vanishing gradient). Of the three, the Monge patch is perhaps the most visually intuitive, as it essentially says that a regular surface is a subset of ℝ3 which is locally the graph of a smooth function (whether over a region in the yz plane, the xz plane, or the xy plane). The homeomorphisms appearing in the first definition are known as local parametrizations or local coordinate systems or local charts on S. The equivalence of the first two definitions asserts that, around any point on a regular surface, there always exist local parametrizations of the form (u, v) ↦ (h(u, v), u, v), (u, v) ↦ (u, h(u, v), v), or (u, v) ↦ (u, v, h(u, v)), known as Monge patches. Functions F as in the third definition are called local defining functions. The equivalence of all three definitions follows from the implicit function theorem. Given any two local parametrizations f : V → U and f ′ : V ′→ U ′ of a regular surface, the composition f −1 ∘ f ′ is necessarily smooth as a map between open subsets of ℝ2. This shows that any regular surface naturally has the structure of a smooth manifold, with a smooth atlas being given by the inverses of local parametrizations. In the classical theory of differential geometry, surfaces are usually studied only in the regular case. It is, however, also common to study non-regular surfaces, in which the two partial derivatives ∂u f and ∂v f of a local parametrization may fail to be linearly independent. In this case, S may have singularities such as cuspidal edges. Such surfaces are typically studied in singularity theory. 
Other weakened forms of regular surfaces occur in computer-aided design, where a surface is broken apart into disjoint pieces, with the derivatives of local parametrizations failing to even be continuous along the boundaries. Simple examples. A simple example of a regular surface is given by the 2-sphere {(x, y, z) | x2 + y2 + z2 = 1}; this surface can be covered by six Monge patches (two of each of the three types given above), taking h(u, v) = ± (1 − u2 − v2)1/2. It can also be covered by two local parametrizations, using stereographic projection. The set {(x, y, z) : ((x2 + y2)1/2 − r)2 + z2 = R2} is a torus of revolution with radii r and R. It is a regular surface; local parametrizations can be given of the form f ( s , t ) = ( ( R cos ⁡ s + r ) cos ⁡ t , ( R cos ⁡ s + r ) sin ⁡ t , R sin ⁡ s ) . {\displaystyle f(s,t)={\big (}(R\cos s+r)\cos t,(R\cos s+r)\sin t,R\sin s{\big )}.} The hyperboloid on two sheets {(x, y, z) : z2 = 1 + x2 + y2} is a regular surface; it can be covered by two Monge patches, with h(u, v) = ±(1 + u2 + v2)1/2. The helicoid appears in the theory of minimal surfaces. It is covered by a single local parametrization, f(u, v) = (u sin v, u cos v, v). === Tangent vectors and normal vectors === Let S be a regular surface in ℝ3, and let p be an element of S. Using any of the above definitions, one can single out certain vectors in ℝ3 as being tangent to S at p, and certain vectors in ℝ3 as being orthogonal to S at p. One sees that the tangent space or tangent plane to S at p, which is defined to consist of all tangent vectors to S at p, is a two-dimensional linear subspace of ℝ3; it is often denoted by TpS. The normal space to S at p, which is defined to consist of all normal vectors to S at p, is a one-dimensional linear subspace of ℝ3 which is orthogonal to the tangent space TpS. As such, at each point p of S, there are two normal vectors of unit length (unit normal vectors). The unit normal vectors at p can be given in terms of local parametrizations, Monge patches, or local defining functions, via the formulas ± ∂ f ∂ u × ∂ f ∂ v ‖ ∂ f ∂ u × ∂ f ∂ v ‖ | f − 1 ( p ) , ± ( ∂ h ∂ u , ∂ h ∂ v , − 1 ) 1 + ( ∂ h ∂ u ) 2 + ( ∂ h ∂ v ) 2 | ( p 1 , p 2 ) , or ± ∇ F ( p ) ‖ ∇ F ( p ) ‖ , {\displaystyle \pm \left.{\frac {{\frac {\partial f}{\partial u}}\times {\frac {\partial f}{\partial v}}}{{\big \|}{\frac {\partial f}{\partial u}}\times {\frac {\partial f}{\partial v}}{\big \|}}}\right|_{f^{-1}(p)},\qquad \pm \left.{\frac {{\big (}{\frac {\partial h}{\partial u}},{\frac {\partial h}{\partial v}},-1{\big )}}{\sqrt {1+{\big (}{\frac {\partial h}{\partial u}}{\big )}^{2}+{\big (}{\frac {\partial h}{\partial v}}{\big )}^{2}}}}\right|_{(p_{1},p_{2})},\qquad {\text{or}}\qquad \pm {\frac {\nabla F(p)}{{\big \|}\nabla F(p){\big \|}}},} following the same notations as in the previous definitions. It is also useful to note an "intrinsic" definition of tangent vectors, which is typical of the generalization of regular surface theory to the setting of smooth manifolds. It defines the tangent space as an abstract two-dimensional real vector space, rather than as a linear subspace of ℝ3. 
In this definition, one says that a tangent vector to S at p is an assignment, to each local parametrization f : V → S with p ∈ f(V), of two numbers X1 and X2, such that for any other local parametrization f ′ : V ′ → S with p ∈ f ′(V ′) (and with corresponding numbers (X ′)1 and (X ′)2), one has ( X 1 X 2 ) = A f ′ − 1 ( p ) ( ( X ′ ) 1 ( X ′ ) 2 ) , {\displaystyle {\begin{pmatrix}X^{1}\\X^{2}\end{pmatrix}}=A_{f'^{-1}(p)}{\begin{pmatrix}(X')^{1}\\(X')^{2}\end{pmatrix}},} where Af ′−1(p) is the Jacobian matrix of the mapping f −1 ∘ f ′, evaluated at the point f ′−1(p). The collection of tangent vectors to S at p naturally has the structure of a two-dimensional vector space. A tangent vector in this sense corresponds to a tangent vector in the previous sense by considering the vector X 1 ∂ f ∂ u + X 2 ∂ f ∂ v {\displaystyle X^{1}{\frac {\partial f}{\partial u}}+X^{2}{\frac {\partial f}{\partial v}}} in ℝ3. The Jacobian condition on X1 and X2 ensures, by the chain rule, that this vector does not depend on f. For smooth functions on a surface, vector fields (i.e. tangent vector fields) have an important interpretation as first order operators or derivations. Let S {\displaystyle S} be a regular surface, U {\displaystyle U} an open subset of the plane and f : U → S {\displaystyle f:U\rightarrow S} a coordinate chart. If V = f ( U ) {\displaystyle V=f(U)} , the space C ∞ ( U ) {\displaystyle C^{\infty }(U)} can be identified with C ∞ ( V ) {\displaystyle C^{\infty }(V)} . Similarly f {\displaystyle f} identifies vector fields on U {\displaystyle U} with vector fields on V {\displaystyle V} . Taking standard variables u and v, a vector field has the form X = a ∂ u + b ∂ v {\displaystyle X=a\partial _{u}+b\partial _{v}} , with a and b smooth functions. If X {\displaystyle X} is a vector field and g {\displaystyle g} is a smooth function, then X g {\displaystyle Xg} is also a smooth function. The first order differential operator X {\displaystyle X} is a derivation, i.e. it satisfies the Leibniz rule X ( g h ) = ( X g ) h + g ( X h ) . {\displaystyle X(gh)=(Xg)h+g(Xh).} For vector fields X and Y it is simple to check that the operator [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} is a derivation corresponding to a vector field. It is called the Lie bracket [ X , Y ] {\displaystyle [X,Y]} . It is skew-symmetric [ X , Y ] = − [ Y , X ] {\displaystyle [X,Y]=-[Y,X]} and satisfies the Jacobi identity: [ [ X , Y ] , Z ] + [ [ Y , Z ] , X ] + [ [ Z , X ] , Y ] = 0. {\displaystyle [[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y]=0.} In summary, vector fields on U {\displaystyle U} or V {\displaystyle V} form a Lie algebra under the Lie bracket. === First and second fundamental forms, the shape operator, and the curvature === Let S be a regular surface in ℝ3. Given a local parametrization f : V → S and a unit normal vector field n to f(V), one defines the following objects as real-valued or matrix-valued functions on V: the first fundamental form, with components E = ∂f/∂u ⋅ ∂f/∂u, F = ∂f/∂u ⋅ ∂f/∂v, and G = ∂f/∂v ⋅ ∂f/∂v; the second fundamental form, with components L = ∂2f/∂u2 ⋅ n, M = ∂2f/∂u∂v ⋅ n, and N = ∂2f/∂v2 ⋅ n; the shape operator P = ( E F F G ) − 1 ( L M M N ) ; {\displaystyle P={\begin{pmatrix}E&F\\F&G\end{pmatrix}}^{-1}{\begin{pmatrix}L&M\\M&N\end{pmatrix}};} the Gaussian curvature K; the mean curvature H; and the principal curvatures. The first fundamental form depends only on f, and not on n. These functions transform in a definite way under a change of local parametrization: the functions E ′, F ′, G ′, L ′, etc., arising for a different choice of local parametrization f ′ : V ′ → S, are related to those arising for f through the Jacobian matrix A of f −1 ∘ f ′.
The key relation in establishing these transformation formulas is then ( ∂ f ′ ∂ u ∂ f ′ ∂ v ) = A ( ∂ f ∂ u ∂ f ∂ v ) , {\displaystyle {\begin{pmatrix}{\frac {\partial f'}{\partial u}}\\{\frac {\partial f'}{\partial v}}\end{pmatrix}}=A{\begin{pmatrix}{\frac {\partial f}{\partial u}}\\{\frac {\partial f}{\partial v}}\end{pmatrix}},} as follows by the chain rule. By a direct calculation with the matrix defining the shape operator, it can be checked that the Gaussian curvature is the determinant of the shape operator, the mean curvature is half of the trace of the shape operator, and the principal curvatures are the eigenvalues of the shape operator; moreover the Gaussian curvature is the product of the principal curvatures and the mean curvature is their arithmetic mean. These observations can also be formulated as definitions of these objects. These observations also make clear that the transformation rules for the Gaussian curvature, the mean curvature, and the principal curvatures follow immediately from that for the shape operator, as similar matrices have identical determinant, trace, and eigenvalues. It is fundamental to note that E, G, and EG − F2 are all necessarily positive. This ensures that the matrix inverse in the definition of the shape operator is well-defined, and that the principal curvatures are real numbers. Note also that a negation of the choice of unit normal vector field will negate the second fundamental form, the shape operator, the mean curvature, and the principal curvatures, but will leave the Gaussian curvature unchanged. In summary, this has shown that, given a regular surface S, the Gaussian curvature of S can be regarded as a real-valued function on S; relative to a choice of unit normal vector field on all of S, the two principal curvatures and the mean curvature are also real-valued functions on S. Geometrically, the first and second fundamental forms can be viewed as giving information on how f(u, v) moves around in ℝ3 as (u, v) moves around in V. In particular, the first fundamental form encodes how quickly f moves, while the second fundamental form encodes the extent to which its motion is in the direction of the normal vector n. In other words, the second fundamental form at a point p encodes the length of the orthogonal projection from S to the tangent plane to S at p; in particular it gives the quadratic function which best approximates this length. This thinking can be made precise by the formulas lim ( h , k ) → ( 0 , 0 ) | f ( u + h , v + k ) − f ( u , v ) | 2 − ( E h 2 + 2 F h k + G k 2 ) h 2 + k 2 = 0 lim ( h , k ) → ( 0 , 0 ) ( f ( u + h , v + k ) − f ( u , v ) ) ⋅ n − 1 2 ( L h 2 + 2 M h k + N k 2 ) h 2 + k 2 = 0 , {\displaystyle {\begin{aligned}\lim _{(h,k)\to (0,0)}{\frac {{\big |}f(u+h,v+k)-f(u,v){\big |}^{2}-{\big (}Eh^{2}+2Fhk+Gk^{2}{\big )}}{h^{2}+k^{2}}}&=0\\\lim _{(h,k)\to (0,0)}{\frac {{\big (}f(u+h,v+k)-f(u,v){\big )}\cdot n-{\frac {1}{2}}{\big (}Lh^{2}+2Mhk+Nk^{2}{\big )}}{h^{2}+k^{2}}}&=0,\end{aligned}}} as follows directly from the definitions of the fundamental forms and Taylor's theorem in two dimensions. The principal curvatures can be viewed in the following way. At a given point p of S, consider the collection of all planes which contain the line orthogonal to S at p. Each such plane has a curve of intersection with S, which can be regarded as a plane curve inside of the plane itself. The two principal curvatures at p are the maximum and minimum possible values of the curvature of this plane curve at p, as the plane under consideration rotates around the normal line.
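To make the definitions concrete, here is a small sympy sketch (an illustration using the torus parametrization from the examples above; variable names and the choice of normal are ad hoc assumptions of the example) that computes E, F, G, L, M, N, the shape operator, and from it the Gaussian and mean curvatures.

```python
import sympy as sp

s, t, r, R = sp.symbols('s t r R', positive=True)

# Torus of revolution, as parametrized in the examples above
f = sp.Matrix([(R*sp.cos(s) + r)*sp.cos(t),
               (R*sp.cos(s) + r)*sp.sin(t),
               R*sp.sin(s)])
fu, fv = f.diff(s), f.diff(t)

# First fundamental form
E, F, G = [sp.trigsimp(a) for a in (fu.dot(fu), fu.dot(fv), fv.dot(fv))]

# A choice of unit normal, and the second fundamental form
nn = fu.cross(fv)
n = nn / sp.sqrt(sp.trigsimp(nn.dot(nn)))
L = f.diff(s).diff(s).dot(n)
M = f.diff(s).diff(t).dot(n)
N = f.diff(t).diff(t).dot(n)

I2 = sp.Matrix([[E, F], [F, G]])
II = sp.Matrix([[L, M], [M, N]])
P = I2.inv() * II                      # shape operator in this basis

K = sp.simplify(P.det())               # Gaussian curvature (normal-independent)
H = sp.simplify(P.trace() / 2)         # mean curvature (sign depends on n)
print(sp.trigsimp(K))                  # expected: cos(s)/(R*(R*cos(s) + r))
```

The printed curvature is positive on the outer part of the torus and negative on the inner part, as the geometric picture of principal curvatures suggests.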
The following summarizes the calculation of the above quantities relative to a Monge patch f(u, v) = (u, v, h(u, v)). Here hu and hv denote the two partial derivatives of h, with analogous notation for the second partial derivatives. The second fundamental form and all subsequent quantities are calculated relative to the given choice of unit normal vector field. === Christoffel symbols, Gauss–Codazzi equations, and the Theorema Egregium === Let S be a regular surface in ℝ3. The Christoffel symbols assign, to each local parametrization f : V → S, eight functions on V, defined by ( Γ 11 1 Γ 12 1 Γ 21 1 Γ 22 1 Γ 11 2 Γ 12 2 Γ 21 2 Γ 22 2 ) = ( E F F G ) − 1 ( 1 2 ∂ E ∂ u 1 2 ∂ E ∂ v 1 2 ∂ E ∂ v ∂ F ∂ v − 1 2 ∂ G ∂ u ∂ F ∂ u − 1 2 ∂ E ∂ v 1 2 ∂ G ∂ u 1 2 ∂ G ∂ u 1 2 ∂ G ∂ v ) . {\displaystyle {\begin{pmatrix}\Gamma _{11}^{1}&\Gamma _{12}^{1}&\Gamma _{21}^{1}&\Gamma _{22}^{1}\\\Gamma _{11}^{2}&\Gamma _{12}^{2}&\Gamma _{21}^{2}&\Gamma _{22}^{2}\end{pmatrix}}={\begin{pmatrix}E&F\\F&G\end{pmatrix}}^{-1}{\begin{pmatrix}{\frac {1}{2}}{\frac {\partial E}{\partial u}}&{\frac {1}{2}}{\frac {\partial E}{\partial v}}&{\frac {1}{2}}{\frac {\partial E}{\partial v}}&{\frac {\partial F}{\partial v}}-{\frac {1}{2}}{\frac {\partial G}{\partial u}}\\{\frac {\partial F}{\partial u}}-{\frac {1}{2}}{\frac {\partial E}{\partial v}}&{\frac {1}{2}}{\frac {\partial G}{\partial u}}&{\frac {1}{2}}{\frac {\partial G}{\partial u}}&{\frac {1}{2}}{\frac {\partial G}{\partial v}}\end{pmatrix}}.} They can also be defined by the following formulas, in which n is a unit normal vector field along f(V) and L, M, N are the corresponding components of the second fundamental form: ∂ 2 f ∂ u 2 = Γ 11 1 ∂ f ∂ u + Γ 11 2 ∂ f ∂ v + L n ∂ 2 f ∂ u ∂ v = Γ 12 1 ∂ f ∂ u + Γ 12 2 ∂ f ∂ v + M n ∂ 2 f ∂ v 2 = Γ 22 1 ∂ f ∂ u + Γ 22 2 ∂ f ∂ v + N n . {\displaystyle {\begin{aligned}{\frac {\partial ^{2}f}{\partial u^{2}}}&=\Gamma _{11}^{1}{\frac {\partial f}{\partial u}}+\Gamma _{11}^{2}{\frac {\partial f}{\partial v}}+Ln\\{\frac {\partial ^{2}f}{\partial u\partial v}}&=\Gamma _{12}^{1}{\frac {\partial f}{\partial u}}+\Gamma _{12}^{2}{\frac {\partial f}{\partial v}}+Mn\\{\frac {\partial ^{2}f}{\partial v^{2}}}&=\Gamma _{22}^{1}{\frac {\partial f}{\partial u}}+\Gamma _{22}^{2}{\frac {\partial f}{\partial v}}+Nn.\end{aligned}}} The key to this definition is that ⁠∂f/∂u⁠, ⁠∂f/∂v⁠, and n form a basis of ℝ3 at each point, relative to which each of the three equations uniquely specifies the Christoffel symbols as coordinates of the second partial derivatives of f. The choice of unit normal has no effect on the Christoffel symbols, since if n is exchanged for its negation, then the components of the second fundamental form are also negated, and so the signs of Ln, Mn, Nn are left unchanged. The second definition shows, in the context of local parametrizations, that the Christoffel symbols are geometrically natural. Although the formulas in the first definition appear less natural, they have the importance of showing that the Christoffel symbols can be calculated from the first fundamental form, which is not immediately apparent from the second definition. The equivalence of the definitions can be checked by directly substituting the first definition into the second, and using the definitions of E, F, G. The Codazzi equations assert that ∂ L ∂ v − ∂ M ∂ u = L Γ 12 1 + M ( Γ 12 2 − Γ 11 1 ) − N Γ 11 2 ∂ M ∂ v − ∂ N ∂ u = L Γ 22 1 + M ( Γ 22 2 − Γ 12 1 ) − N Γ 12 2 . 
{\displaystyle {\begin{aligned}{\frac {\partial L}{\partial v}}-{\frac {\partial M}{\partial u}}&=L\Gamma _{12}^{1}+M(\Gamma _{12}^{2}-\Gamma _{11}^{1})-N\Gamma _{11}^{2}\\{\frac {\partial M}{\partial v}}-{\frac {\partial N}{\partial u}}&=L\Gamma _{22}^{1}+M(\Gamma _{22}^{2}-\Gamma _{12}^{1})-N\Gamma _{12}^{2}.\end{aligned}}} These equations can be directly derived from the second definition of Christoffel symbols given above; for instance, the first Codazzi equation is obtained by differentiating the first equation with respect to v, the second equation with respect to u, subtracting the two, and taking the dot product with n. The Gauss equation asserts that K E = ∂ Γ 11 2 ∂ v − ∂ Γ 21 2 ∂ u + Γ 21 2 Γ 11 1 + Γ 22 2 Γ 11 2 − Γ 11 2 Γ 21 1 − Γ 12 2 Γ 21 2 K F = ∂ Γ 12 2 ∂ v − ∂ Γ 22 2 ∂ u + Γ 21 2 Γ 12 1 − Γ 11 2 Γ 22 1 K G = ∂ Γ 22 1 ∂ u − ∂ Γ 12 1 ∂ v + Γ 11 1 Γ 22 1 + Γ 12 1 Γ 22 2 − Γ 21 1 Γ 12 1 − Γ 22 1 Γ 12 2 {\displaystyle {\begin{aligned}KE&={\frac {\partial \Gamma _{11}^{2}}{\partial v}}-{\frac {\partial \Gamma _{21}^{2}}{\partial u}}+\Gamma _{21}^{2}\Gamma _{11}^{1}+\Gamma _{22}^{2}\Gamma _{11}^{2}-\Gamma _{11}^{2}\Gamma _{21}^{1}-\Gamma _{12}^{2}\Gamma _{21}^{2}\\KF&={\frac {\partial \Gamma _{12}^{2}}{\partial v}}-{\frac {\partial \Gamma _{22}^{2}}{\partial u}}+\Gamma _{21}^{2}\Gamma _{12}^{1}-\Gamma _{11}^{2}\Gamma _{22}^{1}\\KG&={\frac {\partial \Gamma _{22}^{1}}{\partial u}}-{\frac {\partial \Gamma _{12}^{1}}{\partial v}}+\Gamma _{11}^{1}\Gamma _{22}^{1}+\Gamma _{12}^{1}\Gamma _{22}^{2}-\Gamma _{21}^{1}\Gamma _{12}^{1}-\Gamma _{22}^{1}\Gamma _{12}^{2}\end{aligned}}} These can be derived in a similar way to the Codazzi equations, using the Weingarten equations instead of taking the dot product with n. Although these are written as three separate equations, they are identical when the definitions of the Christoffel symbols, in terms of the first fundamental form, are substituted in. There are many ways to write the resulting expression, one of them derived in 1852 by Brioschi through a skillful use of determinants: K = 1 ( E G − F 2 ) 2 det ( − 1 2 ∂ 2 E ∂ v 2 + ∂ 2 F ∂ u ∂ v − 1 2 ∂ 2 G ∂ u 2 1 2 ∂ E ∂ u ∂ F ∂ u − 1 2 ∂ E ∂ v ∂ F ∂ v − 1 2 ∂ G ∂ u E F 1 2 ∂ G ∂ v F G ) − 1 ( E G − F 2 ) 2 det ( 0 1 2 ∂ E ∂ v 1 2 ∂ G ∂ u 1 2 ∂ E ∂ v E F 1 2 ∂ G ∂ u F G ) . {\displaystyle K={\frac {1}{(EG-F^{2})^{2}}}\det {\begin{pmatrix}-{1 \over 2}{\frac {\partial ^{2}E}{\partial v^{2}}}+{\frac {\partial ^{2}F}{\partial u\partial v}}-{1 \over 2}{\frac {\partial ^{2}G}{\partial u^{2}}}&{1 \over 2}{\frac {\partial E}{\partial u}}&{\frac {\partial F}{\partial u}}-{1 \over 2}{\frac {\partial E}{\partial v}}\\{\frac {\partial F}{\partial v}}-{1 \over 2}{\frac {\partial G}{\partial u}}&E&F\\{1 \over 2}{\frac {\partial G}{\partial v}}&F&G\end{pmatrix}}-{\frac {1}{(EG-F^{2})^{2}}}\det {\begin{pmatrix}0&{1 \over 2}{\frac {\partial E}{\partial v}}&{1 \over 2}{\frac {\partial G}{\partial u}}\\{1 \over 2}{\frac {\partial E}{\partial v}}&E&F\\{1 \over 2}{\frac {\partial G}{\partial u}}&F&G\end{pmatrix}}.} When the Christoffel symbols are considered as being defined by the first fundamental form, the Gauss and Codazzi equations represent certain constraints between the first and second fundamental forms.
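The content of the Gauss equation can be made concrete with a small symbolic computation. The following sympy sketch (illustrative only, not part of the classical development; it uses the helicoid from the earlier examples, and all variable names are ad hoc) computes the Gaussian curvature of the helicoid twice: once extrinsically from both fundamental forms, and once from Brioschi's formula, which uses only E, F, G.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Helicoid, with the single parametrization given in the examples above
f = sp.Matrix([u*sp.sin(v), u*sp.cos(v), v])
fu, fv = f.diff(u), f.diff(v)
E, F, G = [sp.trigsimp(a) for a in (fu.dot(fu), fu.dot(fv), fv.dot(fv))]

# Extrinsic computation: K = (LN - M^2)/(EG - F^2)
nn = fu.cross(fv)
n = nn / sp.sqrt(sp.trigsimp(nn.dot(nn)))
L = f.diff(u).diff(u).dot(n)
M = f.diff(u).diff(v).dot(n)
N = f.diff(v).diff(v).dot(n)
K_ext = sp.simplify((L*N - M**2) / (E*G - F**2))

# Intrinsic computation: Brioschi's formula, using only E, F, G
B1 = sp.Matrix([
    [-sp.diff(E, v, 2)/2 + sp.diff(F, u, v) - sp.diff(G, u, 2)/2,
     sp.diff(E, u)/2, sp.diff(F, u) - sp.diff(E, v)/2],
    [sp.diff(F, v) - sp.diff(G, u)/2, E, F],
    [sp.diff(G, v)/2, F, G]])
B2 = sp.Matrix([
    [0, sp.diff(E, v)/2, sp.diff(G, u)/2],
    [sp.diff(E, v)/2, E, F],
    [sp.diff(G, u)/2, F, G]])
K_int = sp.simplify((B1.det() - B2.det()) / (E*G - F**2)**2)

print(K_ext, K_int)   # both -1/(u**2 + 1)**2
```

Both computations give K = −1/(1 + u2)2, despite the intrinsic one never seeing the second fundamental form.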
The Gauss equation is particularly noteworthy, as it shows that the Gaussian curvature can be computed directly from the first fundamental form, without the need for any other information; equivalently, this says that LN − M2 can actually be written as a function of E, F, G, even though the individual components L, M, N cannot. This is known as the Theorema Egregium, and was a major discovery of Carl Friedrich Gauss. It is particularly striking when one recalls the geometric definition of the Gaussian curvature of S in terms of the maximum and minimum radii of osculating circles; they seem to be fundamentally defined by the geometry of how S bends within ℝ3. Nevertheless, the theorem shows that their product can be determined from the "intrinsic" geometry of S, having only to do with the lengths of curves along S and the angles formed at their intersections. As said by Marcel Berger: This theorem is baffling. [...] It is the kind of theorem which could have waited dozens of years more before being discovered by another mathematician since, unlike so much of intellectual history, it was absolutely not in the air. [...] To our knowledge there is no simple geometric proof of the theorema egregium today. The Gauss–Codazzi equations can also be succinctly expressed and derived in the language of connection forms due to Élie Cartan. In the language of tensor calculus, making use of natural metrics and connections on tensor bundles, the Gauss equation can be written as H2 − |h|2 = R and the two Codazzi equations can be written as ∇1 h12 = ∇2 h11 and ∇1 h22 = ∇2 h12; the complicated expressions to do with Christoffel symbols and the first fundamental form are completely absorbed into the definitions of the covariant tensor derivative ∇h and the scalar curvature R. Pierre Bonnet proved that two quadratic forms satisfying the Gauss–Codazzi equations always uniquely determine an embedded surface locally. For this reason the Gauss–Codazzi equations are often called the fundamental equations for embedded surfaces, precisely identifying where the intrinsic and extrinsic curvatures come from. They admit generalizations to surfaces embedded in more general Riemannian manifolds. === Isometries === A diffeomorphism φ {\displaystyle \varphi } between open sets U {\displaystyle U} and V {\displaystyle V} in a regular surface S {\displaystyle S} is said to be an isometry if it preserves the metric, i.e. the first fundamental form. Thus for every point p {\displaystyle p} in U {\displaystyle U} and tangent vectors w 1 , w 2 {\displaystyle w_{1},\,\,w_{2}} at p {\displaystyle p} , there are equalities E ( p ) w 1 ⋅ w 1 + 2 F ( p ) w 1 ⋅ w 2 + G ( p ) w 2 ⋅ w 2 = E ( φ ( p ) ) φ ′ ( w 1 ) ⋅ φ ′ ( w 1 ) + 2 F ( φ ( p ) ) φ ′ ( w 1 ) ⋅ φ ′ ( w 2 ) + G ( φ ( p ) ) φ ′ ( w 2 ) ⋅ φ ′ ( w 2 ) . {\displaystyle E(p)w_{1}\cdot w_{1}+2F(p)w_{1}\cdot w_{2}+G(p)w_{2}\cdot w_{2}=E(\varphi (p))\varphi ^{\prime }(w_{1})\cdot \varphi ^{\prime }(w_{1})+2F(\varphi (p))\varphi ^{\prime }(w_{1})\cdot \varphi ^{\prime }(w_{2})+G(\varphi (p))\varphi ^{\prime }(w_{2})\cdot \varphi ^{\prime }(w_{2}).} In terms of the inner product coming from the first fundamental form, this can be rewritten as ( w 1 , w 2 ) p = ( φ ′ ( w 1 ) , φ ′ ( w 2 ) ) φ ( p ) {\displaystyle (w_{1},w_{2})_{p}=(\varphi ^{\prime }(w_{1}),\varphi ^{\prime }(w_{2}))_{\varphi (p)}} .
On the other hand, the length of a parametrized curve γ ( t ) = ( x ( t ) , y ( t ) ) {\displaystyle \gamma (t)=(x(t),y(t))} can be calculated as L ( γ ) = ∫ a b E x ˙ ⋅ x ˙ + 2 F x ˙ ⋅ y ˙ + G y ˙ ⋅ y ˙ d t {\displaystyle L(\gamma )=\int _{a}^{b}{\sqrt {E{\dot {x}}\cdot {\dot {x}}+2F{\dot {x}}\cdot {\dot {y}}+G{\dot {y}}\cdot {\dot {y}}}}\,dt} and, if the curve lies in U {\displaystyle U} , the rules for change of variables show that L ( φ ∘ γ ) = L ( γ ) . {\displaystyle L(\varphi \circ \gamma )=L(\gamma ).} Conversely if φ {\displaystyle \varphi } preserves the lengths of all parametrized curves then φ {\displaystyle \varphi } is an isometry. Indeed, for suitable choices of γ {\displaystyle \gamma } , the tangent vectors x ˙ {\displaystyle {\dot {x}}} and y ˙ {\displaystyle {\dot {y}}} give arbitrary tangent vectors w 1 {\displaystyle w_{1}} and w 2 {\displaystyle w_{2}} . The equalities must hold for all choices of tangent vectors w 1 {\displaystyle w_{1}} and w 2 {\displaystyle w_{2}} as well as φ ′ ( w 1 ) {\displaystyle \varphi ^{\prime }(w_{1})} and φ ′ ( w 2 ) {\displaystyle \varphi ^{\prime }(w_{2})} , so that ( φ ′ ( w 1 ) , φ ′ ( w 2 ) ) φ ( p ) = ( w 1 , w 2 ) p {\displaystyle (\varphi ^{\prime }(w_{1}),\varphi ^{\prime }(w_{2}))_{\varphi (p)}=(w_{1},w_{2})_{p}} . A simple example of an isometry is provided by two parametrizations f 1 {\displaystyle f_{1}} and f 2 {\displaystyle f_{2}} of an open set U {\displaystyle U} into regular surfaces S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} . If E 1 = E 2 {\displaystyle E_{1}=E_{2}} , F 1 = F 2 {\displaystyle F_{1}=F_{2}} and G 1 = G 2 {\displaystyle G_{1}=G_{2}} , then φ = f 2 ∘ f 1 − 1 {\displaystyle \varphi =f_{2}\circ f_{1}^{-1}} is an isometry of f 1 ( U ) {\displaystyle f_{1}(U)} onto f 2 ( U ) {\displaystyle f_{2}(U)} . The cylinder and the plane give examples of surfaces that are locally isometric but which cannot be extended to an isometry for topological reasons. As another example, the catenoid and helicoid are locally isometric. === Covariant derivatives === A tangential vector field X on S assigns, to each p in S, a tangent vector Xp to S at p. According to the "intrinsic" definition of tangent vectors given above, a tangential vector field X then assigns, to each local parametrization f : V → S, two real-valued functions X1 and X2 on V, so that X p = X 1 ( f − 1 ( p ) ) ∂ f ∂ u | f − 1 ( p ) + X 2 ( f − 1 ( p ) ) ∂ f ∂ v | f − 1 ( p ) {\displaystyle X_{p}=X^{1}{\big (}f^{-1}(p){\big )}{\frac {\partial f}{\partial u}}{\Big |}_{f^{-1}(p)}+X^{2}{\big (}f^{-1}(p){\big )}{\frac {\partial f}{\partial v}}{\Big |}_{f^{-1}(p)}} for each p in S. One says that X is smooth if the functions X1 and X2 are smooth, for any choice of f. According to the other definitions of tangent vectors given above, one may also regard a tangential vector field X on S as a map X : S → ℝ3 such that X(p) is contained in the tangent space TpS ⊂ ℝ3 for each p in S. As is common in the more general situation of smooth manifolds, tangential vector fields can also be defined as certain differential operators on the space of smooth functions on S. The covariant derivatives (also called "tangential derivatives") of Tullio Levi-Civita and Gregorio Ricci-Curbastro provide a means of differentiating smooth tangential vector fields. Given a tangential vector field X and a tangent vector Y to S at p, the covariant derivative ∇YX is a certain tangent vector to S at p.
Consequently, if X and Y are both tangential vector fields, then ∇YX can also be regarded as a tangential vector field; iteratively, if X, Y, and Z are tangential vector fields, then one may compute ∇Z∇YX, which will be another tangential vector field. There are a few ways to define the covariant derivative; the first below uses the Christoffel symbols and the "intrinsic" definition of tangent vectors, and the second is more manifestly geometric. Given a tangential vector field X and a tangent vector Y to S at p, one defines ∇YX to be the tangent vector to S at p which assigns to a local parametrization f : V → S the two numbers ( ∇ Y X ) k = D ( Y 1 , Y 2 ) X k | f − 1 ( p ) + ∑ i = 1 2 ∑ j = 1 2 ( Γ i j k X j ) | f − 1 ( p ) Y i , ( k = 1 , 2 ) {\displaystyle (\nabla _{Y}X)^{k}=D_{(Y^{1},Y^{2})}X^{k}{\Big |}_{f^{-1}(p)}+\sum _{i=1}^{2}\sum _{j=1}^{2}{\big (}\Gamma _{ij}^{k}X^{j}{\big )}{\Big |}_{f^{-1}(p)}Y^{i},\qquad (k=1,2)} where D(Y1, Y2) is the directional derivative. This is often abbreviated in the less cumbersome form (∇YX)k = ∂Y(X k) + Y iΓkijX j, making use of Einstein notation and with the locations of function evaluation being implicitly understood. This follows a standard prescription in Riemannian geometry for obtaining a connection from a Riemannian metric. It is a fundamental fact that the vector ( ∇ Y X ) 1 ∂ f ∂ u + ( ∇ Y X ) 2 ∂ f ∂ v {\displaystyle (\nabla _{Y}X)^{1}{\frac {\partial f}{\partial u}}+(\nabla _{Y}X)^{2}{\frac {\partial f}{\partial v}}} in ℝ3 is independent of the choice of local parametrization f, although this is rather tedious to check. One can also define the covariant derivative by the following geometric approach, which does not make use of Christoffel symbols or local parametrizations. Let X be a vector field on S, viewed as a function S → ℝ3. Given any curve c : (a, b) → S, one may consider the composition X ∘ c : (a, b) → ℝ3. As a map between Euclidean spaces, it can be differentiated at any input value to get an element (X ∘ c)′(t) of ℝ3. The orthogonal projection of this vector onto Tc(t)S defines the covariant derivative ∇c ′(t)X. Although this is a very geometrically clean definition, it is necessary to show that the result depends only on c′(t) and X, and not on the particular choice of curve c; local parametrizations can be used for this small technical argument. It is not immediately apparent from the second definition that covariant differentiation depends only on the first fundamental form of S; however, this is immediate from the first definition, since the Christoffel symbols can be defined directly from the first fundamental form. It is straightforward to check that the two definitions are equivalent. The key is that when one regards X1⁠∂f/∂u⁠ + X2⁠∂f/∂v⁠ as a ℝ3-valued function, its differentiation along a curve results in second partial derivatives ∂2f; the Christoffel symbols enter with orthogonal projection to the tangent space, due to the formulation of the Christoffel symbols as the tangential components of the second derivatives of f relative to the basis ⁠∂f/∂u⁠, ⁠∂f/∂v⁠, n. This is discussed in the above section. The right-hand side of the three Gauss equations can be expressed using covariant differentiation.
For instance, the right-hand side ∂ Γ 11 2 ∂ v − ∂ Γ 21 2 ∂ u + Γ 21 2 Γ 11 1 + Γ 22 2 Γ 11 2 − Γ 11 2 Γ 21 1 − Γ 12 2 Γ 21 2 {\displaystyle {\frac {\partial \Gamma _{11}^{2}}{\partial v}}-{\frac {\partial \Gamma _{21}^{2}}{\partial u}}+\Gamma _{21}^{2}\Gamma _{11}^{1}+\Gamma _{22}^{2}\Gamma _{11}^{2}-\Gamma _{11}^{2}\Gamma _{21}^{1}-\Gamma _{12}^{2}\Gamma _{21}^{2}} can be recognized as the second coordinate of ∇ ∂ f ∂ v ∇ ∂ f ∂ u ∂ f ∂ u − ∇ ∂ f ∂ u ∇ ∂ f ∂ v ∂ f ∂ u {\displaystyle \nabla _{\frac {\partial f}{\partial v}}\nabla _{\frac {\partial f}{\partial u}}{\frac {\partial f}{\partial u}}-\nabla _{\frac {\partial f}{\partial u}}\nabla _{\frac {\partial f}{\partial v}}{\frac {\partial f}{\partial u}}} relative to the basis ⁠∂f/∂u⁠, ⁠∂f/∂v⁠, as can be directly verified using the definition of covariant differentiation by Christoffel symbols. In the language of Riemannian geometry, this observation can also be phrased as saying that the right-hand sides of the Gauss equations are various components of the Ricci curvature of the Levi-Civita connection of the first fundamental form, when interpreted as a Riemannian metric. == Examples == === Surfaces of revolution === A surface of revolution is obtained by rotating a curve in the xz-plane about the z-axis. Such surfaces include spheres, cylinders, cones, tori, and the catenoid. General ellipsoids, hyperboloids, and paraboloids are not surfaces of revolution. Suppose that the curve is parametrized by x = c 1 ( s ) , z = c 2 ( s ) {\displaystyle x=c_{1}(s),\,\,z=c_{2}(s)} with s drawn from an interval (a, b). If c1 is never zero, if c1′ and c2′ are never both equal to zero, and if c1 and c2 are both smooth, then the corresponding surface of revolution S = { ( c 1 ( s ) cos t , c 1 ( s ) sin t , c 2 ( s ) ) : s ∈ ( a , b ) and t ∈ R } {\displaystyle S={\Big \{}{\big (}c_{1}(s)\cos t,c_{1}(s)\sin t,c_{2}(s){\big )}\colon s\in (a,b){\text{ and }}t\in \mathbb {R} {\Big \}}} will be a regular surface in ℝ3. A local parametrization f : (a, b) × (0, 2π) → S is given by f ( s , t ) = ( c 1 ( s ) cos t , c 1 ( s ) sin t , c 2 ( s ) ) . {\displaystyle f(s,t)={\big (}c_{1}(s)\cos t,c_{1}(s)\sin t,c_{2}(s){\big )}.} Relative to this parametrization, the geometric data is E = c1′(s)2 + c2′(s)2, F = 0, G = c1(s)2, with Gaussian curvature K = c2′(s)(c1′(s)c2″(s) − c2′(s)c1″(s))/(c1(s)(c1′(s)2 + c2′(s)2)2). In the special case that the original curve is parametrized by arclength, i.e. (c1′(s))2 + (c2′(s))2 = 1, one can differentiate to find c1′(s)c1′′(s) + c2′(s)c2′′(s) = 0. On substitution into the Gaussian curvature, one has the simplified K = − c 1 ″ ( s ) c 1 ( s ) and H = 1 2 ( c 1 ′ ( s ) c 2 ″ ( s ) − c 2 ′ ( s ) c 1 ″ ( s ) + c 2 ′ ( s ) c 1 ( s ) ) . {\displaystyle K=-{\frac {c_{1}''(s)}{c_{1}(s)}}\qquad {\text{and}}\qquad H={\frac {1}{2}}{\Big (}c_{1}'(s)c_{2}''(s)-c_{2}'(s)c_{1}''(s)+{\frac {c_{2}'(s)}{c_{1}(s)}}{\Big )}.} The simplicity of this formula makes it particularly easy to study the class of rotationally symmetric surfaces with constant Gaussian curvature. By reduction to the alternative case that c2(s) = s, one can study the rotationally symmetric minimal surfaces, with the result that any such surface is part of a plane or a scaled catenoid. Each constant-t curve on S can be parametrized as a geodesic; a constant-s curve on S can be parametrized as a geodesic if and only if c1′(s) is equal to zero. Generally, geodesics on S are governed by Clairaut's relation. === Quadric surfaces === Consider the quadric surface defined by x 2 a + y 2 b + z 2 c = 1.
{\displaystyle {x^{2} \over a}+{y^{2} \over b}+{z^{2} \over c}=1.} This surface admits a parametrization x = a ( a − u ) ( a − v ) ( a − b ) ( a − c ) , y = b ( b − u ) ( b − v ) ( b − a ) ( b − c ) , z = c ( c − u ) ( c − v ) ( c − b ) ( c − a ) . {\displaystyle x={\sqrt {a(a-u)(a-v) \over (a-b)(a-c)}},\,\,y={\sqrt {b(b-u)(b-v) \over (b-a)(b-c)}},\,\,z={\sqrt {c(c-u)(c-v) \over (c-b)(c-a)}}.} The Gaussian curvature and mean curvature are given by K = a b c u 2 v 2 , K m = − ( u + v ) a b c u 3 v 3 . {\displaystyle K={abc \over u^{2}v^{2}},\,\,K_{m}=-(u+v){\sqrt {abc \over u^{3}v^{3}}}.} === Ruled surfaces === A ruled surface is one which can be generated by the motion of a straight line in E3. Choosing a directrix on the surface, i.e. a smooth unit speed curve c(t) orthogonal to the straight lines, and then choosing u(t) to be unit vectors along the curve in the direction of the lines, the velocity vector v = ct and u satisfy u ⋅ v = 0 , ‖ u ‖ = 1 , ‖ v ‖ = 1. {\displaystyle u\cdot v=0,\,\,\|u\|=1,\,\,\|v\|=1.} The surface consists of points c ( t ) + s ⋅ u ( t ) {\displaystyle c(t)+s\cdot u(t)} as s and t vary. Then, if a = ‖ u t ‖ , b = u t ⋅ v , α = − b a 2 , β = a 2 − b 2 a 2 , {\displaystyle a=\|u_{t}\|,\,\,b=u_{t}\cdot v,\,\,\alpha =-{\frac {b}{a^{2}}},\,\,\beta ={\frac {\sqrt {a^{2}-b^{2}}}{a^{2}}},} the Gaussian and mean curvature are given by K = − β 2 ( ( s − α ) 2 + β 2 ) 2 , K m = − r [ ( s − α ) 2 + β 2 ] + β t ( s − α ) + β α t [ ( s − α ) 2 + β 2 ] 3 2 . {\displaystyle K=-{\beta ^{2} \over ((s-\alpha )^{2}+\beta ^{2})^{2}},\,\,K_{m}=-{r[(s-\alpha )^{2}+\beta ^{2}]+\beta _{t}(s-\alpha )+\beta \alpha _{t} \over [(s-\alpha )^{2}+\beta ^{2}]^{\frac {3}{2}}}.} The Gaussian curvature of the ruled surface vanishes if and only if ut and v are proportional. This condition is equivalent to the surface being the envelope of the planes along the curve containing the tangent vector v and the orthogonal vector u, i.e. to the surface being developable along the curve. More generally a surface in E3 has vanishing Gaussian curvature near a point if and only if it is developable near that point. (An equivalent condition is given below in terms of the metric.) === Minimal surfaces === In 1760 Lagrange extended Euler's results on the calculus of variations involving integrals in one variable to two variables. He had in mind the following problem: Given a closed curve in E3, find a surface having the curve as boundary with minimal area. Such a surface is called a minimal surface. In 1776 Jean Baptiste Meusnier showed that the differential equation derived by Lagrange was equivalent to the vanishing of the mean curvature of the surface: A surface is minimal if and only if its mean curvature vanishes. Minimal surfaces have a simple interpretation in real life: they are the shape a soap film will assume if a wire frame shaped like the curve is dipped into a soap solution and then carefully lifted out. The question as to whether a minimal surface with given boundary exists is called Plateau's problem after the Belgian physicist Joseph Plateau who carried out experiments on soap films in the mid-nineteenth century. In 1930 Jesse Douglas and Tibor Radó gave an affirmative answer to Plateau's problem (Douglas was awarded one of the first Fields medals for this work in 1936). Many examples of minimal surfaces are known explicitly, such as the catenoid, the helicoid, the Scherk surface and the Enneper surface. There has been extensive research in this area, summarised in Osserman (2002).
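Meusnier's characterization is straightforward to test on a classical example. The following sympy sketch (illustrative; the standard catenoid parametrization is an assumption of the example, and H is computed as half the trace of the shape operator, i.e. (EN − 2FM + GL)/(2(EG − F2))) verifies that the catenoid has vanishing mean curvature.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# Catenoid, a classical minimal surface
f = sp.Matrix([sp.cosh(u)*sp.cos(v), sp.cosh(u)*sp.sin(v), u])

fu, fv = f.diff(u), f.diff(v)
E, F, G = fu.dot(fu), fu.dot(fv), fv.dot(fv)

nn = fu.cross(fv)
n = nn / sp.sqrt(nn.dot(nn))                   # a unit normal
L = f.diff(u).diff(u).dot(n)
M = f.diff(u).diff(v).dot(n)
N = f.diff(v).diff(v).dot(n)

# Mean curvature as half the trace of the shape operator
H = sp.simplify((E*N - 2*F*M + G*L) / (2*(E*G - F**2)))
print(H)   # 0: the catenoid is minimal
```

The same computation applied to the helicoid, its locally isometric partner mentioned earlier, also returns zero.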
In particular, a result of Osserman shows that if a minimal surface is non-planar, then its image under the Gauss map is dense in S2. === Surfaces of constant Gaussian curvature === If a surface has constant Gaussian curvature, it is called a surface of constant curvature. The unit sphere in E3 has constant Gaussian curvature +1. The Euclidean plane and the cylinder both have constant Gaussian curvature 0. The unit pseudosphere has constant Gaussian curvature −1 (apart from its equator, where it is singular). The pseudosphere can be obtained by rotating a tractrix around its asymptote. In 1868 Eugenio Beltrami showed that the geometry of the pseudosphere was directly related to that of the more abstract hyperbolic plane, discovered independently by Lobachevsky (1830) and Bolyai (1832). Already in 1840, F. Minding, a student of Gauss, had obtained trigonometric formulas for the pseudosphere identical to those for the hyperbolic plane. The intrinsic geometry of this surface is now better understood in terms of the Poincaré metric on the upper half plane or the unit disc, and has been described by other models such as the Klein model or the hyperboloid model, obtained by considering the two-sheeted hyperboloid q(x, y, z) = −1 in three-dimensional Minkowski space, where q(x, y, z) = x2 + y2 – z2. The sphere, the plane and the hyperbolic plane have transitive Lie groups of symmetries. This group theoretic fact has far-reaching consequences, all the more remarkable because of the central role these special surfaces play in the geometry of surfaces, due to Poincaré's uniformization theorem (see below). Other examples of surfaces with Gaussian curvature 0 include cones, tangent developables, and more generally any developable surface. == Local metric structure == For any surface embedded in Euclidean space of dimension 3 or higher, it is possible to measure the length of a curve on the surface, the angle between two curves and the area of a region on the surface. This structure is encoded infinitesimally in a Riemannian metric on the surface through line elements and area elements. Classically in the nineteenth and early twentieth centuries only surfaces embedded in R3 were considered and the metric was given as a 2×2 positive definite matrix varying smoothly from point to point in a local parametrization of the surface. The idea of local parametrization and change of coordinate was later formalized through the current abstract notion of a manifold, a topological space where the smooth structure is given by local charts on the manifold, exactly as the planet Earth is mapped by atlases today. Changes of coordinates between different charts of the same region are required to be smooth. Just as contour lines on real-life maps encode changes in elevation, taking into account local distortions of the Earth's surface to calculate true distances, so the Riemannian metric describes distances and areas "in the small" in each local chart. In each local chart a Riemannian metric is given by smoothly assigning a 2×2 positive definite matrix to each point; when a different chart is taken, the matrix is transformed according to the Jacobian matrix of the coordinate change. The manifold then has the structure of a 2-dimensional Riemannian manifold. === Shape operator === The differential dn of the Gauss map n can be used to define a type of extrinsic curvature, known as the shape operator or Weingarten map.
This operator first appeared implicitly in the work of Wilhelm Blaschke and later explicitly in a treatise by Burali-Forti and Burgatti. Since at each point x of the surface, the tangent space is an inner product space, the shape operator Sx can be defined as a linear operator on this space by the formula ( S x v , w ) = ( d n ( v ) , w ) {\displaystyle (S_{x}v,w)=(dn(v),w)} for tangent vectors v, w (the inner product makes sense because dn(v) and w both lie in E3). The right hand side is symmetric in v and w, so the shape operator is self-adjoint on the tangent space. The eigenvalues of Sx are just the principal curvatures k1 and k2 at x. In particular the determinant of the shape operator at a point is the Gaussian curvature, but it also contains other information, since the mean curvature is half the trace of the shape operator. The mean curvature is an extrinsic invariant. In intrinsic geometry, a cylinder is developable, meaning that every piece of it is intrinsically indistinguishable from a piece of a plane since its Gaussian curvature vanishes identically. Its mean curvature is not zero, though; hence extrinsically it is different from a plane. Equivalently, the shape operator can be defined as a linear operator on tangent spaces, S p : T p M → T p M {\displaystyle S_{p}:T_{p}M\rightarrow T_{p}M} . If n is a unit normal field to M and v is a tangent vector then S ( v ) = ± ∇ v n {\displaystyle S(v)=\pm \nabla _{v}n} (there is no standard agreement whether to use + or − in the definition). In general, the eigenvectors and eigenvalues of the shape operator at each point determine the directions in which the surface bends at each point. The eigenvalues correspond to the principal curvatures of the surface and the eigenvectors are the corresponding principal directions. The principal directions specify the directions that a curve embedded in the surface must travel to have maximum and minimum curvature, these being given by the principal curvatures. == Geodesic curves on a surface == Curves on a surface which minimize length between the endpoints are called geodesics; they are the shape that an elastic band stretched between the two points would take. Mathematically they are described using ordinary differential equations and the calculus of variations. The differential geometry of surfaces revolves around the study of geodesics. It is still an open question whether every Riemannian metric on a 2-dimensional local chart arises from an embedding in 3-dimensional Euclidean space: the theory of geodesics has been used to show this is true in the important case when the components of the metric are analytic. === Geodesics === Given a piecewise smooth path c ( t ) = ( x ( t ) , y ( t ) ) {\displaystyle c(t)=(x(t),y(t))} in the chart for t {\displaystyle t} in [ a , b ] {\displaystyle [a,b]} , its length is defined by L ( c ) = ∫ a b ( E x ˙ 2 + 2 F x ˙ y ˙ + G y ˙ 2 ) 1 2 d t {\displaystyle L(c)=\int _{a}^{b}(E{\dot {x}}^{2}+2F{\dot {x}}{\dot {y}}+G{\dot {y}}^{2})^{\frac {1}{2}}\,dt} and energy by E ( c ) = ∫ a b ( E x ˙ 2 + 2 F x ˙ y ˙ + G y ˙ 2 ) d t . {\displaystyle E(c)=\int _{a}^{b}(E{\dot {x}}^{2}+2F{\dot {x}}{\dot {y}}+G{\dot {y}}^{2})\,dt.} The length is independent of the parametrization of a path.
By the Euler–Lagrange equations, if c(t) is a path minimising length, parametrized by arclength, it must satisfy the Euler equations x ¨ + Γ 11 1 x ˙ 2 + 2 Γ 12 1 x ˙ y ˙ + Γ 22 1 y ˙ 2 = 0 {\displaystyle {\ddot {x}}+\Gamma _{11}^{1}{\dot {x}}^{2}+2\Gamma _{12}^{1}{\dot {x}}{\dot {y}}+\Gamma _{22}^{1}{\dot {y}}^{2}=0} y ¨ + Γ 11 2 x ˙ 2 + 2 Γ 12 2 x ˙ y ˙ + Γ 22 2 y ˙ 2 = 0 {\displaystyle {\ddot {y}}+\Gamma _{11}^{2}{\dot {x}}^{2}+2\Gamma _{12}^{2}{\dot {x}}{\dot {y}}+\Gamma _{22}^{2}{\dot {y}}^{2}=0} where the Christoffel symbols Γkij are given by Γ i j k = 1 2 g k m ( ∂ j g i m + ∂ i g j m − ∂ m g i j ) {\displaystyle \Gamma _{ij}^{k}={\tfrac {1}{2}}g^{km}(\partial _{j}g_{im}+\partial _{i}g_{jm}-\partial _{m}g_{ij})} where g11 = E, g12 = F, g22 = G, and the matrix (gkm), with raised indices, is the inverse of the matrix (gij). A path satisfying the Euler equations is called a geodesic. By the Cauchy–Schwarz inequality a path minimising energy is just a geodesic parametrised by arc length; and, for any geodesic, the parameter t is proportional to arclength. === Geodesic curvature === The geodesic curvature kg at a point of a curve c(t), parametrised by arc length, on an oriented surface is defined to be k g = c ¨ ( t ) ⋅ n ( t ) . {\displaystyle k_{g}={\ddot {c}}(t)\cdot \mathbf {n} (t).} where n(t) is the "principal" unit normal to the curve in the surface, constructed by rotating the unit tangent vector ċ(t) through an angle of +90°. The geodesic curvature at a point is an intrinsic invariant depending only on the metric near the point. A unit speed curve on a surface is a geodesic if and only if its geodesic curvature vanishes at all points on the curve. A unit speed curve c(t) in an embedded surface is a geodesic if and only if its acceleration vector c̈(t) is normal to the surface. The geodesic curvature measures in a precise way how far a curve on the surface is from being a geodesic. === Orthogonal coordinates === When F = 0 throughout a coordinate chart, such as with the geodesic polar coordinates discussed below, the images of lines parallel to the x- and y-axes are orthogonal and provide orthogonal coordinates. If H = (EG)1⁄2, then the Gaussian curvature is given by K = − 1 2 H [ ∂ x ( G x H ) + ∂ y ( E y H ) ] . {\displaystyle K=-{1 \over 2H}\left[\partial _{x}\left({\frac {G_{x}}{H}}\right)+\partial _{y}\left({\frac {E_{y}}{H}}\right)\right].} If in addition E = 1, so that H = G1⁄2, then the angle φ at the intersection between the geodesic (x(t),y(t)) and the line y = constant is given by the equation tan φ = H ⋅ y ˙ x ˙ . {\displaystyle \tan \varphi =H\cdot {\frac {\dot {y}}{\dot {x}}}.} The derivative of φ is given by a classical derivative formula of Gauss: φ ˙ = − H x ⋅ y ˙ . {\displaystyle {\dot {\varphi }}=-H_{x}\cdot {\dot {y}}.} == Geodesic polar coordinates == Once a metric is given on a surface and a base point is fixed, there is a unique geodesic connecting the base point to each sufficiently nearby point. The direction of the geodesic at the base point and the distance uniquely determine the other endpoint. These two bits of data, a direction and a magnitude, thus determine a tangent vector at the base point. The map from tangent vectors to endpoints smoothly sweeps out a neighbourhood of the base point and defines what is called the exponential map, defining a local coordinate chart at that base point. The neighbourhood swept out has similar properties to balls in Euclidean space, namely any two points in it are joined by a unique geodesic.
This property is called "geodesic convexity" and the coordinates are called normal coordinates. The explicit calculation of normal coordinates can be accomplished by considering the differential equation satisfied by geodesics. The convexity properties are consequences of Gauss's lemma and its generalisations. Roughly speaking this lemma states that geodesics starting at the base point must cut the spheres of fixed radius centred on the base point at right angles. Geodesic polar coordinates are obtained by combining the exponential map with polar coordinates on tangent vectors at the base point. The Gaussian curvature of the surface is then given by the second order deviation of the metric at the point from the Euclidean metric. In particular the Gaussian curvature is an invariant of the metric, Gauss's celebrated Theorema Egregium. A convenient way to understand the curvature comes from an ordinary differential equation, first considered by Gauss and later generalized by Jacobi, arising from the change of normal coordinates about two different points. The Gauss–Jacobi equation provides another way of computing the Gaussian curvature. Geometrically it explains what happens to geodesics from a fixed base point as the endpoint varies along a small curve segment through data recorded in the Jacobi field, a vector field along the geodesic. One and a quarter centuries after Gauss and Jacobi, Marston Morse gave a more conceptual interpretation of the Jacobi field in terms of second derivatives of the energy function on the infinite-dimensional Hilbert manifold of paths. === Exponential map === The theory of ordinary differential equations shows that if f(t, v) is smooth then the differential equation ⁠dv/dt⁠ = f(t, v) with initial condition v(0) = v0 has a unique solution for |t| sufficiently small and the solution depends smoothly on t and v0. This implies that for sufficiently small tangent vectors v at a given point p = (x0, y0), there is a geodesic cv(t) defined on (−2, 2) with cv(0) = (x0, y0) and ċv(0) = v. Moreover, if |s| ≤ 1, then csv(t) = cv(st). The exponential map is defined by expp(v) = cv(1) and gives a diffeomorphism between a disc ‖v‖ < δ and a neighbourhood of p; more generally the map sending (p, v) to expp(v) gives a local diffeomorphism onto a neighbourhood of (p, p). The exponential map gives geodesic normal coordinates near p. === Computation of normal coordinates === There is a standard technique (see for example Berger (2004)) for computing the change of variables to normal coordinates u, v at a point as a formal Taylor series expansion. If the coordinates x, y at (0,0) are locally orthogonal, write x(u,v) = αu + L(u,v) + λ(u,v) + … and y(u,v) = βv + M(u,v) + μ(u,v) + … where L, M are quadratic and λ, μ cubic homogeneous polynomials in u and v. If u and v are fixed, x(t) = x(tu,tv) and y(t) = y(tu, tv) can be considered as formal power series solutions of the Euler equations: this uniquely determines α, β, L, M, λ and μ. === Gauss's lemma === In these coordinates the matrix g(x) satisfies g(0) = I and the lines t ↦ tv are geodesics through 0. Euler's equations imply the matrix equation g(v)v = v, a key result, usually called the Gauss lemma. Geometrically it states that the geodesics through 0 cut the circles centred at 0 orthogonally. Taking polar coordinates (r,θ), it follows that the metric has the form ds2 = dr2 + G(r,θ) dθ2. In geodesic coordinates, it is easy to check that the geodesics through zero minimize length.
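The construction of geodesics and of the exponential map is easy to mirror numerically. The sketch below (an illustration, not part of the classical treatment; the round-sphere coordinates (theta, phi) and the initial data are assumptions of the example) integrates the geodesic equations of the unit sphere, using its Christoffel symbols Γθφφ = −sin θ cos θ and Γφθφ = cot θ, and checks that the resulting curve is a great circle by verifying that p × ṗ stays constant in ℝ3.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equations on the unit sphere in coordinates (theta, phi):
#   theta'' = sin(theta) cos(theta) (phi')^2
#   phi''   = -2 cot(theta) theta' phi'
def rhs(t, y):
    th, ph, dth, dph = y
    return [dth, dph,
            np.sin(th) * np.cos(th) * dph**2,
            -2.0 * dth * dph / np.tan(th)]

sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 0.0, 0.3, 0.8],
                rtol=1e-10, atol=1e-10, dense_output=True)

def embed(th, ph, dth, dph):
    """Embed the point and its velocity into R^3."""
    p = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
    e_th = np.array([np.cos(th)*np.cos(ph), np.cos(th)*np.sin(ph), -np.sin(th)])
    e_ph = np.array([-np.sin(th)*np.sin(ph), np.sin(th)*np.cos(ph), 0.0])
    return p, dth * e_th + dph * e_ph

# A sphere geodesic is a great circle, so p x p' should be a constant vector.
Ls = np.array([np.cross(*embed(*sol.sol(t))) for t in np.linspace(0, 10, 6)])
print(np.abs(np.diff(Ls, axis=0)).max())   # tiny (~1e-8): conserved
```

The conserved vector is the normal of the plane of the great circle; evaluating v ↦ cv(1) over varying initial velocities v is exactly a numerical version of the exponential map at the base point.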
The topology on the Riemannian manifold is then given by a distance function d(p,q), namely the infimum of the lengths of piecewise smooth paths between p and q. This distance is realised locally by geodesics, so that in normal coordinates d(0,v) = ‖v‖. If the radius δ is taken small enough, a slight sharpening of the Gauss lemma shows that the image U of the disc ‖v‖ < δ under the exponential map is geodesically convex, i.e. any two points in U are joined by a unique geodesic lying entirely inside U. === Theorema Egregium === Gauss's Theorema Egregium, the "Remarkable Theorem", shows that the Gaussian curvature of a surface can be computed solely in terms of the metric and is thus an intrinsic invariant of the surface, independent of any isometric embedding in E3 and unchanged under coordinate transformations. In particular, isometries and local isometries of surfaces preserve Gaussian curvature. This theorem can be expressed in terms of the power series expansion of the metric: in normal coordinates (u, v), it is given by ds2 = du2 + dv2 − K(u dv – v du)2/12 + …. === Gauss–Jacobi equation === Taking a coordinate change from normal coordinates at p to normal coordinates at a nearby point q yields the Sturm–Liouville equation satisfied by H(r,θ) = G(r,θ)1⁄2, discovered by Gauss and later generalised by Jacobi, Hrr = –KH. The Jacobian of this coordinate change at q is equal to Hr. This gives another way of establishing the intrinsic nature of Gaussian curvature. Because H(r,θ) can be interpreted as the length of the line element in the θ direction, the Gauss–Jacobi equation shows that the Gaussian curvature measures the spreading of geodesics on a geometric surface as they move away from a point. === Laplace–Beltrami operator === On a surface with local metric d s 2 = E d x 2 + 2 F d x d y + G d y 2 {\displaystyle ds^{2}=E\,dx^{2}+2F\,dx\,dy+G\,dy^{2}} and Laplace–Beltrami operator Δ f = 1 H ( ∂ x G H ∂ x f − ∂ x F H ∂ y f − ∂ y F H ∂ x f + ∂ y E H ∂ y f ) , {\displaystyle \Delta f={1 \over H}\left(\partial _{x}{G \over H}\partial _{x}f-\partial _{x}{F \over H}\partial _{y}f-\partial _{y}{F \over H}\partial _{x}f+\partial _{y}{E \over H}\partial _{y}f\right),} where H2 = EG − F2, the Gaussian curvature at a point is given by the formula K = − 3 lim r → 0 Δ ( log r ) , {\displaystyle K=-3\lim _{r\rightarrow 0}\Delta (\log r),} where r denotes the geodesic distance from the point. In isothermal coordinates, first considered by Gauss, the metric is required to be of the special form d s 2 = e φ ( d x 2 + d y 2 ) . {\displaystyle ds^{2}=e^{\varphi }(dx^{2}+dy^{2}).\,} In this case the Laplace–Beltrami operator is given by Δ = e − φ ( ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 ) {\displaystyle \Delta =e^{-\varphi }\left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)} and φ satisfies Liouville's equation Δ φ = − 2 K . {\displaystyle \Delta \varphi =-2K.\,} Isothermal coordinates are known to exist in a neighbourhood of any point on the surface, although all proofs to date rely on non-trivial results on partial differential equations. There is an elementary proof for minimal surfaces. == Gauss–Bonnet theorem == On a sphere or a hyperboloid, the area of a geodesic triangle, i.e. a triangle all the sides of which are geodesics, is proportional to the difference of the sum of the interior angles and π. The constant of proportionality is just the Gaussian curvature, a constant for these surfaces.
For the torus, the difference is zero, reflecting the fact that its Gaussian curvature is zero. These are standard results in spherical, hyperbolic and high school trigonometry (see below). Gauss generalised these results to an arbitrary surface by showing that the integral of the Gaussian curvature over the interior of a geodesic triangle is also equal to this angle difference or excess. His formula showed that the Gaussian curvature could be calculated near a point as the limit of area over angle excess for geodesic triangles shrinking to the point. Since any closed surface can be decomposed into geodesic triangles, the formula could also be used to compute the integral of the curvature over the whole surface. As a special case of what is now called the Gauss–Bonnet theorem, Gauss proved that this integral is, remarkably, always 2π times an integer, a topological invariant of the surface called the Euler characteristic. This invariant is easy to compute combinatorially in terms of the number of vertices, edges, and faces of the triangles in the decomposition, also called a triangulation. This interaction between analysis and topology was the forerunner of many later results in geometry, culminating in the Atiyah–Singer index theorem. In particular properties of the curvature impose restrictions on the topology of the surface. === Geodesic triangles === Gauss proved that, if Δ is a geodesic triangle on a surface with angles α, β and γ at vertices A, B and C, then ∫ Δ K d A = α + β + γ − π . {\displaystyle \int _{\Delta }K\,dA=\alpha +\beta +\gamma -\pi .} In fact taking geodesic polar coordinates with origin A and AB, AC the radii at polar angles 0 and α: ∫ Δ K d A = ∫ Δ K H d r d θ = − ∫ 0 α ∫ 0 r θ H r r d r d θ = ∫ 0 α 1 − H r ( r θ , θ ) d θ = ∫ 0 α d θ + ∫ π − β γ d φ = α + β + γ − π , {\displaystyle {\begin{aligned}\int _{\Delta }K\,dA&=\int _{\Delta }KH\,dr\,d\theta =-\int _{0}^{\alpha }\int _{0}^{r_{\theta }}\!H_{rr}\,dr\,d\theta \\&=\int _{0}^{\alpha }1-H_{r}(r_{\theta },\theta )\,d\theta =\int _{0}^{\alpha }d\theta +\int _{\pi -\beta }^{\gamma }\!\!d\varphi \\&=\alpha +\beta +\gamma -\pi ,\end{aligned}}} where the second equality follows from the Gauss–Jacobi equation and the fourth from Gauss's derivative formula in the orthogonal coordinates (r,θ). Gauss's formula shows that the curvature at a point can be calculated as the limit of angle excess α + β + γ − π over area for successively smaller geodesic triangles near the point. Qualitatively a surface is positively or negatively curved according to the sign of the angle excess for arbitrarily small geodesic triangles. === Gauss–Bonnet theorem === Since every compact oriented 2-manifold M can be triangulated by small geodesic triangles, it follows that ∫ M K d A = 2 π χ ( M ) {\displaystyle \int _{M}KdA=2\pi \,\chi (M)} where χ(M) denotes the Euler characteristic of the surface. In fact if there are F faces, E edges and V vertices, then 3F = 2E and the left hand side equals 2πV – πF = 2π(V – E + F) = 2πχ(M). This is the celebrated Gauss–Bonnet theorem: it shows that the integral of the Gaussian curvature is a topological invariant of the manifold, namely the Euler characteristic. This theorem can be interpreted in many ways; perhaps one of the most far-reaching has been as the index theorem for an elliptic differential operator on M, one of the simplest cases of the Atiyah–Singer index theorem.
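For a concrete check of Gauss's triangle formula (a toy verification; the octant is just a convenient choice of example), consider the geodesic triangle cut out on the unit sphere by three mutually orthogonal great circles:

```python
import numpy as np

# Octant triangle on the unit sphere (K = 1): vertices on the three
# coordinate axes, sides along great circles, and all three angles pi/2.
alpha = beta = gamma = np.pi / 2

excess = alpha + beta + gamma - np.pi   # angle excess of the triangle
area = (4 * np.pi) / 8                  # one eighth of the sphere's area
print(np.isclose(excess, area))         # True: integral of K dA = excess
```

Since K = 1, the integral of K over the triangle is just its area, and the angle excess π/2 matches it exactly; eight such triangles tile the sphere, recovering ∫K dA = 4π = 2πχ(S2).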
Another related result, which can be proved using the Gauss–Bonnet theorem, is the Poincaré–Hopf index theorem for vector fields on M which vanish at only a finite number of points: the sum of the indices at these points equals the Euler characteristic. Here the index of an isolated zero is defined as follows: on a small circle round the zero, the vector field defines a map into the unit circle; the index is just the winding number of this map. === Curvature and embeddings === If the Gaussian curvature of a surface M is everywhere positive, then the Euler characteristic is positive, so M is homeomorphic (and therefore diffeomorphic) to S2. If in addition the surface is isometrically embedded in E3, the Gauss map provides an explicit diffeomorphism. As Hadamard observed, in this case the surface is convex; this criterion for convexity can be viewed as a 2-dimensional generalisation of the well-known second derivative criterion for convexity of plane curves. Hilbert proved that every isometrically embedded closed surface must have a point of positive curvature. Thus a closed Riemannian 2-manifold of non-positive curvature can never be embedded isometrically in E3; however, as Adriano Garsia showed using the Beltrami equation for quasiconformal mappings, this is always possible for some conformally equivalent metric. == Surfaces of constant curvature == The simply connected surfaces of constant curvature 0, +1 and –1 are the Euclidean plane, the unit sphere in E3, and the hyperbolic plane. Each of these has a transitive three-dimensional Lie group of orientation preserving isometries G, which can be used to study their geometry. Each of the two non-compact surfaces can be identified with the quotient G / K where K is a maximal compact subgroup of G. Here K is isomorphic to SO(2). Any other closed Riemannian 2-manifold M of constant Gaussian curvature, after scaling the metric by a constant factor if necessary, will have one of these three surfaces as its universal covering space. In the orientable case, the fundamental group Γ of M can be identified with a torsion-free uniform subgroup of G and M can then be identified with the double coset space Γ \ G / K. In the case of the sphere and the Euclidean plane, the only possible examples are the sphere itself and tori obtained as quotients of R2 by discrete rank 2 subgroups. For closed surfaces of genus g ≥ 2, the moduli space of Riemann surfaces obtained as Γ varies over all such subgroups has real dimension 6g − 6. By Poincaré's uniformization theorem, any orientable closed 2-manifold is conformally equivalent to a surface of constant curvature 0, +1 or –1. In other words, by multiplying the metric by a positive scaling factor, the Gaussian curvature can be made to take exactly one of these values (namely the sign of the Euler characteristic of M). === Euclidean geometry === In the case of the Euclidean plane, the symmetry group is the Euclidean motion group, the semidirect product of the two dimensional group of translations by the group of rotations. Geodesics are straight lines and the geometry is encoded in the elementary formulas of trigonometry, such as the cosine rule for a triangle with sides a, b, c and angles α, β, γ: c 2 = a 2 + b 2 − 2 a b cos γ . {\displaystyle c^{2}=a^{2}+b^{2}-2ab\,\cos \gamma .} Flat tori can be obtained by taking the quotient of R2 by a lattice, i.e. a free Abelian subgroup of rank 2. These closed surfaces have no isometric embeddings in E3.
They do nevertheless admit isometric embeddings in E4; in the easiest case this follows from the fact that the torus is a product of two circles and each circle can be isometrically embedded in E2. === Spherical geometry === The isometry group of the unit sphere S2 in E3 is the orthogonal group O(3), with the rotation group SO(3) as the subgroup of isometries preserving orientation. O(3) is the direct product of SO(3) with the two-element group generated by the antipodal map, sending x to −x. The group SO(3) acts transitively on S2. The stabilizer subgroup of the unit vector (0,0,1) can be identified with SO(2), so that S2 = SO(3)/SO(2). The geodesics between two points on the sphere are the great circle arcs with these given endpoints. If the points are not antipodal, there is a unique shortest geodesic between the points. The geodesics can also be described group theoretically: each geodesic through the North pole (0,0,1) is the orbit of the subgroup of rotations about an axis through antipodal points on the equator. A spherical triangle is a geodesic triangle on the sphere. It is defined by points A, B, C on the sphere with sides BC, CA, AB formed from great circle arcs of length less than π. If the lengths of the sides are a, b, c and the angles between the sides α, β, γ, then the spherical cosine law states that cos c = cos a cos b + sin a sin b cos γ . {\displaystyle \cos c=\cos a\,\cos b+\sin a\,\sin b\,\cos \gamma .} The area of the triangle is given by Area = α + β + γ − π. Using stereographic projection from the North pole, the sphere can be identified with the extended complex plane C ∪ {∞}. The explicit map is given by π ( x , y , z ) = x + i y 1 − z ≡ u + i v . {\displaystyle \pi (x,y,z)={x+iy \over 1-z}\equiv u+iv.} Under this correspondence every rotation of S2 corresponds to a Möbius transformation in SU(2), unique up to sign. With respect to the coordinates (u, v) in the complex plane, the spherical metric becomes d s 2 = 4 ( d u 2 + d v 2 ) ( 1 + u 2 + v 2 ) 2 . {\displaystyle ds^{2}={4(du^{2}+dv^{2}) \over (1+u^{2}+v^{2})^{2}}.} The unit sphere is the unique closed orientable surface with constant curvature +1. The quotient SO(3)/O(2) can be identified with the real projective plane. It is non-orientable and can be described as the quotient of S2 by the antipodal map (multiplication by −1). The sphere is simply connected, while the real projective plane has fundamental group Z2. The finite subgroups of SO(3), corresponding to the finite subgroups of O(2) and the symmetry groups of the Platonic solids, do not act freely on S2, so the corresponding quotients are not 2-manifolds, just orbifolds. === Hyperbolic geometry === Non-Euclidean geometry was first discussed in letters of Gauss, who made extensive computations at the turn of the nineteenth century which, although privately circulated, he decided not to put into print. In 1830 Lobachevsky and independently in 1832 Bolyai, the son of one of Gauss's correspondents, published synthetic versions of this new geometry, for which they were severely criticized. However it was not until 1868 that Beltrami, followed by Klein in 1871 and Poincaré in 1882, gave concrete analytic models for what Klein dubbed hyperbolic geometry. The four models of 2-dimensional hyperbolic geometry that emerged were: the Beltrami–Klein model; the Poincaré disk; the Poincaré upper half-plane; the hyperboloid model of Wilhelm Killing in 3-dimensional Minkowski space.
The first model, based on a disk, has the advantage that geodesics are actually line segments (that is, intersections of Euclidean lines with the open unit disk). The last model has the advantage that it gives a construction which is completely parallel to that of the unit sphere in 3-dimensional Euclidean space. Because of their application in complex analysis and geometry, however, the models of Poincaré are the most widely used: they are interchangeable thanks to the Möbius transformations between the disk and the upper half-plane. Let D = { z : | z | < 1 } {\displaystyle D=\{z\,\colon |z|<1\}} be the Poincaré disk in the complex plane with Poincaré metric d s 2 = 4 ( d x 2 + d y 2 ) ( 1 − x 2 − y 2 ) 2 . {\displaystyle ds^{2}={4(dx^{2}+dy^{2}) \over (1-x^{2}-y^{2})^{2}}.} In polar coordinates (r, θ) the metric is given by d s 2 = 4 ( d r 2 + r 2 d θ 2 ) ( 1 − r 2 ) 2 . {\displaystyle ds^{2}={4(dr^{2}+r^{2}\,d\theta ^{2}) \over (1-r^{2})^{2}}.} The length of a curve γ:[a,b] → D is given by the formula ℓ ( γ ) = ∫ a b 2 | γ ′ ( t ) | d t 1 − | γ ( t ) | 2 . {\displaystyle \ell (\gamma )=\int _{a}^{b}{2|\gamma ^{\prime }(t)|\,dt \over 1-|\gamma (t)|^{2}}.} The group G = SU(1,1) given by G = { ( α β β ¯ α ¯ ) : α , β ∈ C , | α | 2 − | β | 2 = 1 } {\displaystyle G=\left\{{\begin{pmatrix}\alpha &\beta \\{\overline {\beta }}&{\overline {\alpha }}\end{pmatrix}}:\alpha ,\beta \in \mathbf {C} ,\,|\alpha |^{2}-|\beta |^{2}=1\right\}} acts transitively by Möbius transformations on D and the stabilizer subgroup of 0 is the rotation group K = { ( ζ 0 0 ζ ¯ ) : ζ ∈ C , | ζ | = 1 } . {\displaystyle K=\left\{{\begin{pmatrix}\zeta &0\\0&{\overline {\zeta }}\end{pmatrix}}:\zeta \in \mathbf {C} ,\,|\zeta |=1\right\}.} The quotient group SU(1,1)/±I is the group of orientation-preserving isometries of D. Any two points z, w in D are joined by a unique geodesic, given by the portion of the circle or straight line passing through z and w and orthogonal to the boundary circle. The distance between z and w is given by d ( z , w ) = 2 tanh − 1 ⁡ | z − w | | 1 − w ¯ z | . {\displaystyle d(z,w)=2\tanh ^{-1}{\frac {|z-w|}{|1-{\overline {w}}z|}}.} In particular d(0,r) = 2 tanh−1 r and c(t) = tanh(t/2) is the geodesic through 0 along the real axis, parametrized by arclength. The topology defined by this metric is equivalent to the usual Euclidean topology, although, as a metric space, (D,d) is complete. A hyperbolic triangle is a geodesic triangle for this metric: any three points in D are vertices of a hyperbolic triangle. If the sides have length a, b, c with corresponding angles α, β, γ, then the hyperbolic cosine rule states that cosh c = cosh a cosh b − sinh a sinh b cos γ . {\displaystyle \cosh c=\cosh a\,\cosh b-\sinh a\,\sinh b\,\cos \gamma .} The area of the hyperbolic triangle is given by Area = π − α − β − γ. The unit disk and the upper half-plane H = { w = x + i y : y > 0 } {\displaystyle H=\{w=x+iy\,\colon \,y>0\}} are conformally equivalent by the Möbius transformations w = i 1 + z 1 − z , z = w − i w + i . {\displaystyle w=i{1+z \over 1-z},\,\,z={w-i \over w+i}.} Under this correspondence the action of SL(2,R) by Möbius transformations on H corresponds to that of SU(1,1) on D. The metric on H becomes d s 2 = d x 2 + d y 2 y 2 . {\displaystyle ds^{2}={dx^{2}+dy^{2} \over y^{2}}.} Since lines or circles are preserved under Möbius transformations, geodesics are again described by lines or circles orthogonal to the real axis.
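The distance formula above is easy to experiment with numerically. The following minimal Python sketch (the helper name d_hyp is ours, not standard notation) checks that c(t) = tanh(t/2) is parametrized by arclength and verifies the hyperbolic cosine rule on a triangle with its right angle at 0, where the rule reduces to cosh c = cosh a cosh b:

import numpy as np

def d_hyp(z, w):
    # hyperbolic distance between z and w in the Poincare disk model
    return 2 * np.arctanh(abs(z - w) / abs(1 - np.conjugate(w) * z))

# c(t) = tanh(t/2) is the unit-speed geodesic through 0 along the real axis,
# so the hyperbolic distance from 0 to c(t) should be exactly t:
for t in [0.5, 1.0, 2.0]:
    print(t, d_hyp(0, np.tanh(t / 2)))

# triangle with vertices 0, 0.5 and 0.5i; the two sides from 0 are radii
# meeting at a right angle (the metric is conformal), so cos(gamma) = 0
a = d_hyp(0, 0.5)
b = d_hyp(0, 0.5j)
c = d_hyp(0.5, 0.5j)
print(np.cosh(c), np.cosh(a) * np.cosh(b))   # both approximately 2.7778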
The unit disk with the Poincaré metric is the unique simply connected oriented 2-dimensional Riemannian manifold with constant curvature −1. Any oriented closed surface M with this property has D as its universal covering space. Its fundamental group can be identified with a torsion-free cocompact subgroup Γ of SU(1,1), in such a way that M = Γ ∖ G / K . {\displaystyle M=\Gamma \backslash G/K.} In this case Γ is a finitely presented group. The generators and relations are encoded in a geodesically convex fundamental geodesic polygon in D (or H) corresponding geometrically to closed geodesics on M. Examples: the Bolza surface of genus 2; the Klein quartic of genus 3; the Macbeath surface of genus 7; the First Hurwitz triplet of genus 14. === Uniformization === Given an oriented closed surface M with Gaussian curvature K, the metric on M can be changed conformally by scaling it by a factor e2u. The new Gaussian curvature K′ is then given by K ′ ( x ) = e − 2 u ( K ( x ) − Δ u ) , {\displaystyle K^{\prime }(x)=e^{-2u}(K(x)-\Delta u),} where Δ is the Laplacian for the original metric. Thus to show that a given surface is conformally equivalent to a metric with constant curvature K′ it suffices to solve the following variant of Liouville's equation: Δ u = K ( x ) − K ′ e 2 u . {\displaystyle \Delta u=K(x)-K^{\prime }e^{2u}.} When M has Euler characteristic 0, so that it is diffeomorphic to a torus, K′ = 0, so this amounts to solving Δ u = K ( x ) . {\displaystyle \Delta u=K(x).} By standard elliptic theory, this is possible because the integral of K over M is zero, by the Gauss–Bonnet theorem. When M has negative Euler characteristic, K′ = −1, so the equation to be solved is: Δ u = e 2 u + K ( x ) . {\displaystyle \Delta u=e^{2u}+K(x).} Using the continuity of the exponential map on Sobolev space due to Neil Trudinger, this non-linear equation can always be solved. Finally in the case of the 2-sphere, K′ = 1 and the equation becomes: Δ u = − e 2 u + K ( x ) . {\displaystyle \Delta u=-e^{2u}+K(x).} So far this non-linear equation has not been analysed directly, although classical results such as the Riemann–Roch theorem imply that it always has a solution. The method of Ricci flow, developed by Richard S. Hamilton, gives another proof of existence based on non-linear partial differential equations. In fact the Ricci flow on conformal metrics on S2 is defined on functions u(x, t) by u t = 4 π − K ′ ( x , t ) = 4 π − e − 2 u ( K ( x ) − Δ u ) . {\displaystyle u_{t}=4\pi -K'(x,t)=4\pi -e^{-2u}(K(x)-\Delta u).} Chow showed that K′ becomes positive after finite time; previous results of Hamilton could then be used to show that K′ converges to +1. Prior to these results on Ricci flow, Osgood, Phillips & Sarnak (1988) had given an alternative and technically simpler approach to uniformization based on the flow on Riemannian metrics g defined by log det Δg. A proof using elliptic operators, discovered in 1988, can be found in Ding (2001). Let G be the Green's function on S2 satisfying ΔG = 1 − 4πδP, where δP is the point measure at a fixed point P of S2. The equation Δv = 2K − 2 has a smooth solution v, because the right hand side has integral 0 by the Gauss–Bonnet theorem. Thus φ = 2G + v satisfies Δφ = 2K away from P. It follows that g1 = eφg is a complete metric of constant curvature 0 on the complement of P, which is therefore isometric to the plane.
Composing with stereographic projection, it follows that there is a smooth function u such that e2ug has Gaussian curvature +1 on the complement of P. The function u automatically extends to a smooth function on the whole of S2. == Riemannian connection and parallel transport == The classical approach of Gauss to the differential geometry of surfaces was the standard elementary approach which predated the emergence of the concepts of Riemannian manifold initiated by Bernhard Riemann in the mid-nineteenth century and of connection developed by Tullio Levi-Civita, Élie Cartan and Hermann Weyl in the early twentieth century. The notion of connection, covariant derivative and parallel transport gave a more conceptual and uniform way of understanding curvature, which not only allowed generalisations to higher dimensional manifolds but also provided an important tool for defining new geometric invariants, called characteristic classes. The approach using covariant derivatives and connections is nowadays the one adopted in more advanced textbooks. === Covariant derivative === Connections on a surface can be defined from various equivalent points of view. The Riemannian connection or Levi-Civita connection is perhaps most easily understood in terms of lifting vector fields, considered as first order differential operators acting on functions on the manifold, to differential operators on the tangent bundle or frame bundle. In the case of an embedded surface, the lift to an operator on vector fields, called the covariant derivative, is very simply described in terms of orthogonal projection. Indeed, a vector field on a surface embedded in R3 can be regarded as a function from the surface into R3. Another vector field acts as a differential operator component-wise. The resulting vector field will not be tangent to the surface, but this can be corrected by taking its orthogonal projection onto the tangent space at each point of the surface. As Ricci and Levi-Civita realised at the turn of the twentieth century, this process depends only on the metric and can be locally expressed in terms of the Christoffel symbols. === Parallel transport === Parallel transport of tangent vectors along a curve in the surface was the next major advance in the subject, due to Levi-Civita. It is related to the earlier notion of covariant derivative, because it is the monodromy of the ordinary differential equation on the curve defined by the covariant derivative with respect to the velocity vector of the curve. Parallel transport along geodesics, the "straight lines" of the surface, can also easily be described directly. A vector in the tangent plane is transported along a geodesic as the unique vector field with constant length and making a constant angle with the velocity vector of the geodesic. For a general curve, this process has to be modified using the geodesic curvature, which measures how far the curve departs from being a geodesic. A vector field v(t) along a unit speed curve c(t), with geodesic curvature kg(t), is said to be parallel along the curve if it has constant length and the angle θ(t) that it makes with the velocity vector ċ(t) satisfies θ ˙ ( t ) = − k g ( t ) {\displaystyle {\dot {\theta }}(t)=-k_{g}(t)} This recaptures the rule for parallel transport along a geodesic or piecewise geodesic curve, because in that case kg = 0, so that the angle θ(t) should remain constant on any geodesic segment.
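To illustrate the rule θ̇(t) = −kg(t) concretely, consider parallel transport once around a circle of latitude on the unit sphere, where the geodesic curvature is constant. The short Python sketch below (our own setup, not taken from the sources cited here) integrates the rule in closed form and compares the resulting holonomy angle with the enclosed spherical area, as predicted by the Gauss–Bonnet theorem:

import math

# circle of latitude at polar angle phi on the unit sphere:
# geodesic curvature k_g = cot(phi), length L = 2*pi*sin(phi)
phi = math.pi / 3
k_g = math.cos(phi) / math.sin(phi)
L = 2 * math.pi * math.sin(phi)

# theta changes by -k_g*L over one loop, while the tangent direction itself
# turns by 2*pi, so the transported vector ends up rotated by
# 2*pi - k_g*L = 2*pi*(1 - cos(phi)) relative to its starting position
holonomy = 2 * math.pi - k_g * L
area_of_cap = 2 * math.pi * (1 - math.cos(phi))
print(holonomy, area_of_cap)   # equal, as Gauss-Bonnet predicts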
The existence of parallel transport follows because θ(t) can be computed as the integral of the geodesic curvature. Since it depends continuously on the L2 norm of kg, it follows that parallel transport for an arbitrary curve can be obtained as the limit of the parallel transport on approximating piecewise geodesic curves. The connection can thus be described in terms of lifting paths in the manifold to paths in the tangent or orthonormal frame bundle, formalising the classical theory of the "moving frame", favoured by French authors. Lifts of loops about a point give rise to the holonomy group at that point. The Gaussian curvature at a point can be recovered from parallel transport around increasingly small loops at the point. Equivalently, curvature can be calculated directly at an infinitesimal level in terms of Lie brackets of lifted vector fields. === Connection 1-form === The approach of Cartan and Weyl, using connection 1-forms on the frame bundle of M, gives a third way to understand the Riemannian connection. They noticed that parallel transport dictates that a path in the surface be lifted to a path in the frame bundle so that its tangent vectors lie in a special subspace of codimension one in the three-dimensional tangent space of the frame bundle. The projection onto this subspace is defined by a differential 1-form on the orthonormal frame bundle, the connection form. This enabled the curvature properties of the surface to be encoded in differential forms on the frame bundle and formulas involving their exterior derivatives. This approach is particularly simple for an embedded surface. Thanks to a result of Kobayashi (1956), the connection 1-form on a surface embedded in Euclidean space E3 is just the pullback under the Gauss map of the connection 1-form on S2. Using the identification of S2 with the homogeneous space SO(3)/SO(2), the connection 1-form is just a component of the Maurer–Cartan 1-form on SO(3). == Global differential geometry of surfaces == Although the characterisation of curvature involves only the local geometry of a surface, there are important global aspects such as the Gauss–Bonnet theorem, the uniformization theorem, the von Mangoldt–Hadamard theorem, and the embeddability theorem. There are other important aspects of the global geometry of surfaces. These include: Injectivity radius, defined as the largest r such that two points at a distance less than r are joined by a unique geodesic. Wilhelm Klingenberg proved in 1959 that the injectivity radius of a closed surface is bounded below by the minimum of δ = π/√(sup K) and the length of its smallest closed geodesic. This improved a theorem of Bonnet who showed in 1855 that the diameter of a closed surface of positive Gaussian curvature is always bounded above by δ; in other words a geodesic realising the metric distance between two points cannot have length greater than δ. Rigidity. In 1927 Cohn-Vossen proved that two ovaloids – closed surfaces with positive Gaussian curvature – that are isometric are necessarily congruent by an isometry of E3. Moreover, a closed embedded surface with positive Gaussian curvature and constant mean curvature is necessarily a sphere; likewise a closed embedded surface of constant Gaussian curvature must be a sphere (Liebmann 1899). Heinz Hopf showed in 1950 that a closed embedded surface with constant mean curvature and genus 0, i.e. homeomorphic to a sphere, is necessarily a sphere; five years later Alexandrov removed the topological assumption.
In the 1980s, Wente constructed immersed tori of constant mean curvature in Euclidean 3-space. Carathéodory conjecture: This conjecture states that a closed convex three times differentiable surface admits at least two umbilic points. The first work on this conjecture was in 1924 by Hans Hamburger, who noted that it follows from the following stronger claim: the half-integer valued index of the principal curvature foliation of an isolated umbilic is at most one. It was proven for smooth surfaces by Brendan Guilfoyle and Wilhelm Klingenberg in three parts concluding in 2024, the centenary of the conjecture. Zero Gaussian curvature: a complete surface in E3 with zero Gaussian curvature must be a cylinder or a plane. Hilbert's theorem (1901): no complete surface with constant negative curvature can be immersed isometrically in E3. The Willmore conjecture. This conjecture states that the integral of the square of the mean curvature of a torus immersed in E3 should be bounded below by 2π2. It is known that the integral is Möbius invariant. It was solved in 2012 by Fernando Codá Marques and André Neves. Isoperimetric inequalities. In 1939 Schmidt proved that the classical isoperimetric inequality for curves in the Euclidean plane is also valid on the sphere or in the hyperbolic plane: namely he showed that among all closed curves bounding a domain of fixed area, the perimeter is minimized when the curve is a circle for the metric. In one dimension higher, it is known that among all closed surfaces in E3 arising as the boundary of a bounded domain of unit volume, the surface area is minimized for a Euclidean ball. Systolic inequalities for curves on surfaces. Given a closed surface, its systole is defined to be the smallest length of any non-contractible closed curve on the surface. In 1949 Loewner proved a torus inequality for metrics on the torus, namely that the area of the torus over the square of its systole is bounded below by √3/2, with equality attained precisely by the flat hexagonal torus. A similar result is given by Pu's inequality for the real projective plane from 1952, with a lower bound of 2/π also attained in the constant curvature case. For the Klein bottle, Blatter and Bavard later obtained a lower bound of √8/π. For a closed surface of genus g, Hebda and Burago showed that the ratio is bounded below by 1/2. Three years later Mikhail Gromov found a lower bound given by a constant times √g, although this is not optimal. Asymptotically sharp upper and lower bounds given by constant times g/(log g)2 are due to Gromov and Buser–Sarnak, and can be found in Katz (2007). There is also a version for metrics on the sphere, taking for the systole the length of the smallest closed geodesic. Gromov conjectured a lower bound of 1/(2√3) in 1980: the best result so far is the lower bound of 1/8 obtained by Regina Rotman in 2006. == Reading guide == One of the most comprehensive introductory surveys of the subject, charting the historical development from before Gauss to modern times, is by Berger (2004). Accounts of the classical theory are given in Eisenhart (2004), Kreyszig (1991) and Struik (1988); the more modern copiously illustrated undergraduate textbooks by Gray, Abbena & Salamon (2006), Pressley (2001) and Wilson (2008) might be found more accessible. An accessible account of the classical theory can be found in Hilbert & Cohn-Vossen (1952).
More sophisticated graduate-level treatments using the Riemannian connection on a surface can be found in Singer & Thorpe (1967), do Carmo (2016) and O'Neill (2006). == See also == Tangent vector Zoll surface == Notes == == References == Andrews, Ben; Bryan, Paul (2010), "Curvature bounds by isoperimetric comparison for normalized Ricci flow on the two-sphere", Calc. Var. Partial Differential Equations, 39 (3–4): 419–428, arXiv:0908.3606, doi:10.1007/s00526-010-0315-5, S2CID 1095459 Arnold, V.I. (1989), Mathematical methods of classical mechanics, Graduate Texts in Mathematics, vol. 60 (2nd ed.), New York: Springer-Verlag, ISBN 978-0-387-90314-9; translated from the Russian by K. Vogtmann and A. Weinstein. Berger, Marcel (2004), A Panoramic View of Riemannian Geometry, Springer-Verlag, ISBN 978-3-540-65317-2 Berger, Melvyn S. (1977), Nonlinearity and Functional Analysis, Academic Press, ISBN 978-0-12-090350-4 Bonola, Roberto; Carslaw, H. S.; Enriques, F. (1955), Non-Euclidean Geometry: A Critical and Historical Study of Its Development, Dover, ISBN 978-0-486-60027-7 Boothby, William M. (1986), An introduction to differentiable manifolds and Riemannian geometry, Pure and Applied Mathematics, vol. 120 (2nd ed.), Academic Press, ISBN 0121160521 Cartan, Élie (1983), Geometry of Riemannian Spaces, Math Sci Press, ISBN 978-0-915692-34-7; translated from 2nd edition of Leçons sur la géométrie des espaces de Riemann (1951) by James Glazebrook. Cartan, Élie (2001), Riemannian Geometry in an Orthogonal Frame (from lectures delivered by É Cartan at the Sorbonne in 1926-27), World Scientific, doi:10.1142/4808, ISBN 978-981-02-4746-1; translated from Russian by V. V. Goldberg with a foreword by S. S. Chern. Cartan, Henri (1971), Calcul Differentiel (in French), Hermann, ISBN 9780395120330 Chen, Xiuxiong; Lu, Peng; Tian, Gang (2006), "A note on uniformization of Riemann surfaces by Ricci flow", Proc. AMS, 134 (11): 3391–3393, doi:10.1090/S0002-9939-06-08360-2 Chern, S. S. (1967), Curves and Surfaces in Euclidean Spaces, MAA Studies in Mathematics, Mathematical Association of America Chow, B. (1991), "The Ricci flow on a 2-sphere", J. Diff. Geom., 33 (2): 325–334, doi:10.4310/jdg/1214446319 Courant, Richard (1950), Dirichlet's Principle, Conformal Mapping and Minimal Surfaces, John Wiley & Sons, ISBN 978-0-486-44552-6 Darboux, Gaston, Leçons sur la théorie générale des surfaces, Gauthier-Villars Volume I (1887), Volume II (1915) [1889], Volume III (1894), Volume IV (1896). Ding, W. (2001), "A proof of the uniformization theorem on S2", J. Partial Differential Equations, 14: 247–250 do Carmo, Manfredo P. (2016), Differential Geometry of Curves and Surfaces (revised & updated 2nd ed.), Mineola, NY: Dover Publications, Inc., ISBN 978-0-486-80699-0 do Carmo, Manfredo (1992), Riemannian geometry, Mathematics: Theory & Applications, Birkhäuser, ISBN 0-8176-3490-8 Eisenhart, Luther Pfahler (2004), A Treatise on the Differential Geometry of Curves and Surfaces (reprint of the 1909 ed.), Dover Publications, Inc., ISBN 0-486-43820-1 Euler, Leonhard (1760), "Recherches sur la courbure des surfaces", Mémoires de l'Académie des Sciences de Berlin, 16 (published 1767): 119–143. Euler, Leonhard (1771), "De solidis quorum superficiem in planum explicare licet", Novi Commentarii Academiae Scientiarum Petropolitanae, 16 (published 1772): 3–34.
Gauss, Carl Friedrich (1902), General Investigations of Curved Surfaces of 1825 and 1827, Princeton University Library translated by A.M. Hiltebeitel and J.C. Morehead; "Disquisitiones generales circa superficies curvas", Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores Vol. VI (1827), pp. 99–146. Gauss, Carl Friedrich (1965), General Investigations of Curved Surfaces, translated by A.M. Hiltebeitel; J.C. Morehead, Hewlett, NY: Raven Press, OCLC 228665499. Gauss, Carl Friedrich (2005), General Investigations of Curved Surfaces, edited with a new introduction and notes by Peter Pesic, Mineola, NY: Dover Publications, ISBN 978-0-486-44645-5. Gray, Alfred; Abbena, Elsa; Salamon, Simon (2006), Modern Differential Geometry of Curves And Surfaces With Mathematica®, Studies in Advanced Mathematics (3rd ed.), Boca Raton, FL: Chapman & Hall/CRC, ISBN 978-1-58488-448-4 Han, Qing; Hong, Jia-Xing (2006), Isometric Embedding of Riemannian Manifolds in Euclidean Spaces, American Mathematical Society, ISBN 978-0-8218-4071-9 Helgason, Sigurdur (1978), Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, ISBN 978-0-12-338460-7 Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, ISBN 978-0-8284-1087-8 Hitchin, Nigel (2013), Geometry of Surfaces (PDF) Hopf, Heinz (1989), Lectures on Differential Geometry in the Large, Lecture Notes in Mathematics, vol. 1000, Springer-Verlag, ISBN 978-3-540-51497-8 Imayoshi, Y.; Taniguchi, M. (1992), An Introduction to Teichmüller spaces, Springer-Verlag, ISBN 978-0-387-70088-5 Ivey, Thomas A.; Landsberg, J.M. (2003), Cartan for Beginners: Differential Geometry via Moving Frames and Exterior Systems, Graduate Studies in Mathematics, vol. 61, American Mathematical Society, ISBN 978-0-8218-3375-9 Katz, Mikhail G. (2007), Systolic geometry and topology, Mathematical Surveys and Monographs, vol. 137, American Mathematical Society, ISBN 978-0-8218-4177-8 Kobayashi, Shoshichi (1956), "Induced connections and imbedded Riemannian space", Nagoya Math. J., 10: 15–25, doi:10.1017/S0027763000000052 Kobayashi, Shoshichi (1957), "Theory of connections", Annali di Matematica Pura ed Applicata, Series 4, 43: 119–194, doi:10.1007/BF02411907, S2CID 120972987 Kobayashi, Shoshichi; Nomizu, Katsumi (1963), Foundations of Differential Geometry, Vol. I, Wiley Interscience, ISBN 978-0-470-49648-0 Kobayashi, Shoshichi; Nomizu, Katsumi (1969), Foundations of Differential Geometry, Vol. II, Wiley Interscience, ISBN 978-0-470-49648-0 Kreyszig, Erwin (1991), Differential Geometry, Dover, ISBN 978-0-486-66721-8 Kühnel, Wolfgang (2006), Differential Geometry: Curves - Surfaces - Manifolds, American Mathematical Society, ISBN 978-0-8218-3988-1 Levi-Civita, Tullio (1917), "Nozione di parallelismo in una varietà qualunque", Rend. Circ. Mat. Palermo, 42: 173–205, doi:10.1007/BF03014898, S2CID 122088291 Milnor, J. (1963). Morse theory. Annals of Mathematics Studies. Vol. 51. Princeton, N.J.: Princeton University Press. MR 0163331. Zbl 0108.10401. O'Neill, Barrett (2006), Elementary Differential Geometry (revised 2nd ed.), Amsterdam: Elsevier/Academic Press, ISBN 0-12-088735-5 Osgood, B.; Phillips, R.; Sarnak, P. (1988), "Extremals of determinants of Laplacians", J. Funct. Anal., 80: 148–211, doi:10.1016/0022-1236(88)90070-5 Osserman, Robert (2002), A Survey of Minimal Surfaces, Dover, ISBN 978-0-486-49514-9 Ian R.
Porteous (2001) Geometric Differentiation: for the intelligence of curves and surfaces, Cambridge University Press ISBN 0-521-00264-8. Pressley, Andrew (2001), Elementary Differential Geometry, Springer Undergraduate Mathematics Series, Springer-Verlag, ISBN 978-1-85233-152-8 Sacks, J.; Uhlenbeck, Karen (1981), "The existence of minimal immersions of 2-spheres", Ann. of Math., 112 (1): 1–24, doi:10.2307/1971131, JSTOR 1971131 Singer, Isadore M.; Thorpe, John A. (1967), Lecture Notes on Elementary Topology and Geometry, Springer-Verlag, ISBN 978-0-387-90202-9 Spivak, Michael (1965), Calculus on manifolds. A modern approach to classical theorems of advanced calculus, W. A. Benjamin Stillwell, John (1996), Sources of Hyperbolic Geometry, American Mathematical Society, ISBN 978-0-8218-0558-9 Struik, Dirk (1987), A Concise History of Mathematics (4th ed.), Dover Publications, ISBN 0486602559 Struik, Dirk J. (1988) [1961], Lectures on Classical Differential Geometry (reprint of 2nd ed.), New York: Dover Publications, Inc., ISBN 0-486-65609-8 Taylor, Michael E. (1996a), Partial Differential Equations II: Qualitative Studies of Linear Equations, Springer-Verlag, ISBN 978-1-4419-7051-0 Taylor, Michael E. (1996b), Partial Differential Equations III: Nonlinear equations, Springer-Verlag, ISBN 978-1-4419-7048-0 Thorpe, John A. (1994), Elementary topics in differential geometry, Undergraduate Texts in Mathematics, Springer-Verlag, ISBN 0387903577 Toponogov, Victor A. (2005), Differential Geometry of Curves and Surfaces: A Concise Guide, Springer-Verlag, ISBN 978-0-8176-4384-3 Valiron, Georges (1986), The Classical Differential Geometry of Curves and Surfaces, Math Sci Press, ISBN 978-0-915692-39-2 Warner, Frank W. (1983), Foundations of differentiable manifolds and Lie groups, Graduate Texts in Mathematics, vol. 94, Springer, ISBN 0-387-90894-3 Wells, R. O. (2017), Differential and complex geometry: origins, abstractions and embeddings, Springer, ISBN 9783319581842 Wilson, Pelham (2008), Curved Space: From Classical Geometries to Elementary Differential Geometry, Cambridge University Press, ISBN 978-0-521-71390-0
Wikipedia/Regular_surface_(differential_geometry)
In mathematics, triangulation describes the replacement of topological spaces with simplicial complexes by the choice of an appropriate homeomorphism. A space that admits such a homeomorphism is called a triangulable space. Triangulations can also be used to define a piecewise linear structure for a space, if one exists. Triangulation has various applications both in and outside of mathematics, for instance in algebraic topology, in complex analysis, and in modeling. == Motivation == On the one hand, it is sometimes useful to forget superfluous information about topological spaces: The replacement of the original spaces with simplicial complexes may help to recognize crucial properties and to gain a better understanding of the considered object. On the other hand, simplicial complexes are objects of combinatorial character and therefore one can assign them quantities arising from their combinatorial pattern, for instance, the Euler characteristic. Triangulation then makes it possible to assign such quantities to topological spaces. Investigations concerning the existence and uniqueness of triangulations established a new branch in topology, namely piecewise linear topology (or PL topology). Its main purpose is to study the topological properties of simplicial complexes and their generalizations, cell-complexes. == Simplicial complexes == === Abstract simplicial complexes === An abstract simplicial complex above a set V {\displaystyle V} is a system T ⊂ P ( V ) {\displaystyle {\mathcal {T}}\subset {\mathcal {P}}(V)} of non-empty subsets such that: { v 0 } ∈ T {\displaystyle \{v_{0}\}\in {\mathcal {T}}} for each v 0 ∈ V {\displaystyle v_{0}\in V} ; if E ∈ T {\displaystyle E\in {\mathcal {T}}} and ∅ ≠ F ⊂ E , {\displaystyle \emptyset \neq F\subset E,} then F ∈ T {\displaystyle F\in {\mathcal {T}}} . The elements of T {\displaystyle {\mathcal {T}}} are called simplices, the elements of V {\displaystyle V} are called vertices. A simplex with n + 1 {\displaystyle n+1} vertices has dimension n {\displaystyle n} by definition. The dimension of an abstract simplicial complex is defined as dim ( T ) = sup { dim ( F ) : F ∈ T } ∈ N ∪ { ∞ } {\displaystyle {\text{dim}}({\mathcal {T}})={\text{sup}}\;\{{\text{dim}}(F):F\in {\mathcal {T}}\}\in \mathbb {N} \cup \{\infty \}} . Abstract simplicial complexes can be realized as geometrical objects by associating each abstract simplex with a geometric simplex, defined below. === Geometric simplices === Let p 0 , . . . p n {\displaystyle p_{0},...p_{n}} be n + 1 {\displaystyle n+1} affinely independent points in R n {\displaystyle \mathbb {R} ^{n}} ; i.e. the vectors ( p 1 − p 0 ) , ( p 2 − p 0 ) , … ( p n − p 0 ) {\displaystyle (p_{1}-p_{0}),(p_{2}-p_{0}),\dots (p_{n}-p_{0})} are linearly independent. The set Δ = { ∑ i = 0 n t i p i | each t i ∈ [ 0 , 1 ] and ∑ i = 0 n t i = 1 } {\textstyle \Delta =\{\sum _{i=0}^{n}t_{i}p_{i}\,|\,{\text{each}}\,t_{i}\in [0,1]\,{\text{and}}\,\sum _{i=0}^{n}t_{i}=1\}} is said to be the simplex spanned by p 0 , . . . p n {\displaystyle p_{0},...p_{n}} . It has dimension n {\displaystyle n} by definition. The points p 0 , . . . p n {\displaystyle p_{0},...p_{n}} are called the vertices of Δ {\displaystyle \Delta } , the simplices spanned by n {\displaystyle n} of the n + 1 {\displaystyle n+1} vertices are called faces, and the boundary ∂ Δ {\displaystyle \partial \Delta } is defined to be the union of the faces. The n {\displaystyle n} -dimensional standard-simplex is the simplex spanned by the unit vectors e 0 , . . .
e n {\displaystyle e_{0},...e_{n}} === Geometric simplicial complexes === A geometric simplicial complex S ⊆ P ( R n ) {\displaystyle {\mathcal {S}}\subseteq {\mathcal {P}}(\mathbb {R} ^{n})} is a collection of geometric simplices such that: If S {\displaystyle S} is a simplex in S {\displaystyle {\mathcal {S}}} , then all its faces are in S {\displaystyle {\mathcal {S}}} . If S , T {\displaystyle S,T} are two distinct simplices in S {\displaystyle {\mathcal {S}}} , their interiors are disjoint. The union of all the simplices in S {\displaystyle {\mathcal {S}}} gives the set of points of S {\displaystyle {\mathcal {S}}} , denoted | S | = ⋃ S ∈ S S . {\textstyle |{\mathcal {S}}|=\bigcup _{S\in {\mathcal {S}}}S.} This set | S | {\displaystyle |{\mathcal {S}}|} is endowed with a topology by choosing the closed sets to be { A ⊆ | S | ∣ A ∩ Δ {\displaystyle \{A\subseteq |{\mathcal {S}}|\;\mid \;A\cap \Delta } is closed for all Δ ∈ S } {\displaystyle \Delta \in {\mathcal {S}}\}} . Note that, in general, this topology is not the same as the subspace topology that | S | {\displaystyle |{\mathcal {S}}|} inherits from R n {\displaystyle \mathbb {R} ^{n}} . The topologies do coincide in the case that each point in the complex lies only in finitely many simplices. Each geometric complex can be associated with an abstract complex by choosing as a ground set V {\displaystyle V} the set of vertices that appear in any simplex of S {\displaystyle {\mathcal {S}}} and as system of subsets the subsets of V {\displaystyle V} which correspond to vertex sets of simplices in S {\displaystyle {\mathcal {S}}} . A natural question is whether, conversely, every abstract simplicial complex corresponds to a geometric complex. In general, the geometric construction as mentioned here is not flexible enough: consider for instance an abstract simplicial complex of infinite dimension. However, the following more abstract construction provides a topological space for any kind of abstract simplicial complex: Let T {\displaystyle {\mathcal {T}}} be an abstract simplicial complex above a set V {\displaystyle V} . Choose a family of geometric simplices ( Δ F ) F ∈ T {\displaystyle (\Delta _{F})_{F\in {\mathcal {T}}}} , each in R N {\displaystyle \mathbb {R} ^{N}} with N {\displaystyle N} sufficiently large, such that the geometric simplex Δ F {\displaystyle \Delta _{F}} is of dimension n {\displaystyle n} whenever the abstract simplex F {\displaystyle F} has dimension n {\displaystyle n} . If E ⊂ F {\displaystyle E\subset F} , Δ E ⊂ R N {\displaystyle \Delta _{E}\subset \mathbb {R} ^{N}} can be identified with a face of Δ F ⊂ R N {\displaystyle \Delta _{F}\subset \mathbb {R} ^{N}} and the resulting topological space is the gluing Δ E ∪ i Δ F . {\displaystyle \Delta _{E}\cup _{i}\Delta _{F}.} Carrying out the gluing for each inclusion, one ends up with the desired topological space. This space is actually unique up to homeomorphism for each choice of T , {\displaystyle {\mathcal {T}},} so it makes sense to talk about the geometric realization | T | {\displaystyle |{\mathcal {T}}|} of T . {\displaystyle {\mathcal {T}}.} As in the previous construction, by the topology induced by gluing, the closed sets in this space are the subsets that are closed in the subspace topology of every simplex Δ F {\displaystyle \Delta _{F}} in the complex.
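The combinatorial definitions above can be checked by machine. The following minimal Python sketch (the function names are ours) models a finite abstract simplicial complex as a set of frozensets of vertices and tests the closure axiom and the dimension, using a small star-shaped complex in the spirit of the examples below:

from itertools import combinations

def is_complex(T):
    # closure axiom: every non-empty subset of a simplex is again a simplex
    return all(frozenset(F) in T
               for E in T for k in range(1, len(E) + 1)
               for F in combinations(E, k))

def dimension(T):
    # a simplex with n + 1 vertices has dimension n
    return max(len(E) for E in T) - 1

star = {frozenset(s) for s in [("a",), ("b",), ("c",),
                               ("a", "b"), ("a", "c")]}
print(is_complex(star), dimension(star))   # True 1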
The simplicial complex T n {\displaystyle {\mathcal {T_{n}}}} which consists of all simplices of T {\displaystyle {\mathcal {T}}} of dimension ≤ n {\displaystyle \leq n} is called the n {\displaystyle n} -th skeleton of T {\displaystyle {\mathcal {T}}} . A natural neighbourhood of a vertex v ∈ V {\displaystyle v\in V} in a simplicial complex S {\displaystyle {\mathcal {S}}} is considered to be given by its star star ( v ) = { L ∈ S ∣ v ∈ L } {\displaystyle \operatorname {star} (v)=\{L\in {\mathcal {S}}\;\mid \;v\in L\}} , whose boundary is the link link ( v ) {\displaystyle \operatorname {link} (v)} . === Simplicial maps === The maps considered in this category are simplicial maps: Let K {\displaystyle {\mathcal {K}}} , L {\displaystyle {\mathcal {L}}} be abstract simplicial complexes above sets V K {\displaystyle V_{K}} , V L {\displaystyle V_{L}} . A simplicial map is a function f : V K → V L {\displaystyle f:V_{K}\rightarrow V_{L}} which maps each simplex in K {\displaystyle {\mathcal {K}}} onto a simplex in L {\displaystyle {\mathcal {L}}} . By affine-linear extension on the simplices, f {\displaystyle f} induces a map between the geometric realizations of the complexes. === Examples === Let W = { a , b , c , d , e , f } {\displaystyle W=\{a,b,c,d,e,f\}} and let T = { { a } , { b } , { c } , { d } , { e } , { f } , { a , b } , { a , c } , { a , d } , { a , e } , { a , f } } {\displaystyle {\mathcal {T}}={\Big \{}\{a\},\{b\},\{c\},\{d\},\{e\},\{f\},\{a,b\},\{a,c\},\{a,d\},\{a,e\},\{a,f\}{\Big \}}} . The associated geometric complex is a star with center { a } {\displaystyle \{a\}} . Let V = { A , B , C , D } {\displaystyle V=\{A,B,C,D\}} and let S = P ( V ) {\displaystyle {\mathcal {S}}={\mathcal {P}}(V)} . Its geometric realization | S | {\displaystyle |{\mathcal {S}}|} is a tetrahedron. Let V {\displaystyle V} be as above and let S ′ = P ( V ) ∖ { { A , B , C , D } } {\displaystyle {\mathcal {S}}'=\;{\mathcal {P}}(V)\setminus \{\{A,B,C,D\}\}} , i.e. the single top-dimensional simplex is removed. The geometric simplicial complex is the boundary of a tetrahedron | S ′ | = ∂ | S | {\displaystyle |{\mathcal {S'}}|=\partial |{\mathcal {S}}|} . == Definition == A triangulation of a topological space X {\displaystyle X} is a homeomorphism t : | T | → X {\displaystyle t:|{\mathcal {T}}|\rightarrow X} where T {\displaystyle {\mathcal {T}}} is a simplicial complex. Topological spaces do not necessarily admit a triangulation and if they do, it is not necessarily unique. === Examples === Simplicial complexes can be triangulated by the identity. Let S , S ′ {\displaystyle {\mathcal {S}},{\mathcal {S'}}} be as in the examples seen above. The closed unit ball D 3 {\displaystyle \mathbb {D} ^{3}} is homeomorphic to a tetrahedron so it admits a triangulation, namely the homeomorphism t : | S | → D 3 {\displaystyle t:|{\mathcal {S}}|\rightarrow \mathbb {D} ^{3}} . Restricting t {\displaystyle t} to | S ′ | {\displaystyle |{\mathcal {S}}'|} yields a homeomorphism t ′ : | S ′ | → S 2 {\displaystyle t':|{\mathcal {S}}'|\rightarrow \mathbb {S} ^{2}} . The torus T 2 = S 1 × S 1 {\displaystyle \mathbb {T} ^{2}=\mathbb {S} ^{1}\times \mathbb {S} ^{1}} admits a triangulation. To see this, consider the torus as a square where the parallel faces are glued together. A triangulation of the square that respects the gluings also defines a triangulation of the torus. The projective plane P 2 {\displaystyle \mathbb {P} ^{2}} admits a triangulation (see CW-complexes). One can show that differentiable manifolds admit triangulations.
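The tetrahedron examples just given can be verified computationally: removing the single top-dimensional simplex {A,B,C,D} from P(V) turns the solid tetrahedron into its boundary sphere, and the alternating count of simplices by dimension, the Euler characteristic discussed in the next section, changes from 1 (the value for a ball) to 2 (the value for the 2-sphere). A minimal Python sketch:

from itertools import combinations

V = ("A", "B", "C", "D")
S = {frozenset(F) for k in range(1, 5) for F in combinations(V, k)}
S_prime = S - {frozenset(V)}    # remove only the top simplex {A,B,C,D}

def euler(T):
    # alternating sum over simplices: a d-simplex contributes (-1)^d
    return sum((-1)**(len(F) - 1) for F in T)

print(euler(S))        # 1: the tetrahedron, triangulating the ball D^3
print(euler(S_prime))  # 2: its boundary, triangulating the sphere S^2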
== Invariants == Triangulations allow one to assign to spaces combinatorial invariants arising from their simplicial complexes. These are characteristics that are equal for complexes that are isomorphic via a simplicial map and thus have the same combinatorial pattern. This data might be useful to classify topological spaces up to homeomorphism, but only provided that the characteristics are also topological invariants, meaning that they do not depend on the chosen triangulation. For the data listed here, this is the case. For details and the link to singular homology, see topological invariance. === Homology === Via triangulation, one can assign to a topological space a chain complex arising from its simplicial complex and compute its simplicial homology. Compact triangulable spaces always admit finite triangulations and therefore their homology groups are finitely generated and only finitely many of them do not vanish. Other data such as Betti-numbers or the Euler characteristic can be derived from homology. ==== Betti-numbers and Euler-characteristics ==== Let | S | {\displaystyle |{\mathcal {S}}|} be a finite simplicial complex. The n {\displaystyle n} -th Betti-number b n ( S ) {\displaystyle b_{n}({\mathcal {S}})} is defined to be the rank of the n {\displaystyle n} -th simplicial homology group of the space. These numbers encode geometric properties of the spaces: The Betti-number b 0 ( S ) {\displaystyle b_{0}({\mathcal {S}})} for instance represents the number of connected components. For a triangulated, closed orientable surface F {\displaystyle F} , b 1 ( F ) = 2 g {\displaystyle b_{1}(F)=2g} holds where g {\displaystyle g} denotes the genus of the surface: the first Betti-number therefore represents twice the number of handles of the surface. With the comments above, for compact spaces all Betti-numbers are finite and almost all are zero. Therefore, one can form their alternating sum ∑ k = 0 ∞ ( − 1 ) k b k ( S ) {\displaystyle \sum _{k=0}^{\infty }(-1)^{k}b_{k}({\mathcal {S}})} which is called the Euler characteristic of the complex, a classical topological invariant. === Topological invariance === To use these invariants for the classification of topological spaces up to homeomorphism one needs invariance of the characteristics under homeomorphism. A famous approach to the question, at the beginning of the 20th century, was the attempt to show that any two triangulations of the same topological space admit a common subdivision. This assumption is known as the Hauptvermutung (German: main conjecture). Let | L | ⊂ R N {\displaystyle |{\mathcal {L}}|\subset \mathbb {R} ^{N}} be a simplicial complex. A complex | L ′ | ⊂ R N {\displaystyle |{\mathcal {L'}}|\subset \mathbb {R} ^{N}} is said to be a subdivision of L {\displaystyle {\mathcal {L}}} iff: every simplex of L ′ {\displaystyle {\mathcal {L'}}} is contained in a simplex of L {\displaystyle {\mathcal {L}}} and every simplex of L {\displaystyle {\mathcal {L}}} is a finite union of simplices in L ′ {\displaystyle {\mathcal {L'}}} . These conditions ensure that a subdivision changes neither the underlying set nor the topology of the simplicial complex. A map f : K → L {\displaystyle f:{\mathcal {K}}\rightarrow {\mathcal {L}}} between simplicial complexes is said to be piecewise linear if there is a subdivision K ′ {\displaystyle {\mathcal {K'}}} of K {\displaystyle {\mathcal {K}}} such that f {\displaystyle f} is affine-linear on each simplex of K ′ {\displaystyle {\mathcal {K'}}} .
Two complexes that correspond to one another via a piecewise linear bijection are said to be combinatorially isomorphic. In particular, two complexes that have a common subdivision are combinatorially equivalent. Homology groups are invariant under combinatorial equivalence, and therefore the Hauptvermutung would give the topological invariance of simplicial homology groups. In 1918, Alexander introduced the concept of singular homology. From then on, most of the invariants arising from triangulation were replaced by invariants arising from singular homology. For those new invariants, it can be shown that they are invariant under homeomorphism and even under homotopy equivalence. Furthermore it was shown that singular and simplicial homology groups coincide. This workaround established the invariance of the data under homeomorphism. The Hauptvermutung lost importance, but it initiated a new branch of topology: piecewise linear topology (PL-topology for short). == Hauptvermutung == The Hauptvermutung (German for main conjecture) states that two triangulations always admit a common subdivision. Originally, its purpose was to prove invariance of combinatorial invariants under homeomorphisms. The assumption that such subdivisions exist in general is intuitive, as subdivisions are easy to construct for simple spaces, for instance for low dimensional manifolds. Indeed the assumption was proven for manifolds of dimension ≤ 3 {\displaystyle \leq 3} and for differentiable manifolds, but it was disproved in general. An important tool to show that triangulations do not admit a common subdivision, that is, that their underlying complexes are not combinatorially isomorphic, is the combinatorial invariant of Reidemeister torsion. === Reidemeister torsion === To disprove the Hauptvermutung it is helpful to use combinatorial invariants which are not topological invariants. A famous example is Reidemeister torsion. It can be assigned to a tuple ( K , L ) {\displaystyle (K,L)} of CW-complexes: If L = ∅ {\displaystyle L=\emptyset } this characteristic will be a topological invariant, but if L ≠ ∅ {\displaystyle L\neq \emptyset } in general not. An approach to the Hauptvermutung was to find homeomorphic spaces with different values of Reidemeister torsion. This invariant was used initially to classify lens spaces, and the first counterexamples to the Hauptvermutung were constructed using lens spaces: === Classification of lens spaces === In its original formulation, lens spaces are 3-manifolds, constructed as quotient spaces of the 3-sphere: Let p , q {\displaystyle p,q} be natural numbers, such that p , q {\displaystyle p,q} are coprime. The lens space L ( p , q ) {\displaystyle L(p,q)} is defined to be the orbit space of the free group action Z / p Z × S 3 → S 3 {\displaystyle \mathbb {Z} /p\mathbb {Z} \times S^{3}\to S^{3}} ( k , ( z 1 , z 2 ) ) ↦ ( z 1 ⋅ e 2 π i k / p , z 2 ⋅ e 2 π i k q / p ) {\displaystyle (k,(z_{1},z_{2}))\mapsto (z_{1}\cdot e^{2\pi ik/p},z_{2}\cdot e^{2\pi ikq/p})} . For suitable distinct tuples ( p , q ) {\displaystyle (p,q)} , lens spaces can be homotopy equivalent but not homeomorphic. They then cannot be distinguished with the help of classical invariants such as the fundamental group, but they can be distinguished by the use of Reidemeister torsion. Two lens spaces L ( p , q 1 ) , L ( p , q 2 ) {\displaystyle L(p,q_{1}),L(p,q_{2})} are homeomorphic, if and only if q 1 ≡ ± q 2 ± 1 ( mod p ) {\displaystyle q_{1}\equiv \pm q_{2}^{\pm 1}{\pmod {p}}} . This is the case if and only if the two lens spaces are simple homotopy equivalent.
This fact can be used to construct counterexamples to the Hauptvermutung as follows. Suppose there are spaces L 1 ′ , L 2 ′ {\displaystyle L'_{1},L'_{2}} derived from non-homeomorphic lens spaces L ( p , q 1 ) , L ( p , q 2 ) {\displaystyle L(p,q_{1}),L(p,q_{2})} having different Reidemeister torsion. Suppose further that the modification into L 1 ′ , L 2 ′ {\displaystyle L'_{1},L'_{2}} does not affect the Reidemeister torsion, but that after the modification L 1 ′ {\displaystyle L'_{1}} and L 2 ′ {\displaystyle L'_{2}} are homeomorphic. The resulting spaces will disprove the Hauptvermutung. == Existence of triangulation == Besides the question of concrete triangulations for computational issues, there are statements about spaces that are easier to prove for spaces known to carry a simplicial structure. Especially manifolds are of interest. Topological manifolds of dimension ≤ 3 {\displaystyle \leq 3} are always triangulable, but there are non-triangulable manifolds in every dimension n {\displaystyle n} greater than three. Further, differentiable manifolds always admit triangulations. == Piecewise linear structures == Manifolds are an important class of spaces. It is natural to require them not only to be triangulable but moreover to admit a piecewise linear atlas, a PL-structure: Let | X | {\displaystyle |X|} be a simplicial complex such that every point admits an open neighborhood U {\displaystyle U} such that there is a triangulation of U {\displaystyle U} and a piecewise linear homeomorphism f : U → R n {\displaystyle f:U\rightarrow \mathbb {R} ^{n}} . Then | X | {\displaystyle |X|} is said to be a piecewise linear (PL) manifold of dimension n {\displaystyle n} and the triangulation together with the PL-atlas is said to be a PL-structure on | X | {\displaystyle |X|} . An important lemma is the following: Let X {\displaystyle X} be a topological space. Then the following statements are equivalent: X {\displaystyle X} is an n {\displaystyle n} -dimensional manifold and admits a PL-structure. There is a triangulation of X {\displaystyle X} such that the link of each vertex is an n − 1 {\displaystyle n-1} sphere. For each triangulation of X {\displaystyle X} the link of each vertex is an n − 1 {\displaystyle n-1} sphere. The equivalence of the second and the third statement follows because the link of a vertex is independent of the chosen triangulation up to combinatorial isomorphism. One can show that differentiable manifolds admit a PL-structure, as do manifolds of dimension ≤ 3 {\displaystyle \leq 3} . Counterexamples to the triangulation conjecture are of course also counterexamples to the conjecture that PL-structures always exist. Moreover, there are examples of triangulated spaces which do not admit a PL-structure. Consider an ( n − 2 ) {\displaystyle n-2} -dimensional PL-homology-sphere X {\displaystyle X} . The double suspension S 2 X {\displaystyle S^{2}X} is a topological n {\displaystyle n} -sphere. Choosing a triangulation t : | S | → S 2 X {\displaystyle t:|{\mathcal {S}}|\rightarrow S^{2}X} obtained via the suspension operation on triangulations, the resulting simplicial complex is not a PL-manifold, because there is a vertex v {\displaystyle v} such that l i n k ( v ) {\displaystyle link(v)} is not an ( n − 1 ) {\displaystyle n-1} sphere.
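The link criterion in the lemma above can be explored on the small example from earlier: in the boundary of the tetrahedron, the link of every vertex is the boundary of a triangle, i.e. a circle, which is consistent with that complex being a PL 2-manifold. A minimal Python sketch (helper names are ours):

from itertools import combinations

V = ("A", "B", "C", "D")
S = {frozenset(F) for k in range(1, 5) for F in combinations(V, k)}
boundary = S - {frozenset(V)}          # boundary of the tetrahedron

def link(T, v):
    # simplices not containing v whose union with v is again a simplex
    return {F for F in T if v not in F and (F | {v}) in T}

print(sorted(tuple(sorted(F)) for F in link(boundary, "A")))
# the three vertices and three edges of the triangle BCD: a 1-sphere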
A question arising with the definition is whether PL-structures are always unique: given two PL-structures for the same space Y {\displaystyle Y} , is there a homeomorphism F : Y → Y {\displaystyle F:Y\rightarrow Y} which is piecewise linear with respect to both PL-structures? The question is similar to the Hauptvermutung, and indeed there are spaces which have different PL-structures which are not equivalent. Triangulations of PL-equivalent spaces can be transformed into one another via Pachner moves: === Pachner Moves === Pachner moves are a way to manipulate triangulations: Let S {\displaystyle {\mathcal {S}}} be a simplicial complex. For two simplices K , L , {\displaystyle K,L,} the join K ∗ L = { ( 1 − t ) k + t l | k ∈ K , l ∈ L , t ∈ [ 0 , 1 ] } {\textstyle K*L=\{(1-t)k+tl\;|\;k\in K,l\in L,t\in [0,1]\}} is the set of points that lie on line segments between points of K {\displaystyle K} and points of L {\displaystyle L} . Choose S ∈ S {\displaystyle S\in {\mathcal {S}}} such that l k ( S ) = ∂ K {\displaystyle lk(S)=\partial K} for some simplex K {\displaystyle K} not lying in S {\displaystyle {\mathcal {S}}} . A new complex S ′ {\displaystyle {\mathcal {S'}}} can be obtained by replacing S ∗ ∂ K {\displaystyle S*\partial K} by ∂ S ∗ K {\displaystyle \partial S*K} . This replacement is called a Pachner move. The theorem of Pachner states that whenever two triangulated manifolds are PL-equivalent, there is a series of Pachner moves transforming one into the other. == Cellular complexes == A similar but more flexible construction than that of simplicial complexes is the one of cellular complexes (or CW-complexes). Its construction is as follows: An n {\displaystyle n} -cell is the closed n {\displaystyle n} -dimensional unit-ball B n = [ 0 , 1 ] n {\displaystyle B_{n}=[0,1]^{n}} , an open n {\displaystyle n} -cell is its interior B n = [ 0 , 1 ] n ∖ S n − 1 {\displaystyle B_{n}=[0,1]^{n}\setminus \mathbb {S} ^{n-1}} . Let X {\displaystyle X} be a topological space, let f : S n − 1 → X {\displaystyle f:\mathbb {S} ^{n-1}\rightarrow X} be a continuous map. The gluing X ∪ f B n {\displaystyle X\cup _{f}B_{n}} is said to be obtained by gluing on an n {\displaystyle n} -cell. A cell complex is a union X = ∪ n ≥ 0 X n {\displaystyle X=\cup _{n\geq 0}X_{n}} of topological spaces such that X 0 {\displaystyle X_{0}} is a discrete set and each X n {\displaystyle X_{n}} is obtained from X n − 1 {\displaystyle X_{n-1}} by gluing on a family of n {\displaystyle n} -cells. Each simplicial complex is a CW-complex, but the converse is not true. The construction of CW-complexes can be used to define cellular homology, and one can show that cellular homology and simplicial homology coincide. For computational purposes, it is sometimes easier to assume spaces to be CW-complexes and determine their homology via cellular decomposition; an example is the projective plane P 2 {\displaystyle \mathbb {P} ^{2}} : its construction as a CW-complex needs three cells, whereas its simplicial complex consists of 54 simplices. == Other applications == === Classification of manifolds === By triangulating 1-dimensional manifolds, one can show that they are always homeomorphic to disjoint unions of copies of the real line and the unit circle S 1 {\displaystyle \mathbb {S} ^{1}} . The classification of closed surfaces, i.e. compact 2-manifolds, can also be proven by using triangulations. This is done by showing that any such surface can be triangulated and then using the triangulation to construct a fundamental polygon for the surface.
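Since cellular homology agrees with simplicial homology, the Euler characteristic can be computed from any cell decomposition as the alternating sum of the numbers of cells per dimension, which is exactly why small CW structures are computationally convenient. A minimal Python sketch (the cell counts for P2 and the torus are the standard ones; the 7-vertex triangulation of the torus is one classical choice):

def euler_cw(cells):
    # cells[d] is the number of d-cells of the decomposition
    return sum((-1)**d * n for d, n in enumerate(cells))

print(euler_cw([1, 1, 1]))    # P^2 as a CW-complex, one cell per dimension: 1
print(euler_cw([1, 2, 1]))    # torus as a CW-complex: 0
print(euler_cw([7, 21, 14]))  # 7-vertex triangulation of the torus: 0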
=== Maps on simplicial complexes === Giving spaces simplicial structures can help to understand continuous maps defined on the spaces. The maps can often be assumed to be simplicial maps via the simplicial approximation theorem: ==== Simplicial approximation ==== Recall that a simplicial map f : V K → V L {\displaystyle f:V_{K}\rightarrow V_{L}} between abstract simplicial complexes K {\displaystyle {\mathcal {K}}} , L {\displaystyle {\mathcal {L}}} maps each simplex in K {\displaystyle {\mathcal {K}}} onto a simplex in L {\displaystyle {\mathcal {L}}} and, by affine-linear extension on the simplices, induces a map between the geometric realizations of the complexes. Each point in a geometric complex lies in the interior of exactly one simplex, its support. Consider now a continuous map f : K → L {\displaystyle f:{\mathcal {K}}\rightarrow {\mathcal {L}}} . A simplicial map g : K → L {\displaystyle g:{\mathcal {K}}\rightarrow {\mathcal {L}}} is said to be a simplicial approximation of f {\displaystyle f} if and only if each x ∈ K {\displaystyle x\in {\mathcal {K}}} is mapped by g {\displaystyle g} into the support of f ( x ) {\displaystyle f(x)} in L {\displaystyle {\mathcal {L}}} . If such an approximation exists, one can construct a homotopy H {\displaystyle H} transforming f {\displaystyle f} into g {\displaystyle g} by defining it simplexwise; this is possible because simplices are contractible. The simplicial approximation theorem guarantees for every continuous function f : | K | → | L | {\displaystyle f:|{\mathcal {K}}|\rightarrow |{\mathcal {L}}|} the existence of a simplicial approximation, at least after refinement of K {\displaystyle {\mathcal {K}}} , for instance by replacing K {\displaystyle {\mathcal {K}}} by its iterated barycentric subdivision. The theorem plays an important role for certain statements in algebraic topology in order to reduce the behavior of continuous maps to that of simplicial maps, for instance in Lefschetz's fixed-point theorem. ==== Lefschetz's fixed-point theorem ==== The Lefschetz number is a useful tool to find out whether a continuous function admits fixed points. It is computed as follows: Suppose that X {\displaystyle X} is a topological space that admits a finite triangulation. A continuous map f : X → X {\displaystyle f:X\rightarrow X} induces homomorphisms f i : H i ( X , K ) → H i ( X , K ) {\displaystyle f_{i}:H_{i}(X,K)\rightarrow H_{i}(X,K)} between its simplicial homology groups with coefficients in a field K {\displaystyle K} . These are linear maps between K {\displaystyle K} -vector spaces, so their trace tr i {\displaystyle \operatorname {tr} _{i}} can be determined and their alternating sum L K ( f ) = ∑ i ( − 1 ) i tr i ⁡ ( f ) ∈ K {\displaystyle L_{K}(f)=\sum _{i}(-1)^{i}\operatorname {tr} _{i}(f)\in K} is called the Lefschetz number of f {\displaystyle f} . If f = i d {\displaystyle f={\rm {id}}} , this number is the Euler characteristic of X {\displaystyle X} . The fixed-point theorem states that whenever L K ( f ) ≠ 0 {\displaystyle L_{K}(f)\neq 0} , f {\displaystyle f} has a fixed point. In the proof this is first shown only for simplicial maps and then generalized to arbitrary continuous functions via the approximation theorem. Brouwer's fixed-point theorem treats the case of an endomorphism f : D n → D n {\displaystyle f:\mathbb {D} ^{n}\rightarrow \mathbb {D} ^{n}} of the unit ball.
For k ≥ 1 {\displaystyle k\geq 1} all its homology groups H k ( D n ) {\displaystyle H_{k}(\mathbb {D} ^{n})} vanish, and f 0 {\displaystyle f_{0}} is always the identity, so L K ( f ) = tr 0 ⁡ ( f ) = 1 ≠ 0 {\displaystyle L_{K}(f)=\operatorname {tr} _{0}(f)=1\neq 0} and f {\displaystyle f} has a fixed point. ==== Formula of Riemann–Hurwitz ==== The Riemann–Hurwitz formula allows one to determine the genus of a compact, connected Riemann surface X {\displaystyle X} without using an explicit triangulation. The proof needs the existence of triangulations for surfaces in an abstract sense: Let F : X → Y {\displaystyle F:X\rightarrow Y} be a non-constant holomorphic map to a surface Y {\displaystyle Y} of known genus. The relation between the genus g {\displaystyle g} of the surfaces X {\displaystyle X} and Y {\displaystyle Y} is 2 g ( X ) − 2 = deg ⁡ ( F ) ( 2 g ( Y ) − 2 ) + ∑ x ∈ X ( ord x ⁡ ( F ) − 1 ) {\displaystyle 2g(X)-2=\deg(F)(2g(Y)-2)+\sum _{x\in X}(\operatorname {ord} _{x}(F)-1)} where deg ⁡ ( F ) {\displaystyle \deg(F)} denotes the degree of the map and ord x ⁡ ( F ) {\displaystyle \operatorname {ord} _{x}(F)} the ramification index of F {\displaystyle F} at x {\displaystyle x} . The sum is well defined as it counts only the ramification points of the map. The background of this formula is that holomorphic maps between Riemann surfaces are ramified coverings. The formula can be found by examining the image of the simplicial structure near the ramification points. == See also == Triangulation (geometry) Triangle mesh
Wikipedia/Triangulation_(topology)
In Riemannian geometry, Gauss's lemma asserts that any sufficiently small sphere centered at a point in a Riemannian manifold is perpendicular to every geodesic through the point. More formally, let M be a Riemannian manifold, equipped with its Levi-Civita connection, and p a point of M. The exponential map is a mapping from the tangent space at p to M: e x p : T p M → M {\displaystyle \mathrm {exp} :T_{p}M\to M} which is a diffeomorphism in a neighborhood of zero. Gauss's lemma asserts that the image of a sphere of sufficiently small radius in TpM under the exponential map is perpendicular to all geodesics originating at p. The lemma allows the exponential map to be understood as a radial isometry, and is of fundamental importance in the study of geodesic convexity and normal coordinates. == Introduction == We define the exponential map at p ∈ M {\displaystyle p\in M} by exp p : T p M ⊃ B ϵ ( 0 ) ⟶ M , v t ⟼ γ p , v ( t ) , {\displaystyle \exp _{p}:T_{p}M\supset B_{\epsilon }(0)\longrightarrow M,\quad vt\longmapsto \gamma _{p,v}(t),} where γ p , v {\displaystyle \gamma _{p,v}} is the unique geodesic with γ p , v ( 0 ) = p {\displaystyle \gamma _{p,v}(0)=p} and tangent γ p , v ′ ( 0 ) = v ∈ T p M {\displaystyle \gamma _{p,v}'(0)=v\in T_{p}M} and ϵ {\displaystyle \epsilon } is chosen small enough so that for every t ∈ [ 0 , 1 ] , v t ∈ B ϵ ( 0 ) ⊂ T p M {\displaystyle t\in [0,1],vt\in B_{\epsilon }(0)\subset T_{p}M} the geodesic γ p , v ( t ) {\displaystyle \gamma _{p,v}(t)} is defined. So, if M {\displaystyle M} is complete, then, by the Hopf–Rinow theorem, exp p {\displaystyle \exp _{p}} is defined on the whole tangent space. Let α : I → T p M {\displaystyle \alpha :I\rightarrow T_{p}M} be a differentiable curve in T p M {\displaystyle T_{p}M} such that α ( 0 ) := 0 {\displaystyle \alpha (0):=0} and α ′ ( 0 ) := v {\displaystyle \alpha '(0):=v} . Since T p M ≅ R n {\displaystyle T_{p}M\cong \mathbb {R} ^{n}} , it is clear that we can choose α ( t ) := v t {\displaystyle \alpha (t):=vt} . In this case, by the definition of the differential of the exponential at 0 {\displaystyle 0} applied to v {\displaystyle v} , we obtain: T 0 exp p ⁡ ( v ) = d d t ( exp p ∘ α ( t ) ) | t = 0 = d d t ( exp p ⁡ ( v t ) ) | t = 0 = d d t ( γ p , v ( t ) ) | t = 0 = γ p , v ′ ( 0 ) = v . {\displaystyle T_{0}\exp _{p}(v)={\frac {\mathrm {d} }{\mathrm {d} t}}{\Bigl (}\exp _{p}\circ \alpha (t){\Bigr )}{\Big \vert }_{t=0}={\frac {\mathrm {d} }{\mathrm {d} t}}{\Bigl (}\exp _{p}(vt){\Bigr )}{\Big \vert }_{t=0}={\frac {\mathrm {d} }{\mathrm {d} t}}{\Bigl (}\gamma _{p,v}(t){\Bigr )}{\Big \vert }_{t=0}=\gamma _{p,v}'(0)=v.} So (with the right identification T 0 T p M ≅ T p M {\displaystyle T_{0}T_{p}M\cong T_{p}M} ) the differential of exp p {\displaystyle \exp _{p}} is the identity. By the inverse function theorem, exp p {\displaystyle \exp _{p}} is a diffeomorphism on a neighborhood of 0 ∈ T p M {\displaystyle 0\in T_{p}M} . The Gauss lemma now states that exp p {\displaystyle \exp _{p}} is also a radial isometry. == The exponential map is a radial isometry == Let p ∈ M {\displaystyle p\in M} . In what follows, we make the identification T v T p M ≅ T p M ≅ R n {\displaystyle T_{v}T_{p}M\cong T_{p}M\cong \mathbb {R} ^{n}} . Gauss's lemma states: Let v , w ∈ B ϵ ( 0 ) ⊂ T v T p M ≅ T p M {\displaystyle v,w\in B_{\epsilon }(0)\subset T_{v}T_{p}M\cong T_{p}M} and M ∋ q := exp p ⁡ ( v ) {\displaystyle M\ni q:=\exp _{p}(v)} . Then, ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w ) ⟩ q = ⟨ v , w ⟩ p .
{\displaystyle \langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w)\rangle _{q}=\langle v,w\rangle _{p}.} For p ∈ M {\displaystyle p\in M} , this lemma means that exp p {\displaystyle \exp _{p}} is a radial isometry in the following sense: let v ∈ B ϵ ( 0 ) {\displaystyle v\in B_{\epsilon }(0)} , i.e. a vector at which exp p {\displaystyle \exp _{p}} is well defined, and let q := exp p ⁡ ( v ) ∈ M {\displaystyle q:=\exp _{p}(v)\in M} . Then the exponential map exp p {\displaystyle \exp _{p}} is an isometry in the radial direction at q {\displaystyle q} , and, more generally, all along the geodesic γ {\displaystyle \gamma } (in so far as γ p , v ( 1 ) = exp p ⁡ ( v ) {\displaystyle \gamma _{p,v}(1)=\exp _{p}(v)} is well defined). In other words, exp p {\displaystyle \exp _{p}} remains an isometry in all the radial directions permitted by its domain of definition. == Proof == Recall that T v exp p : T p M ≅ T v T p M ⊃ T v B ϵ ( 0 ) ⟶ T exp p ⁡ ( v ) M . {\displaystyle T_{v}\exp _{p}\colon T_{p}M\cong T_{v}T_{p}M\supset T_{v}B_{\epsilon }(0)\longrightarrow T_{\exp _{p}(v)}M.} We proceed in three steps: T v exp p ⁡ ( v ) = v {\displaystyle T_{v}\exp _{p}(v)=v} : let us construct a curve α : R ⊃ I → T p M {\displaystyle \alpha :\mathbb {R} \supset I\rightarrow T_{p}M} such that α ( 0 ) := v ∈ T p M {\displaystyle \alpha (0):=v\in T_{p}M} and α ′ ( 0 ) := v ∈ T v T p M ≅ T p M {\displaystyle \alpha '(0):=v\in T_{v}T_{p}M\cong T_{p}M} . Since T v T p M ≅ T p M ≅ R n {\displaystyle T_{v}T_{p}M\cong T_{p}M\cong \mathbb {R} ^{n}} , we can put α ( t ) := v ( t + 1 ) {\displaystyle \alpha (t):=v(t+1)} . Therefore, T v exp p ⁡ ( v ) = d d t ( exp p ∘ α ( t ) ) | t = 0 = d d t ( exp p ⁡ ( t v ) ) | t = 1 = Γ ( γ ) p exp p ⁡ ( v ) v = v , {\displaystyle T_{v}\exp _{p}(v)={\frac {\mathrm {d} }{\mathrm {d} t}}{\Bigl (}\exp _{p}\circ \alpha (t){\Bigr )}{\Big \vert }_{t=0}={\frac {\mathrm {d} }{\mathrm {d} t}}{\Bigl (}\exp _{p}(tv){\Bigr )}{\Big \vert }_{t=1}=\Gamma (\gamma )_{p}^{\exp _{p}(v)}v=v,} where Γ {\displaystyle \Gamma } is the parallel transport operator and γ ( t ) = exp p ⁡ ( t v ) {\displaystyle \gamma (t)=\exp _{p}(tv)} . The last equality holds because γ {\displaystyle \gamma } is a geodesic, so γ ′ {\displaystyle \gamma '} is parallel. Now let us calculate the scalar product ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w ) ⟩ {\displaystyle \langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w)\rangle } . We separate w {\displaystyle w} into a component w T {\displaystyle w_{T}} parallel to v {\displaystyle v} and a component w N {\displaystyle w_{N}} normal to v {\displaystyle v} . In particular, we put w T := a v {\displaystyle w_{T}:=av} , a ∈ R {\displaystyle a\in \mathbb {R} } . The preceding step implies directly: ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w ) ⟩ = ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w T ) ⟩ + ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w N ) ⟩ {\displaystyle \langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w)\rangle =\langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{T})\rangle +\langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{N})\rangle } = a ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( v ) ⟩ + ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w N ) ⟩ = ⟨ v , w T ⟩ + ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w N ) ⟩ . 
{\displaystyle =a\langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(v)\rangle +\langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{N})\rangle =\langle v,w_{T}\rangle +\langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{N})\rangle .} We must therefore show that the second term is zero, since, according to Gauss's lemma, we must have: ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w N ) ⟩ = ⟨ v , w N ⟩ = 0. {\displaystyle \langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{N})\rangle =\langle v,w_{N}\rangle =0.} ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w N ) ⟩ = 0 {\displaystyle \langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{N})\rangle =0} : Let us define the curve α : [ − ϵ , ϵ ] × [ 0 , 1 ] ⟶ T p M , ( s , t ) ⟼ t v + t s w N . {\displaystyle \alpha \colon [-\epsilon ,\epsilon ]\times [0,1]\longrightarrow T_{p}M,\qquad (s,t)\longmapsto tv+tsw_{N}.} Note that α ( 0 , 1 ) = v , ∂ α ∂ t ( s , t ) = v + s w N , ∂ α ∂ s ( 0 , t ) = t w N . {\displaystyle \alpha (0,1)=v,\qquad {\frac {\partial \alpha }{\partial t}}(s,t)=v+sw_{N},\qquad {\frac {\partial \alpha }{\partial s}}(0,t)=tw_{N}.} Let us put: f : [ − ϵ , ϵ ] × [ 0 , 1 ] ⟶ M , ( s , t ) ⟼ exp p ⁡ ( t v + t s w N ) , {\displaystyle f\colon [-\epsilon ,\epsilon ]\times [0,1]\longrightarrow M,\qquad (s,t)\longmapsto \exp _{p}(tv+tsw_{N}),} and we calculate: T v exp p ⁡ ( v ) = T α ( 0 , 1 ) exp p ⁡ ( ∂ α ∂ t ( 0 , 1 ) ) = ∂ ∂ t ( exp p ∘ α ( s , t ) ) | t = 1 , s = 0 = ∂ f ∂ t ( 0 , 1 ) {\displaystyle T_{v}\exp _{p}(v)=T_{\alpha (0,1)}\exp _{p}\left({\frac {\partial \alpha }{\partial t}}(0,1)\right)={\frac {\partial }{\partial t}}{\Bigl (}\exp _{p}\circ \alpha (s,t){\Bigr )}{\Big \vert }_{t=1,s=0}={\frac {\partial f}{\partial t}}(0,1)} and T v exp p ⁡ ( w N ) = T α ( 0 , 1 ) exp p ⁡ ( ∂ α ∂ s ( 0 , 1 ) ) = ∂ ∂ s ( exp p ∘ α ( s , t ) ) | t = 1 , s = 0 = ∂ f ∂ s ( 0 , 1 ) . {\displaystyle T_{v}\exp _{p}(w_{N})=T_{\alpha (0,1)}\exp _{p}\left({\frac {\partial \alpha }{\partial s}}(0,1)\right)={\frac {\partial }{\partial s}}{\Bigl (}\exp _{p}\circ \alpha (s,t){\Bigr )}{\Big \vert }_{t=1,s=0}={\frac {\partial f}{\partial s}}(0,1).} Hence ⟨ T v exp p ⁡ ( v ) , T v exp p ⁡ ( w N ) ⟩ = ⟨ ∂ f ∂ t , ∂ f ∂ s ⟩ ( 0 , 1 ) . {\displaystyle \langle T_{v}\exp _{p}(v),T_{v}\exp _{p}(w_{N})\rangle =\left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial s}}\right\rangle (0,1).} We can now verify that this scalar product is actually independent of the variable t {\displaystyle t} , and therefore that, for example: ⟨ ∂ f ∂ t , ∂ f ∂ s ⟩ ( 0 , 1 ) = ⟨ ∂ f ∂ t , ∂ f ∂ s ⟩ ( 0 , 0 ) = 0 , {\displaystyle \left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial s}}\right\rangle (0,1)=\left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial s}}\right\rangle (0,0)=0,} because, according to what has been given above: lim t → 0 ∂ f ∂ s ( 0 , t ) = lim t → 0 T t v exp p ⁡ ( t w N ) = 0 {\displaystyle \lim _{t\rightarrow 0}{\frac {\partial f}{\partial s}}(0,t)=\lim _{t\rightarrow 0}T_{tv}\exp _{p}(tw_{N})=0} since the differential is a linear map. This will therefore prove the lemma. We verify that ∂ ∂ t ⟨ ∂ f ∂ t , ∂ f ∂ s ⟩ = 0 {\displaystyle {\frac {\partial }{\partial t}}\left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial s}}\right\rangle =0} : this is a direct calculation. Since the maps t ↦ f ( s , t ) {\displaystyle t\mapsto f(s,t)} are geodesics, ∂ ∂ t ⟨ ∂ f ∂ t , ∂ f ∂ s ⟩ = ⟨ D ∂ t ∂ f ∂ t ⏟ = 0 , ∂ f ∂ s ⟩ + ⟨ ∂ f ∂ t , D ∂ t ∂ f ∂ s ⟩ = ⟨ ∂ f ∂ t , D ∂ s ∂ f ∂ t ⟩ = 1 2 ∂ ∂ s ⟨ ∂ f ∂ t , ∂ f ∂ t ⟩ . 
{\displaystyle {\frac {\partial }{\partial t}}\left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial s}}\right\rangle =\left\langle \underbrace {{\frac {D}{\partial t}}{\frac {\partial f}{\partial t}}} _{=0},{\frac {\partial f}{\partial s}}\right\rangle +\left\langle {\frac {\partial f}{\partial t}},{\frac {D}{\partial t}}{\frac {\partial f}{\partial s}}\right\rangle =\left\langle {\frac {\partial f}{\partial t}},{\frac {D}{\partial s}}{\frac {\partial f}{\partial t}}\right\rangle ={\frac {1}{2}}{\frac {\partial }{\partial s}}\left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial t}}\right\rangle .} Since the maps t ↦ f ( s , t ) {\displaystyle t\mapsto f(s,t)} are geodesics, the function t ↦ ⟨ ∂ f ∂ t , ∂ f ∂ t ⟩ {\displaystyle t\mapsto \left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial t}}\right\rangle } is constant. Thus, ∂ ∂ s ⟨ ∂ f ∂ t , ∂ f ∂ t ⟩ = ∂ ∂ s ⟨ v + s w N , v + s w N ⟩ = 2 ⟨ v , w N ⟩ = 0. {\displaystyle {\frac {\partial }{\partial s}}\left\langle {\frac {\partial f}{\partial t}},{\frac {\partial f}{\partial t}}\right\rangle ={\frac {\partial }{\partial s}}\left\langle v+sw_{N},v+sw_{N}\right\rangle =2\left\langle v,w_{N}\right\rangle =0.} == See also == Riemannian geometry Metric tensor == References == do Carmo, Manfredo (1992), Riemannian geometry, Basel, Boston, Berlin: Birkhäuser, ISBN 978-0-8176-3490-2
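As a concrete numerical check of the radial-isometry statement, here is a minimal Python sketch (the function names, the choice of the round 2-sphere, and the test vectors are illustrative assumptions, not taken from the article). On the unit sphere the exponential map at the north pole p has the closed form exp_p(v) = cos(|v|) p + sin(|v|) v/|v|, so both parts of the lemma can be tested with finite differences.

```python
import numpy as np

p = np.array([0.0, 0.0, 1.0])  # base point: the north pole of S^2

def exp_p(v):
    # Exponential map of the unit sphere at p, for v = (v1, v2) in T_p S^2.
    r = np.linalg.norm(v)
    if r < 1e-12:
        return p.copy()
    vhat = np.array([v[0], v[1], 0.0]) / r   # v embedded in R^3, normalized
    return np.cos(r) * p + np.sin(r) * vhat

def d_exp(v, w, h=1e-6):
    # Central-difference approximation of T_v exp_p applied to w.
    return (exp_p(v + h * w) - exp_p(v - h * w)) / (2 * h)

v = np.array([0.7, 0.3])        # radial direction, |v| < pi
w_N = np.array([-0.3, 0.7])     # normal to v in T_p S^2

radial = d_exp(v, v)            # T_v exp_p(v)
normal = d_exp(v, w_N)          # T_v exp_p(w_N)

print(np.dot(radial, normal))                       # ~ 0  (Gauss's lemma)
print(np.linalg.norm(radial), np.linalg.norm(v))    # radial length preserved
```

Up to finite-difference error, the radial and transverse images stay orthogonal and radial lengths are preserved, exactly as the lemma asserts.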
Wikipedia/Gauss's_lemma_(Riemannian_geometry)
In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions. Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations. == Definitions == Let L {\displaystyle L} be a linear differential operator of order m on a domain Ω {\displaystyle \Omega } in Rn given by L u = ∑ | α | ≤ m a α ( x ) ∂ α u {\displaystyle Lu=\sum _{|\alpha |\leq m}a_{\alpha }(x)\partial ^{\alpha }u} where α = ( α 1 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\dots ,\alpha _{n})} denotes a multi-index, and ∂ α u = ∂ 1 α 1 ⋯ ∂ n α n u {\displaystyle \partial ^{\alpha }u=\partial _{1}^{\alpha _{1}}\cdots \partial _{n}^{\alpha _{n}}u} denotes the partial derivative of order α i {\displaystyle \alpha _{i}} in x i {\displaystyle x_{i}} . Then L {\displaystyle L} is called elliptic if for every x in Ω {\displaystyle \Omega } and every non-zero ξ {\displaystyle \xi } in Rn, ∑ | α | = m a α ( x ) ξ α ≠ 0 , {\displaystyle \sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha }\neq 0,} where ξ α = ξ 1 α 1 ⋯ ξ n α n {\displaystyle \xi ^{\alpha }=\xi _{1}^{\alpha _{1}}\cdots \xi _{n}^{\alpha _{n}}} . In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order m = 2k: ( − 1 ) k ∑ | α | = 2 k a α ( x ) ξ α > C | ξ | 2 k , {\displaystyle (-1)^{k}\sum _{|\alpha |=2k}a_{\alpha }(x)\xi ^{\alpha }>C|\xi |^{2k},} where C is a positive constant. Note that ellipticity only depends on the highest-order terms. A nonlinear operator L ( u ) = F ( x , u , ( ∂ α u ) | α | ≤ m ) {\displaystyle L(u)=F\left(x,u,\left(\partial ^{\alpha }u\right)_{|\alpha |\leq m}\right)} is elliptic if its linearization is; i.e. the first-order Taylor expansion with respect to u and its derivatives about any point is an elliptic operator. Example 1 The negative of the Laplacian in Rd given by − Δ u = − ∑ i = 1 d ∂ i 2 u {\displaystyle -\Delta u=-\sum _{i=1}^{d}\partial _{i}^{2}u} is a uniformly elliptic operator. The Laplace operator occurs frequently in electrostatics. If ρ is the charge density within some region Ω, the potential Φ must satisfy the equation − Δ Φ = 4 π ρ . {\displaystyle -\Delta \Phi =4\pi \rho .} Example 2 Given a matrix-valued function A(x) which is uniformly positive definite for every x, having components aij, the operator L u = − ∂ i ( a i j ( x ) ∂ j u ) + b j ( x ) ∂ j u + c u {\displaystyle Lu=-\partial _{i}\left(a^{ij}(x)\partial _{j}u\right)+b^{j}(x)\partial _{j}u+cu} is elliptic. This is the most general form of a second-order divergence form linear elliptic differential operator. The Laplace operator is obtained by taking A = I. These operators also occur in electrostatics in polarized media. Example 3 For p a non-negative number, the p-Laplacian is a nonlinear elliptic operator defined by L ( u ) = − ∑ i = 1 d ∂ i ( | ∇ u | p − 2 ∂ i u ) . 
{\displaystyle L(u)=-\sum _{i=1}^{d}\partial _{i}\left(|\nabla u|^{p-2}\partial _{i}u\right).} A similar nonlinear operator occurs in glacier mechanics. The Cauchy stress tensor of ice, according to Glen's flow law, is given by τ i j = B ( ∑ k , l = 1 3 ( ∂ l u k ) 2 ) − 1 3 ⋅ 1 2 ( ∂ j u i + ∂ i u j ) {\displaystyle \tau _{ij}=B\left(\sum _{k,l=1}^{3}\left(\partial _{l}u_{k}\right)^{2}\right)^{-{\frac {1}{3}}}\cdot {\frac {1}{2}}\left(\partial _{j}u_{i}+\partial _{i}u_{j}\right)} for some constant B. The velocity of an ice sheet in steady state will then solve the nonlinear elliptic system ∑ j = 1 3 ∂ j τ i j + ρ g i − ∂ i p = Q , {\displaystyle \sum _{j=1}^{3}\partial _{j}\tau _{ij}+\rho g_{i}-\partial _{i}p=Q,} where ρ is the ice density, g is the gravitational acceleration vector, p is the pressure and Q is a forcing term. == Elliptic regularity theorems == Let L be an elliptic operator of order 2k with coefficients having 2k continuous derivatives. The Dirichlet problem for L is to find a function u, given a function f and some appropriate boundary values, such that Lu = f and such that u has the appropriate boundary values and normal derivatives. The existence theory for elliptic operators, using Gårding's inequality, the Lax–Milgram lemma and the Fredholm alternative, gives sufficient conditions for a weak solution u to exist in the Sobolev space Hk. For example, for a second-order elliptic operator as in Example 2: There is a number γ>0 such that for each μ>γ, each f ∈ L 2 ( U ) {\displaystyle f\in L^{2}(U)} , there exists a unique solution u ∈ H 0 1 ( U ) {\displaystyle u\in H_{0}^{1}(U)} of the boundary value problem L u + μ u = f in U , u = 0 on ∂ U {\displaystyle Lu+\mu u=f{\text{ in }}U,u=0{\text{ on }}\partial U} , which is based on the Lax–Milgram lemma. Either (a) for any f ∈ L 2 ( U ) {\displaystyle f\in L^{2}(U)} , L u = f in U , u = 0 on ∂ U {\displaystyle Lu=f{\text{ in }}U,u=0{\text{ on }}\partial U} (1) has a unique solution, or (b) L u = 0 in U , u = 0 on ∂ U {\displaystyle Lu=0{\text{ in }}U,u=0{\text{ on }}\partial U} has a solution u ≢ 0 {\displaystyle u\not \equiv 0} , which is based on the theory of compact operators and the Fredholm alternative. This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense. The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely-often differentiable, then so is u. For L as in Example 2: Interior regularity: If m is a natural number, a i j , b j , c ∈ C m + 1 ( U ) , f ∈ H m ( U ) {\displaystyle a^{ij},b^{j},c\in C^{m+1}(U),f\in H^{m}(U)} (2), and u ∈ H 0 1 ( U ) {\displaystyle u\in H_{0}^{1}(U)} is a weak solution to (1), then for any open set V in U with compact closure, ‖ u ‖ H m + 2 ( V ) ≤ C ( ‖ f ‖ H m ( U ) + ‖ u ‖ L 2 ( U ) ) {\displaystyle \|u\|_{H^{m+2}(V)}\leq C(\|f\|_{H^{m}(U)}+\|u\|_{L^{2}(U)})} (3), where C depends on U, V, L and m; in particular, u ∈ H l o c m + 2 ( U ) {\displaystyle u\in H_{loc}^{m+2}(U)} . This also holds if m is infinity, by the Sobolev embedding theorem. Boundary regularity: (2) together with the assumption that ∂ U {\displaystyle \partial U} is C m + 2 {\displaystyle C^{m+2}} implies that (3) still holds after replacing V with U, i.e. u ∈ H m + 2 ( U ) {\displaystyle u\in H^{m+2}(U)} ; again this also holds if m is infinity. 
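To make the existence statement concrete, here is a minimal one-dimensional sketch (the interval (0, 1), homogeneous Dirichlet data, and the coefficients a, c and right-hand side f are all illustrative assumptions) of a divergence-form operator as in Example 2, discretized by standard finite differences. For uniformly positive a and c ≥ 0 the discrete system is symmetric positive definite, mirroring the unique weak solvability provided by the Lax–Milgram lemma.

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

a = 2.0 + np.sin(2 * np.pi * x)   # uniformly positive coefficient a(x)
c = 1.0                           # zeroth-order coefficient
f = np.ones(n - 1)                # right-hand side at the interior nodes

# Tridiagonal stiffness matrix for -(a u')' + c u, using midpoint values of a.
a_mid = 0.5 * (a[:-1] + a[1:])    # a at the n cell midpoints
A = np.zeros((n - 1, n - 1))
for i in range(n - 1):
    A[i, i] = (a_mid[i] + a_mid[i + 1]) / h**2 + c
    if i > 0:
        A[i, i - 1] = -a_mid[i] / h**2
    if i < n - 2:
        A[i, i + 1] = -a_mid[i + 1] / h**2

u = np.linalg.solve(A, f)         # A is SPD, so the discrete solution is unique
print(u.max())                    # maximum of the discrete solution
```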
Any differential operator exhibiting this regularity property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0. As an application, suppose a function f {\displaystyle f} satisfies the Cauchy–Riemann equations. Since the Cauchy–Riemann equations form an elliptic operator, it follows that f {\displaystyle f} is smooth. == Properties == For L as in Example 2 on U, an open domain with C1 boundary, there is a number γ>0 such that for each μ>γ, L + μ I : H 0 1 ( U ) → H 0 1 ( U ) {\displaystyle L+\mu I:H_{0}^{1}(U)\rightarrow H_{0}^{1}(U)} satisfies the assumptions of the Lax–Milgram lemma. Invertibility: For each μ>γ, L + μ I : L 2 ( U ) → L 2 ( U ) {\displaystyle L+\mu I:L^{2}(U)\rightarrow L^{2}(U)} admits a compact inverse. Eigenvalues and eigenvectors: If A is symmetric and bi, c are zero, then (1) the eigenvalues of L are real, positive, countable and unbounded, and (2) there is an orthonormal basis of L2(U) composed of eigenvectors of L. (See Spectral theorem.) Generates a semigroup on L2(U): −L generates a semigroup { S ( t ) ; t ≥ 0 } {\displaystyle \{S(t);t\geq 0\}} of bounded linear operators on L2(U) such that d d t S ( t ) u 0 = − L S ( t ) u 0 , ‖ S ( t ) ‖ ≤ e γ t {\displaystyle {\frac {d}{dt}}S(t)u_{0}=-LS(t)u_{0},\|S(t)\|\leq e^{\gamma t}} in the norm of L2(U), for every u 0 ∈ L 2 ( U ) {\displaystyle u_{0}\in L^{2}(U)} , by the Hille–Yosida theorem. == General definition == Let D {\displaystyle D} be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol σ ξ ( D ) {\displaystyle \sigma _{\xi }(D)} with respect to a one-form ξ {\displaystyle \xi } . (Basically, what we are doing is replacing the highest-order covariant derivatives ∇ {\displaystyle \nabla } by covector fields ξ {\displaystyle \xi } .) We say D {\displaystyle D} is weakly elliptic if σ ξ ( D ) {\displaystyle \sigma _{\xi }(D)} is a linear isomorphism for every non-zero ξ {\displaystyle \xi } . We say D {\displaystyle D} is (uniformly) strongly elliptic if for some constant c > 0 {\displaystyle c>0} , ( [ σ ξ ( D ) ] ( v ) , v ) ≥ c ‖ v ‖ 2 {\displaystyle \left([\sigma _{\xi }(D)](v),v\right)\geq c\|v\|^{2}} for all ‖ ξ ‖ = 1 {\displaystyle \|\xi \|=1} and all v {\displaystyle v} . The definition of ellipticity in the previous part of the article is strong ellipticity. Here ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} is an inner product. Notice that the ξ {\displaystyle \xi } are covector fields or one-forms, but the v {\displaystyle v} are elements of the vector bundle upon which D {\displaystyle D} acts. The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). It is not hard to see that D {\displaystyle D} needs to be of even order for strong ellipticity to even be an option. Otherwise, just consider plugging in both ξ {\displaystyle \xi } and its negative. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator, can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic. Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. 
On the other hand, strong ellipticity is needed for the maximum principle, and to guarantee that the eigenvalues are discrete with infinity as their only limit point. == See also == Sobolev space Hypoelliptic operator Elliptic partial differential equation Hyperbolic partial differential equation Parabolic partial differential equation Hopf maximum principle Elliptic complex Ultrahyperbolic wave equation Semi-elliptic operator Weyl's lemma == Notes == == References == Evans, L. C. (2010) [1998], Partial differential equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3, MR 2597943 Review: Rauch, J. (2000). "Partial differential equations, by L. C. Evans" (PDF). Bulletin of the American Mathematical Society. 37 (3): 363–367. doi:10.1090/s0273-0979-00-00868-5. Gilbarg, D.; Trudinger, N. S. (1983) [1977], Elliptic partial differential equations of second order, Grundlehren der Mathematischen Wissenschaften, vol. 224 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-13025-3, MR 0737190 Shubin, M. A. (2001) [1994], "Elliptic operator", Encyclopedia of Mathematics, EMS Press == External links == Linear Elliptic Equations at EqWorld: The World of Mathematical Equations. Nonlinear Elliptic Equations at EqWorld: The World of Mathematical Equations.
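As a small illustration of the general definition, the following sketch (the coefficient field and sample ranges are hypothetical) samples the principal symbol of a second-order operator Lu = −Σ a_ij(x) ∂_i ∂_j u over unit covectors to estimate an ellipticity constant c empirically: strong (uniform) ellipticity holds precisely when the sampled values are bounded below by some c > 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def coeff(x):
    # A uniformly positive definite coefficient field a_ij(x) on R^2.
    t = np.sin(x[0]) * np.cos(x[1])
    return np.array([[2.0 + t, 0.5],
                     [0.5, 2.0 - t]])

worst = np.inf
for _ in range(10000):
    x = rng.uniform(-3.0, 3.0, size=2)
    xi = rng.normal(size=2)
    xi /= np.linalg.norm(xi)                 # restrict to unit covectors
    worst = min(worst, xi @ coeff(x) @ xi)   # principal symbol at (x, xi)

print(worst)   # stays above ~0.88 here, so the operator is strongly elliptic
```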
Wikipedia/Elliptic_differential_operator
In differential geometry, the Carathéodory conjecture is a mathematical conjecture attributed to Constantin Carathéodory by Hans Ludwig Hamburger in a session of the Berlin Mathematical Society in 1924. Carathéodory never committed the conjecture to writing, but did publish a paper on a related subject. John Edensor Littlewood mentions the conjecture and Hamburger's contribution as an example of a mathematical claim that is easy to state but difficult to prove. Dirk Struik describes the formal analogy of the conjecture with the four-vertex theorem for plane curves. Modern references to the conjecture are the problem list of Shing-Tung Yau and the books of Marcel Berger, as well as other books on the subject. The local real analytic version of the conjecture has had a troubled history with published proofs which contained gaps. The proof for surfaces of Hölder smoothness C 3 , α {\displaystyle C^{3,\alpha }} by Brendan Guilfoyle and Wilhelm Klingenberg, first announced in 2008, was published in three parts concluding in 2024. Their proof involves techniques spanning a number of areas of mathematics, including neutral Kähler geometry, parabolic PDEs, and Sard–Smale theory. == Statement of the conjecture == The conjecture claims that any convex, closed and sufficiently smooth surface in three dimensional Euclidean space needs to admit at least two umbilic points. In the sense of the conjecture, the spheroid, with only two umbilic points, and the sphere, all of whose points are umbilic, are examples of surfaces with the minimal and maximal numbers of umbilic points. For the conjecture to be well posed, or the umbilic points to be well-defined, the surface needs to be at least twice differentiable. The proof of Guilfoyle and Klingenberg requires that the surface have Hölder third-order derivatives, a reflection of their use of second-order parabolic methods in the 1-jet of the surface. == The case of real analytic surfaces == The invited address of Stefan Cohn-Vossen to the International Congress of Mathematicians of 1928 in Bologna was on the subject and in the 1929 edition of Wilhelm Blaschke's third volume on Differential Geometry he states: While this book goes into print, Mr. Cohn-Vossen has succeeded in proving that closed real-analytic surfaces do not have umbilic points of index > 2 (invited talk at the ICM in Bologna 1928). This proves the conjecture of Carathéodory for such surfaces, namely that they need to have at least two umbilics. Here Blaschke's index is twice the usual definition for an index of an umbilic point, and the global conjecture follows by the Poincaré–Hopf index theorem. No paper was submitted by Cohn-Vossen to the proceedings of the International Congress, while in later editions of Blaschke's book the above comments were removed. It is, therefore, reasonable to assume that this work was inconclusive. For analytic surfaces, an affirmative answer to this conjecture was given in 1940 by Hans Hamburger in a long paper published in three parts. The approach of Hamburger was also via a local index estimate for isolated umbilics, which he had shown to imply the conjecture in his earlier work. In 1943, a shorter proof was proposed by Gerrit Bol, but in 1959 Tilla Klotz found and corrected a gap in Bol's proof. Her proof, in turn, was announced to be incomplete in Hanspeter Scherbel's dissertation (no results of that dissertation related to the Carathéodory conjecture were published for decades). 
All the proofs mentioned above are based on Hamburger's reduction of the Carathéodory conjecture to the following conjecture: the index of every isolated umbilic point is never greater than one. Roughly speaking, the main difficulty lies in the resolution of singularities generated by umbilical points. All the above-mentioned authors resolve the singularities by induction on 'degree of degeneracy' of the umbilical point, but none of them was able to present the induction process clearly. In 2002, Vladimir Ivanov revisited the work of Hamburger on analytic surfaces with the following stated intent: "First, considering analytic surfaces, we assert with full responsibility that Carathéodory was right. Second, we know how this can be proved rigorously. Third, we intend to exhibit here a proof which, in our opinion, will convince every reader who is really ready to undertake a long and tiring journey with us." First he follows the path taken by Gerrit Bol and Tilla Klotz, but later he proposes his own approach to singularity resolution, in which a crucial role is played by complex analysis (more precisely, by techniques involving analytic implicit functions, the Weierstrass preparation theorem, Puiseux series, and circular root systems). == Application of the analytic index bound == Hamburger's umbilic index bound for analytic surfaces leads to restrictions on the position of the roots of certain types of holomorphic polynomials. In particular, a holomorphic polynomial is said to be self-inversive if the set of roots is invariant under reflection in the unit circle. It can be shown that for a polynomial of degree N with self-inversive second derivative, none of whose roots lie on the unit circle, the number of roots (counted with multiplicity) inside the unit circle is less than or equal to ⌊N/2⌋ + 1 (a numerical illustration is sketched at the end of this article). The proof takes any holomorphic polynomial with the stipulated properties and constructs a real analytic surface with an isolated umbilic point. The index is determined by the number of zeros of the polynomial that lie inside the unit circle, and then Hamburger's bound yields the stated result. == The general smooth case == In 2008, Brendan Guilfoyle and Wilhelm Klingenberg announced a proof of the global conjecture for surfaces of smoothness C 3 , α {\displaystyle C^{3,\alpha }} . The proof was published in three parts. Their method uses neutral Kähler geometry of the Klein quadric to define an associated Riemann–Hilbert boundary value problem, and then applies mean curvature flow and the Sard–Smale theorem on regular values of Fredholm operators to prove a contradiction for a surface with a single umbilic point. In particular, the boundary value problem seeks to find a holomorphic curve with boundary lying on the Lagrangian surface in the Klein quadric determined by the normal lines to the surface in Euclidean 3-space. Previously it was proven that the number of isolated umbilic points contained on the surface in R 3 {\displaystyle R^{3}} determines the Keller–Maslov class of the boundary curve and therefore, when the problem is Fredholm regular, determines the dimension of the space of holomorphic disks. All of the geometric quantities referred to are defined with respect to the canonical neutral Kähler structure, for which surfaces can be both holomorphic and Lagrangian. 
In addressing the global conjecture, the question is “what would be so special about a smooth closed convex surface in R 3 {\displaystyle R^{3}} with a single umbilic point?” This is answered by Guilfoyle and Klingenberg: the associated Riemann–Hilbert boundary value problem would be Fredholm regular. The existence of an isometry group of sufficient size to fix a point has been proven to be enough to ensure this, thus identifying the size of the Euclidean isometry group of R 3 {\displaystyle R^{3}} as the underlying reason why the Carathéodory conjecture is true. This is reinforced by a more recent result in which smooth ambient metrics (without symmetries), different from but arbitrarily close to the Euclidean metric on R 3 {\displaystyle R^{3}} , are constructed that admit smooth convex surfaces violating both the local and the global conjectures. By Fredholm regularity, for a generic convex surface close to a putative counter-example of the global Carathéodory conjecture, the associated Riemann–Hilbert problem would have no solutions. The second step of the proof is to show that such solutions always exist, thus concluding the non-existence of a counter-example. This is done using co-dimension 2 mean curvature flow with boundary. The required interior estimates for higher-codimension mean curvature flow in an indefinite geometry were established in earlier work. The final part is the establishment of sufficient boundary control under mean curvature flow to ensure weak convergence. This is carried out while also proving a related conjecture of Toponogov regarding umbilic points on complete planes, for which the same methods work. In 2012 a proof was announced of a weaker version of the local index conjecture for smooth surfaces, namely that an isolated umbilic must have index less than or equal to 3/2. The proof follows that of the global conjecture, but also uses more topological methods, in particular, replacing hyperbolic umbilic points by totally real cross-caps in the boundary of the associated Riemann–Hilbert problem. It leaves open the possibility of a smooth (by Hamburger's result, necessarily non-real-analytic) convex surface with an isolated umbilic of index 3/2. In 2012, Mohammad Ghomi and Ralph Howard showed, using a Möbius transformation, that the global conjecture for surfaces of smoothness C 2 {\displaystyle C^{2}} can be reformulated in terms of the number of umbilic points on graphs subject to certain asymptotics of the gradient. == See also == Differential geometry of surfaces Second fundamental form Principal curvature Umbilical point == References ==
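As a numerical footnote to the root-count bound mentioned above, here is a minimal Python sketch (the polynomial and the integration constants are illustrative assumptions). The polynomial q(z) = z² + 3z + 1 is self-inversive — its two real roots are exchanged by reflection in the unit circle — and integrating it twice gives a polynomial P with P″ = q, whose roots inside the unit circle can be counted and compared with ⌊N/2⌋ + 1.

```python
import numpy as np

# P(z) = z^4/12 + z^3/2 + z^2/2 + a z + b, so that P'' = z^2 + 3z + 1.
a, b = 0.1, 0.2
P = [1.0 / 12, 1.0 / 2, 1.0 / 2, a, b]   # coefficients, highest degree first

roots = np.roots(P)
assert np.all(np.abs(np.abs(roots) - 1.0) > 1e-8)   # none on the unit circle

inside = int(np.sum(np.abs(roots) < 1.0))
N = len(P) - 1
print(inside, "<=", N // 2 + 1)   # the bound from the umbilic index estimate
```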
Wikipedia/Carathéodory_conjecture
In geometry, the Beltrami–Klein model, also called the projective model, Klein disk model, and the Cayley–Klein model, is a model of hyperbolic geometry in which points are represented by the points in the interior of the unit disk (or n-dimensional unit ball) and lines are represented by the chords, straight line segments with ideal endpoints on the boundary sphere. It is analogous to the gnomonic projection of spherical geometry, in that geodesics (great circles in spherical geometry) are mapped to straight lines. This model is not conformal: angles are not faithfully represented, and circles become ellipses, increasingly flattened near the edge. This is in contrast to the Poincaré disk model, which is conformal. However, lines in the Poincaré model are not represented by straight line segments, but by arcs that meet the boundary orthogonally. The Beltrami–Klein model is named after the Italian geometer Eugenio Beltrami and the German Felix Klein while "Cayley" in Cayley–Klein model refers to the English geometer Arthur Cayley. == History == This model made its first appearance for hyperbolic geometry in two memoirs of Eugenio Beltrami published in 1868, first for dimension n = 2 and then for general n, and these essays proved the equiconsistency of hyperbolic geometry with ordinary Euclidean geometry. The papers of Beltrami remained little noticed until recently and the model was named after Klein ("The Klein disk model"). In 1859 Arthur Cayley used the cross-ratio definition of angle due to Laguerre to show how Euclidean geometry could be defined using projective geometry. His definition of distance later became known as the Cayley metric. In 1869, the young (twenty-year-old) Felix Klein became acquainted with Cayley's work. He recalled that in 1870 he gave a talk on the work of Cayley at the seminar of Weierstrass and he wrote: "I finished with a question whether there might exist a connection between the ideas of Cayley and Lobachevsky. I was given the answer that these two systems were conceptually widely separated." Later, Felix Klein realized that Cayley's ideas give rise to a projective model of the non-Euclidean plane. As Klein puts it, "I allowed myself to be convinced by these objections and put aside this already mature idea." However, in 1871, he returned to this idea, formulated it mathematically, and published it. == Distance formula == The distance function for the Beltrami–Klein model is a Cayley–Klein metric. Given two distinct points p and q in the open unit ball, the unique straight line connecting them intersects the boundary at two ideal points, a and b, labeled so that the points are, in order, a, p, q, b; thus |aq| > |ap| and |pb| > |qb|. The hyperbolic distance between p and q is then: d ( p , q ) = 1 2 ln ⁡ | a q | | p b | | a p | | q b | {\displaystyle d(p,q)={\frac {1}{2}}\ln {\frac {\left|aq\right|\,\left|pb\right|}{\left|ap\right|\,\left|qb\right|}}} The vertical bars indicate Euclidean distances between the points in the model; ln is the natural logarithm, and the factor of one half is needed to give the model the standard curvature of −1. When one of the points is the origin and the Euclidean distance between the points is r, the hyperbolic distance is: 1 2 ln ⁡ ( 1 + r 1 − r ) = artanh ⁡ r , {\displaystyle {\frac {1}{2}}\ln \left({\frac {1+r}{1-r}}\right)=\operatorname {artanh} r,} where artanh is the inverse of the hyperbolic tangent function. 
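The distance formula is easy to evaluate numerically. The sketch below (function names are illustrative) finds the ideal endpoints of the chord through p and q by solving a quadratic, evaluates the half-logarithm of the cross-ratio, and checks the result against artanh r when p is the origin.

```python
import numpy as np

def klein_distance(p, q):
    # Hyperbolic distance between interior points p, q of the Klein disk.
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    # Ideal endpoints: solve |p + t d|^2 = 1 for t; t=0 is p and t=1 is q.
    A, B, C = d @ d, 2 * (p @ d), p @ p - 1.0
    disc = np.sqrt(B * B - 4 * A * C)     # positive for interior points
    ta, tb = (-B - disc) / (2 * A), (-B + disc) / (2 * A)
    a_pt, b_pt = p + ta * d, p + tb * d   # ordered a, p, q, b along the chord
    aq = np.linalg.norm(q - a_pt); ap = np.linalg.norm(p - a_pt)
    pb = np.linalg.norm(b_pt - p); qb = np.linalg.norm(b_pt - q)
    return 0.5 * np.log((aq * pb) / (ap * qb))

r = 0.6
print(klein_distance([0.0, 0.0], [r, 0.0]))   # half-log cross-ratio
print(np.arctanh(r))                          # artanh(r): both ~ 0.6931
```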
== The Klein disk model == In two dimensions the Beltrami–Klein model is called the Klein disk model. It is a disk and the inside of the disk is a model of the entire hyperbolic plane. Lines in this model are represented by chords of the boundary circle (also called the absolute). The points on the boundary circle are called ideal points; although well defined, they do not belong to the hyperbolic plane. Points outside the disk do not belong to the hyperbolic plane either, and they are sometimes called ultra ideal points. The model is not conformal, meaning that angles are distorted, and circles on the hyperbolic plane are in general not circular in the model. Only circles that have their centre at the centre of the boundary circle are not distorted. All other circles are distorted, as are horocycles and hypercycles. === Properties === Chords that meet on the boundary circle are limiting parallel lines. Two chords are perpendicular if, when extended outside the disk, each goes through the pole of the other. (The pole of a chord is an ultra ideal point: the point outside the disk where the tangents to the disk at the endpoints of the chord meet.) Chords that go through the centre of the disk have their pole at infinity, orthogonal to the direction of the chord (this implies that right angles on diameters are not distorted). === Compass and straightedge constructions === Here is how one can use compass and straightedge constructions in the model to achieve the effect of the basic constructions in the hyperbolic plane. The pole of a line. While the pole is not a point in the hyperbolic plane (it is an ultra ideal point) most constructions will use the pole of a line in one or more ways. For a line: construct the tangents to the boundary circle through the ideal (end) points of the line. The point where these tangents intersect is the pole. For diameters of the disk: the pole is at infinity perpendicular to the diameter. To construct a perpendicular to a given line through a given point, draw the ray from the pole of the line through the given point. The part of the ray that is inside the disk is the perpendicular. When the line is a diameter of the disk, the perpendicular is the chord that is (Euclidean) perpendicular to that diameter and goes through the given point. To find the midpoint of a given segment A B {\displaystyle AB} : Draw the lines through A and B that are perpendicular to A B {\displaystyle AB} (see above). Draw the lines connecting the ideal points of these perpendiculars; two of these lines will intersect the segment A B {\displaystyle AB} , and they do so at the same point. This point is the (hyperbolic) midpoint of A B {\displaystyle AB} . To bisect a given angle ∠ B A C {\displaystyle \angle BAC} : Draw the rays AB and AC. Draw tangents to the circle where the rays intersect the boundary circle. Draw a line from A to the point where the tangents intersect. The part of this line between A and the boundary circle is the bisector. The common perpendicular of two lines is the chord that, when extended, passes through the poles of both chords. When one of the chords is a diameter of the boundary circle, the common perpendicular is the chord that is perpendicular to the diameter and that, when extended, passes through the pole of the other chord. To reflect a point P in a line l: From a point R on the line l draw the ray through P. Let X be the ideal point where the ray intersects the absolute. 
Draw the ray from the pole of line l through X, and let Y be the second ideal point where this ray meets the absolute. Draw the segment RY. The reflection of point P is the point where the ray from the pole of line l through P intersects RY. === Circles, hypercycles and horocycles === While lines in the hyperbolic plane are straightforward to project into the Klein disk model, circles, hypercycles and horocycles are not. Circles in the model that are not concentric with the model become ellipses, increasing in eccentricity near the edge. Angles, hypercycles, and horocycles in the Klein disk model are also deformed. For constructions in the hyperbolic plane that contain circles, hypercycles, horocycles or non-right angles it is perhaps more convenient to use the Poincaré disk model or the Poincaré half-plane model. === Relation to the Poincaré disk model === Both the Poincaré disk model and the Klein disk model are models of the hyperbolic plane. An advantage of the Poincaré disk model is that it is conformal (circles and angles are not distorted); a disadvantage is that straight lines project to circular arcs orthogonal to the boundary circle of the disk. The two models are related through a projection on or from the hemisphere model. The Klein model is an orthographic projection to the hemisphere model, while the Poincaré disk model is a stereographic projection. Straight lines always intersect the circular boundary of the two models in the same place, regardless of which model is used. Also, the pole of the chord is the centre of the circle that contains the arc. If P is a point a distance s {\displaystyle s} from the centre of the unit circle in the Beltrami–Klein model, then the corresponding point on the Poincaré disk model is at a distance u on the same radius: u = s 1 + 1 − s 2 = ( 1 − 1 − s 2 ) s . {\displaystyle u={\frac {s}{1+{\sqrt {1-s^{2}}}}}={\frac {\left(1-{\sqrt {1-s^{2}}}\right)}{s}}.} Conversely, if P is a point at a distance u {\displaystyle u} from the centre of the unit circle in the Poincaré disk model, then the corresponding point of the Beltrami–Klein model is at a distance s on the same radius: s = 2 u 1 + u 2 . {\displaystyle s={\frac {2u}{1+u^{2}}}.} (These conversions are illustrated numerically in the sketch at the end of this article.) === Relation of the disk model to the hyperboloid model and the gnomonic projection of the sphere === The gnomonic projection of the sphere projects from the sphere's center onto a tangent plane. Every great circle on the sphere is projected to a straight line, but it is not conformal. Angles are not faithfully represented, and circles become ellipses, increasingly stretched as they get further from the tangent point. Similarly, the Klein disk is a gnomonic projection of the hyperboloid model, with the center of the hyperboloid as the center of projection and the projection plane tangent to the hyperboloid. == Distance and metric tensor == Given two distinct points U and V in the open unit ball of the model in Euclidean space, the unique straight line connecting them intersects the unit sphere at two ideal points A and B, labeled so that the points are, in order along the line, A, U, V, B. Taking the centre of the unit ball of the model as the origin, and assigning position vectors u, v, a, b respectively to the points U, V, A, B, we have that ‖a − v‖ > ‖a − u‖ and ‖u − b‖ > ‖v − b‖, where ‖ · ‖ denotes the Euclidean norm. 
Then the distance between U and V in the modelled hyperbolic space is expressed as d ( u , v ) = 1 2 ln ⁡ ‖ v − a ‖ ‖ b − u ‖ ‖ u − a ‖ ‖ b − v ‖ , {\displaystyle d(\mathbf {u} ,\mathbf {v} )={\frac {1}{2}}\ln {\frac {\left\|\mathbf {v} -\mathbf {a} \right\|\,\left\|\mathbf {b} -\mathbf {u} \right\|}{\left\|\mathbf {u} -\mathbf {a} \right\|\,\left\|\mathbf {b} -\mathbf {v} \right\|}},} where the factor of one half is needed to make the curvature −1. The associated metric tensor is given by d s 2 = g ( x , d x ) = ‖ d x ‖ 2 1 − ‖ x ‖ 2 + ( x ⋅ d x ) 2 ( 1 − ‖ x ‖ 2 ) 2 = ( 1 − ‖ x ‖ 2 ) ‖ d x ‖ 2 + ( x ⋅ d x ) 2 ( 1 − ‖ x ‖ 2 ) 2 {\displaystyle ds^{2}=g(\mathbf {x} ,d\mathbf {x} )={\frac {\left\|d\mathbf {x} \right\|^{2}}{1-\left\|\mathbf {x} \right\|^{2}}}+{\frac {(\mathbf {x} \cdot d\mathbf {x} )^{2}}{{\bigl (}1-\left\|\mathbf {x} \right\|^{2}{\bigr )}^{2}}}={\frac {(1-\left\|\mathbf {x} \right\|^{2})\left\|d\mathbf {x} \right\|^{2}+(\mathbf {x} \cdot d\mathbf {x} )^{2}}{{\bigl (}1-\left\|\mathbf {x} \right\|^{2}{\bigr )}^{2}}}} == Relation to the hyperboloid model == The hyperboloid model is a model of hyperbolic geometry within (n + 1)-dimensional Minkowski space. The Minkowski inner product is given by x ⋅ y = x 0 y 0 − x 1 y 1 − ⋯ − x n y n {\displaystyle \mathbf {x} \cdot \mathbf {y} =x_{0}y_{0}-x_{1}y_{1}-\cdots -x_{n}y_{n}\,} and the norm by ‖ x ‖ = x ⋅ x {\displaystyle \left\|\mathbf {x} \right\|={\sqrt {\mathbf {x} \cdot \mathbf {x} }}} . The hyperbolic plane is embedded in this space as the vectors x with ‖x‖ = 1 and x0 (the "timelike component") positive. The intrinsic distance (in the embedding) between points u and v is then given by d ( u , v ) = arcosh ⁡ ( u ⋅ v ) . {\displaystyle d(\mathbf {u} ,\mathbf {v} )=\operatorname {arcosh} (\mathbf {u} \cdot \mathbf {v} ).} This may also be written in the homogeneous form d ( u , v ) = arcosh ⁡ ( u ‖ u ‖ ⋅ v ‖ v ‖ ) , {\displaystyle d(\mathbf {u} ,\mathbf {v} )=\operatorname {arcosh} \left({\frac {\mathbf {u} }{\left\|\mathbf {u} \right\|}}\cdot {\frac {\mathbf {v} }{\left\|\mathbf {v} \right\|}}\right),} which allows the vectors to be rescaled for convenience. The Beltrami–Klein model is obtained from the hyperboloid model by rescaling all vectors so that the timelike component is 1, that is, by projecting the hyperboloid embedding through the origin onto the plane x0 = 1. The distance function, in its homogeneous form, is unchanged. Since the intrinsic lines (geodesics) of the hyperboloid model are the intersection of the embedding with planes through the Minkowski origin, the intrinsic lines of the Beltrami–Klein model are the chords of the sphere. == Relation to the Poincaré ball model == Both the Poincaré ball model and the Beltrami–Klein model are models of the n-dimensional hyperbolic space in the n-dimensional unit ball in Rn. If u {\displaystyle u} is a vector of norm less than one representing a point of the Poincaré disk model, then the corresponding point of the Beltrami–Klein model is given by s = 2 u 1 + u ⋅ u . {\displaystyle s={\frac {2u}{1+u\cdot u}}.} Conversely, from a vector s {\displaystyle s} of norm less than one representing a point of the Beltrami–Klein model, the corresponding point of the Poincaré disk model is given by u = s 1 + 1 − s ⋅ s = ( 1 − 1 − s ⋅ s ) s s ⋅ s . 
{\displaystyle u={\frac {s}{1+{\sqrt {1-s\cdot s}}}}={\frac {\left(1-{\sqrt {1-s\cdot s}}\right)s}{s\cdot s}}.} Given two points on the boundary of the unit disk, which are called ideal points, the straight line connecting them in the Beltrami–Klein model is the chord between them, while in the corresponding Poincaré model the line is a circular arc on the two-dimensional subspace generated by the two boundary point vectors, meeting the boundary of the ball at right angles. The two models are related through a projection from the center of the disk; a ray from the center passing through a point of one model line passes through the corresponding point of the line in the other model. == See also == Poincaré half-plane model Poincaré disk model Poincaré metric Inversive geometry == Notes == == References == Luis Santaló (1961), Geometrias no Euclidianas, EUDEBA. Stahl, Saul (2007), A Gateway to Modern Geometry: The Poincare Half-Plane (2nd ed.), Jones & Bartlett Learning, ISBN 978-0-7637-5381-8 Nielsen, Frank; Nock, Richard (2009), "Hyperbolic Voronoi diagrams made easy", 2010 International Conference on Computational Science and Its Applications, pp. 74–80, arXiv:0903.3287, doi:10.1109/ICCSA.2010.37, ISBN 978-1-4244-6461-6, S2CID 14129082
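Here is a minimal sketch of the Klein–Poincaré point conversions stated above, with a round-trip check on a sample point (the point and function names are illustrative):

```python
import numpy as np

def poincare_to_klein(u):
    u = np.asarray(u, float)
    return 2 * u / (1 + u @ u)            # s = 2u / (1 + u.u)

def klein_to_poincare(s):
    s = np.asarray(s, float)
    return s / (1 + np.sqrt(1 - s @ s))   # u = s / (1 + sqrt(1 - s.s))

u = np.array([0.3, -0.5])                 # a point of the Poincaré disk
s = poincare_to_klein(u)
print(s)                                  # the corresponding Klein point
print(np.max(np.abs(klein_to_poincare(s) - u)))   # round trip: ~ 0
```

The round trip returns the original point exactly (up to floating-point error), since the two maps are algebraic inverses on the open unit ball.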
Wikipedia/Beltrami-Klein_model
The Weingarten equations give the expansion of the derivative of the unit normal vector to a surface in terms of the first derivatives of the position vector of a point on the surface. These formulas were established in 1861 by the German mathematician Julius Weingarten. == Statement in classical differential geometry == Let S be a surface in three-dimensional Euclidean space that is parametrized by the position vector r(u, v). Let P = P(u, v) be a point on the surface. Then r u = ∂ r ∂ u , r v = ∂ r ∂ v {\displaystyle \mathbf {r} _{u}={\frac {\partial \mathbf {r} }{\partial u}},\quad \mathbf {r} _{v}={\frac {\partial \mathbf {r} }{\partial v}}} are two tangent vectors at point P. Let n(u, v) be the unit normal vector and let (E, F, G) and (L, M, N) be the coefficients of the first and second fundamental forms of this surface, respectively. The Weingarten equations give the first derivatives of the unit normal vector n at the point P in terms of the tangent vectors ru and rv: n u = F M − G L E G − F 2 r u + F L − E M E G − F 2 r v {\displaystyle \mathbf {n} _{u}={\frac {FM-GL}{EG-F^{2}}}\mathbf {r} _{u}+{\frac {FL-EM}{EG-F^{2}}}\mathbf {r} _{v}} n v = F N − G M E G − F 2 r u + F M − E N E G − F 2 r v {\displaystyle \mathbf {n} _{v}={\frac {FN-GM}{EG-F^{2}}}\mathbf {r} _{u}+{\frac {FM-EN}{EG-F^{2}}}\mathbf {r} _{v}} This can be expressed compactly in index notation as ∂ a n = K a b r b {\displaystyle \partial _{a}\mathbf {n} =K_{a}^{~b}\mathbf {r} _{b}} , where Kab are the components of the surface's second fundamental form (shape tensor). == Notes == == References == Weisstein, Eric W. "Weingarten Equations". MathWorld. Springer Encyclopedia of Mathematics, Weingarten derivational formulas Struik, Dirk J. (1988), Lectures on Classical Differential Geometry, Dover Publications, p. 108, ISBN 0-486-65609-8 Erwin Kreyszig, Differential Geometry, Dover Publications, 1991, ISBN 0-486-66721-9, section 45.
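The Weingarten equations can be checked numerically for any concrete parametrization. The following sketch (the torus, the evaluation point, and the step size are illustrative assumptions) computes E, F, G and L, M, N by central finite differences on a torus and compares the Weingarten expression for n_u with a direct derivative of the unit normal.

```python
import numpy as np

R, rho = 2.0, 0.5     # torus radii (illustrative choices)
h = 1e-4              # finite-difference step

def r(u, v):
    return np.array([(R + rho * np.cos(u)) * np.cos(v),
                     (R + rho * np.cos(u)) * np.sin(v),
                     rho * np.sin(u)])

def du(f, u, v):
    return (f(u + h, v) - f(u - h, v)) / (2 * h)

def dv(f, u, v):
    return (f(u, v + h) - f(u, v - h)) / (2 * h)

def n(u, v):
    nn = np.cross(du(r, u, v), dv(r, u, v))   # r_u x r_v
    return nn / np.linalg.norm(nn)            # unit normal

u0, v0 = 0.7, 1.3
ru, rv = du(r, u0, v0), dv(r, u0, v0)
n0 = n(u0, v0)

E, F, G = ru @ ru, ru @ rv, rv @ rv
L = du(lambda s, t: du(r, s, t), u0, v0) @ n0   # r_uu . n
M = du(lambda s, t: dv(r, s, t), u0, v0) @ n0   # r_uv . n
N = dv(lambda s, t: dv(r, s, t), u0, v0) @ n0   # r_vv . n

W = E * G - F * F
nu_formula = ((F * M - G * L) / W) * ru + ((F * L - E * M) / W) * rv
nu_direct = du(n, u0, v0)
print(np.max(np.abs(nu_formula - nu_direct)))   # ~ 0 up to truncation error
```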
Wikipedia/Weingarten_equations
In geometry, the Beltrami–Klein model, also called the projective model, Klein disk model, and the Cayley–Klein model, is a model of hyperbolic geometry in which points are represented by the points in the interior of the unit disk (or n-dimensional unit ball) and lines are represented by the chords, straight line segments with ideal endpoints on the boundary sphere. It is analogous to the gnomonic projection of spherical geometry, in that geodesics (great circles in spherical geometry) are mapped to straight lines. This model is not conformal: angles are not faithfully represented, and circles become ellipses, increasingly flattened near the edge. This is in contrast to the Poincaré disk model, which is conformal. However, lines in the Poincaré model are not represented by straight line segments, but by arcs that meet the boundary orthogonally. The Beltrami–Klein model is named after the Italian geometer Eugenio Beltrami and the German Felix Klein while "Cayley" in Cayley–Klein model refers to the English geometer Arthur Cayley. == History == This model made its first appearance for hyperbolic geometry in two memoirs of Eugenio Beltrami published in 1868, first for dimension n = 2 and then for general n, and these essays proved the equiconsistency of hyperbolic geometry with ordinary Euclidean geometry. The papers of Beltrami remained little noticed until recently and the model was named after Klein ("The Klein disk model"). In 1859 Arthur Cayley used the cross-ratio definition of angle due to Laguerre to show how Euclidean geometry could be defined using projective geometry. His definition of distance later became known as the Cayley metric. In 1869, the young (twenty-year-old) Felix Klein became acquainted with Cayley's work. He recalled that in 1870 he gave a talk on the work of Cayley at the seminar of Weierstrass and he wrote: "I finished with a question whether there might exist a connection between the ideas of Cayley and Lobachevsky. I was given the answer that these two systems were conceptually widely separated." Later, Felix Klein realized that Cayley's ideas give rise to a projective model of the non-Euclidean plane. As Klein puts it, "I allowed myself to be convinced by these objections and put aside this already mature idea." However, in 1871, he returned to this idea, formulated it mathematically, and published it. == Distance formula == The distance function for the Beltrami–Klein model is a Cayley–Klein metric. Given two distinct points p and q in the open unit ball, the unique straight line connecting them intersects the boundary at two ideal points, a and b, label them so that the points are, in order, a, p, q, b, so that |aq| > |ap| and |pb| > |qb|. The hyperbolic distance between p and q is then: d ( p , q ) = 1 2 ln ⁡ | a q | | p b | | a p | | q b | {\displaystyle d(p,q)={\frac {1}{2}}\ln {\frac {\left|aq\right|\,\left|pb\right|}{\left|ap\right|\,\left|qb\right|}}} The vertical bars indicate Euclidean distances between the points in the model, where ln is the natural logarithm and the factor of one half is needed to give the model the standard curvature of −1. When one of the points is the origin and Euclidean distance between the points is r then the hyperbolic distance is: 1 2 ln ⁡ ( 1 + r 1 − r ) = artanh ⁡ r , {\displaystyle {\frac {1}{2}}\ln \left({\frac {1+r}{1-r}}\right)=\operatorname {artanh} r,} where artanh is the inverse hyperbolic function of the hyperbolic tangent. 
== The Klein disk model == In two dimensions the Beltrami–Klein model is called the Klein disk model. It is a disk and the inside of the disk is a model of the entire hyperbolic plane. Lines in this model are represented by chords of the boundary circle (also called the absolute). The points on the boundary circle are called ideal points; although well defined, they do not belong to the hyperbolic plane. Points outside the disk do not belong to the hyperbolic plane either, and they are sometimes called ultra ideal points. The model is not conformal, meaning that angles are distorted, and circles on the hyperbolic plane are in general not circular in the model. Only circles that have their centre at the centre of the boundary circle are not distorted. All other circles are distorted, as are horocycles and hypercycles. === Properties === Chords that meet on the boundary circle are limiting parallel lines. Two chords are perpendicular if, when extended outside the disk, each goes through the pole of the other. (The pole of a chord is an ultra ideal point: the point outside the disk where the tangents to the disk at the endpoints of the chord meet.) Chords that go through the centre of the disk have their pole at infinity, orthogonal to the direction of the chord (this implies that right angles on diameters are not distorted). === Compass and straightedge constructions === Here is how one can use compass and straightedge constructions in the model to achieve the effect of the basic constructions in the hyperbolic plane. The pole of a line. While the pole is not a point in the hyperbolic plane (it is an ultra ideal point) most constructions will use the pole of a line in one or more ways. For a line: construct the tangents to the boundary circle through the ideal (end) points of the line. the point where these tangents intersect is the pole. For diameters of the disk: the pole is at infinity perpendicular to the diameter. To construct a perpendicular to a given line through a given point, draw the ray from the pole of the line through the given point. The part of the ray that is inside the disk is the perpendicular. When the line is a diameter of the disk then the perpendicular is the chord that is (Euclidean) perpendicular to that diameter and going through the given point. To find the midpoint of given segment A B {\displaystyle AB} : Draw the lines through A and B that are perpendicular to A B {\displaystyle AB} . (see above) Draw the lines connecting the ideal points of these lines, two of these lines will intersect the segment A B {\displaystyle AB} and will do this at the same point. This point is the (hyperbolic) midpoint of A B {\displaystyle AB} . To bisect a given angle ∠ B A C {\displaystyle \angle BAC} : Draw the rays AB and AC. Draw tangents to the circle where the rays intersect the boundary circle. Draw a line from A to the point where the tangents intersect. The part of this line between A and the boundary circle is the bisector. The common perpendicular of two lines is the chord that when extended goes through both poles of the chords. When one of the chords is a diameter of the boundary circle then the common perpendicular is the chord that is perpendicular to the diameter and that when lengthened goes through the pole of the other chord. To reflect a point P in a line l: From a point R on the line l draw the ray through P. Let X be the ideal point where the ray intersects the absolute. 
Draw the ray from the pole of line l through X, let Y be another ideal point that intersects the ray. Draw the segment RY. The reflection of point P is the point where the ray from the pole of line l through P intersects RY. === Circles, hypercycles and horocycles === While lines in the hyperbolic plane are straightforward to project into the Klein disk model, circles, hypercycles and horocycles are not. Circles in the model that are not concentric with the model become ellipses, increasing in eccentricity near the edge. Angles, hypercycles, and horocycles in the Klein disk model are also deformed. For constructions in the hyperbolic plane that contain circles, hypercycles, horocycles or non right angles it is perhaps more convenient to use the Poincaré disk model or the Poincaré half-plane model. === Relation to the Poincaré disk model === Both the Poincaré disk model and the Klein disk model are models of the hyperbolic plane. An advantage of the Poincaré disk model is that it is conformal (circles and angles are not distorted); a disadvantage is that straight lines project to circular arcs orthogonal to the boundary circle of the disk. The two models are related through a projection on or from the hemisphere model. The Klein model is an orthographic projection to the hemisphere model, while the Poincaré disk model is a stereographic projection. Straight lines always intersect the circular boundary of the two models in the same place, regardless of which model is used. Also, the pole of the chord is the centre of the circle that contains the arc. If P is a point a distance s {\displaystyle s} from the centre of the unit circle in the Beltrami–Klein model, then the corresponding point on the Poincaré disk model a distance of u on the same radius: u = s 1 + 1 − s 2 = ( 1 − 1 − s 2 ) s . {\displaystyle u={\frac {s}{1+{\sqrt {1-s^{2}}}}}={\frac {\left(1-{\sqrt {1-s^{2}}}\right)}{s}}.} Conversely, If P is a point a distance u {\displaystyle u} from the centre of the unit circle in the Poincaré disk model, then the corresponding point of the Beltrami–Klein model is a distance of s on the same radius: s = 2 u 1 + u 2 . {\displaystyle s={\frac {2u}{1+u^{2}}}.} === Relation of the disk model to the hyperboloid model and the gnomonic projection of the sphere === The gnomonic projection of the sphere projects from the sphere's center onto a tangent plane. Every great circle on the sphere is projected to a straight line, but it is not conformal. Angles are not faithfully represented, and circles become ellipses, increasingly stretched as they get further from the tangent point. Similarly the Klein disk (K, in the picture) is a gnomonic projection of the hyperboloid model (Hy) with as center the center of the hyperboloid (O) and the projection plane tangent to the hyperboloid. == Distance and metric tensor == Given two distinct points U and V in the open unit ball of the model in Euclidean space, the unique straight line connecting them intersects the unit sphere at two ideal points A and B, labeled so that the points are, in order along the line, A, U, V, B. Taking the centre of the unit ball of the model as the origin, and assigning position vectors u, v, a, b respectively to the points U, V, A, B, we have that that ‖a − v‖ > ‖a − u‖ and ‖u − b‖ > ‖v − b‖, where ‖ · ‖ denotes the Euclidean norm. 
Then the distance between U and V in the modelled hyperbolic space is expressed as d ( u , v ) = 1 2 ln ⁡ ‖ v − a ‖ ‖ b − u ‖ ‖ u − a ‖ ‖ b − v ‖ , {\displaystyle d(\mathbf {u} ,\mathbf {v} )={\frac {1}{2}}\ln {\frac {\left\|\mathbf {v} -\mathbf {a} \right\|\,\left\|\mathbf {b} -\mathbf {u} \right\|}{\left\|\mathbf {u} -\mathbf {a} \right\|\,\left\|\mathbf {b} -\mathbf {v} \right\|}},} where the factor of one half is needed to make the curvature −1. The associated metric tensor is given by d s 2 = g ( x , d x ) = ‖ d x ‖ 2 1 − ‖ x ‖ 2 + ( x ⋅ d x ) 2 ( 1 − ‖ x ‖ 2 ) 2 = ( 1 − ‖ x ‖ 2 ) ‖ d x ‖ 2 + ( x ⋅ d x ) 2 ( 1 − ‖ x ‖ 2 ) 2 {\displaystyle ds^{2}=g(\mathbf {x} ,d\mathbf {x} )={\frac {\left\|d\mathbf {x} \right\|^{2}}{1-\left\|\mathbf {x} \right\|^{2}}}+{\frac {(\mathbf {x} \cdot d\mathbf {x} )^{2}}{{\bigl (}1-\left\|\mathbf {x} \right\|^{2}{\bigr )}^{2}}}={\frac {(1-\left\|\mathbf {x} \right\|^{2})\left\|d\mathbf {x} \right\|^{2}+(\mathbf {x} \cdot d\mathbf {x} )^{2}}{{\bigl (}1-\left\|\mathbf {x} \right\|^{2}{\bigr )}^{2}}}} == Relation to the hyperboloid model == The hyperboloid model is a model of hyperbolic geometry within (n + 1)-dimensional Minkowski space. The Minkowski inner product is given by x ⋅ y = x 0 y 0 − x 1 y 1 − ⋯ − x n y n {\displaystyle \mathbf {x} \cdot \mathbf {y} =x_{0}y_{0}-x_{1}y_{1}-\cdots -x_{n}y_{n}\,} and the norm by ‖ x ‖ = x ⋅ x {\displaystyle \left\|\mathbf {x} \right\|={\sqrt {\mathbf {x} \cdot \mathbf {x} }}} . The hyperbolic plane is embedded in this space as the vectors x with ‖x‖ = 1 and x0 (the "timelike component") positive. The intrinsic distance (in the embedding) between points u and v is then given by d ( u , v ) = arcosh ⁡ ( u ⋅ v ) . {\displaystyle d(\mathbf {u} ,\mathbf {v} )=\operatorname {arcosh} (\mathbf {u} \cdot \mathbf {v} ).} This may also be written in the homogeneous form d ( u , v ) = arcosh ⁡ ( u ‖ u ‖ ⋅ v ‖ v ‖ ) , {\displaystyle d(\mathbf {u} ,\mathbf {v} )=\operatorname {arcosh} \left({\frac {\mathbf {u} }{\left\|\mathbf {u} \right\|}}\cdot {\frac {\mathbf {v} }{\left\|\mathbf {v} \right\|}}\right),} which allows the vectors to be rescaled for convenience. The Beltrami–Klein model is obtained from the hyperboloid model by rescaling all vectors so that the timelike component is 1, that is, by projecting the hyperboloid embedding through the origin onto the plane x0 = 1. The distance function, in its homogeneous form, is unchanged. Since the intrinsic lines (geodesics) of the hyperboloid model are the intersection of the embedding with planes through the Minkowski origin, the intrinsic lines of the Beltrami–Klein model are the chords of the sphere. == Relation to the Poincaré ball model == Both the Poincaré ball model and the Beltrami–Klein model are models of the n-dimensional hyperbolic space in the n-dimensional unit ball in Rn. If u {\displaystyle u} is a vector of norm less than one representing a point of the Poincaré disk model, then the corresponding point of the Beltrami–Klein model is given by s = 2 u 1 + u ⋅ u . {\displaystyle s={\frac {2u}{1+u\cdot u}}.} Conversely, from a vector s {\displaystyle s} of norm less than one representing a point of the Beltrami–Klein model, the corresponding point of the Poincaré disk model is given by u = s 1 + 1 − s ⋅ s = ( 1 − 1 − s ⋅ s ) s s ⋅ s . 
Given two points on the boundary of the unit disk, which are called ideal points, the straight line connecting them in the Beltrami–Klein model is the chord between them, while in the corresponding Poincaré model the line is a circular arc lying in the two-dimensional subspace spanned by the two boundary-point vectors and meeting the boundary of the ball at right angles. The two models are related through a projection from the center of the disk; a ray from the center passing through a point of a line in one model passes through the corresponding point of the line in the other model. == See also == Poincaré half-plane model Poincaré disk model Poincaré metric Inversive geometry == Notes == == References == Luis Santaló (1961), Geometrias no Euclidianas, EUDEBA. Stahl, Saul (2007), A Gateway to Modern Geometry: The Poincare Half-Plane (2nd ed.), Jones & Bartlett Learning, ISBN 978-0-7637-5381-8 Nielsen, Frank; Nock, Richard (2009), "Hyperbolic Voronoi diagrams made easy", 2010 International Conference on Computational Science and Its Applications, pp. 74–80, arXiv:0903.3287, doi:10.1109/ICCSA.2010.37, ISBN 978-1-4244-6461-6, S2CID 14129082
Wikipedia/Klein_model
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form d d x [ p ( x ) d y d x ] + q ( x ) y = − λ w ( x ) y {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\left[p(x){\frac {\mathrm {d} y}{\mathrm {d} x}}\right]+q(x)y=-\lambda w(x)y} (1) for given functions p ( x ) {\displaystyle p(x)} , q ( x ) {\displaystyle q(x)} and w ( x ) {\displaystyle w(x)} , together with boundary conditions at the endpoints of the interval in x {\displaystyle x} . The goals of a given Sturm–Liouville problem are: To find the λ for which there exists a non-trivial solution to the problem. Such values λ are called the eigenvalues of the problem. For each eigenvalue λ, to find the corresponding solution y = y ( x ) {\displaystyle y=y(x)} of the problem. Such functions y {\displaystyle y} are called the eigenfunctions associated to each λ. Sturm–Liouville theory is the general study of Sturm–Liouville problems. In particular, for a "regular" Sturm–Liouville problem, it can be shown that there are infinitely many eigenvalues, each with a unique eigenfunction, and that these eigenfunctions form an orthonormal basis of a certain Hilbert space of functions. This theory is important in applied mathematics, where Sturm–Liouville problems occur very frequently, particularly when dealing with separable linear partial differential equations. For example, in quantum mechanics, the one-dimensional time-independent Schrödinger equation is a Sturm–Liouville problem. Sturm–Liouville theory is named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882), who developed the theory. == Main results == The main results in Sturm–Liouville theory apply to a Sturm–Liouville problem on a finite interval [ a , b ] {\displaystyle [a,b]} that is "regular". The problem is said to be regular if: the coefficient functions p , q , w {\displaystyle p,q,w} and the derivative p ′ {\displaystyle p'} are all continuous on [ a , b ] {\displaystyle [a,b]} ; p ( x ) > 0 {\displaystyle p(x)>0} and w ( x ) > 0 {\displaystyle w(x)>0} for all x ∈ [ a , b ] {\displaystyle x\in [a,b]} ; the problem has separated boundary conditions of the form α 1 y ( a ) + α 2 y ′ ( a ) = 0 {\displaystyle \alpha _{1}y(a)+\alpha _{2}y'(a)=0} (2) β 1 y ( b ) + β 2 y ′ ( b ) = 0 {\displaystyle \beta _{1}y(b)+\beta _{2}y'(b)=0} (3) with ( α 1 , α 2 ) ≠ ( 0 , 0 ) {\displaystyle (\alpha _{1},\alpha _{2})\neq (0,0)} and ( β 1 , β 2 ) ≠ ( 0 , 0 ) {\displaystyle (\beta _{1},\beta _{2})\neq (0,0)} . The function w = w ( x ) {\displaystyle w=w(x)} , sometimes denoted r = r ( x ) {\displaystyle r=r(x)} , is called the weight or density function. The goals of a Sturm–Liouville problem are: to find the eigenvalues: those λ for which there exists a non-trivial solution; for each eigenvalue λ, to find the corresponding eigenfunction y = y ( x ) {\displaystyle y=y(x)} . For a regular Sturm–Liouville problem, a function y = y ( x ) {\displaystyle y=y(x)} is called a solution if it is continuously differentiable and satisfies the equation (1) at every x ∈ ( a , b ) {\displaystyle x\in (a,b)} . In the case of more general p , q , w {\displaystyle p,q,w} , the solutions must be understood in a weak sense. The terms eigenvalue and eigenvector are used because the solutions correspond to the eigenvalues and eigenfunctions of a Hermitian differential operator in an appropriate Hilbert space of functions with inner product defined using the weight function. Sturm–Liouville theory studies the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in the function space.
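Before stating the main theorem, here is a small numerical illustration (ours, not from the article) of the w-weighted inner product under which the eigenfunctions are orthonormal. It assumes the simplest regular problem p = w = 1, q = 0 on [0, π] with Dirichlet conditions, whose eigenfunctions sin kx reappear in the Fourier example later:

```python
import numpy as np

# Grid on [0, pi]; the weight function is w(x) = 1 in this simple case
x = np.linspace(0, np.pi, 20001)
dx = x[1] - x[0]
w = np.ones_like(x)

def inner(f, g):
    """The w-weighted inner product <f, g> = integral of f*g*w dx."""
    return np.sum(f * g * w) * dx

# Normalized eigenfunctions sqrt(2/pi)*sin(kx) of -y'' = lambda*y, y(0)=y(pi)=0
y = [np.sqrt(2 / np.pi) * np.sin(k * x) for k in (1, 2, 3)]
gram = np.array([[inner(f, g) for g in y] for f in y])
print(np.round(gram, 4))   # approximately the 3x3 identity matrix
```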
The main result of Sturm–Liouville theory states that, for any regular Sturm–Liouville problem: The eigenvalues λ 1 , λ 2 , … {\displaystyle \lambda _{1},\lambda _{2},\dots } are real and can be numbered so that λ 1 < λ 2 < ⋯ < λ n < ⋯ → ∞ . {\displaystyle \lambda _{1}<\lambda _{2}<\cdots <\lambda _{n}<\cdots \to \infty .} Corresponding to each eigenvalue λ n {\displaystyle \lambda _{n}} is a unique (up to constant multiple) eigenfunction y n = y n ( x ) {\displaystyle y_{n}=y_{n}(x)} with exactly n − 1 {\displaystyle n-1} zeros in ( a , b ) {\displaystyle (a,b)} , called the nth fundamental solution. The normalized eigenfunctions y n {\displaystyle y_{n}} form an orthonormal basis under the w-weighted inner product in the Hilbert space L 2 ( [ a , b ] , w ( x ) d x ) {\displaystyle L^{2}{\big (}[a,b],w(x)\,\mathrm {d} x{\big )}} ; that is, ⟨ y n , y m ⟩ = ∫ a b y n ( x ) y m ( x ) w ( x ) d x = δ n m , {\displaystyle \langle y_{n},y_{m}\rangle =\int _{a}^{b}y_{n}(x)y_{m}(x)w(x)\,\mathrm {d} x=\delta _{nm},} where δ n m {\displaystyle \delta _{nm}} is the Kronecker delta. == Reduction to Sturm–Liouville form == The differential equation (1) is said to be in Sturm–Liouville form or self-adjoint form. All second-order linear homogeneous ordinary differential equations can be recast in the form on the left-hand side of (1) by multiplying both sides of the equation by an appropriate integrating factor (although the same is not true of second-order partial differential equations, or if y is a vector). Some examples are below. === Bessel equation === x 2 y ″ + x y ′ + ( x 2 − ν 2 ) y = 0 {\displaystyle x^{2}y''+xy'+\left(x^{2}-\nu ^{2}\right)y=0} which can be written in Sturm–Liouville form (first by dividing through by x, then by collapsing the first two terms on the left into one term) as ( x y ′ ) ′ + ( x − ν 2 x ) y = 0. {\displaystyle \left(xy'\right)'+\left(x-{\frac {\nu ^{2}}{x}}\right)y=0.} === Legendre equation === ( 1 − x 2 ) y ″ − 2 x y ′ + ν ( ν + 1 ) y = 0 {\displaystyle \left(1-x^{2}\right)y''-2xy'+\nu (\nu +1)y=0} which can be put into Sturm–Liouville form, since ⁠d/dx⁠(1 − x2) = −2x, so the Legendre equation is equivalent to ( ( 1 − x 2 ) y ′ ) ′ + ν ( ν + 1 ) y = 0 {\displaystyle \left(\left(1-x^{2}\right)y'\right)'+\nu (\nu +1)y=0} === Example using an integrating factor === x 3 y ″ − x y ′ + 2 y = 0 {\displaystyle x^{3}y''-xy'+2y=0} Divide throughout by x3: y ″ − 1 x 2 y ′ + 2 x 3 y = 0 {\displaystyle y''-{\frac {1}{x^{2}}}y'+{\frac {2}{x^{3}}}y=0} Multiplying throughout by an integrating factor of μ ( x ) = exp ⁡ ( ∫ − d x x 2 ) = e 1 / x , {\displaystyle \mu (x)=\exp \left(\int -{\frac {dx}{x^{2}}}\right)=e^{{1}/{x}},} gives e 1 / x y ″ − e 1 / x x 2 y ′ + 2 e 1 / x x 3 y = 0 {\displaystyle e^{{1}/{x}}y''-{\frac {e^{{1}/{x}}}{x^{2}}}y'+{\frac {2e^{{1}/{x}}}{x^{3}}}y=0} which can be put into Sturm–Liouville form since d d x e 1 / x = − e 1 / x x 2 {\displaystyle {\frac {d}{dx}}e^{{1}/{x}}=-{\frac {e^{{1}/{x}}}{x^{2}}}} so the differential equation is equivalent to ( e 1 / x y ′ ) ′ + 2 e 1 / x x 3 y = 0. {\displaystyle \left(e^{{1}/{x}}y'\right)'+{\frac {2e^{{1}/{x}}}{x^{3}}}y=0.}
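The integrating-factor computation in the example just worked is mechanical enough to delegate to a computer algebra system. Here is a short sketch using SymPy, an illustration under the coefficients of the example above (P = x³, Q = −x, R = 2; the variable names are ours):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
P, Q, R = x**3, -x, 2            # coefficients of P*y'' + Q*y' + R*y = 0

# Integrating factor mu = (1/P) * exp( integral of Q/P )
mu = sp.exp(sp.integrate(Q / P, x)) / P
p = sp.simplify(mu * P)          # -> exp(1/x), the p(x) found above
q = sp.simplify(mu * R)          # -> 2*exp(1/x)/x**3, the q(x) found above

# Self-adjointness check: the form (p*y')' + q*y requires p' = mu*Q
assert sp.simplify(sp.diff(p, x) - mu * Q) == 0
print(p, q)
```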
=== Integrating factor for general second-order homogeneous equation === P ( x ) y ″ + Q ( x ) y ′ + R ( x ) y = 0 {\displaystyle P(x)y''+Q(x)y'+R(x)y=0} Multiplying through by the integrating factor μ ( x ) = 1 P ( x ) exp ⁡ ( ∫ Q ( x ) P ( x ) d x ) , {\displaystyle \mu (x)={\frac {1}{P(x)}}\exp \left(\int {\frac {Q(x)}{P(x)}}\,dx\right),} and then collecting gives the Sturm–Liouville form: d d x ( μ ( x ) P ( x ) y ′ ) + μ ( x ) R ( x ) y = 0 , {\displaystyle {\frac {d}{dx}}\left(\mu (x)P(x)y'\right)+\mu (x)R(x)y=0,} or, explicitly: d d x ( exp ⁡ ( ∫ Q ( x ) P ( x ) d x ) y ′ ) + R ( x ) P ( x ) exp ⁡ ( ∫ Q ( x ) P ( x ) d x ) y = 0. {\displaystyle {\frac {d}{dx}}\left(\exp \left(\int {\frac {Q(x)}{P(x)}}\,dx\right)y'\right)+{\frac {R(x)}{P(x)}}\exp \left(\int {\frac {Q(x)}{P(x)}}\,dx\right)y=0.} == Sturm–Liouville equations as self-adjoint differential operators == The mapping defined by: L u = − 1 w ( x ) ( d d x [ p ( x ) d u d x ] + q ( x ) u ) {\displaystyle Lu=-{\frac {1}{w(x)}}\left({\frac {d}{dx}}\left[p(x)\,{\frac {du}{dx}}\right]+q(x)u\right)} can be viewed as a linear operator L mapping a function u to another function Lu, and it can be studied in the context of functional analysis. In fact, equation (1) can be written as L u = λ u . {\displaystyle Lu=\lambda u.} This is precisely the eigenvalue problem; that is, one seeks eigenvalues λ1, λ2, λ3,... and the corresponding eigenvectors u1, u2, u3,... of the L operator. The proper setting for this problem is the Hilbert space L 2 ( [ a , b ] , w ( x ) d x ) {\displaystyle L^{2}([a,b],w(x)\,dx)} with scalar product ⟨ f , g ⟩ = ∫ a b f ( x ) ¯ g ( x ) w ( x ) d x . {\displaystyle \langle f,g\rangle =\int _{a}^{b}{\overline {f(x)}}g(x)w(x)\,dx.} In this space L is defined on sufficiently smooth functions which satisfy the above regular boundary conditions. Moreover, L is a self-adjoint operator: ⟨ L f , g ⟩ = ⟨ f , L g ⟩ . {\displaystyle \langle Lf,g\rangle =\langle f,Lg\rangle .} This can be seen formally by using integration by parts twice, where the boundary terms vanish by virtue of the boundary conditions. It then follows that the eigenvalues of a Sturm–Liouville operator are real and that eigenfunctions of L corresponding to different eigenvalues are orthogonal. However, this operator is unbounded, and hence the existence of an orthonormal basis of eigenfunctions is not evident. To overcome this problem, one looks at the resolvent ( L − z ) − 1 , z ∈ R , {\displaystyle \left(L-z\right)^{-1},\qquad z\in \mathbb {R} ,} where z is not an eigenvalue. Then, computing the resolvent amounts to solving a nonhomogeneous equation, which can be done using the variation of parameters formula. This shows that the resolvent is an integral operator with a continuous symmetric kernel (the Green's function of the problem). As a consequence of the Arzelà–Ascoli theorem, this integral operator is compact, and the existence of a sequence of eigenvalues αn of the integral operator which converge to 0, together with eigenfunctions which form an orthonormal basis, follows from the spectral theorem for compact operators. Finally, note that ( L − z ) − 1 u = α u , L u = ( z + α − 1 ) u , {\displaystyle \left(L-z\right)^{-1}u=\alpha u,\qquad Lu=\left(z+\alpha ^{-1}\right)u,} are equivalent, so we may take λ = z + α − 1 {\displaystyle \lambda =z+\alpha ^{-1}} with the same eigenfunctions. If the interval is unbounded, or if the coefficients have singularities at the boundary points, one calls L singular.
In this case, the spectrum no longer consists of eigenvalues alone and can contain a continuous component. There is still an associated eigenfunction expansion (similar to Fourier series versus Fourier transform). This is important in quantum mechanics, since the one-dimensional time-independent Schrödinger equation is a special case of a Sturm–Liouville equation. == Application to inhomogeneous second-order boundary value problems == Consider a general inhomogeneous second-order linear differential equation P ( x ) y ″ + Q ( x ) y ′ + R ( x ) y = f ( x ) {\displaystyle P(x)y''+Q(x)y'+R(x)y=f(x)} for given functions P ( x ) , Q ( x ) , R ( x ) , f ( x ) {\displaystyle P(x),Q(x),R(x),f(x)} . As before, this can be reduced to the Sturm–Liouville form L y = f {\displaystyle Ly=f} : writing a general Sturm–Liouville operator as: L u = p w ( x ) u ″ + p ′ w ( x ) u ′ + q w ( x ) u , {\displaystyle Lu={\frac {p}{w(x)}}u''+{\frac {p'}{w(x)}}u'+{\frac {q}{w(x)}}u,} one solves the system: p = P w , p ′ = Q w , q = R w . {\displaystyle p=Pw,\quad p'=Qw,\quad q=Rw.} It suffices to solve the first two equations, which amounts to solving (Pw)′ = Qw, or w ′ = Q − P ′ P w := α w . {\displaystyle w'={\frac {Q-P'}{P}}w:=\alpha w.} A solution is: w = exp ⁡ ( ∫ α d x ) , p = P exp ⁡ ( ∫ α d x ) , q = R exp ⁡ ( ∫ α d x ) . {\displaystyle w=\exp \left(\int \alpha \,dx\right),\quad p=P\exp \left(\int \alpha \,dx\right),\quad q=R\exp \left(\int \alpha \,dx\right).} Given this transformation, one is left to solve: L y = f . {\displaystyle Ly=f.} In general, if initial conditions at some point are specified, for example y(a) = 0 and y′(a) = 0, a second-order differential equation can be solved using ordinary methods, and the Picard–Lindelöf theorem ensures that the differential equation has a unique solution in a neighbourhood of the point where the initial conditions have been specified. But if, in place of specifying initial values at a single point, it is desired to specify values at two different points (so-called boundary values), e.g. y(a) = 0 and y(b) = 1, the problem turns out to be much more difficult. Notice that by adding a suitable known differentiable function to y, whose values at a and b satisfy the desired boundary conditions, and substituting it into the proposed differential equation, it can be assumed without loss of generality that the boundary conditions are of the form y(a) = 0 and y(b) = 0. Here, Sturm–Liouville theory comes into play: indeed, a large class of functions f can be expanded in terms of a series of orthonormal eigenfunctions ui of the associated Liouville operator with corresponding eigenvalues λi: f ( x ) = ∑ i α i u i ( x ) , α i ∈ R . {\displaystyle f(x)=\sum _{i}\alpha _{i}u_{i}(x),\quad \alpha _{i}\in {\mathbb {R} }.} Then a solution to the proposed equation is evidently: y = ∑ i α i λ i u i . {\displaystyle y=\sum _{i}{\frac {\alpha _{i}}{\lambda _{i}}}u_{i}.} (4) This solution will be valid only over the open interval a < x < b, and may fail at the boundaries. === Example: Fourier series === Consider the Sturm–Liouville problem L u = − d 2 u d x 2 = λ u {\displaystyle Lu=-{\frac {d^{2}u}{dx^{2}}}=\lambda u} where the unknowns are λ and u(x). For boundary conditions, we take for example: u ( 0 ) = u ( π ) = 0. {\displaystyle u(0)=u(\pi )=0.} Observe that if k is any positive integer, then the function u k ( x ) = sin ⁡ k x {\displaystyle u_{k}(x)=\sin kx} is a solution with eigenvalue λ = k2. We know that the solutions of a Sturm–Liouville problem form an orthogonal basis, and we know from Fourier series that this set of sinusoidal functions is an orthogonal basis.
Since orthogonal bases are always maximal (by definition) we conclude that the Sturm–Liouville problem in this case has no other eigenfunctions. Given the preceding, let us now solve the inhomogeneous problem L y = x , x ∈ ( 0 , π ) {\displaystyle Ly=x,\qquad x\in (0,\pi )} with the same boundary conditions y ( 0 ) = y ( π ) = 0 {\displaystyle y(0)=y(\pi )=0} . In this case, we must expand f(x) = x as a Fourier series. The reader may check, either by integrating ∫ eikxx dx or by consulting a table of Fourier transforms, that we thus obtain L y = ∑ k = 1 ∞ − 2 ( − 1 ) k k sin ⁡ k x . {\displaystyle Ly=\sum _{k=1}^{\infty }-2{\frac {\left(-1\right)^{k}}{k}}\sin kx.} This particular Fourier series is troublesome because of its poor convergence properties. It is not clear a priori whether the series converges pointwise. By Parseval's theorem, since the Fourier coefficients are square-summable, the Fourier series converges in L2, which is all we need for this particular theory to function. We mention for the interested reader that in this case we may rely on a result which says that Fourier series converge at every point of differentiability, and at jump points (the function x, considered as a periodic function, has a jump at π) converges to the average of the left and right limits (see convergence of Fourier series). Therefore, by using formula (4), we obtain the solution: y = ∑ k = 1 ∞ − 2 ( − 1 ) k k 3 sin ⁡ k x = 1 6 ( π 2 x − x 3 ) . {\displaystyle y=\sum _{k=1}^{\infty }-2{\frac {(-1)^{k}}{k^{3}}}\sin kx={\tfrac {1}{6}}(\pi ^{2}x-x^{3}).} In this case, we could have found the answer using antidifferentiation, but this is no longer feasible in most cases when the differential equation involves many variables. == Application to partial differential equations == === Normal modes === Certain partial differential equations can be solved with the help of Sturm–Liouville theory. Suppose we are interested in the vibrational modes of a thin membrane, held in a rectangular frame, 0 ≤ x ≤ L1, 0 ≤ y ≤ L2. The equation of motion for the membrane's vertical displacement W(x,y,t) is given by the wave equation: ∂ 2 W ∂ x 2 + ∂ 2 W ∂ y 2 = 1 c 2 ∂ 2 W ∂ t 2 . {\displaystyle {\frac {\partial ^{2}W}{\partial x^{2}}}+{\frac {\partial ^{2}W}{\partial y^{2}}}={\frac {1}{c^{2}}}{\frac {\partial ^{2}W}{\partial t^{2}}}.} The method of separation of variables suggests looking first for solutions of the simple form W = X(x) × Y(y) × T(t). For such a function W the partial differential equation becomes ⁠X″/X⁠ + ⁠Y″/Y⁠ = ⁠1/c2⁠ ⁠T″/T⁠. Since the three terms of this equation are functions of x, y, t separately, they must be constants. For example, the first term gives X″ = λX for a constant λ. The boundary conditions ("held in a rectangular frame") are W = 0 when x = 0, L1 or y = 0, L2 and define the simplest possible Sturm–Liouville eigenvalue problems as in the example, yielding the "normal mode solutions" for W with harmonic time dependence, W m n ( x , y , t ) = A m n sin ⁡ ( m π x L 1 ) sin ⁡ ( n π y L 2 ) cos ⁡ ( ω m n t ) {\displaystyle W_{mn}(x,y,t)=A_{mn}\sin \left({\frac {m\pi x}{L_{1}}}\right)\sin \left({\frac {n\pi y}{L_{2}}}\right)\cos \left(\omega _{mn}t\right)} where m and n are non-zero integers, Amn are arbitrary constants, and ω m n 2 = c 2 ( m 2 π 2 L 1 2 + n 2 π 2 L 2 2 ) . {\displaystyle \omega _{mn}^{2}=c^{2}\left({\frac {m^{2}\pi ^{2}}{L_{1}^{2}}}+{\frac {n^{2}\pi ^{2}}{L_{2}^{2}}}\right).}
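As a quick illustration of the mode frequencies just derived, the following sketch tabulates ω_mn from the formula above; the values of c, L1 and L2 are our own, chosen only for the example:

```python
import numpy as np

# Normal-mode frequencies of a rectangular membrane:
# omega_mn = c * pi * sqrt((m/L1)^2 + (n/L2)^2), illustrative parameters
c, L1, L2 = 1.0, 1.0, 2.0
m, n = np.meshgrid(np.arange(1, 4), np.arange(1, 4), indexing='ij')
omega = c * np.pi * np.sqrt((m / L1)**2 + (n / L2)**2)
print(np.round(omega, 3))   # omega[m-1, n-1]; the lowest mode is omega_11
```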
The functions Wmn form a basis for the Hilbert space of (generalized) solutions of the wave equation; that is, an arbitrary solution W can be decomposed into a sum of these modes, which vibrate at their individual frequencies ωmn. This representation may require a convergent infinite sum. === Second-order linear equation === Consider a linear second-order differential equation in one spatial dimension and first-order in time of the form: f ( x ) ∂ 2 u ∂ x 2 + g ( x ) ∂ u ∂ x + h ( x ) u = ∂ u ∂ t + k ( t ) u , {\displaystyle f(x){\frac {\partial ^{2}u}{\partial x^{2}}}+g(x){\frac {\partial u}{\partial x}}+h(x)u={\frac {\partial u}{\partial t}}+k(t)u,} u ( a , t ) = u ( b , t ) = 0 , u ( x , 0 ) = s ( x ) . {\displaystyle u(a,t)=u(b,t)=0,\qquad u(x,0)=s(x).} Separating variables, we assume that u ( x , t ) = X ( x ) T ( t ) . {\displaystyle u(x,t)=X(x)T(t).} Then our above partial differential equation may be written as: L ^ X ( x ) X ( x ) = M ^ T ( t ) T ( t ) {\displaystyle {\frac {{\hat {L}}X(x)}{X(x)}}={\frac {{\hat {M}}T(t)}{T(t)}}} where L ^ = f ( x ) d 2 d x 2 + g ( x ) d d x + h ( x ) , M ^ = d d t + k ( t ) . {\displaystyle {\hat {L}}=f(x){\frac {d^{2}}{dx^{2}}}+g(x){\frac {d}{dx}}+h(x),\qquad {\hat {M}}={\frac {d}{dt}}+k(t).} Since, by definition, L̂ and X(x) are independent of time t and M̂ and T(t) are independent of position x, then both sides of the above equation must be equal to a constant: L ^ X ( x ) = λ X ( x ) , X ( a ) = X ( b ) = 0 , M ^ T ( t ) = λ T ( t ) . {\displaystyle {\hat {L}}X(x)=\lambda X(x),\qquad X(a)=X(b)=0,\qquad {\hat {M}}T(t)=\lambda T(t).} The first of these equations must be solved as a Sturm–Liouville problem in terms of the eigenfunctions Xn(x) and eigenvalues λn. The second of these equations can be analytically solved once the eigenvalues are known. d d t T n ( t ) = ( λ n − k ( t ) ) T n ( t ) {\displaystyle {\frac {d}{dt}}T_{n}(t)={\bigl (}\lambda _{n}-k(t){\bigr )}T_{n}(t)} T n ( t ) = a n exp ⁡ ( λ n t − ∫ 0 t k ( τ ) d τ ) {\displaystyle T_{n}(t)=a_{n}\exp \left(\lambda _{n}t-\int _{0}^{t}k(\tau )\,d\tau \right)} u ( x , t ) = ∑ n a n X n ( x ) exp ⁡ ( λ n t − ∫ 0 t k ( τ ) d τ ) {\displaystyle u(x,t)=\sum _{n}a_{n}X_{n}(x)\exp \left(\lambda _{n}t-\int _{0}^{t}k(\tau )\,d\tau \right)} a n = ⟨ X n ( x ) , s ( x ) ⟩ ⟨ X n ( x ) , X n ( x ) ⟩ {\displaystyle a_{n}={\frac {{\bigl \langle }X_{n}(x),s(x){\bigr \rangle }}{{\bigl \langle }X_{n}(x),X_{n}(x){\bigr \rangle }}}} where ⟨ y ( x ) , z ( x ) ⟩ = ∫ a b y ( x ) z ( x ) w ( x ) d x , {\displaystyle {\bigl \langle }y(x),z(x){\bigr \rangle }=\int _{a}^{b}y(x)z(x)w(x)\,dx,} w ( x ) = exp ⁡ ( ∫ g ( x ) f ( x ) d x ) f ( x ) . {\displaystyle w(x)={\frac {\exp \left(\int {\frac {g(x)}{f(x)}}\,dx\right)}{f(x)}}.} == Representation of solutions and numerical calculation == The Sturm–Liouville differential equation (1) with boundary conditions may be solved analytically in special cases; approximate analytical solutions can be obtained by the Rayleigh–Ritz method or by the matrix-variational method of Gerck et al. A variety of numerical methods are also available. In difficult cases, one may need to carry out the intermediate calculations to several hundred decimal places of accuracy in order to obtain the eigenvalues correctly to a few decimal places.
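As a first concrete illustration of the numerical methods listed next, the finite difference method replaces y″ by a second difference on a uniform grid, turning the simplest regular problem −y″ = λy, y(0) = y(π) = 0 into a symmetric matrix eigenvalue problem. A minimal sketch (ours, assuming NumPy; the exact eigenvalues are k²):

```python
import numpy as np

# Finite-difference discretization of -y'' = lambda*y on [0, pi],
# y(0) = y(pi) = 0 (p = w = 1, q = 0); exact eigenvalues are 1, 4, 9, ...
n = 200                           # number of interior grid points
h = np.pi / (n + 1)               # grid spacing
main = np.full(n, 2.0) / h**2     # diagonal of the discrete operator
off = np.full(n - 1, -1.0) / h**2 # off-diagonals
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigvals = np.linalg.eigvalsh(A)   # symmetric matrix -> real spectrum
print(eigvals[:5])                # ~ [1, 4, 9, 16, 25], with O(h^2) accuracy
```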
The principal approaches are shooting methods, the finite difference method (illustrated above), and the spectral parameter power series method. === Shooting methods === Shooting methods proceed by guessing a value of λ, solving an initial value problem defined by the boundary conditions at one endpoint, say, a, of the interval [a,b], comparing the value this solution takes at the other endpoint b with the other desired boundary condition, and finally increasing or decreasing λ as necessary to correct the original value. This strategy is not applicable for locating complex eigenvalues. === Spectral parameter power series method === The spectral parameter power series (SPPS) method makes use of a generalization of the following fact about homogeneous second-order linear ordinary differential equations: if y is a solution of equation (1) that does not vanish at any point of [a,b], then the function y ( x ) ∫ a x d t p ( t ) y ( t ) 2 {\displaystyle y(x)\int _{a}^{x}{\frac {dt}{p(t)y(t)^{2}}}} is a solution of the same equation and is linearly independent from y. Further, all solutions are linear combinations of these two solutions. In the SPPS algorithm, one must begin with an arbitrary value λ∗0 (often λ∗0 = 0; it does not need to be an eigenvalue) and any solution y0 of (1) with λ = λ∗0 which does not vanish on [a,b]. (Ways of finding an appropriate y0 and λ∗0 are discussed below.) Two sequences of functions X(n)(t), X̃(n)(t) on [a,b], referred to as iterated integrals, are defined recursively as follows. First, when n = 0, they are taken to be identically equal to 1 on [a,b]. The functions for n > 0 are obtained by multiplying the previous ones alternately by 1 / ( p y 0 2 ) {\displaystyle 1/(py_{0}^{2})} and w y 0 2 {\displaystyle wy_{0}^{2}} and integrating from a to x. The resulting iterated integrals are now applied as coefficients in the following two power series in λ: u 0 = y 0 ∑ k = 0 ∞ ( λ − λ 0 ∗ ) k X ~ ( 2 k ) , {\displaystyle u_{0}=y_{0}\sum _{k=0}^{\infty }\left(\lambda -\lambda _{0}^{*}\right)^{k}{\tilde {X}}^{(2k)},} (5) u 1 = y 0 ∑ k = 0 ∞ ( λ − λ 0 ∗ ) k X ( 2 k + 1 ) . {\displaystyle u_{1}=y_{0}\sum _{k=0}^{\infty }\left(\lambda -\lambda _{0}^{*}\right)^{k}X^{(2k+1)}.} (6) Then for any λ (real or complex), u0 and u1 are linearly independent solutions of the corresponding equation (1). (The functions p(x) and q(x) take part in this construction through their influence on the choice of y0.) Next one chooses coefficients c0 and c1 so that the combination y = c0u0 + c1u1 satisfies the first boundary condition (2). This is simple to do since X(n)(a) = 0 and X̃(n)(a) = 0, for n > 0. The values of X(n)(b) and X̃(n)(b) provide the values of u0(b) and u1(b) and the derivatives u′0(b) and u′1(b), so the second boundary condition (3) becomes an equation in a power series in λ. For numerical work one may truncate this series to a finite number of terms, producing a calculable polynomial in λ whose roots are approximations of the sought-after eigenvalues. When λ = λ∗0, this reduces to the original construction described above for a solution linearly independent of a given one. The representations (5) and (6) also have theoretical applications in Sturm–Liouville theory. === Construction of a nonvanishing solution === The SPPS method can, itself, be used to find a starting solution y0. Consider the equation (py′)′ = μqy; i.e., q, w, and λ are replaced in (1) by 0, −q, and μ respectively. Then the constant function 1 is a nonvanishing solution corresponding to the eigenvalue μ0 = 0. While there is no guarantee that u0 or u1 will not vanish, the complex function y0 = u0 + iu1 will never vanish, because two linearly independent solutions of a regular Sturm–Liouville equation cannot vanish simultaneously (a consequence of the Sturm separation theorem). This trick gives a solution y0 of (1) for the value λ∗0 = 0. In practice, if (1) has real coefficients, the solutions based on y0 will have very small imaginary parts which must be discarded.
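Returning to the shooting method described above, here is a minimal sketch for −y″ = λy, y(0) = y(π) = 0, assuming SciPy is available: it integrates the initial value problem y(0) = 0, y′(0) = 1 and adjusts λ until the solution vanishes at the right endpoint. The bracketing intervals are our own choices around the exact eigenvalues 1 and 4:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def y_end(lam):
    """Value at x = pi of the IVP solution y(0) = 0, y'(0) = 1."""
    sol = solve_ivp(lambda x, y: [y[1], -lam * y[0]],
                    (0, np.pi), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Each sign change of y_end brackets one eigenvalue; exact values are 1 and 4
print(brentq(y_end, 0.5, 2.0))   # ~ 1.0
print(brentq(y_end, 3.0, 5.0))   # ~ 4.0
```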
== See also == Normal mode Oscillation theory Self-adjoint Variation of parameters Spectral theory of ordinary differential equations Atkinson–Mingarelli theorem == References == == Further reading == Gesztesy, Fritz; Nichols, Roger; Zinchenko, Maxim (2024-09-24). Sturm–Liouville Operators, Their Spectral Theory, and Some Applications. Providence, Rhode Island: American Mathematical Society. ISBN 978-1-4704-7666-3. "Sturm–Liouville theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Hartman, Philip (2002). Ordinary Differential Equations (2 ed.). Philadelphia: SIAM. ISBN 978-0-89871-510-1. Polyanin, A. D. & Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2 ed.). Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2. Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0. (Chapter 5) Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society. ISBN 978-0-8218-4660-5. (see Chapter 9 for singular Sturm–Liouville operators and connections with quantum mechanics) Zettl, Anton (2005). Sturm–Liouville Theory. Providence: American Mathematical Society. ISBN 0-8218-3905-5. Birkhoff, Garrett (1973). A source book in classical analysis. Cambridge, Massachusetts: Harvard University Press. ISBN 0-674-82245-5. (See Chapter 8, part B, for excerpts from the works of Sturm and Liouville and commentary on them.) Kravchenko, Vladislav (2020). Direct and Inverse Sturm-Liouville Problems: A Method of Solution. Cham: Birkhäuser. ISBN 978-3-030-47848-3.
Wikipedia/Sturm–Liouville_equation
Science of morality (also known as science of ethics or scientific ethics) may refer to various forms of ethical naturalism that ground morality and ethics in rational, empirical consideration of the natural world. It is sometimes framed as using the scientific approach to determine what is right and wrong, in contrast to the widespread belief that "science has nothing to say on the subject of human values". == Overview == Moral science may refer to the consideration of what is best for, and how to maximize the flourishing of, either particular individuals or all conscious creatures. It has been proposed that "morality" can be appropriately defined on the basis of fundamental premises necessary for any empirical, secular, or philosophical discussion and that societies can use the methods of science to provide answers to moral questions. The norms advocated by moral scientists (e.g. rights to abortion, euthanasia, and drug liberalization under certain circumstances) would be founded upon the shifting and growing collection of human understanding. Even with science's admitted degree of ignorance, and the various semantic issues, moral scientists can meaningfully discuss things as being almost certainly "better" or "worse" for promoting flourishing. == History == === In philosophy === Utilitarian Jeremy Bentham discussed some of the ways moral investigations are a science. He criticized deontological ethics for failing to recognize that, in order to work at all, it had to make the same presumptions as his science of morality, even while pursuing rules that were to be obeyed in every situation (a pursuit that worried Bentham). W. V. O. Quine advocated naturalizing epistemology by looking to natural sciences like psychology for a full explanation of knowledge. His work contributed to a resurgence of moral naturalism in the last half of the 20th century. Paul Kurtz, who believes that the careful, secular pursuit of normative rules is vital to society, coined the term eupraxophy to refer to his approach to normative ethics. Steven Pinker, Sam Harris, and Peter Singer believe that we learn what is right and wrong through reason and empirical methodology. Maria Ossowska thought that sociology was inextricably related to philosophical reflections on morality, including normative ethics. She proposed that science analyse: (a) existing social norms and their history, (b) the psychology of morality, and the way that individuals interact with moral matters and prescriptions, and (c) the sociology of morality. === In popular literature === The theory and methods of a normative science of morality are explicitly discussed in Joseph Daleiden's The Science of Morality: The Individual, Community, and Future Generations (1998). Daleiden's book, in contrast to Harris's, extensively discusses the relevant philosophical literature. In The Moral Landscape: How Science Can Determine Human Values, Sam Harris's goal is to show how moral truth can be backed by "science", or more specifically, by empirical knowledge, critical thinking, and philosophy, and, most controversially, by the scientific method. Patricia Churchland offers that, accepting David Hume's is–ought problem, the use of induction from premises and definitions remains a valid way of reasoning in life and science: Our moral behavior, while more complex than the social behavior of other animals, is similar in that it represents our attempt to manage well in the existing social ecology. ...
from the perspective of neuroscience and brain evolution, the routine rejection of scientific approaches to moral behavior based on Hume's warning against deriving ought from is seems unfortunate, especially as the warning is limited to deductive inferences. ... The truth seems to be that values rooted in the circuitry for caring—for well-being of self, offspring, mates, kin, and others—shape social reasoning about many issues: conflict resolutions, keeping the peace, defense, trade, resource distribution, and many other aspects of social life in all its vast richness. Daleiden and Leonard Carmichael warn that science is probabilistic, and that certainty is not possible. One should therefore expect moral prescriptions to change as humans gain understanding. === In futurism === Transhumanist philosophers such as David Pearce and Mark Alan Walker have extensively discussed the ethical implications of future technologies. Walker coined the term "biohappiness" to describe the idea of directly manipulating the biological roots of happiness in order to increase it. Pearce argues that suffering could eventually be eradicated entirely, stating: "It is predicted that the world's last unpleasant experience will be a precisely dateable event." Proposed technological methods of overcoming the hedonic treadmill include wireheading (direct brain stimulation for uniform bliss), which undermines motivation and evolutionary fitness; designer drugs, offering sustainable well-being without side effects, though impractical for lifelong reliance; and genetic engineering, which Pearce considers the most promising approach. On this view, genetic recalibration through hyperthymia-promoting genes could raise hedonic set-points, fostering adaptive well-being, creativity, and productivity while maintaining responsiveness to stimuli. Even if scientifically achievable, such a transformation would require careful ethical and societal consideration to navigate its profound implications. On the opposite end of the spectrum, risks of astronomical suffering are possible futures in which vastly more suffering exists than has ever been produced in all of Earth's history. Possible sources of these risks include artificial superintelligence, genetic engineering for maximum suffering, space colonization, and terraforming leading to an increase in wild animal suffering. == Views in scientific morality == === Training to promote good behaviour === The science of morality may aim to discover the best ways to motivate and shape individuals. Methods to accomplish this include instilling explicit virtues, building character strengths, and forming mental associations. These generally require some level of practical reason. James Rest suggested that abstract reasoning is also a factor in making moral judgements and emphasized that moral judgements alone do not predict moral behaviour: "Moral judgement may be closely related to advocacy behaviour, which in turn influences social institutions, which in turn creates a system of norms and sanctions that influences people's behaviour." Daleiden suggested that religions instill a practical sense of virtue and justice, right and wrong. They also effectively use art and myths to educate people about moral situations. ==== Role of government ==== Harris argues that moral science does not imply an "Orwellian future" with "scientists at every door". Instead, Harris imagines data about normative moral issues being shared in the same way as other sciences (e.g. in peer-reviewed journals on medicine).
Daleiden specifies that government, like any organization, should have limited power. He says "centralization of power irrevocably in the hands of one person or an elite has always ultimately led to great evil for the human race. It was the novel experiment of democracy—a clear break with tradition—that ended the long tradition of tyranny." He is also explicit that government should only use law to enforce the most basic, reasonable, proven and widely supported moral norms. In other words, there are a great many moral norms that it should never be the task of government to enforce. ==== Role of punishment ==== One author has argued that to attain a society where people are motivated by conditioned self-interest, punishment must go hand-in-hand with reward. For instance, in this line of reasoning, prison remains necessary for many perpetrators of crimes. This is so even if libertarian free will is false. This is because punishment can still serve its purposes: it deters others from committing their own crimes, educates and reminds everyone about what the society stands for, incapacitates the criminal from doing more harm, goes some way to relieving or repaying the victim, and corrects the criminal (also see recidivism). This author argues that, at least, any prison system should be pursuing those goals, and that it is an empirical question as to what sorts of punishment realize these goals most effectively, and how well various prison systems actually serve these purposes. === Research === The brain areas that are consistently involved when humans reason about moral issues have been investigated. The neural network underlying moral decisions overlaps with the network pertaining to representing others' intentions (i.e., theory of mind) and the network pertaining to representing others' (vicariously experienced) emotional states (i.e., empathy). This supports the notion that moral reasoning is related both to seeing things from other persons' points of view and to grasping others' feelings. These results provide evidence that the neural network underlying moral decisions is probably domain-global (i.e., there might be no such thing as a "moral module" in the human brain) and might be dissociable into cognitive and affective sub-systems. An essential, shared component of moral judgment involves the capacity to detect morally salient content within a given social context. Recent research has implicated the salience network in this initial detection of moral content. The salience network responds to behaviourally salient events and may be critical in modulating downstream default and frontal control network interactions in the service of complex moral reasoning and decision-making processes. This suggests that moral cognition involves both bottom-up and top-down attentional processes, mediated by discrete large-scale brain networks and their interactions. === In universities === Moral sciences is offered as a degree-level programme at Ghent University (as "an integrated empirical and philosophical study of values, norms and world views"). === Other implications === Daleiden provides examples of how science can use empirical evidence to assess the effect that specific behaviours can have on the well-being of individuals and society with regard to various moral issues. He argues that science supports decriminalization and regulation of drugs, euthanasia under some circumstances, and the permission of sexual behaviours that are not tolerated in some cultures (he cites homosexuality as an example).
Daleiden further argues that in seeking to reduce human suffering, abortion should not only be permissible, but at times a moral obligation (as in the case of a mother of a potential child who would face the probability of much suffering). As with all moral claims in his book, however, Daleiden is adamant that these conclusions remain grounded in, and contingent on, empirical evidence. The ideas of cultural relativity, to Daleiden, do offer some lessons: investigators must be careful not to judge a person's behaviour without understanding the environmental context. An action may prove necessary, and more moral, once the circumstances are understood. However, Daleiden emphasizes that this does not mean all ethical norms or systems are equally effective at promoting flourishing, and he often offers the equal treatment of women as a reliably superior norm, wherever it is practiced. == Criticisms == The idea of a normative science of morality has met with many criticisms from scientists and philosophers. Critics include physicist Sean M. Carroll, who argues that morality cannot be part of science. He and other critics cite the widely held "fact-value distinction", that the scientific method cannot answer "moral" questions, although it can describe the norms of different cultures. In contrast, moral scientists defend the position that such a division between values and scientific facts ("moral relativism") is not only arbitrary and illusory, but also impedes progress toward taking action against documented cases of human rights violations in different cultures. Stephen Jay Gould argued that science and religion occupy "non-overlapping magisteria". To Gould, science is concerned with questions of fact and theory, but not with meaning and morality – the magisteria of religion. In the same vein, Edward Teller proposed that politics decides what is right, whereas science decides what is true. During a discussion on the role that naturalism might play in professions like nursing, the philosopher Trevor Hussey calls the popular view that science is unconcerned with morality "too simplistic". Although his main focus in the paper is naturalism in nursing, he goes on to explain that science can, at the very least, be interested in morality at a descriptive level. He even briefly entertains the idea that morality could itself be a scientific subject, writing that one might argue "... that moral judgements are subject to the same kinds of rational, empirical examination as the rest of the world: they are a subject for science – although a difficult one. If this could be shown to be so, morality would be contained within naturalism. However, I will not assume the truth of moral realism here." == See also == == Notes == == References ==
Wikipedia/Science_of_morality
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used. The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory; though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior and Abraham Wald's sequential analysis. The term "Bellman equation" usually refers to the dynamic programming equation (DPE) associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation called the Hamilton–Jacobi–Bellman equation. In discrete time, any multi-stage optimization problem can be solved by analyzing the appropriate Bellman equation. The appropriate Bellman equation can be found by introducing new state variables (state augmentation). However, the resulting augmented-state multi-stage optimization problem has a higher-dimensional state space than the original problem, an issue that can potentially render the augmented problem intractable due to the "curse of dimensionality". Alternatively, it has been shown that if the cost function of the multi-stage optimization problem satisfies a "backward separable" structure, then the appropriate Bellman equation can be found without state augmentation. == Analytical concepts in dynamic programming == To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective: minimizing travel time, minimizing cost, maximizing profits, maximizing utility, etc. The mathematical function that describes this objective is called the objective function. Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation that is needed to make a correct decision is called the "state". For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth ( W ) {\displaystyle (W)} would be one of their state variables, but there would probably be others. The variables chosen at any given point in time are often called the control variables. For instance, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too.
The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption (c) depends only on wealth (W), we would seek a rule c ( W ) {\displaystyle c(W)} that gives consumption as a function of wealth. Such a rule, determining the controls as a function of the states, is called a policy function. Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming that happiness H can be represented by a mathematical function, such as a utility function, and that it is determined by wealth), then each level of wealth will be associated with some highest possible level of happiness, H ( W ) {\displaystyle H(W)} . The best possible value of the objective, written as a function of the state, is called the value function. Bellman showed that a dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form known as backward induction by writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the "Bellman equation". In this approach, the optimal policy in the last time period is specified in advance as a function of the state variable's value at that time, and the resulting optimal value of the objective function is thus expressed in terms of that value of the state variable. Next, the next-to-last period's optimization involves maximizing the sum of that period's period-specific objective function and the optimal value of the future objective function, giving that period's optimal policy contingent upon the value of the state variable as of the next-to-last period decision. This logic continues recursively back in time, until the first period decision rule is derived, as a function of the initial state variable value, by optimizing the sum of the first-period-specific objective function and the value of the second period's value function, which gives the value for all the future periods. Thus, each period's decision is made by explicitly acknowledging that all future decisions will be optimally made.
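The backward-induction recipe just described can be made concrete with a small numerical sketch. The following Python fragment is an illustration with made-up ingredients (log utility, a five-period horizon, and wealth that carries over one-for-one, W_{t+1} = W_t − c_t); it computes the value function by stepping backwards from the final period:

```python
import numpy as np

beta, T = 0.95, 5                    # discount factor and horizon (illustrative)
W = np.linspace(1e-3, 1.0, 400)      # grid of wealth levels
V = np.log(W)                        # final period: consume all remaining wealth

for t in range(T - 1, 0, -1):        # work backwards towards period 1
    C = W[:, None] - W[None, :]      # C[i, j]: consume C, leaving wealth W_j
    feasible = C > 0
    cand = np.where(feasible,
                    np.log(np.where(feasible, C, 1.0)) + beta * V[None, :],
                    -np.inf)
    V = cand.max(axis=1)             # value function one period earlier

print(V[-1])                         # value of starting with full wealth W = 1
```

The optimal policy for each period could be recovered in the same loop from the argmax over next-period wealth, exactly as the text describes.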
Under these assumptions, an infinite-horizon decision problem takes the following form: V ( x 0 ) = max { a t } t = 0 ∞ ∑ t = 0 ∞ β t F ( x t , a t ) , {\displaystyle V(x_{0})\;=\;\max _{\left\{a_{t}\right\}_{t=0}^{\infty }}\sum _{t=0}^{\infty }\beta ^{t}F(x_{t},a_{t}),} subject to the constraints a t ∈ Γ ( x t ) , x t + 1 = T ( x t , a t ) , ∀ t = 0 , 1 , 2 , … {\displaystyle a_{t}\in \Gamma (x_{t}),\;x_{t+1}=T(x_{t},a_{t}),\;\forall t=0,1,2,\dots } Notice that we have defined notation V ( x 0 ) {\displaystyle V(x_{0})} to denote the optimal value that can be obtained by maximizing this objective function subject to the assumed constraints. This function is the value function. It is a function of the initial state variable x 0 {\displaystyle x_{0}} , since the best value obtainable depends on the initial situation. === Bellman's principle of optimality === The dynamic programming method breaks this decision problem into smaller subproblems. Bellman's principle of optimality describes how to do this:Principle of Optimality: An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. (See Bellman, 1957, Chap. III.3.) In computer science, a problem that can be broken apart like this is said to have optimal substructure. In the context of dynamic game theory, this principle is analogous to the concept of subgame perfect equilibrium, although what constitutes an optimal policy in this case is conditioned on the decision-maker's opponents choosing similarly optimal policies from their points of view. As suggested by the principle of optimality, we will consider the first decision separately, setting aside all future decisions (we will start afresh from time 1 with the new state x 1 {\displaystyle x_{1}} ). Collecting the future decisions in brackets on the right, the above infinite-horizon decision problem is equivalent to: max a 0 { F ( x 0 , a 0 ) + β [ max { a t } t = 1 ∞ ∑ t = 1 ∞ β t − 1 F ( x t , a t ) : a t ∈ Γ ( x t ) , x t + 1 = T ( x t , a t ) , ∀ t ≥ 1 ] } {\displaystyle \max _{a_{0}}\left\{F(x_{0},a_{0})+\beta \left[\max _{\left\{a_{t}\right\}_{t=1}^{\infty }}\sum _{t=1}^{\infty }\beta ^{t-1}F(x_{t},a_{t}):a_{t}\in \Gamma (x_{t}),\;x_{t+1}=T(x_{t},a_{t}),\;\forall t\geq 1\right]\right\}} subject to the constraints a 0 ∈ Γ ( x 0 ) , x 1 = T ( x 0 , a 0 ) . {\displaystyle a_{0}\in \Gamma (x_{0}),\;x_{1}=T(x_{0},a_{0}).} Here we are choosing a 0 {\displaystyle a_{0}} , knowing that our choice will cause the time 1 state to be x 1 = T ( x 0 , a 0 ) {\displaystyle x_{1}=T(x_{0},a_{0})} . That new state will then affect the decision problem from time 1 on. The whole future decision problem appears inside the square brackets on the right.
=== The Bellman equation === So far it seems we have only made the problem uglier by separating today's decision from future decisions. But we can simplify by noticing that what is inside the square brackets on the right is the value of the time 1 decision problem, starting from state x 1 = T ( x 0 , a 0 ) {\displaystyle x_{1}=T(x_{0},a_{0})} . Therefore, the problem can be rewritten as a recursive definition of the value function: V ( x 0 ) = max a 0 { F ( x 0 , a 0 ) + β V ( x 1 ) } {\displaystyle V(x_{0})=\max _{a_{0}}\{F(x_{0},a_{0})+\beta V(x_{1})\}} , subject to the constraints: a 0 ∈ Γ ( x 0 ) , x 1 = T ( x 0 , a 0 ) . {\displaystyle a_{0}\in \Gamma (x_{0}),\;x_{1}=T(x_{0},a_{0}).} This is the Bellman equation. It may be simplified even further if the time subscripts are dropped and the value of the next state is plugged in: V ( x ) = max a ∈ Γ ( x ) { F ( x , a ) + β V ( T ( x , a ) ) } . {\displaystyle V(x)=\max _{a\in \Gamma (x)}\{F(x,a)+\beta V(T(x,a))\}.} The Bellman equation is classified as a functional equation, because solving it means finding the unknown function V {\displaystyle V} , which is the value function. Recall that the value function describes the best possible value of the objective, as a function of the state x {\displaystyle x} . By calculating the value function, we will also find the function a ( x ) {\displaystyle a(x)} that describes the optimal action as a function of the state; this is called the policy function. === In a stochastic problem === In the deterministic setting, other techniques besides dynamic programming can be used to tackle the above optimal control problem. However, the Bellman equation is often the most convenient method of solving stochastic optimal control problems. For a specific example from economics, consider an infinitely-lived consumer with initial wealth endowment a 0 {\displaystyle {\color {Red}a_{0}}} at period 0 {\displaystyle 0} . They have an instantaneous utility function u ( c ) {\displaystyle u(c)} where c {\displaystyle c} denotes consumption, and they discount next-period utility by a factor 0 < β < 1 {\displaystyle 0<\beta <1} . Assume that what is not consumed in period t {\displaystyle t} carries over to the next period with interest rate r {\displaystyle r} . Then the consumer's utility maximization problem is to choose a consumption plan { c t } {\displaystyle \{{\color {OliveGreen}c_{t}}\}} that solves max ∑ t = 0 ∞ β t u ( c t ) {\displaystyle \max \sum _{t=0}^{\infty }\beta ^{t}u({\color {OliveGreen}c_{t}})} subject to a t + 1 = ( 1 + r ) ( a t − c t ) , c t ≥ 0 , {\displaystyle {\color {Red}a_{t+1}}=(1+r)({\color {Red}a_{t}}-{\color {OliveGreen}c_{t}}),\;{\color {OliveGreen}c_{t}}\geq 0,} and lim t → ∞ a t ≥ 0. {\displaystyle \lim _{t\rightarrow \infty }{\color {Red}a_{t}}\geq 0.} The first constraint is the capital accumulation/law of motion specified by the problem, while the second constraint is a transversality condition that the consumer does not carry debt at the end of their life. The Bellman equation is V ( a ) = max 0 ≤ c ≤ a { u ( c ) + β V ( ( 1 + r ) ( a − c ) ) } . {\displaystyle V(a)=\max _{0\leq c\leq a}\{u(c)+\beta V((1+r)(a-c))\}.} Alternatively, one can treat the sequence problem directly using, for example, the Hamiltonian equations.
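For this deterministic consumption problem, the Bellman equation can be solved numerically by iterating its right-hand side to a fixed point (value function iteration). A minimal sketch, with illustrative ingredients that are not taken from the text (log utility, β = 0.95, r = 0.02, and a finite asset grid):

```python
import numpy as np

beta, r = 0.95, 0.02                 # illustrative parameters, u(c) = log(c)
a = np.linspace(1e-3, 10.0, 300)     # asset grid
V = np.zeros_like(a)

for _ in range(5000):
    # Choosing next-period assets a' from the grid: a' = (1+r)(a-c),
    # so consumption is c = a - a'/(1+r); nonpositive c is infeasible.
    C = a[:, None] - a[None, :] / (1 + r)
    feasible = C > 0
    cand = np.where(feasible,
                    np.log(np.where(feasible, C, 1.0)) + beta * V[None, :],
                    -np.inf)
    V_new = cand.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:   # contraction with modulus beta
        break
    V = V_new

a_next = a[cand.argmax(axis=1)]      # optimal saving policy a'(a)
```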
Now, if the interest rate varies from period to period, the consumer is faced with a stochastic optimization problem. Let the interest r follow a Markov process with probability transition function Q ( r , d μ r ) {\displaystyle Q(r,d\mu _{r})} where d μ r {\displaystyle d\mu _{r}} denotes the probability measure governing the distribution of interest rate next period if current interest rate is r {\displaystyle r} . In this model the consumer decides their current period consumption after the current period interest rate is announced. Rather than simply choosing a single sequence { c t } {\displaystyle \{{\color {OliveGreen}c_{t}}\}} , the consumer now must choose a sequence { c t } {\displaystyle \{{\color {OliveGreen}c_{t}}\}} for each possible realization of { r t } {\displaystyle \{r_{t}\}} in such a way that their lifetime expected utility is maximized: max { c t } t = 0 ∞ E ( ∑ t = 0 ∞ β t u ( c t ) ) . {\displaystyle \max _{\left\{c_{t}\right\}_{t=0}^{\infty }}\mathbb {E} {\bigg (}\sum _{t=0}^{\infty }\beta ^{t}u({\color {OliveGreen}c_{t}}){\bigg )}.} The expectation E {\displaystyle \mathbb {E} } is taken with respect to the appropriate probability measure given by Q on the sequences of r's. Because r is governed by a Markov process, dynamic programming simplifies the problem significantly. Then the Bellman equation is simply: V ( a , r ) = max 0 ≤ c ≤ a { u ( c ) + β ∫ V ( ( 1 + r ) ( a − c ) , r ′ ) Q ( r , d μ r ) } . {\displaystyle V(a,r)=\max _{0\leq c\leq a}\{u(c)+\beta \int V((1+r)(a-c),r')Q(r,d\mu _{r})\}.} Under reasonable assumptions, the resulting optimal policy function g(a,r) is measurable. For a general stochastic sequential optimization problem with Markovian shocks and where the agent is faced with their decision ex-post, the Bellman equation takes a very similar form V ( x , z ) = max c ∈ Γ ( x , z ) { F ( x , c , z ) + β ∫ V ( T ( x , c ) , z ′ ) d μ z ( z ′ ) } . {\displaystyle V(x,z)=\max _{c\in \Gamma (x,z)}\{F(x,c,z)+\beta \int V(T(x,c),z')d\mu _{z}(z')\}.} == Solution methods == The method of undetermined coefficients, also known as 'guess and verify', can be used to solve some infinite-horizon, autonomous Bellman equations. The Bellman equation can be solved by backwards induction, either analytically in a few special cases, or numerically on a computer. Numerical backwards induction is applicable to a wide variety of problems, but may be infeasible when there are many state variables, due to the curse of dimensionality. Approximate dynamic programming was introduced by D. P. Bertsekas and J. N. Tsitsiklis, who use artificial neural networks (multilayer perceptrons) to approximate the Bellman function. This is an effective mitigation strategy for reducing the impact of dimensionality: instead of memorizing the complete function mapping over the whole space domain, one stores only the neural network parameters. For continuous-time systems, an approximate dynamic programming approach that combines policy iteration with neural networks was introduced; in discrete time, an approach combining value iteration with neural networks was introduced to solve the HJB equation. By calculating the first-order conditions associated with the Bellman equation, and then using the envelope theorem to eliminate the derivatives of the value function, it is possible to obtain a system of difference equations or differential equations called the 'Euler equations'. Standard techniques for the solution of difference or differential equations can then be used to calculate the dynamics of the state variables and the control variables of the optimization problem. == Applications in economics == The first known application of a Bellman equation in economics is due to Martin Beckmann and Richard Muth. Martin Beckmann also wrote extensively on consumption theory using the Bellman equation in 1959. His work influenced Edmund S. Phelps, among others.
Merton's seminal 1973 article on the intertemporal capital asset pricing model. (See also Merton's portfolio problem). The solution to Merton's theoretical model, one in which investors chose between income today and future income or capital gains, is a form of Bellman's equation. Because economic applications of dynamic programming usually result in a Bellman equation that is a difference equation, economists refer to dynamic programming as a "recursive method" and a subfield of recursive economics is now recognized within economics. Nancy Stokey, Robert E. Lucas, and Edward Prescott describe stochastic and nonstochastic dynamic programming in considerable detail, and develop theorems for the existence of solutions to problems meeting certain conditions. They also describe many examples of modeling theoretical problems in economics using recursive methods. This book led to dynamic programming being employed to solve a wide range of theoretical problems in economics, including optimal economic growth, resource extraction, principal–agent problems, public finance, business investment, asset pricing, factor supply, and industrial organization. Lars Ljungqvist and Thomas Sargent apply dynamic programming to study a variety of theoretical questions in monetary policy, fiscal policy, taxation, economic growth, search theory, and labor economics. Avinash Dixit and Robert Pindyck showed the value of the method for thinking about capital budgeting. Anderson adapted the technique to business valuation, including privately held businesses. Using dynamic programming to solve concrete problems is complicated by informational difficulties, such as choosing the unobservable discount rate. There are also computational issues, the main one being the curse of dimensionality arising from the vast number of possible actions and potential state variables that must be considered before an optimal strategy can be selected. For an extensive discussion of computational issues, see Miranda and Fackler, and Meyn 2007. == Example == In Markov decision processes, a Bellman equation is a recursion for expected rewards. For example, the expected reward for being in a particular state s and following some fixed policy π {\displaystyle \pi } has the Bellman equation: V π ( s ) = R ( s , π ( s ) ) + γ ∑ s ′ P ( s ′ | s , π ( s ) ) V π ( s ′ ) . {\displaystyle V^{\pi }(s)=R(s,\pi (s))+\gamma \sum _{s'}P(s'|s,\pi (s))V^{\pi }(s').\ } This equation describes the expected reward for taking the action prescribed by some policy π {\displaystyle \pi } . The equation for the optimal policy is referred to as the Bellman optimality equation: V π ∗ ( s ) = max a { R ( s , a ) + γ ∑ s ′ P ( s ′ | s , a ) V π ∗ ( s ′ ) } . {\displaystyle V^{\pi *}(s)=\max _{a}\left\{{R(s,a)+\gamma \sum _{s'}P(s'|s,a)V^{\pi *}(s')}\right\}.\ } where π ∗ {\displaystyle {\pi *}} is the optimal policy and V π ∗ {\displaystyle V^{\pi *}} refers to the value function of the optimal policy. The equation above describes the reward for taking the action giving the highest expected return. 
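To make the optimality recursion above concrete, the sketch below runs value iteration on a small MDP: because the Bellman optimality operator is a γ-contraction for γ < 1, repeatedly applying it converges to V*. The two-state, two-action transition probabilities and rewards here are invented purely for illustration.

```python
import numpy as np

# A hypothetical 2-state, 2-action MDP: P[s, a, s'] and R[s, a] are made up.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9                               # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality operator: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
    Q = R + gamma * P @ V
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=1)                 # greedy policy attaining the maximum
print("V* ~", V, "optimal actions:", policy)
```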
== See also == Bellman pseudospectral method Dynamic programming – Problem optimization method Hamilton–Jacobi–Bellman equation – Optimality condition in optimal control theory Markov decision process – Mathematical model for sequential decision making under uncertainty Optimal control theory – Mathematical way of attaining a desired output from a dynamic system Optimal substructure – Property of a computational problem Recursive competitive equilibrium – Economic equilibrium concept associated with a dynamic program Stochastic dynamic programming – 1957 technique for modelling problems of decision making under uncertainty == References ==
Wikipedia/Bellman_equation
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace's method is used with real integrals. The integral to be estimated is often of the form ∫ C f ( z ) e λ g ( z ) d z , {\displaystyle \int _{C}f(z)e^{\lambda g(z)}\,dz,} where C is a contour, and λ is large. One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold: C′ passes through one or more zeros of the derivative g′(z), the imaginary part of g(z) is constant on C′. The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property; see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula. == Basic idea == The method of steepest descent is a method to approximate a complex integral of the form I ( λ ) = ∫ C f ( z ) e λ g ( z ) d z {\displaystyle I(\lambda )=\int _{C}f(z)e^{\lambda g(z)}\,\mathrm {d} z} in the limit λ → ∞ {\displaystyle \lambda \rightarrow \infty } , where f ( z ) {\displaystyle f(z)} and g ( z ) {\displaystyle g(z)} are analytic functions of z {\displaystyle z} . Because the integrand is analytic, the contour C {\displaystyle C} can be deformed into a new contour C ′ {\displaystyle C'} without changing the integral. In particular, one seeks a new contour on which the imaginary part, denoted ℑ ( ⋅ ) {\displaystyle \Im (\cdot )} , of g ( z ) = ℜ [ g ( z ) ] + i ℑ [ g ( z ) ] {\displaystyle g(z)=\Re [g(z)]+i\,\Im [g(z)]} is constant ( ℜ ( ⋅ ) {\displaystyle \Re (\cdot )} denotes the real part). Then I ( λ ) = e i λ ℑ { g ( z ) } ∫ C ′ f ( z ) e λ ℜ { g ( z ) } d z , {\displaystyle I(\lambda )=e^{i\lambda \Im \{g(z)\}}\int _{C'}f(z)e^{\lambda \Re \{g(z)\}}\,\mathrm {d} z,} and the remaining integral can be approximated with other methods like Laplace's method. == Etymology == The method is called the method of steepest descent because for analytic g ( z ) {\displaystyle g(z)} , constant phase contours are equivalent to steepest descent contours. If g ( z ) = X ( z ) + i Y ( z ) {\displaystyle g(z)=X(z)+iY(z)} is an analytic function of z = x + i y {\displaystyle z=x+iy} , it satisfies the Cauchy–Riemann equations ∂ X ∂ x = ∂ Y ∂ y and ∂ X ∂ y = − ∂ Y ∂ x . {\displaystyle {\frac {\partial X}{\partial x}}={\frac {\partial Y}{\partial y}}\qquad {\text{and}}\qquad {\frac {\partial X}{\partial y}}=-{\frac {\partial Y}{\partial x}}.} Then ∂ X ∂ x ∂ Y ∂ x + ∂ X ∂ y ∂ Y ∂ y = ∇ X ⋅ ∇ Y = 0 , {\displaystyle {\frac {\partial X}{\partial x}}{\frac {\partial Y}{\partial x}}+{\frac {\partial X}{\partial y}}{\frac {\partial Y}{\partial y}}=\nabla X\cdot \nabla Y=0,} so contours of constant phase are also contours of steepest descent. == A simple estimate == Let  f, S : Cn → C and C ⊂ Cn.
If M = sup x ∈ C ℜ ( S ( x ) ) < ∞ , {\displaystyle M=\sup _{x\in C}\Re (S(x))<\infty ,} where ℜ ( ⋅ ) {\displaystyle \Re (\cdot )} denotes the real part, and there exists a positive real number λ0 such that ∫ C | f ( x ) e λ 0 S ( x ) | d x < ∞ , {\displaystyle \int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx<\infty ,} then the following estimate holds: | ∫ C f ( x ) e λ S ( x ) d x | ⩽ const ⋅ e λ M , ∀ λ ∈ R , λ ⩾ λ 0 . {\displaystyle \left|\int _{C}f(x)e^{\lambda S(x)}dx\right|\leqslant {\text{const}}\cdot e^{\lambda M},\qquad \forall \lambda \in \mathbb {R} ,\quad \lambda \geqslant \lambda _{0}.} Proof of the simple estimate: | ∫ C f ( x ) e λ S ( x ) d x | ⩽ ∫ C | f ( x ) | | e λ S ( x ) | d x ≡ ∫ C | f ( x ) | e λ M | e λ 0 ( S ( x ) − M ) e ( λ − λ 0 ) ( S ( x ) − M ) | d x ⩽ ∫ C | f ( x ) | e λ M | e λ 0 ( S ( x ) − M ) | d x | e ( λ − λ 0 ) ( S ( x ) − M ) | ⩽ 1 = e − λ 0 M ∫ C | f ( x ) e λ 0 S ( x ) | d x ⏟ const ⋅ e λ M . {\displaystyle {\begin{aligned}\left|\int _{C}f(x)e^{\lambda S(x)}dx\right|&\leqslant \int _{C}|f(x)|\left|e^{\lambda S(x)}\right|dx\\&\equiv \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}e^{(\lambda -\lambda _{0})(S(x)-M)}\right|dx\\&\leqslant \int _{C}|f(x)|e^{\lambda M}\left|e^{\lambda _{0}(S(x)-M)}\right|dx&&\left|e^{(\lambda -\lambda _{0})(S(x)-M)}\right|\leqslant 1\\&=\underbrace {e^{-\lambda _{0}M}\int _{C}\left|f(x)e^{\lambda _{0}S(x)}\right|dx} _{\text{const}}\cdot e^{\lambda M}.\end{aligned}}} == The case of a single non-degenerate saddle point == === Basic notions and notation === Let x be a complex n-dimensional vector, and S x x ″ ( x ) ≡ ( ∂ 2 S ( x ) ∂ x i ∂ x j ) , 1 ⩽ i , j ⩽ n , {\displaystyle S''_{xx}(x)\equiv \left({\frac {\partial ^{2}S(x)}{\partial x_{i}\partial x_{j}}}\right),\qquad 1\leqslant i,\,j\leqslant n,} denote the Hessian matrix for a function S(x). If φ ( x ) = ( φ 1 ( x ) , φ 2 ( x ) , … , φ k ( x ) ) {\displaystyle {\boldsymbol {\varphi }}(x)=(\varphi _{1}(x),\varphi _{2}(x),\ldots ,\varphi _{k}(x))} is a vector function, then its Jacobian matrix is defined as φ x ′ ( x ) ≡ ( ∂ φ i ( x ) ∂ x j ) , 1 ⩽ i ⩽ k , 1 ⩽ j ⩽ n . {\displaystyle {\boldsymbol {\varphi }}_{x}'(x)\equiv \left({\frac {\partial \varphi _{i}(x)}{\partial x_{j}}}\right),\qquad 1\leqslant i\leqslant k,\quad 1\leqslant j\leqslant n.} A non-degenerate saddle point, z0 ∈ Cn, of a holomorphic function S(z) is a critical point of the function (i.e., ∇S(z0) = 0) where the function's Hessian matrix has a non-vanishing determinant (i.e., det S z z ″ ( z 0 ) ≠ 0 {\displaystyle \det S''_{zz}(z^{0})\neq 0} ). The following is the main tool for constructing the asymptotics of integrals in the case of a non-degenerate saddle point: === Complex Morse lemma === The Morse lemma for real-valued functions generalizes as follows for holomorphic functions: near a non-degenerate saddle point z0 of a holomorphic function S(z), there exist coordinates in terms of which S(z) − S(z0) is exactly quadratic. To make this precise, let S be a holomorphic function with domain W ⊂ Cn, and let z0 in W be a non-degenerate saddle point of S, that is, ∇S(z0) = 0 and det S z z ″ ( z 0 ) ≠ 0 {\displaystyle \det S''_{zz}(z^{0})\neq 0} . 
Then there exist neighborhoods U ⊂ W of z0 and V ⊂ Cn of w = 0, and a bijective holomorphic function φ : V → U with φ(0) = z0 such that ∀ w ∈ V : S ( φ ( w ) ) = S ( z 0 ) + 1 2 ∑ j = 1 n μ j w j 2 , det φ w ′ ( 0 ) = 1 , {\displaystyle \forall w\in V:\qquad S({\boldsymbol {\varphi }}(w))=S(z^{0})+{\frac {1}{2}}\sum _{j=1}^{n}\mu _{j}w_{j}^{2},\quad \det {\boldsymbol {\varphi }}_{w}'(0)=1,} Here, the μj are the eigenvalues of the matrix S z z ″ ( z 0 ) {\displaystyle S_{zz}''(z^{0})} . === The asymptotic expansion in the case of a single non-degenerate saddle point === Assume f (z) and S(z) are holomorphic functions in an open, bounded, and simply connected set Ωx ⊂ Cn such that Ix = Ωx ∩ Rn is connected; ℜ ( S ( z ) ) {\displaystyle \Re (S(z))} has a single maximum: max z ∈ I x ℜ ( S ( z ) ) = ℜ ( S ( x 0 ) ) {\displaystyle \max _{z\in I_{x}}\Re (S(z))=\Re (S(x^{0}))} for exactly one point x0 ∈ Ix; x0 is a non-degenerate saddle point (i.e., ∇S(x0) = 0 and det S x x ″ ( x 0 ) ≠ 0 {\displaystyle \det S''_{xx}(x^{0})\neq 0} ). Then, the following asymptotic holds as λ → ∞: I ( λ ) ≡ ∫ I x f ( x ) e λ S ( x ) d x = ( 2 π λ ) n 2 e λ S ( x 0 ) ( f ( x 0 ) + O ( λ − 1 ) ) ∏ j = 1 n ( − μ j ) − 1 2 , {\displaystyle I(\lambda )\equiv \int _{I_{x}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right)\prod _{j=1}^{n}(-\mu _{j})^{-{\frac {1}{2}}},} (8) where μj are eigenvalues of the Hessian S x x ″ ( x 0 ) {\displaystyle S''_{xx}(x^{0})} and ( − μ j ) − 1 2 {\displaystyle (-\mu _{j})^{-{\frac {1}{2}}}} are defined with arguments | arg ⁡ − μ j | < π 4 . {\displaystyle |\arg {\sqrt {-\mu _{j}}}|<{\frac {\pi }{4}}.} (9) This statement is a special case of more general results presented in Fedoryuk (1987). Equation (8) can also be written as I ( λ ) = ( 2 π λ ) n 2 e λ S ( x 0 ) ( det ( − S x x ″ ( x 0 ) ) ) − 1 2 ( f ( x 0 ) + O ( λ − 1 ) ) , {\displaystyle I(\lambda )=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}e^{\lambda S(x^{0})}\left(\det \left(-S_{xx}''(x^{0})\right)\right)^{-{\frac {1}{2}}}\left(f(x^{0})+O\left(\lambda ^{-1}\right)\right),} (13) where the branch of det ( − S x x ″ ( x 0 ) ) {\displaystyle {\sqrt {\det \left(-S_{xx}''(x^{0})\right)}}} is selected as follows ( det ( − S x x ″ ( x 0 ) ) ) − 1 2 = exp ⁡ ( − i Ind ( − S x x ″ ( x 0 ) ) ) ∏ j = 1 n | μ j | − 1 2 , Ind ( − S x x ″ ( x 0 ) ) = 1 2 ∑ j = 1 n arg ⁡ ( − μ j ) , | arg ⁡ ( − μ j ) | < π 2 . {\displaystyle {\begin{aligned}\left(\det \left(-S_{xx}''(x^{0})\right)\right)^{-{\frac {1}{2}}}&=\exp \left(-i{\text{ Ind}}\left(-S_{xx}''(x^{0})\right)\right)\prod _{j=1}^{n}\left|\mu _{j}\right|^{-{\frac {1}{2}}},\\{\text{Ind}}\left(-S_{xx}''(x^{0})\right)&={\tfrac {1}{2}}\sum _{j=1}^{n}\arg(-\mu _{j}),&&|\arg(-\mu _{j})|<{\tfrac {\pi }{2}}.\end{aligned}}} Consider important special cases: If S(x) is real valued for real x and x0 in Rn (aka, the multidimensional Laplace method), then Ind ( − S x x ″ ( x 0 ) ) = 0. {\displaystyle {\text{Ind}}\left(-S_{xx}''(x^{0})\right)=0.} If S(x) is purely imaginary for real x (i.e., ℜ ( S ( x ) ) = 0 {\displaystyle \Re (S(x))=0} for all x in Rn) and x0 in Rn (aka, the multidimensional stationary phase method), then Ind ( − S x x ″ ( x 0 ) ) = π 4 sign S x x ″ ( x 0 ) , {\displaystyle {\text{Ind}}\left(-S_{xx}''(x^{0})\right)={\frac {\pi }{4}}{\text{sign }}S_{xx}''(x_{0}),} where sign S x x ″ ( x 0 ) {\displaystyle {\text{sign }}S_{xx}''(x_{0})} denotes the signature of matrix S x x ″ ( x 0 ) {\displaystyle S_{xx}''(x_{0})} , which equals the number of negative eigenvalues minus the number of positive ones. It is noteworthy that in applications of the stationary phase method to the multidimensional WKB approximation in quantum mechanics (as well as in optics), Ind is related to the Maslov index; see, e.g., Chaichian & Demichev (2001) and Schulman (2005).
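A quick numerical check of formula (8) in its simplest special case (n = 1, real-valued S, so Ind = 0, i.e. Laplace's method): writing Γ(λ + 1) = λ^(λ+1) ∫_0^∞ e^(λ(ln x − x)) dx puts the Gamma function in the required form with f ≡ 1 and S(x) = ln x − x, which has a single non-degenerate maximum at x0 = 1 with S(x0) = −1 and μ1 = S″(x0) = −1; formula (8) then predicts Stirling's approximation √(2π/λ) e^(−λ). The sketch below is a minimal illustration; the test values of λ are arbitrary, and the integrand is rescaled by e^(−λS(x0)) to avoid floating-point underflow.

```python
import math
from scipy.integrate import quad

# S(x) = ln(x) - x: single non-degenerate saddle at x0 = 1, S(x0) = -1, S''(x0) = -1.
# We compare e^{lambda} * Integral( e^{lambda*S(x)} dx ) against (2*pi/lambda)^{1/2}.
for lam in (5.0, 20.0, 80.0):
    scaled = quad(lambda x: math.exp(lam * (math.log(x) - x + 1.0)), 0.0, math.inf)[0]
    saddle = math.sqrt(2.0 * math.pi / lam)       # (2*pi/lam)^{1/2} * f(x0) * (-S''(x0))^{-1/2}
    print(lam, scaled, saddle, scaled / saddle)   # ratio -> 1, with O(1/lambda) correction
```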
== The case of multiple non-degenerate saddle points == If the function S(x) has multiple isolated non-degenerate saddle points, i.e., ∇ S ( x ( k ) ) = 0 , det S x x ″ ( x ( k ) ) ≠ 0 , x ( k ) ∈ Ω x ( k ) , {\displaystyle \nabla S\left(x^{(k)}\right)=0,\quad \det S''_{xx}\left(x^{(k)}\right)\neq 0,\quad x^{(k)}\in \Omega _{x}^{(k)},} where { Ω x ( k ) } k = 1 K {\displaystyle \left\{\Omega _{x}^{(k)}\right\}_{k=1}^{K}} is an open cover of Ωx, then the calculation of the integral asymptotic is reduced to the case of a single saddle point by employing the partition of unity. The partition of unity allows us to construct a set of continuous functions ρk(x) : Ωx → [0, 1], 1 ≤ k ≤ K, such that ∑ k = 1 K ρ k ( x ) = 1 , ∀ x ∈ Ω x , ρ k ( x ) = 0 ∀ x ∈ Ω x ∖ Ω x ( k ) . {\displaystyle {\begin{aligned}\sum _{k=1}^{K}\rho _{k}(x)&=1,&&\forall x\in \Omega _{x},\\\rho _{k}(x)&=0&&\forall x\in \Omega _{x}\setminus \Omega _{x}^{(k)}.\end{aligned}}} Whence, ∫ I x ⊂ Ω x f ( x ) e λ S ( x ) d x ≡ ∑ k = 1 K ∫ I x ⊂ Ω x ρ k ( x ) f ( x ) e λ S ( x ) d x . {\displaystyle \int _{I_{x}\subset \Omega _{x}}f(x)e^{\lambda S(x)}dx\equiv \sum _{k=1}^{K}\int _{I_{x}\subset \Omega _{x}}\rho _{k}(x)f(x)e^{\lambda S(x)}dx.} Therefore, as λ → ∞, we have: ∑ k = 1 K ∫ a neighborhood of x ( k ) f ( x ) e λ S ( x ) d x = ( 2 π λ ) n 2 ∑ k = 1 K e λ S ( x ( k ) ) ( det ( − S x x ″ ( x ( k ) ) ) ) − 1 2 f ( x ( k ) ) , {\displaystyle \sum _{k=1}^{K}\int _{{\text{a neighborhood of }}x^{(k)}}f(x)e^{\lambda S(x)}dx=\left({\frac {2\pi }{\lambda }}\right)^{\frac {n}{2}}\sum _{k=1}^{K}e^{\lambda S\left(x^{(k)}\right)}\left(\det \left(-S_{xx}''\left(x^{(k)}\right)\right)\right)^{-{\frac {1}{2}}}f\left(x^{(k)}\right),} where equation (13) was utilized at the last stage, and the pre-exponential function f (x) must at least be continuous. == The other cases == When ∇S(z0) = 0 and det S z z ″ ( z 0 ) = 0 {\displaystyle \det S''_{zz}(z^{0})=0} , the point z0 ∈ Cn is called a degenerate saddle point of a function S(z). Calculating the asymptotic of ∫ f ( x ) e λ S ( x ) d x , {\displaystyle \int f(x)e^{\lambda S(x)}dx,} when λ → ∞, f (x) is continuous, and S(z) has a degenerate saddle point, is a very rich problem, whose solution heavily relies on catastrophe theory. Here, catastrophe theory replaces the Morse lemma, valid only in the non-degenerate case, to transform the function S(z) into one of a multitude of canonical representations. For further details see, e.g., Poston & Stewart (1978) and Fedoryuk (1987). Integrals with degenerate saddle points naturally appear in many applications including optical caustics and the multidimensional WKB approximation in quantum mechanics. Other cases, such as when f (x) and/or S(x) is discontinuous or when an extremum of S(x) lies at the integration region's boundary, require special care (see, e.g., Fedoryuk (1987) and Wong (1989)). == Extensions and generalizations == An extension of the steepest descent method is the so-called nonlinear stationary phase/steepest descent method. Here, instead of integrals, one needs to evaluate asymptotically solutions of Riemann–Hilbert factorization problems. Given a contour C in the complex sphere, a function f defined on that contour and a special point, say infinity, one seeks a function M holomorphic away from the contour C, with prescribed jump across C, and with a given normalization at infinity. If f and hence M are matrices rather than scalars, this is a problem that in general does not admit an explicit solution.
An asymptotic evaluation is then possible along the lines of the linear stationary phase/steepest descent method. The idea is to reduce asymptotically the solution of the given Riemann–Hilbert problem to that of a simpler, explicitly solvable, Riemann–Hilbert problem. Cauchy's theorem is used to justify deformations of the jump contour. The nonlinear stationary phase method was introduced by Deift and Zhou in 1993, based on earlier work of the Russian mathematician Alexander Its. A (properly speaking) nonlinear steepest descent method was introduced by Kamvissis, K. McLaughlin and P. Miller in 2003, based on previous work of Lax, Levermore, Deift, Venakides and Zhou. As in the linear case, steepest descent contours solve a min-max problem. In the nonlinear case they turn out to be "S-curves" (defined in a different context in the 1980s by Stahl, Gonchar and Rakhmanov). The nonlinear stationary phase/steepest descent method has applications to the theory of soliton equations and integrable models, random matrices and combinatorics. Another extension is the method of Chester–Friedman–Ursell for coalescing saddle points and uniform asymptotic expansions. == See also == Pearcey integral Stationary phase approximation Laplace's method == Notes == == References == Chaichian, M.; Demichev, A. (2001), Path Integrals in Physics Volume 1: Stochastic Process and Quantum Mechanics, Taylor & Francis, p. 174, ISBN 075030801X Debye, P. (1909), "Näherungsformeln für die Zylinderfunktionen für große Werte des Arguments und unbeschränkt veränderliche Werte des Index", Mathematische Annalen, 67 (4): 535–558, doi:10.1007/BF01450097, S2CID 122219667 English translation in Debye, Peter J. W. (1954), The collected papers of Peter J. W. Debye, Interscience Publishers, Inc., New York, ISBN 978-0-918024-58-9, MR 0063975 Deift, P.; Zhou, X. (1993), "A steepest descent method for oscillatory Riemann-Hilbert problems. Asymptotics for the MKdV equation", Ann. of Math., vol. 137, no. 2, pp. 295–368, arXiv:math/9201261, doi:10.2307/2946540, JSTOR 2946540, S2CID 12699956. Erdelyi, A. (1956), Asymptotic Expansions, Dover. Fedoryuk, M. V. (2001) [1994], "Saddle point method", Encyclopedia of Mathematics, EMS Press. Fedoryuk, M. V. (1987), Asymptotic: Integrals and Series, Nauka, Moscow [in Russian]. Kamvissis, S.; McLaughlin, K. T.-R.; Miller, P. (2003), "Semiclassical Soliton Ensembles for the Focusing Nonlinear Schrödinger Equation", Annals of Mathematics Studies, vol. 154, Princeton University Press. Riemann, B. (1863), Sullo svolgimento del quoziente di due serie ipergeometriche in frazione continua infinita (Unpublished note, reproduced in Riemann's collected papers.) Siegel, C. L. (1932), "Über Riemanns Nachlaß zur analytischen Zahlentheorie", Quellen und Studien zur Geschichte der Mathematik, Astronomie und Physik, 2: 45–80 Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966. Translated in Barkan, Eric; Sklar, David (2018), "On Riemanns Nachlass for Analytic Number Theory: A translation of Siegel's Uber", arXiv:1810.05198 [math.HO]. Poston, T.; Stewart, I. (1978), Catastrophe Theory and Its Applications, Pitman. Schulman, L. S. (2005), "Ch. 17: The Phase of the Semiclassical Amplitude", Techniques and Applications of Path Integration, Dover, ISBN 0486445283 Wong, R. (1989), Asymptotic approximations of integrals, Academic Press.
Wikipedia/Method_of_steepest_descent
Integrated nested Laplace approximations (INLA) is a method for approximate Bayesian inference based on Laplace's method. It is designed for a class of models called latent Gaussian models (LGMs), for which it can be a fast and accurate alternative to Markov chain Monte Carlo methods for computing posterior marginal distributions. Due to its relative speed even with large data sets for certain problems and models, INLA has been a popular inference method in applied statistics, in particular spatial statistics, ecology, and epidemiology. It is also possible to combine INLA with a finite element method solution of a stochastic partial differential equation to study, e.g., spatial point processes and species distribution models. The INLA method is implemented in the R-INLA R package. == Latent Gaussian models == Let y = ( y 1 , … , y n ) {\displaystyle {\boldsymbol {y}}=(y_{1},\dots ,y_{n})} denote the response variable (that is, the observations) which belongs to an exponential family, with the mean μ i {\displaystyle \mu _{i}} (of y i {\displaystyle y_{i}} ) being linked to a linear predictor η i {\displaystyle \eta _{i}} via an appropriate link function. The linear predictor can take the form of a (Bayesian) additive model. All latent effects (the linear predictor, the intercept, coefficients of possible covariates, and so on) are collectively denoted by the vector x {\displaystyle {\boldsymbol {x}}} . The hyperparameters of the model are denoted by θ {\displaystyle {\boldsymbol {\theta }}} . As per Bayesian statistics, x {\displaystyle {\boldsymbol {x}}} and θ {\displaystyle {\boldsymbol {\theta }}} are random variables with prior distributions. The observations are assumed to be conditionally independent given x {\displaystyle {\boldsymbol {x}}} and θ {\displaystyle {\boldsymbol {\theta }}} : π ( y | x , θ ) = ∏ i ∈ I π ( y i | η i , θ ) , {\displaystyle \pi ({\boldsymbol {y}}|{\boldsymbol {x}},{\boldsymbol {\theta }})=\prod _{i\in {\mathcal {I}}}\pi (y_{i}|\eta _{i},{\boldsymbol {\theta }}),} where I {\displaystyle {\mathcal {I}}} is the set of indices for observed elements of y {\displaystyle {\boldsymbol {y}}} (some elements may be unobserved, and for these INLA computes a posterior predictive distribution). Note that the linear predictor η {\displaystyle {\boldsymbol {\eta }}} is part of x {\displaystyle {\boldsymbol {x}}} . For the model to be a latent Gaussian model, it is assumed that x | θ {\displaystyle {\boldsymbol {x}}|{\boldsymbol {\theta }}} is a Gaussian Markov random field (GMRF) (that is, a multivariate Gaussian with additional conditional independence properties) with probability density π ( x | θ ) ∝ | Q θ | 1 / 2 exp ⁡ ( − 1 2 x T Q θ x ) , {\displaystyle \pi ({\boldsymbol {x}}|{\boldsymbol {\theta }})\propto \left|{\boldsymbol {Q_{\theta }}}\right|^{1/2}\exp \left(-{\frac {1}{2}}{\boldsymbol {x}}^{T}{\boldsymbol {Q_{\theta }}}{\boldsymbol {x}}\right),} where Q θ {\displaystyle {\boldsymbol {Q_{\theta }}}} is a θ {\displaystyle {\boldsymbol {\theta }}} -dependent sparse precision matrix and | Q θ | {\displaystyle \left|{\boldsymbol {Q_{\theta }}}\right|} is its determinant. The precision matrix is sparse due to the GMRF assumption. The prior distribution π ( θ ) {\displaystyle \pi ({\boldsymbol {\theta }})} for the hyperparameters need not be Gaussian. However, the number of hyperparameters, m = d i m ( θ ) {\displaystyle m=\mathrm {dim} ({\boldsymbol {\theta }})} , is assumed to be small (say, less than 15).
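The computational payoff of the GMRF assumption can be seen directly: both the quadratic form and the log-determinant in the density above involve only a sparse matrix, so they can be evaluated with sparse factorizations. A minimal sketch follows; the first-order random-walk precision used here is a standard GMRF example, not something prescribed by the article, and a sparse Cholesky factorization would be the usual choice (sparse LU is used simply because SciPy ships it).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n, tau = 200, 2.0
# Illustrative GMRF: first-order random-walk precision tau * D'D, plus a small
# "nugget" on the diagonal so that Q is non-singular (proper).
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n), format="csc")
Q = (tau * (D.T @ D) + 1e-4 * sp.identity(n)).tocsc()

lu = splu(Q)
# For a symmetric positive definite Q, det(Q) > 0 and equals the product of the
# diagonal of U up to permutation signs, so the log-determinant is:
logdet = np.sum(np.log(np.abs(lu.U.diagonal())))

x = np.random.default_rng(0).normal(size=n)
log_density = 0.5 * logdet - 0.5 * x @ (Q @ x)   # log pi(x|theta) up to -(n/2) log(2*pi)
print(log_density)
```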
== Approximate Bayesian inference with INLA == In Bayesian inference, one wants to solve for the posterior distribution of the latent variables x {\displaystyle {\boldsymbol {x}}} and θ {\displaystyle {\boldsymbol {\theta }}} . Applying Bayes' theorem π ( x , θ | y ) = π ( y | x , θ ) π ( x | θ ) π ( θ ) π ( y ) , {\displaystyle \pi ({\boldsymbol {x}},{\boldsymbol {\theta }}|{\boldsymbol {y}})={\frac {\pi ({\boldsymbol {y}}|{\boldsymbol {x}},{\boldsymbol {\theta }})\pi ({\boldsymbol {x}}|{\boldsymbol {\theta }})\pi ({\boldsymbol {\theta }})}{\pi ({\boldsymbol {y}})}},} the joint posterior distribution of x {\displaystyle {\boldsymbol {x}}} and θ {\displaystyle {\boldsymbol {\theta }}} is given by π ( x , θ | y ) ∝ π ( θ ) π ( x | θ ) ∏ i π ( y i | η i , θ ) ∝ π ( θ ) | Q θ | 1 / 2 exp ⁡ ( − 1 2 x T Q θ x + ∑ i log ⁡ [ π ( y i | η i , θ ) ] ) . {\displaystyle {\begin{aligned}\pi ({\boldsymbol {x}},{\boldsymbol {\theta }}|{\boldsymbol {y}})&\propto \pi ({\boldsymbol {\theta }})\pi ({\boldsymbol {x}}|{\boldsymbol {\theta }})\prod _{i}\pi (y_{i}|\eta _{i},{\boldsymbol {\theta }})\\&\propto \pi ({\boldsymbol {\theta }})\left|{\boldsymbol {Q_{\theta }}}\right|^{1/2}\exp \left(-{\frac {1}{2}}{\boldsymbol {x}}^{T}{\boldsymbol {Q_{\theta }}}{\boldsymbol {x}}+\sum _{i}\log \left[\pi (y_{i}|\eta _{i},{\boldsymbol {\theta }})\right]\right).\end{aligned}}} Obtaining the exact posterior is generally a very difficult problem. In INLA, the main aim is to approximate the posterior marginals π ( x i | y ) = ∫ π ( x i | θ , y ) π ( θ | y ) d θ π ( θ j | y ) = ∫ π ( θ | y ) d θ − j , {\displaystyle {\begin{array}{rcl}\pi (x_{i}|{\boldsymbol {y}})&=&\int \pi (x_{i}|{\boldsymbol {\theta }},{\boldsymbol {y}})\pi ({\boldsymbol {\theta }}|{\boldsymbol {y}})d{\boldsymbol {\theta }}\\\pi (\theta _{j}|{\boldsymbol {y}})&=&\int \pi ({\boldsymbol {\theta }}|{\boldsymbol {y}})d{\boldsymbol {\theta }}_{-j},\end{array}}} where θ − j = ( θ 1 , … , θ j − 1 , θ j + 1 , … , θ m ) {\displaystyle {\boldsymbol {\theta }}_{-j}=\left(\theta _{1},\dots ,\theta _{j-1},\theta _{j+1},\dots ,\theta _{m}\right)} . A key idea of INLA is to construct nested approximations given by π ~ ( x i | y ) = ∫ π ~ ( x i | θ , y ) π ~ ( θ | y ) d θ π ~ ( θ j | y ) = ∫ π ~ ( θ | y ) d θ − j , {\displaystyle {\begin{array}{rcl}{\widetilde {\pi }}(x_{i}|{\boldsymbol {y}})&=&\int {\widetilde {\pi }}(x_{i}|{\boldsymbol {\theta }},{\boldsymbol {y}}){\widetilde {\pi }}({\boldsymbol {\theta }}|{\boldsymbol {y}})d{\boldsymbol {\theta }}\\{\widetilde {\pi }}(\theta _{j}|{\boldsymbol {y}})&=&\int {\widetilde {\pi }}({\boldsymbol {\theta }}|{\boldsymbol {y}})d{\boldsymbol {\theta }}_{-j},\end{array}}} where π ~ ( ⋅ | ⋅ ) {\displaystyle {\widetilde {\pi }}(\cdot |\cdot )} is an approximated posterior density. 
The approximation to the marginal density π ( x i | y ) {\displaystyle \pi (x_{i}|{\boldsymbol {y}})} is obtained in a nested fashion by first approximating π ( θ | y ) {\displaystyle \pi ({\boldsymbol {\theta }}|{\boldsymbol {y}})} and π ( x i | θ , y ) {\displaystyle \pi (x_{i}|{\boldsymbol {\theta }},{\boldsymbol {y}})} , and then numerically integrating out θ {\displaystyle {\boldsymbol {\theta }}} as π ~ ( x i | y ) = ∑ k π ~ ( x i | θ k , y ) × π ~ ( θ k | y ) × Δ k , {\displaystyle {\begin{aligned}{\widetilde {\pi }}(x_{i}|{\boldsymbol {y}})=\sum _{k}{\widetilde {\pi }}\left(x_{i}|{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)\times {\widetilde {\pi }}({\boldsymbol {\theta }}_{k}|{\boldsymbol {y}})\times \Delta _{k},\end{aligned}}} where the summation is over the values of θ {\displaystyle {\boldsymbol {\theta }}} , with integration weights given by Δ k {\displaystyle \Delta _{k}} . The approximation of π ( θ j | y ) {\displaystyle \pi (\theta _{j}|{\boldsymbol {y}})} is computed by numerically integrating θ − j {\displaystyle {\boldsymbol {\theta }}_{-j}} out from π ~ ( θ | y ) {\displaystyle {\widetilde {\pi }}({\boldsymbol {\theta }}|{\boldsymbol {y}})} . To get the approximate distribution π ~ ( θ | y ) {\displaystyle {\widetilde {\pi }}({\boldsymbol {\theta }}|{\boldsymbol {y}})} , one can use the relation π ( θ | y ) = π ( x , θ , y ) π ( x | θ , y ) π ( y ) , {\displaystyle {\begin{aligned}{\pi }({\boldsymbol {\theta }}|{\boldsymbol {y}})={\frac {\pi \left({\boldsymbol {x}},{\boldsymbol {\theta }},{\boldsymbol {y}}\right)}{\pi \left({\boldsymbol {x}}|{\boldsymbol {\theta }},{\boldsymbol {y}}\right)\pi ({\boldsymbol {y}})}},\end{aligned}}} as the starting point. Then π ~ ( θ | y ) {\displaystyle {\widetilde {\pi }}({\boldsymbol {\theta }}|{\boldsymbol {y}})} is obtained at a specific value of the hyperparameters θ = θ k {\displaystyle {\boldsymbol {\theta }}={\boldsymbol {\theta }}_{k}} with Laplace's approximation π ~ ( θ k | y ) ∝ π ( x , θ k , y ) π ~ G ( x | θ k , y ) | x = x ∗ ( θ k ) , ∝ π ( y | x , θ k ) π ( x | θ k ) π ( θ k ) π ~ G ( x | θ k , y ) | x = x ∗ ( θ k ) , {\displaystyle {\begin{aligned}{\widetilde {\pi }}({\boldsymbol {\theta }}_{k}|{\boldsymbol {y}})&\propto \left.{\frac {\pi \left({\boldsymbol {x}},{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)}{{\widetilde {\pi }}_{G}\left({\boldsymbol {x}}|{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)}}\right\vert _{{\boldsymbol {x}}={\boldsymbol {x}}^{*}({\boldsymbol {\theta }}_{k})},\\&\propto \left.{\frac {\pi ({\boldsymbol {y}}|{\boldsymbol {x}},{\boldsymbol {\theta }}_{k})\pi ({\boldsymbol {x}}|{\boldsymbol {\theta }}_{k})\pi ({\boldsymbol {\theta }}_{k})}{{\widetilde {\pi }}_{G}\left({\boldsymbol {x}}|{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)}}\right\vert _{{\boldsymbol {x}}={\boldsymbol {x}}^{*}({\boldsymbol {\theta }}_{k})},\end{aligned}}} where π ~ G ( x | θ k , y ) {\displaystyle {\widetilde {\pi }}_{G}\left({\boldsymbol {x}}|{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)} is the Gaussian approximation to π ( x | θ k , y ) {\displaystyle {\pi }\left({\boldsymbol {x}}|{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)} whose mode at a given θ k {\displaystyle {\boldsymbol {\theta }}_{k}} is x ∗ ( θ k ) {\displaystyle {\boldsymbol {x}}^{*}({\boldsymbol {\theta }}_{k})} . The mode can be found numerically for example with the Newton-Raphson method. 
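The recipe just described (find the mode of the full conditional by Newton–Raphson, then divide the joint density by the Gaussian approximation evaluated at that mode) is short to write down for a toy model. The sketch below assumes, purely for illustration, y_i ~ Poisson(exp(x_i)) with x | θ ~ N(0, θ⁻¹I) and an Exp(1) prior on θ; in this toy the Hessian of the full conditional is diagonal, so no sparse linear algebra is needed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
y = rng.poisson(np.exp(rng.normal(0.0, 0.7, size=n)))   # simulated toy counts

def log_post_theta(theta):
    """Laplace approximation to log pi(theta|y), up to an additive constant."""
    x = np.zeros(n)
    for _ in range(100):                 # Newton-Raphson for the mode x*(theta)
        grad = y - np.exp(x) - theta * x
        prec = np.exp(x) + theta         # negative Hessian (diagonal) of the full conditional
        step = grad / prec
        x += step
        if np.max(np.abs(step)) < 1e-10:
            break
    log_joint = (np.sum(y * x - np.exp(x))                           # Poisson log-lik, no log(y!)
                 + 0.5 * n * np.log(theta) - 0.5 * theta * x @ x     # log pi(x|theta)
                 - theta)                                            # log pi(theta): Exp(1), assumed
    # subtract the Gaussian approximation pi~_G evaluated at its own mode x*:
    return log_joint - 0.5 * np.sum(np.log(prec))

thetas = np.linspace(0.2, 10.0, 60)
logp = np.array([log_post_theta(t) for t in thetas])
p = np.exp(logp - logp.max())
p /= p.sum() * (thetas[1] - thetas[0])               # normalize on the theta grid
print("approximate posterior mode of theta:", thetas[np.argmax(p)])
```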
The trick in the Laplace approximation above is the fact that the Gaussian approximation is applied on the full conditional of x {\displaystyle {\boldsymbol {x}}} in the denominator since it is usually close to a Gaussian due to the GMRF property of x {\displaystyle {\boldsymbol {x}}} . Applying the approximation here improves the accuracy of the method, since the posterior π ( θ | y ) {\displaystyle {\pi }({\boldsymbol {\theta }}|{\boldsymbol {y}})} itself need not be close to a Gaussian, and so the Gaussian approximation is not directly applied on π ( θ | y ) {\displaystyle {\pi }({\boldsymbol {\theta }}|{\boldsymbol {y}})} . The second important property of a GMRF, the sparsity of the precision matrix Q θ k {\displaystyle {\boldsymbol {Q}}_{{\boldsymbol {\theta }}_{k}}} , is required for efficient computation of π ~ ( θ k | y ) {\displaystyle {\widetilde {\pi }}({\boldsymbol {\theta }}_{k}|{\boldsymbol {y}})} for each value θ k {\displaystyle {{\boldsymbol {\theta }}_{k}}} . Obtaining the approximate distribution π ~ ( x i | θ k , y ) {\displaystyle {\widetilde {\pi }}\left(x_{i}|{\boldsymbol {\theta }}_{k},{\boldsymbol {y}}\right)} is more involved, and the INLA method provides three options for this: Gaussian approximation, Laplace approximation, or the simplified Laplace approximation. For the numerical integration to obtain π ~ ( x i | y ) {\displaystyle {\widetilde {\pi }}(x_{i}|{\boldsymbol {y}})} , also three options are available: grid search, central composite design, or empirical Bayes. == References == == Further reading == Gomez-Rubio, Virgilio (2021). Bayesian inference with INLA. Chapman and Hall/CRC. ISBN 978-1-03-217453-2.
Wikipedia/Integrated_nested_Laplace_approximations
In mathematics, Laplace's principle is a basic theorem in large deviations theory which is similar to Varadhan's lemma. It gives an asymptotic expression for the Lebesgue integral of exp(−θφ(x)) over a fixed set A as θ becomes large. Such expressions can be used, for example, in statistical mechanics to determine the limiting behaviour of a system as the temperature tends to absolute zero. == Statement of the result == Let A be a Lebesgue-measurable subset of d-dimensional Euclidean space Rd and let φ : Rd → R be a measurable function with ∫ A e − φ ( x ) d x < ∞ . {\displaystyle \int _{A}e^{-\varphi (x)}\,dx<\infty .} Then lim θ → ∞ 1 θ log ⁡ ∫ A e − θ φ ( x ) d x = − e s s i n f x ∈ A ⁡ φ ( x ) , {\displaystyle \lim _{\theta \to \infty }{\frac {1}{\theta }}\log \int _{A}e^{-\theta \varphi (x)}\,dx=-\mathop {\mathrm {ess\,inf} } _{x\in A}\varphi (x),} where ess inf denotes the essential infimum. Heuristically, this may be read as saying that for large θ, ∫ A e − θ φ ( x ) d x ≈ exp ⁡ ( − θ e s s i n f x ∈ A ⁡ φ ( x ) ) . {\displaystyle \int _{A}e^{-\theta \varphi (x)}\,dx\approx \exp \left(-\theta \mathop {\mathrm {ess\,inf} } _{x\in A}\varphi (x)\right).} == Application == The Laplace principle can be applied to the family of probability measures Pθ given by P θ ( A ) = ( ∫ A e − θ φ ( x ) d x ) / ( ∫ R d e − θ φ ( y ) d y ) {\displaystyle \mathbf {P} _{\theta }(A)=\left(\int _{A}e^{-\theta \varphi (x)}\,dx\right){\bigg /}\left(\int _{\mathbf {R} ^{d}}e^{-\theta \varphi (y)}\,dy\right)} to give an asymptotic expression for the probability of some event A as θ becomes large. For example, if X is a standard normally distributed random variable on R, then lim ε ↓ 0 ε log ⁡ P [ ε X ∈ A ] = − e s s i n f x ∈ A ⁡ x 2 2 {\displaystyle \lim _{\varepsilon \downarrow 0}\varepsilon \log \mathbf {P} {\big [}{\sqrt {\varepsilon }}X\in A{\big ]}=-\mathop {\mathrm {ess\,inf} } _{x\in A}{\frac {x^{2}}{2}}} for every measurable set A. == See also == Laplace's method == References == Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036
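The limit in the statement above is easy to probe numerically. A minimal sketch, with the arbitrary choices φ(x) = x² and A = [1, 3], for which ess inf over A is φ(1) = 1, so (1/θ) log ∫_A e^(−θφ) should tend to −1; the common factor e^(−θ·1) is pulled out before integrating to avoid underflow at large θ.

```python
import math
from scipy.integrate import quad

phi = lambda x: x * x                       # ess inf of phi over A = [1, 3] is phi(1) = 1
m = 1.0
for theta in (1.0, 10.0, 100.0, 1000.0):
    # factor out e^{-theta*m} so the integrand stays O(1) before taking the log
    shifted = quad(lambda x: math.exp(-theta * (phi(x) - m)), 1.0, 3.0)[0]
    print(theta, -m + math.log(shifted) / theta)   # tends to -1 as theta grows
```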
Wikipedia/Laplace_principle_(large_deviations_theory)
In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space. Three features are often referred to as characterizing integrable systems: the existence of a maximal set of conserved quantities (the usual defining property of complete integrability) the existence of algebraic invariants, having a basis in algebraic geometry (a property sometimes known as algebraic integrability) the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability) Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems. The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time. Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top). In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equation, and certain integrable many-body systems, such as the Toda lattice. The modern theory of integrable systems was revived with the numerical discovery of solitons by Martin Kruskal and Norman Zabusky in 1965, which led to the inverse scattering transform method in 1967. In the special case of Hamiltonian systems, if there are enough independent Poisson commuting first integrals for the flow parameters to be able to serve as a coordinate system on the invariant level sets (the leaves of the Lagrangian foliation), and if the flows are complete and the energy level set is compact, the Liouville–Arnold theorem implies the existence of action-angle variables. General dynamical systems have no such conserved quantities; in the case of autonomous Hamiltonian systems, the energy is generally the only one, and on the energy level sets, the flows are typically chaotic. A key ingredient in characterizing integrable systems is the Frobenius theorem, which states that a system is Frobenius integrable (i.e., is generated by an integrable distribution) if, locally, it has a foliation by maximal integral manifolds. But integrability, in the sense of dynamical systems, is a global property, not a local one, since it requires that the foliation be a regular one, with the leaves being embedded submanifolds. Integrability does not necessarily imply that generic solutions can be explicitly expressed in terms of some known set of special functions; it is an intrinsic property of the geometry and topology of the system, and the nature of the dynamics.
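As a concrete numerical illustration of "sufficiently many conserved quantities", the sketch below integrates the open Toda lattice (mentioned above) in Flaschka variables, where the flow ȧᵢ = aᵢ(bᵢ₊₁ − bᵢ), ḃᵢ = 2(aᵢ² − aᵢ₋₁²) is equivalent to a Lax equation L̇ = [B, L] for the tridiagonal matrix L with diagonal b and off-diagonals a; the eigenvalues of L are therefore first integrals. The initial data are arbitrary, and the snippet is only a sketch of the conservation check, not of any particular physical configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 4  # open Toda chain in Flaschka variables u = (a_1..a_{n-1}, b_1..b_n)

def toda_rhs(t, u):
    a, b = u[:n - 1], u[n - 1:]
    da = a * (b[1:] - b[:-1])          # da_i/dt = a_i (b_{i+1} - b_i)
    db = np.zeros(n)
    db[:-1] += 2 * a**2                # db_i/dt = 2 (a_i^2 - a_{i-1}^2),
    db[1:] -= 2 * a**2                 # with a_0 = a_n = 0 at the boundaries
    return np.concatenate([da, db])

def lax_eigenvalues(u):
    a, b = u[:n - 1], u[n - 1:]
    L = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
    return np.sort(np.linalg.eigvalsh(L))

u0 = np.concatenate([np.full(n - 1, 0.5), [1.0, -0.3, 0.2, -0.9]])
sol = solve_ivp(toda_rhs, (0.0, 20.0), u0, rtol=1e-10, atol=1e-12, dense_output=True)

e0 = lax_eigenvalues(u0)
for t in (5.0, 20.0):
    drift = np.max(np.abs(lax_eigenvalues(sol.sol(t)) - e0))
    print(t, drift)   # stays at the integration tolerance: n conserved quantities
```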
== General dynamical systems == In the context of differentiable dynamical systems, the notion of integrability refers to the existence of invariant, regular foliations; i.e., ones whose leaves are embedded submanifolds of the smallest possible dimension that are invariant under the flow. There is thus a variable notion of the degree of integrability, depending on the dimension of the leaves of the invariant foliation. This concept has a refinement in the case of Hamiltonian systems, known as complete integrability in the sense of Liouville (see below), which is what is most frequently referred to in this context. An extension of the notion of integrability is also applicable to discrete systems such as lattices. This definition can be adapted to describe evolution equations that either are systems of differential equations or finite difference equations. The distinction between integrable and nonintegrable dynamical systems has the qualitative implication of regular motion vs. chaotic motion and hence is an intrinsic property, not just a matter of whether a system can be explicitly integrated in an exact form. == Hamiltonian systems and Liouville integrability == In the special setting of Hamiltonian systems, we have the notion of integrability in the Liouville sense. (See the Liouville–Arnold theorem.) Liouville integrability means that there exists a regular foliation of the phase space by invariant manifolds such that the Hamiltonian vector fields associated with the invariants of the foliation span the tangent distribution. Another way to state this is that there exists a maximal set of functionally independent Poisson commuting invariants (i.e., independent functions on the phase space whose Poisson brackets with the Hamiltonian of the system, and with each other, vanish). In finite dimensions, if the phase space is symplectic (i.e., the center of the Poisson algebra consists only of constants), it must have even dimension 2 n , {\displaystyle 2n,} and the maximal number of independent Poisson commuting invariants (including the Hamiltonian itself) is n {\displaystyle n} . The leaves of the foliation are totally isotropic with respect to the symplectic form and such a maximal isotropic foliation is called Lagrangian. All autonomous Hamiltonian systems (i.e. those for which the Hamiltonian and Poisson brackets are not explicitly time-dependent) have at least one invariant; namely, the Hamiltonian itself, whose value along the flow is the energy. If the energy level sets are compact, the leaves of the Lagrangian foliation are tori, and the natural linear coordinates on these are called "angle" variables. The cycles of the canonical 1 {\displaystyle 1} -form are called the action variables, and the resulting canonical coordinates are called action-angle variables (see below). There is also a distinction between complete integrability, in the Liouville sense, and partial integrability, as well as a notion of superintegrability and maximal superintegrability. Essentially, these distinctions correspond to the dimensions of the leaves of the foliation. When the number of independent Poisson commuting invariants is less than maximal (but, in the case of autonomous systems, more than one), we say the system is partially integrable. When there exist further functionally independent invariants, beyond the maximal number that can be Poisson commuting, and hence the dimension of the leaves of the invariant foliation is less than n, we say the system is superintegrable. 
If there is a regular foliation with one-dimensional leaves (curves), this is called maximally superintegrable. == Action-angle variables == When a finite-dimensional Hamiltonian system is completely integrable in the Liouville sense, and the energy level sets are compact, the flows are complete, and the leaves of the invariant foliation are tori. There then exist, as mentioned above, special sets of canonical coordinates on the phase space known as action-angle variables, such that the invariant tori are the joint level sets of the action variables. These thus provide a complete set of invariants of the Hamiltonian flow (constants of motion), and the angle variables are the natural periodic coordinates on the tori. The motion on the invariant tori, expressed in terms of these canonical coordinates, is linear in the angle variables. == The Hamilton–Jacobi approach == In canonical transformation theory, there is the Hamilton–Jacobi method, in which solutions to Hamilton's equations are sought by first finding a complete solution of the associated Hamilton–Jacobi equation. In classical terminology, this is described as determining a transformation to a canonical set of coordinates consisting of completely ignorable variables; i.e., those in which there is no dependence of the Hamiltonian on a complete set of canonical "position" coordinates, and hence the corresponding canonically conjugate momenta are all conserved quantities. In the case of compact energy level sets, this is the first step towards determining the action-angle variables. In the general theory of partial differential equations of Hamilton–Jacobi type, a complete solution (i.e. one that depends on n independent constants of integration, where n is the dimension of the configuration space), exists in very general cases, but only in the local sense. Therefore, the existence of a complete solution of the Hamilton–Jacobi equation is by no means a characterization of complete integrability in the Liouville sense. Most cases that can be "explicitly integrated" involve a complete separation of variables, in which the separation constants provide the complete set of integration constants that are required. Only when these constants can be reinterpreted, within the full phase space setting, as the values of a complete set of Poisson commuting functions restricted to the leaves of a Lagrangian foliation, can the system be regarded as completely integrable in the Liouville sense. == Solitons and inverse spectral methods == A resurgence of interest in classical integrable systems came with the discovery, in the late 1960s, that solitons, which are strongly stable, localized solutions of partial differential equations like the Korteweg–de Vries equation (which describes 1-dimensional non-dissipative fluid dynamics in shallow basins), could be understood by viewing these equations as infinite-dimensional integrable Hamiltonian systems. Their study leads to a very fruitful approach for "integrating" such systems, the inverse scattering transform and more general inverse spectral methods (often reducible to Riemann–Hilbert problems), which generalize local linear methods like Fourier analysis to nonlocal linearization, through the solution of associated integral equations. 
The basic idea of this method is to introduce a linear operator that is determined by the position in phase space and which evolves under the dynamics of the system in question in such a way that its "spectrum" (in a suitably generalized sense) is invariant under the evolution, cf. Lax pair. This provides, in certain cases, enough invariants, or "integrals of motion" to make the system completely integrable. In the case of systems having an infinite number of degrees of freedom, such as the KdV equation, this is not sufficient to make precise the property of Liouville integrability. However, for suitably defined boundary conditions, the spectral transform can, in fact, be interpreted as a transformation to completely ignorable coordinates, in which the conserved quantities form half of a doubly infinite set of canonical coordinates, and the flow linearizes in these. In some cases, this may even be seen as a transformation to action-angle variables, although typically only a finite number of the "position" variables are actually angle coordinates, and the rest are noncompact. == Hirota bilinear equations and τ-functions == Another viewpoint that arose in the modern theory of integrable systems originated in a calculational approach pioneered by Ryogo Hirota, which involved replacing the original nonlinear dynamical system with a bilinear system of constant coefficient equations for an auxiliary quantity, which later came to be known as the τ-function. These are now referred to as the Hirota equations. Although originally appearing just as a calculational device, without any clear relation to the inverse scattering approach, or the Hamiltonian structure, this nevertheless gave a very direct method from which important classes of solutions such as solitons could be derived. Subsequently, this was interpreted by Mikio Sato and his students, at first for the case of integrable hierarchies of PDEs, such as the Kadomtsev–Petviashvili hierarchy, but then for much more general classes of integrable hierarchies, as a sort of universal phase space approach, in which, typically, the commuting dynamics were viewed simply as determined by a fixed (finite or infinite) abelian group action on a (finite or infinite) Grassmann manifold. The τ-function was viewed as the determinant of a projection operator from elements of the group orbit to some origin within the Grassmannian, and the Hirota equations as expressing the Plücker relations, characterizing the Plücker embedding of the Grassmannian in the projectivization of a suitably defined (infinite) exterior space, viewed as a fermionic Fock space. == Quantum integrable systems == There is also a notion of quantum integrable systems. In the quantum setting, functions on phase space must be replaced by self-adjoint operators on a Hilbert space, and the notion of Poisson commuting functions replaced by commuting operators. The notion of conservation laws must be specialized to local conservation laws. Every Hamiltonian has an infinite set of conserved quantities given by projectors to its energy eigenstates. However, this does not imply any special dynamical structure. To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. 
All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model, the Hubbard model and several variations on the Heisenberg model. Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model. == Exactly solvable models == In physics, completely integrable systems, especially in the infinite-dimensional setting, are often referred to as exactly solvable models. This obscures the distinction between integrability, in the Hamiltonian sense, and the more general dynamical systems sense. There are also exactly solvable models in statistical mechanics, which are more closely related to quantum integrable systems than classical ones. Two closely related methods: the Bethe ansatz approach, in its modern sense, based on the Yang–Baxter equations and the quantum inverse scattering method, provide quantum analogs of the inverse spectral methods. These are equally important in the study of solvable models in statistical mechanics. An imprecise notion of "exact solvability" as meaning: "The solutions can be expressed explicitly in terms of some previously known functions" is also sometimes used, as though this were an intrinsic property of the system itself, rather than the purely calculational feature that we happen to have some "known" functions available, in terms of which the solutions may be expressed. This notion has no intrinsic meaning, since what is meant by "known" functions very often is defined precisely by the fact that they satisfy certain given equations, and the list of such "known functions" is constantly growing. Although such a characterization of "integrability" has no intrinsic validity, it often implies the sort of regularity that is to be expected in integrable systems. == List of some well-known integrable systems == Classical mechanical systems Calogero–Moser–Sutherland model Central force motion (exact solutions of classical central-force problems) Geodesic motion on ellipsoids Harmonic oscillator Integrable Clebsch and Steklov systems in fluids Lagrange, Euler, and Kovalevskaya tops Neumann oscillator Two center Newtonian gravitational motion Integrable lattice models Ablowitz–Ladik lattice Toda lattice Volterra lattice Integrable systems in 1 + 1 dimensions AKNS system Benjamin–Ono equation Boussinesq equation (water waves) Camassa–Holm equation Classical Heisenberg ferromagnet model (spin chain) Degasperis–Procesi equation Dym equation Garnier integrable system Kaup–Kupershmidt equation Krichever–Novikov equation Korteweg–de Vries equation Landau–Lifshitz equation (continuous spin field) Nonlinear Schrödinger equation Nonlinear sigma models Sine–Gordon equation Thirring model Three-wave equation Integrable PDEs in 2 + 1 dimensions Davey–Stewartson equation Ishimori equation Kadomtsev–Petviashvili equation Novikov–Veselov equation Integrable PDEs in 3 + 1 dimensions The Belinski–Zakharov transform generates a Lax pair for the Einstein field equations; general solutions are termed gravitational solitons, of which the Schwarzschild metric, the Kerr metric and some gravitational wave solutions are examples. 
Exactly solvable statistical lattice models 8-vertex model Gaudin model Ising model in 1- and 2-dimensions Ice-type model of Lieb Quantum Heisenberg model == See also == Hitchin system Pentagram map === Related areas === Mathematical physics Soliton Painleve transcendents Statistical mechanics Integrable algorithm === Some key contributors (since 1965) === == References == Arnold, V.I. (1997). Mathematical Methods of Classical Mechanics (2nd ed.). Springer. ISBN 978-0-387-96890-2. Audin, M. (1996). Spinning Tops: A Course on Integrable Systems. Cambridge Studies in Advanced Mathematics. Vol. 51. Cambridge University Press. ISBN 978-0521779197. Babelon, O.; Bernard, D.; Talon, M. (2003). Introduction to classical integrable systems. Cambridge University Press. doi:10.1017/CBO9780511535024. ISBN 0-521-82267-X. Baxter, R.J. (1982). Exactly solved models in statistical mechanics. Academic Press. ISBN 978-0-12-083180-7. Dunajski, M. (2009). Solitons, Instantons and Twistors. Oxford University Press. ISBN 978-0-19-857063-9. Faddeev, L.D.; Takhtajan, L.A. (1987). Hamiltonian Methods in the Theory of Solitons. Addison-Wesley. ISBN 978-0-387-15579-1. Fomenko, A.T. (1995). Symplectic Geometry. Methods and Applications (2nd ed.). Gordon and Breach. ISBN 978-2-88124-901-3. Fomenko, A.T.; Bolsinov, A.V. (2003). Integrable Hamiltonian Systems: Geometry, Topology, Classification. Taylor and Francis. ISBN 978-0-415-29805-6. Goldstein, H. (1980). Classical Mechanics (2nd ed.). Addison-Wesley. ISBN 0-201-02918-9. Harnad, J.; Winternitz, P.; Sabidussi, G., eds. (2000). Integrable Systems: From Classical to Quantum. American Mathematical Society. ISBN 0-8218-2093-1. Harnad, J.; Balogh, F. (2021). Tau functions and Their Applications. Cambridge Monographs on Mathematical Physics. Cambridge University Press. doi:10.1017/9781108610902. ISBN 9781108492683. S2CID 222379146. Hietarinta, J.; Joshi, N.; Nijhoff, F. (2016). Discrete systems and integrability. Cambridge University Press. Bibcode:2016dsi..book.....H. doi:10.1017/CBO9781107337411. ISBN 978-1-107-04272-8. Korepin, V. E.; Bogoliubov, N.M.; Izergin, A.G. (1997). Quantum Inverse Scattering Method and Correlation Functions. Cambridge University Press. ISBN 978-0-521-58646-7. Afrajmovich, V.S.; Arnold, V.I.; Il'yashenko, Yu. S.; Shil'nikov, L.P. Dynamical Systems V. Springer. ISBN 3-540-18173-3. Mussardo, Giuseppe (2010). Statistical Field Theory. An Introduction to Exactly Solved Models of Statistical Physics. Oxford University Press. ISBN 978-0-19-954758-6. Sardanashvily, G. (2015). Handbook of Integrable Hamiltonian Systems. URSS. ISBN 978-5-396-00687-4. == Further reading == Beilinson, A.; Drinfeld, V. "Quantization of Hitchin's integrable system and Hecke eigensheaves" (PDF). Donagi, R.; Markman, E. (1996). "Spectral covers, algebraically completely integrable, Hamiltonian systems, and moduli of bundles". Integrable systems and quantum groups. Lecture Notes in Mathematics. Vol. 1620. Springer. pp. 1–119. doi:10.1007/BFb0094792. ISBN 978-3-540-60542-3. Sonnad, Kiran G.; Cary, John R. (2004). "Finding a nonlinear lattice with improved integrability using Lie transform perturbation theory". Physical Review E. 69 (5): 056501. Bibcode:2004PhRvE..69e6501S. doi:10.1103/PhysRevE.69.056501. PMID 15244955. == External links == "Integrable system", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "SIDE - Symmetries and Integrability of Difference Equations", a conference devoted to the study of integrable difference equations and related topics. == Notes ==
Wikipedia/Integrable_model
In mathematics, the inverse Laplace transform of a function F ( s ) {\displaystyle F(s)} is a real function f ( t ) {\displaystyle f(t)} that is piecewise-continuous, exponentially-restricted (that is, | f ( t ) | ≤ M e α t {\displaystyle |f(t)|\leq Me^{\alpha t}} ∀ t ≥ 0 {\displaystyle \forall t\geq 0} for some constants M > 0 {\displaystyle M>0} and α ∈ R {\displaystyle \alpha \in \mathbb {R} } ) and has the property: L { f } ( s ) = L { f ( t ) } ( s ) = F ( s ) , {\displaystyle {\mathcal {L}}\{f\}(s)={\mathcal {L}}\{f(t)\}(s)=F(s),} where L {\displaystyle {\mathcal {L}}} denotes the Laplace transform. It can be proven that, if a function F ( s ) {\displaystyle F(s)} has the inverse Laplace transform f ( t ) {\displaystyle f(t)} , then f ( t ) {\displaystyle f(t)} is uniquely determined (considering functions which differ from each other only on a point set having Lebesgue measure zero as the same). This result was first proven by Mathias Lerch in 1903 and is known as Lerch's theorem. The Laplace transform and the inverse Laplace transform together have a number of properties that make them useful for analysing linear dynamical systems. == Mellin's inverse formula == An integral formula for the inverse Laplace transform, called the Mellin's inverse formula, the Bromwich integral, or the Fourier–Mellin integral, is given by the line integral: f ( t ) = L − 1 { F ( s ) } ( t ) = 1 2 π i lim T → ∞ ∫ γ − i T γ + i T e s t F ( s ) d s {\displaystyle f(t)={\mathcal {L}}^{-1}\{F(s)\}(t)={\frac {1}{2\pi i}}\lim _{T\to \infty }\int _{\gamma -iT}^{\gamma +iT}e^{st}F(s)\,ds} where the integration is done along the vertical line Re ( s ) = γ {\displaystyle {\textrm {Re}}(s)=\gamma } in the complex plane such that γ {\displaystyle \gamma } is greater than the real part of all singularities of F ( s ) {\displaystyle F(s)} and F ( s ) {\displaystyle F(s)} is bounded on the line, for example if the contour path is in the region of convergence. If all singularities are in the left half-plane, or F ( s ) {\displaystyle F(s)} is an entire function, then γ {\displaystyle \gamma } can be set to zero and the above inverse integral formula becomes identical to the inverse Fourier transform. In practice, computing the complex integral can be done by using the Cauchy residue theorem. == Post's inversion formula == Post's inversion formula for Laplace transforms, named after Emil Post, is a simple-looking but usually impractical formula for evaluating an inverse Laplace transform. The statement of the formula is as follows: Let f ( t ) {\displaystyle f(t)} be a continuous function on the interval [ 0 , ∞ ) {\displaystyle [0,\infty )} of exponential order, i.e. sup t > 0 f ( t ) e b t < ∞ {\displaystyle \sup _{t>0}{\frac {f(t)}{e^{bt}}}<\infty } for some real number b {\displaystyle b} . Then for all s > b {\displaystyle s>b} , the Laplace transform for f ( t ) {\displaystyle f(t)} exists and is infinitely differentiable with respect to s {\displaystyle s} . Furthermore, if F ( s ) {\displaystyle F(s)} is the Laplace transform of f ( t ) {\displaystyle f(t)} , then the inverse Laplace transform of F ( s ) {\displaystyle F(s)} is given by f ( t ) = L − 1 { F } ( t ) = lim k → ∞ ( − 1 ) k k ! 
( k t ) k + 1 F ( k ) ( k t ) {\displaystyle f(t)={\mathcal {L}}^{-1}\{F\}(t)=\lim _{k\to \infty }{\frac {(-1)^{k}}{k!}}\left({\frac {k}{t}}\right)^{k+1}F^{(k)}\left({\frac {k}{t}}\right)} for t > 0 {\displaystyle t>0} , where F ( k ) {\displaystyle F^{(k)}} is the k {\displaystyle k} -th derivative of F {\displaystyle F} with respect to s {\displaystyle s} . As can be seen from the formula, the need to evaluate derivatives of arbitrarily high orders renders this formula impractical for most purposes. With the advent of powerful computers, efforts to use this formula have mainly dealt with approximations or asymptotic analysis of the inverse Laplace transform, using the Grünwald–Letnikov differintegral to evaluate the derivatives. Post's inversion has attracted interest due to improvements in computational science and the fact that it is not necessary to know where the poles of F ( s ) {\displaystyle F(s)} lie, which makes it possible to calculate the asymptotic behaviour for large x {\displaystyle x} using inverse Mellin transforms for several arithmetical functions related to the Riemann hypothesis. == Software tools == InverseLaplaceTransform performs symbolic inverse transforms in Mathematica Numerical Inversion of Laplace Transform with Multiple Precision Using the Complex Domain in Mathematica gives numerical solutions ilaplace performs symbolic inverse transforms in MATLAB Numerical Inversion of Laplace Transforms in Matlab Numerical Inversion of Laplace Transforms based on concentrated matrix-exponential functions in Matlab == See also == Inverse Fourier transform Poisson summation formula == References == == Further reading == Davies, B. J. (2002), Integral transforms and their applications (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95314-4 Manzhirov, A. V.; Polyanin, Andrei D. (1998), Handbook of integral equations, London: CRC Press, ISBN 978-0-8493-2876-3 Boas, Mary (1983), Mathematical Methods in the Physical Sciences, John Wiley & Sons, p. 662, ISBN 0-471-04409-1 (p. 662 or search Index for "Bromwich Integral", a nice explanation showing the connection to the Fourier transform) Widder, D. V. (1946), The Laplace Transform, Princeton University Press Bryan, Kurt, Elementary inversion of the Laplace transform. Accessed June 14, 2006. == External links == Tables of Integral Transforms at EqWorld: The World of Mathematical Equations. This article incorporates material from Mellin's inverse formula on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
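The behaviour of Post's formula is easy to observe numerically. The following is a minimal illustrative sketch in Python with SymPy (the test transform F(s) = 1/(s + 1), whose exact inverse is e^{-t}, is an arbitrary choice for demonstration):

import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 1 / (s + 1)                     # test transform; exact inverse is exp(-t)

def post_approximant(F, k, t_val):
    # k-th term of Post's formula: (-1)^k/k! * (k/t)^(k+1) * F^(k)(k/t)
    Fk = sp.diff(F, s, k)
    expr = (-1)**k / sp.factorial(k) * (k / t)**(k + 1) * Fk.subs(s, k / t)
    return float(expr.subs(t, t_val))

for k in (5, 20, 80):
    print(k, post_approximant(F, k, 1.0))   # tends slowly to exp(-1) ≈ 0.367879

For this F the approximant simplifies to (k/(k + t))^{k+1}, so the error decays only like O(1/k) — consistent with the remark above that the formula is impractical when high accuracy is needed.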
Wikipedia/Inverse_Laplace_transform
In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly-varying complex exponential. This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin. It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others. == Basics == The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times. == Formula == Letting Σ {\displaystyle \Sigma } denote the set of critical points of the function f {\displaystyle f} (i.e. points where ∇ f = 0 {\displaystyle \nabla f=0} ), under the assumption that g {\displaystyle g} is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. det ( H e s s ( f ( x 0 ) ) ) ≠ 0 {\displaystyle \det(\mathrm {Hess} (f(x_{0})))\neq 0} for x 0 ∈ Σ {\displaystyle x_{0}\in \Sigma } ) we have the following asymptotic formula, as k → ∞ {\displaystyle k\to \infty } : ∫ R n g ( x ) e i k f ( x ) d x = ∑ x 0 ∈ Σ e i k f ( x 0 ) | det ( H e s s ( f ( x 0 ) ) ) | − 1 / 2 e i π 4 s g n ( H e s s ( f ( x 0 ) ) ) ( 2 π / k ) n / 2 g ( x 0 ) + o ( k − n / 2 ) {\displaystyle \int _{\mathbb {R} ^{n}}g(x)e^{ikf(x)}dx=\sum _{x_{0}\in \Sigma }e^{ikf(x_{0})}|\det({\mathrm {Hess} }(f(x_{0})))|^{-1/2}e^{{\frac {i\pi }{4}}\mathrm {sgn} (\mathrm {Hess} (f(x_{0})))}(2\pi /k)^{n/2}g(x_{0})+o(k^{-n/2})} Here H e s s ( f ) {\displaystyle \mathrm {Hess} (f)} denotes the Hessian of f {\displaystyle f} , and s g n ( H e s s ( f ) ) {\displaystyle \mathrm {sgn} (\mathrm {Hess} (f))} denotes the signature of the Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues. For n = 1 {\displaystyle n=1} , this reduces to: ∫ R g ( x ) e i k f ( x ) d x = ∑ x 0 ∈ Σ g ( x 0 ) e i k f ( x 0 ) + s i g n ( f ″ ( x 0 ) ) i π / 4 ( 2 π k | f ″ ( x 0 ) | ) 1 / 2 + o ( k − 1 / 2 ) {\displaystyle \int _{\mathbb {R} }g(x)e^{ikf(x)}dx=\sum _{x_{0}\in \Sigma }g(x_{0})e^{ikf(x_{0})+\mathrm {sign} (f''(x_{0}))i\pi /4}\left({\frac {2\pi }{k|f''(x_{0})|}}\right)^{1/2}+o(k^{-1/2})} In this case the assumptions on f {\displaystyle f} reduce to all the critical points being non-degenerate. This is just the Wick-rotated version of the formula for the method of steepest descent. == An example == Consider a function f ( x , t ) = 1 2 π ∫ R F ( ω ) e i [ k ( ω ) x − ω t ] d ω {\displaystyle f(x,t)={\frac {1}{2\pi }}\int _{\mathbb {R} }F(\omega )e^{i[k(\omega )x-\omega t]}\,d\omega } . The phase term in this function, ϕ = k ( ω ) x − ω t {\displaystyle \phi =k(\omega )x-\omega t} , is stationary when d d ω ( k ( ω ) x − ω t ) = 0 {\displaystyle {\frac {d}{d\omega }}{\mathopen {}}\left(k(\omega )x-\omega t\right){\mathclose {}}=0} or equivalently, d k ( ω ) d ω | ω = ω 0 = t x {\displaystyle {\frac {dk(\omega )}{d\omega }}{\Big |}_{\omega =\omega _{0}}={\frac {t}{x}}} . Solutions to this equation yield dominant frequencies ω 0 {\displaystyle \omega _{0}} for some x {\displaystyle x} and t {\displaystyle t} . 
If we expand ϕ {\displaystyle \phi } as a Taylor series about ω 0 {\displaystyle \omega _{0}} and neglect terms of order higher than ( ω − ω 0 ) 2 {\displaystyle (\omega -\omega _{0})^{2}} , we have ϕ = [ k ( ω 0 ) x − ω 0 t ] + 1 2 x k ″ ( ω 0 ) ( ω − ω 0 ) 2 + ⋯ {\displaystyle \phi =\left[k(\omega _{0})x-\omega _{0}t\right]+{\frac {1}{2}}xk''(\omega _{0})(\omega -\omega _{0})^{2}+\cdots } where k ″ {\displaystyle k''} denotes the second derivative of k {\displaystyle k} . When x {\displaystyle x} is relatively large, even a small difference ( ω − ω 0 ) {\displaystyle (\omega -\omega _{0})} will generate rapid oscillations within the integral, leading to cancellation. Since only a neighbourhood of ω 0 {\displaystyle \omega _{0}} contributes appreciably, we can therefore extend the limits of integration beyond the interval on which the Taylor expansion is valid. Using the formula ∫ R e 1 2 i c x 2 d x = 2 i π c = 2 π | c | e ± i π 4 {\displaystyle \int _{\mathbb {R} }e^{{\frac {1}{2}}icx^{2}}dx={\sqrt {\frac {2i\pi }{c}}}={\sqrt {\frac {2\pi }{|c|}}}e^{\pm i{\frac {\pi }{4}}}} , we obtain f ( x , t ) ≈ 1 2 π e i [ k ( ω 0 ) x − ω 0 t ] | F ( ω 0 ) | ∫ R e 1 2 i x k ″ ( ω 0 ) ( ω − ω 0 ) 2 d ω {\displaystyle f(x,t)\approx {\frac {1}{2\pi }}e^{i\left[k(\omega _{0})x-\omega _{0}t\right]}\left|F(\omega _{0})\right|\int _{\mathbb {R} }e^{{\frac {1}{2}}ixk''(\omega _{0})(\omega -\omega _{0})^{2}}\,d\omega } . This integrates to f ( x , t ) ≈ | F ( ω 0 ) | 2 π 2 π x | k ″ ( ω 0 ) | cos ⁡ [ k ( ω 0 ) x − ω 0 t ± π 4 ] {\displaystyle f(x,t)\approx {\frac {\left|F(\omega _{0})\right|}{2\pi }}{\sqrt {\frac {2\pi }{x\left|k''(\omega _{0})\right|}}}\cos \left[k(\omega _{0})x-\omega _{0}t\pm {\frac {\pi }{4}}\right]} . == Reduction steps == The first major general statement of the principle involved is that the asymptotic behaviour of I(k) depends only on the critical points of f. If by choice of g the integral is localised to a region of space where f has no critical point, the resulting integral tends to 0 as the frequency of oscillations is taken to infinity. See, for example, the Riemann–Lebesgue lemma. The second statement is that when f is a Morse function, so that the singular points of f are non-degenerate and isolated, then the question can be reduced to the case n = 1. In fact, then, a choice of g can be made to split the integral into cases with just one critical point P in each. At that point, because the Hessian determinant at P is by assumption not 0, the Morse lemma applies. By a change of co-ordinates f may be replaced by ( x 1 2 + x 2 2 + ⋯ + x j 2 ) − ( x j + 1 2 + x j + 2 2 + ⋯ + x n 2 ) {\displaystyle (x_{1}^{2}+x_{2}^{2}+\cdots +x_{j}^{2})-(x_{j+1}^{2}+x_{j+2}^{2}+\cdots +x_{n}^{2})} . The value of j is given by the signature of the Hessian matrix of f at P. As for g, the essential case is that g is a product of bump functions of the x i. Assuming now without loss of generality that P is the origin, take a smooth bump function h with value 1 on the interval [−1, 1] and quickly tending to 0 outside it. Take g ( x ) = ∏ i h ( x i ) {\displaystyle g(x)=\prod _{i}h(x_{i})} , then Fubini's theorem reduces I(k) to a product of integrals over the real line like J ( k ) = ∫ h ( x ) e i k f ( x ) d x {\displaystyle J(k)=\int h(x)e^{ikf(x)}\,dx} with f(x) = ±x². The case with the minus sign is the complex conjugate of the case with the plus sign, so there is essentially one required asymptotic estimate. In this way asymptotics can be found for oscillatory integrals for Morse functions. The degenerate case requires further techniques (see for example Airy function).
== One-dimensional case == The essential statement is this one: ∫ − 1 1 e i k x 2 d x = π k e i π / 4 + O ( 1 k ) {\displaystyle \int _{-1}^{1}e^{ikx^{2}}\,dx={\sqrt {\frac {\pi }{k}}}e^{i\pi /4}+{\mathcal {O}}{\mathopen {}}\left({\frac {1}{k}}\right){\mathclose {}}} . In fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side, extended over the range [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} (for a proof see Fresnel integral). Therefore it is the question of estimating away the integral over, say, [ 1 , ∞ ] {\displaystyle [1,\infty ]} . This is the model for all one-dimensional integrals I ( k ) {\displaystyle I(k)} with f {\displaystyle f} having a single non-degenerate critical point at which f {\displaystyle f} has second derivative > 0 {\displaystyle >0} . In fact the model case has second derivative 2 at 0. In order to scale using k {\displaystyle k} , observe that replacing k {\displaystyle k} by c k {\displaystyle ck} where c {\displaystyle c} is constant is the same as scaling x {\displaystyle x} by c {\displaystyle {\sqrt {c}}} . It follows that for general values of f ″ ( 0 ) > 0 {\displaystyle f''(0)>0} , the factor π / k {\displaystyle {\sqrt {\pi /k}}} becomes 2 π k f ″ ( 0 ) {\displaystyle {\sqrt {\frac {2\pi }{kf''(0)}}}} . For f ″ ( 0 ) < 0 {\displaystyle f''(0)<0} one uses the complex conjugate formula, as mentioned before. == Lower-order terms == As can be seen from the formula, the stationary phase approximation is a first-order approximation of the asymptotic behavior of the integral. The lower-order terms can be understood as a sum of over Feynman diagrams with various weighting factors, for well behaved f {\displaystyle f} . == See also == Common integrals in quantum field theory Laplace's method Method of steepest descent == Notes == == References == Bleistein, N. and Handelsman, R. (1975), Asymptotic Expansions of Integrals, Dover, New York. Victor Guillemin and Shlomo Sternberg (1990), Geometric Asymptotics, (see Chapter 1). Hörmander, L. (1976), Linear Partial Differential Operators, Volume 1, Springer-Verlag, ISBN 978-3-540-00662-6. Aki, Keiiti; & Richards, Paul G. (2002), Quantitative Seismology (2nd ed.), pp 255–256. University Science Books, ISBN 0-935702-96-2 Wong, R. (2001), Asymptotic Approximations of Integrals, Classics in Applied Mathematics, Vol. 34. Corrected reprint of the 1989 original. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA. xviii+543 pages, ISBN 0-89871-497-4. Dieudonné, J. (1980), Calcul Infinitésimal, Hermann, Paris Paris, Richard Bruce (2011), Hadamard Expansions and Hyperasymptotic Evaluation: An Extension of the Method of Steepest Descents, Cambridge University Press, ISBN 978-1-107-00258-6 == External links == "Stationary phase, method of the", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
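The essential one-dimensional estimate is easy to test numerically. The following is a minimal illustrative sketch in Python (assuming NumPy and SciPy; the values of k are arbitrary), integrating the real and imaginary parts of e^{ikx^2} over [−1, 1] and comparing with √(π/k) e^{iπ/4}:

import numpy as np
from scipy.integrate import quad

def oscillatory_integral(k):
    # integrate the cos and sin parts of exp(i k x^2) over [-1, 1]
    re, _ = quad(lambda x: np.cos(k * x**2), -1.0, 1.0, limit=500)
    im, _ = quad(lambda x: np.sin(k * x**2), -1.0, 1.0, limit=500)
    return re + 1j * im

for k in (10.0, 100.0, 1000.0):
    leading = np.sqrt(np.pi / k) * np.exp(1j * np.pi / 4)
    print(k, abs(oscillatory_integral(k) - leading))   # shrinks roughly like 1/k

The discrepancy decays like 1/k, matching the error term in the estimate above.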
Wikipedia/Stationary_phase_approximation
In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the base form f ( x ) = exp ⁡ ( − x 2 ) {\displaystyle f(x)=\exp(-x^{2})} and with parametric extension f ( x ) = a exp ⁡ ( − ( x − b ) 2 2 c 2 ) {\displaystyle f(x)=a\exp \left(-{\frac {(x-b)^{2}}{2c^{2}}}\right)} for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric "bell curve" shape. The parameter a is the height of the curve's peak, b is the position of the center of the peak, and c (the standard deviation, sometimes called the Gaussian RMS width) controls the width of the "bell". Gaussian functions are often used to represent the probability density function of a normally distributed random variable with expected value μ = b and variance σ2 = c2. In this case, the Gaussian is of the form g ( x ) = 1 σ 2 π exp ⁡ ( − 1 2 ( x − μ ) 2 σ 2 ) . {\displaystyle g(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2}}{\frac {(x-\mu )^{2}}{\sigma ^{2}}}\right).} Gaussian functions are widely used in statistics to describe the normal distributions, in signal processing to define Gaussian filters, in image processing where two-dimensional Gaussians are used for Gaussian blurs, and in mathematics to solve heat equations and diffusion equations and to define the Weierstrass transform. They are also abundantly used in quantum chemistry to form basis sets. == Properties == Gaussian functions arise by composing the exponential function with a concave quadratic function: f ( x ) = exp ⁡ ( α x 2 + β x + γ ) , {\displaystyle f(x)=\exp(\alpha x^{2}+\beta x+\gamma ),} where α = − 1 / 2 c 2 , {\displaystyle \alpha =-1/2c^{2},} β = b / c 2 , {\displaystyle \beta =b/c^{2},} γ = ln ⁡ a − ( b 2 / 2 c 2 ) . {\displaystyle \gamma =\ln a-(b^{2}/2c^{2}).} (Note: a = 1 / ( σ 2 π ) {\displaystyle a=1/(\sigma {\sqrt {2\pi }})} in ln ⁡ a {\displaystyle \ln a} , not to be confused with α = − 1 / 2 c 2 {\displaystyle \alpha =-1/2c^{2}} ) The Gaussian functions are thus those functions whose logarithm is a concave quadratic function. The parameter c is related to the full width at half maximum (FWHM) of the peak according to FWHM = 2 2 ln ⁡ 2 c ≈ 2.35482 c . {\displaystyle {\text{FWHM}}=2{\sqrt {2\ln 2}}\,c\approx 2.35482\,c.} The function may then be expressed in terms of the FWHM, represented by w: f ( x ) = a e − 4 ( ln ⁡ 2 ) ( x − b ) 2 / w 2 . {\displaystyle f(x)=ae^{-4(\ln 2)(x-b)^{2}/w^{2}}.} Alternatively, the parameter c can be interpreted by saying that the two inflection points of the function occur at x = b ± c. The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is FWTM = 2 2 ln ⁡ 10 c ≈ 4.29193 c . {\displaystyle {\text{FWTM}}=2{\sqrt {2\ln 10}}\,c\approx 4.29193\,c.} Gaussian functions are analytic, and their limit as x → ∞ is 0 (for the above case of b = 0). Gaussian functions are among those functions that are elementary but lack elementary antiderivatives; the integral of the Gaussian function is the error function: ∫ e − x 2 d x = π 2 erf ⁡ x + C . {\displaystyle \int e^{-x^{2}}\,dx={\frac {\sqrt {\pi }}{2}}\operatorname {erf} x+C.} Nonetheless, their improper integrals over the whole real line can be evaluated exactly, using the Gaussian integral ∫ − ∞ ∞ e − x 2 d x = π , {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }},} and one obtains ∫ − ∞ ∞ a e − ( x − b ) 2 / ( 2 c 2 ) d x = a c ⋅ 2 π . 
{\displaystyle \int _{-\infty }^{\infty }ae^{-(x-b)^{2}/(2c^{2})}\,dx=ac\cdot {\sqrt {2\pi }}.} This integral is 1 if and only if a = 1 c 2 π {\textstyle a={\tfrac {1}{c{\sqrt {2\pi }}}}} (the normalizing constant), and in this case the Gaussian is the probability density function of a normally distributed random variable with expected value μ = b and variance σ2 = c2: g ( x ) = 1 σ 2 π exp ⁡ ( − ( x − μ ) 2 2 σ 2 ) . {\displaystyle g(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left({\frac {-(x-\mu )^{2}}{2\sigma ^{2}}}\right).} These Gaussians are plotted in the accompanying figure. The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances: c 2 = c 1 2 + c 2 2 {\displaystyle c^{2}=c_{1}^{2}+c_{2}^{2}} . The product of two Gaussian probability density functions (PDFs), though, is not in general a Gaussian PDF. The Fourier uncertainty principle becomes an equality if and only if (modulated) Gaussian functions are considered. Taking the Fourier transform (unitary, angular-frequency convention) of a Gaussian function with parameters a = 1, b = 0 and c yields another Gaussian function, with parameters c {\displaystyle c} , b = 0 and 1 / c {\displaystyle 1/c} . So in particular the Gaussian functions with b = 0 and c = 1 {\displaystyle c=1} are kept fixed by the Fourier transform (they are eigenfunctions of the Fourier transform with eigenvalue 1). A physical realization is that of the diffraction pattern: for example, a photographic slide whose transmittance has a Gaussian variation is also a Gaussian function. The fact that the Gaussian function is an eigenfunction of the continuous Fourier transform allows us to derive the following interesting identity from the Poisson summation formula: ∑ k ∈ Z exp ⁡ ( − π ⋅ ( k c ) 2 ) = c ⋅ ∑ k ∈ Z exp ⁡ ( − π ⋅ ( k c ) 2 ) . {\displaystyle \sum _{k\in \mathbb {Z} }\exp \left(-\pi \cdot \left({\frac {k}{c}}\right)^{2}\right)=c\cdot \sum _{k\in \mathbb {Z} }\exp \left(-\pi \cdot (kc)^{2}\right).} == Integral of a Gaussian function == The integral of an arbitrary Gaussian function is ∫ − ∞ ∞ a e − ( x − b ) 2 / 2 c 2 d x = a | c | 2 π . {\displaystyle \int _{-\infty }^{\infty }a\,e^{-(x-b)^{2}/2c^{2}}\,dx=\ a\,|c|\,{\sqrt {2\pi }}.} An alternative form is ∫ − ∞ ∞ k e − f x 2 + g x + h d x = ∫ − ∞ ∞ k e − f ( x − g / ( 2 f ) ) 2 + g 2 / ( 4 f ) + h d x = k π f exp ⁡ ( g 2 4 f + h ) , {\displaystyle \int _{-\infty }^{\infty }k\,e^{-fx^{2}+gx+h}\,dx=\int _{-\infty }^{\infty }k\,e^{-f{\big (}x-g/(2f){\big )}^{2}+g^{2}/(4f)+h}\,dx=k\,{\sqrt {\frac {\pi }{f}}}\,\exp \left({\frac {g^{2}}{4f}}+h\right),} where f must be strictly positive for the integral to converge. === Relation to standard Gaussian integral === The integral ∫ − ∞ ∞ a e − ( x − b ) 2 / 2 c 2 d x {\displaystyle \int _{-\infty }^{\infty }ae^{-(x-b)^{2}/2c^{2}}\,dx} for some real constants a, b and c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral. Next, the variable of integration is changed from x to y = x − b: a ∫ − ∞ ∞ e − y 2 / 2 c 2 d y , {\displaystyle a\int _{-\infty }^{\infty }e^{-y^{2}/2c^{2}}\,dy,} and then to z = y / 2 c 2 {\displaystyle z=y/{\sqrt {2c^{2}}}} : a 2 c 2 ∫ − ∞ ∞ e − z 2 d z . 
{\displaystyle a{\sqrt {2c^{2}}}\int _{-\infty }^{\infty }e^{-z^{2}}\,dz.} Then, using the Gaussian integral identity ∫ − ∞ ∞ e − z 2 d z = π , {\displaystyle \int _{-\infty }^{\infty }e^{-z^{2}}\,dz={\sqrt {\pi }},} we have ∫ − ∞ ∞ a e − ( x − b ) 2 / 2 c 2 d x = a 2 π c 2 . {\displaystyle \int _{-\infty }^{\infty }ae^{-(x-b)^{2}/2c^{2}}\,dx=a{\sqrt {2\pi c^{2}}}.} == Two-dimensional Gaussian function == Base form: f ( x , y ) = exp ⁡ ( − x 2 − y 2 ) {\displaystyle f(x,y)=\exp(-x^{2}-y^{2})} In two dimensions, the power to which e is raised in the Gaussian function is any negative-definite quadratic form. Consequently, the level sets of the Gaussian will always be ellipses. A particular example of a two-dimensional Gaussian function is f ( x , y ) = A exp ⁡ ( − ( ( x − x 0 ) 2 2 σ X 2 + ( y − y 0 ) 2 2 σ Y 2 ) ) . {\displaystyle f(x,y)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}+{\frac {(y-y_{0})^{2}}{2\sigma _{Y}^{2}}}\right)\right).} Here the coefficient A is the amplitude, x0, y0 is the center, and σx, σy are the x and y spreads of the blob. The figure on the right was created using A = 1, x0 = 0, y0 = 0, σx = σy = 1. The volume under the Gaussian function is given by V = ∫ − ∞ ∞ ∫ − ∞ ∞ f ( x , y ) d x d y = 2 π A σ X σ Y . {\displaystyle V=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x,y)\,dx\,dy=2\pi A\sigma _{X}\sigma _{Y}.} In general, a two-dimensional elliptical Gaussian function is expressed as f ( x , y ) = A exp ⁡ ( − ( a ( x − x 0 ) 2 + 2 b ( x − x 0 ) ( y − y 0 ) + c ( y − y 0 ) 2 ) ) , {\displaystyle f(x,y)=A\exp {\Big (}-{\big (}a(x-x_{0})^{2}+2b(x-x_{0})(y-y_{0})+c(y-y_{0})^{2}{\big )}{\Big )},} where the matrix [ a b b c ] {\displaystyle {\begin{bmatrix}a&b\\b&c\end{bmatrix}}} is positive-definite. Using this formulation, the figure on the right can be created using A = 1, (x0, y0) = (0, 0), a = c = 1/2, b = 0. === Meaning of parameters for the general equation === For the general form of the equation the coefficient A is the height of the peak and (x0, y0) is the center of the blob. If we set a = cos 2 ⁡ θ 2 σ X 2 + sin 2 ⁡ θ 2 σ Y 2 , b = − sin ⁡ θ cos ⁡ θ 2 σ X 2 + sin ⁡ θ cos ⁡ θ 2 σ Y 2 , c = sin 2 ⁡ θ 2 σ X 2 + cos 2 ⁡ θ 2 σ Y 2 , {\displaystyle {\begin{aligned}a&={\frac {\cos ^{2}\theta }{2\sigma _{X}^{2}}}+{\frac {\sin ^{2}\theta }{2\sigma _{Y}^{2}}},\\b&=-{\frac {\sin \theta \cos \theta }{2\sigma _{X}^{2}}}+{\frac {\sin \theta \cos \theta }{2\sigma _{Y}^{2}}},\\c&={\frac {\sin ^{2}\theta }{2\sigma _{X}^{2}}}+{\frac {\cos ^{2}\theta }{2\sigma _{Y}^{2}}},\end{aligned}}} then we rotate the blob by a positive, counter-clockwise angle θ {\displaystyle \theta } (for negative, clockwise rotation, invert the signs in the b coefficient). To get back the coefficients θ {\displaystyle \theta } , σ X {\displaystyle \sigma _{X}} and σ Y {\displaystyle \sigma _{Y}} from a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} use θ = 1 2 arctan ⁡ ( 2 b a − c ) , θ ∈ [ − 45 , 45 ] , σ X 2 = 1 2 ( a ⋅ cos 2 ⁡ θ + 2 b ⋅ cos ⁡ θ sin ⁡ θ + c ⋅ sin 2 ⁡ θ ) , σ Y 2 = 1 2 ( a ⋅ sin 2 ⁡ θ − 2 b ⋅ cos ⁡ θ sin ⁡ θ + c ⋅ cos 2 ⁡ θ ) . 
{\displaystyle {\begin{aligned}\theta &={\frac {1}{2}}\arctan \left({\frac {2b}{a-c}}\right),\quad \theta \in [-45,45],\\\sigma _{X}^{2}&={\frac {1}{2(a\cdot \cos ^{2}\theta +2b\cdot \cos \theta \sin \theta +c\cdot \sin ^{2}\theta )}},\\\sigma _{Y}^{2}&={\frac {1}{2(a\cdot \sin ^{2}\theta -2b\cdot \cos \theta \sin \theta +c\cdot \cos ^{2}\theta )}}.\end{aligned}}} Rotated Gaussian blobs correspond to nonzero values of θ in this parametrization, and the effect of changing the parameters is easy to explore with a few lines of numerical code. Such functions are often used in image processing and in computational models of visual system function—see the articles on scale space and affine shape adaptation. Also see multivariate normal distribution. === Higher-order Gaussian or super-Gaussian function or generalized Gaussian function === A more general formulation of a Gaussian function with a flat-top and Gaussian fall-off is obtained by raising the argument of the exponent to a power P {\displaystyle P} : f ( x ) = A exp ⁡ ( − ( ( x − x 0 ) 2 2 σ X 2 ) P ) . {\displaystyle f(x)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}\right)^{P}\right).} This function is known as a super-Gaussian function and is often used for Gaussian beam formulation. This function may also be expressed in terms of the full width at half maximum (FWHM), represented by w: f ( x ) = A exp ⁡ ( − ln ⁡ 2 ( 4 ( x − x 0 ) 2 w 2 ) P ) . {\displaystyle f(x)=A\exp \left(-\ln 2\left(4{\frac {(x-x_{0})^{2}}{w^{2}}}\right)^{P}\right).} In a two-dimensional formulation, a Gaussian function along x {\displaystyle x} and y {\displaystyle y} can be combined with potentially different P X {\displaystyle P_{X}} and P Y {\displaystyle P_{Y}} to form a rectangular Gaussian distribution: f ( x , y ) = A exp ⁡ ( − ( ( x − x 0 ) 2 2 σ X 2 ) P X − ( ( y − y 0 ) 2 2 σ Y 2 ) P Y ) . {\displaystyle f(x,y)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}\right)^{P_{X}}-\left({\frac {(y-y_{0})^{2}}{2\sigma _{Y}^{2}}}\right)^{P_{Y}}\right).} or an elliptical Gaussian distribution: f ( x , y ) = A exp ⁡ ( − ( ( x − x 0 ) 2 2 σ X 2 + ( y − y 0 ) 2 2 σ Y 2 ) P ) {\displaystyle f(x,y)=A\exp \left(-\left({\frac {(x-x_{0})^{2}}{2\sigma _{X}^{2}}}+{\frac {(y-y_{0})^{2}}{2\sigma _{Y}^{2}}}\right)^{P}\right)} == Multi-dimensional Gaussian function == In an n {\displaystyle n} -dimensional space a Gaussian function can be defined as f ( x ) = exp ⁡ ( − x T C x ) , {\displaystyle f(x)=\exp(-x^{\mathsf {T}}Cx),} where x = [ x 1 ⋯ x n ] {\displaystyle x={\begin{bmatrix}x_{1}&\cdots &x_{n}\end{bmatrix}}} is a column of n {\displaystyle n} coordinates, C {\displaystyle C} is a positive-definite n × n {\displaystyle n\times n} matrix, and T {\displaystyle {}^{\mathsf {T}}} denotes matrix transposition. The integral of this Gaussian function over the whole n {\displaystyle n} -dimensional space is given as ∫ R n exp ⁡ ( − x T C x ) d x = π n det C . {\displaystyle \int _{\mathbb {R} ^{n}}\exp(-x^{\mathsf {T}}Cx)\,dx={\sqrt {\frac {\pi ^{n}}{\det C}}}.} It can be easily calculated by diagonalizing the matrix C {\displaystyle C} and changing the integration variables to the eigenvectors of C {\displaystyle C} .
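A quick numerical check of this integral formula is given by the following minimal sketch in Python (assuming NumPy; the particular positive-definite matrix is an arbitrary random choice). It compares the closed form, the product over eigenvalues obtained by diagonalizing C as just described, and a brute-force quadrature in two dimensions:

import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((2, 2))
C = B @ B.T + np.eye(2)                      # random positive-definite 2x2 matrix

closed_form = np.sqrt(np.pi**2 / np.linalg.det(C))
# Diagonalizing C factorizes the integral into 1-D Gaussians, one per eigenvalue
factorized = np.prod(np.sqrt(np.pi / np.linalg.eigvalsh(C)))

# Brute-force tensor-product quadrature on a grid
g = np.linspace(-6.0, 6.0, 1201)
dx = g[1] - g[0]
X, Y = np.meshgrid(g, g)
quad_form = C[0, 0] * X**2 + 2 * C[0, 1] * X * Y + C[1, 1] * Y**2
brute = np.exp(-quad_form).sum() * dx * dx
print(closed_form, factorized, brute)        # all three agree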
More generally a shifted Gaussian function is defined as f ( x ) = exp ⁡ ( − x T C x + s T x ) , {\displaystyle f(x)=\exp(-x^{\mathsf {T}}Cx+s^{\mathsf {T}}x),} where s = [ s 1 ⋯ s n ] {\displaystyle s={\begin{bmatrix}s_{1}&\cdots &s_{n}\end{bmatrix}}} is the shift vector and the matrix C {\displaystyle C} can be assumed to be symmetric, C T = C {\displaystyle C^{\mathsf {T}}=C} , and positive-definite. The following integrals with this function can be calculated with the same technique: ∫ R n e − x T C x + v T x d x = π n det C exp ⁡ ( 1 4 v T C − 1 v ) ≡ M . {\displaystyle \int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}Cx+v^{\mathsf {T}}x}\,dx={\sqrt {\frac {\pi ^{n}}{\det {C}}}}\exp \left({\frac {1}{4}}v^{\mathsf {T}}C^{-1}v\right)\equiv {\mathcal {M}}.} ∫ R n e − x T C x + v T x ( a T x ) d x = ( a T u ) ⋅ M , where u = 1 2 C − 1 v . {\displaystyle \int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}Cx+v^{\mathsf {T}}x}(a^{\mathsf {T}}x)\,dx=(a^{T}u)\cdot {\mathcal {M}},{\text{ where }}u={\frac {1}{2}}C^{-1}v.} ∫ R n e − x T C x + v T x ( x T D x ) d x = ( u T D u + 1 2 tr ⁡ ( D C − 1 ) ) ⋅ M . {\displaystyle \int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}Cx+v^{\mathsf {T}}x}(x^{\mathsf {T}}Dx)\,dx=\left(u^{\mathsf {T}}Du+{\frac {1}{2}}\operatorname {tr} (DC^{-1})\right)\cdot {\mathcal {M}}.} ∫ R n e − x T C ′ x + s ′ T x ( − ∂ ∂ x Λ ∂ ∂ x ) e − x T C x + s T x d x = ( 2 tr ⁡ ( C ′ Λ C B − 1 ) + 4 u T C ′ Λ C u − 2 u T ( C ′ Λ s + C Λ s ′ ) + s ′ T Λ s ) ⋅ M , {\displaystyle {\begin{aligned}&\int _{\mathbb {R} ^{n}}e^{-x^{\mathsf {T}}C'x+s'^{\mathsf {T}}x}\left(-{\frac {\partial }{\partial x}}\Lambda {\frac {\partial }{\partial x}}\right)e^{-x^{\mathsf {T}}Cx+s^{\mathsf {T}}x}\,dx\\&\qquad =\left(2\operatorname {tr} (C'\Lambda CB^{-1})+4u^{\mathsf {T}}C'\Lambda Cu-2u^{\mathsf {T}}(C'\Lambda s+C\Lambda s')+s'^{\mathsf {T}}\Lambda s\right)\cdot {\mathcal {M}},\end{aligned}}} where u = 1 2 B − 1 v , v = s + s ′ , B = C + C ′ . {\textstyle u={\frac {1}{2}}B^{-1}v,\ v=s+s',\ B=C+C'.} == Estimation of parameters == A number of fields such as stellar photometry, Gaussian beam characterization, and emission/absorption line spectroscopy work with sampled Gaussian functions and need to accurately estimate the height, position, and width parameters of the function. There are three unknown parameters for a 1D Gaussian function (a, b, c) and five for a 2D Gaussian function ( A ; x 0 , y 0 ; σ X , σ Y ) {\displaystyle (A;x_{0},y_{0};\sigma _{X},\sigma _{Y})} . The most common method for estimating the Gaussian parameters is to take the logarithm of the data and fit a parabola to the resulting data set. While this provides a simple curve fitting procedure, the resulting algorithm may be biased by excessively weighting small data values, which can produce large errors in the profile estimate. One can partially compensate for this problem through weighted least squares estimation, reducing the weight of small data values, but this too can be biased by allowing the tail of the Gaussian to dominate the fit. In order to remove the bias, one can instead use an iteratively reweighted least squares procedure, in which the weights are updated at each iteration. It is also possible to perform non-linear regression directly on the data, without involving the logarithmic data transformation; for more options, see probability distribution fitting. === Parameter precision === Once one has an algorithm for estimating the Gaussian function parameters, it is also important to know how precise those estimates are. 
Any least squares estimation algorithm can provide numerical estimates for the variance of each parameter (i.e., the variance of the estimated height, position, and width of the function). One can also use Cramér–Rao bound theory to obtain an analytical expression for the lower bound on the parameter variances, given certain assumptions about the data. The noise in the measured profile is either i.i.d. Gaussian, or the noise is Poisson-distributed. The spacing between each sampling (i.e. the distance between pixels measuring the data) is uniform. The peak is "well-sampled", so that less than 10% of the area or volume under the peak (area if a 1D Gaussian, volume if a 2D Gaussian) lies outside the measurement region. The width of the peak is much larger than the distance between sample locations (i.e. the detector pixels must be at least 5 times smaller than the Gaussian FWHM). When these assumptions are satisfied, the following covariance matrix K applies for the 1D profile parameters a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} under i.i.d. Gaussian noise and under Poisson noise: K Gauss = σ 2 π δ X Q 2 ( 3 2 c 0 − 1 a 0 2 c a 2 0 − 1 a 0 2 c a 2 ) , K Poiss = 1 2 π ( 3 a 2 c 0 − 1 2 0 c a 0 − 1 2 0 c 2 a ) , {\displaystyle \mathbf {K} _{\text{Gauss}}={\frac {\sigma ^{2}}{{\sqrt {\pi }}\delta _{X}Q^{2}}}{\begin{pmatrix}{\frac {3}{2c}}&0&{\frac {-1}{a}}\\0&{\frac {2c}{a^{2}}}&0\\{\frac {-1}{a}}&0&{\frac {2c}{a^{2}}}\end{pmatrix}}\ ,\qquad \mathbf {K} _{\text{Poiss}}={\frac {1}{\sqrt {2\pi }}}{\begin{pmatrix}{\frac {3a}{2c}}&0&-{\frac {1}{2}}\\0&{\frac {c}{a}}&0\\-{\frac {1}{2}}&0&{\frac {c}{2a}}\end{pmatrix}}\ ,} where δ X {\displaystyle \delta _{X}} is the width of the pixels used to sample the function, Q {\displaystyle Q} is the quantum efficiency of the detector, and σ {\displaystyle \sigma } indicates the standard deviation of the measurement noise. Thus, the individual variances for the parameters are, in the Gaussian noise case, var ⁡ ( a ) = 3 σ 2 2 π δ X Q 2 c var ⁡ ( b ) = 2 σ 2 c δ X π Q 2 a 2 var ⁡ ( c ) = 2 σ 2 c δ X π Q 2 a 2 {\displaystyle {\begin{aligned}\operatorname {var} (a)&={\frac {3\sigma ^{2}}{2{\sqrt {\pi }}\,\delta _{X}Q^{2}c}}\\\operatorname {var} (b)&={\frac {2\sigma ^{2}c}{\delta _{X}{\sqrt {\pi }}\,Q^{2}a^{2}}}\\\operatorname {var} (c)&={\frac {2\sigma ^{2}c}{\delta _{X}{\sqrt {\pi }}\,Q^{2}a^{2}}}\end{aligned}}} and in the Poisson noise case, var ⁡ ( a ) = 3 a 2 2 π c var ⁡ ( b ) = c 2 π a var ⁡ ( c ) = c 2 2 π a . {\displaystyle {\begin{aligned}\operatorname {var} (a)&={\frac {3a}{2{\sqrt {2\pi }}\,c}}\\\operatorname {var} (b)&={\frac {c}{{\sqrt {2\pi }}\,a}}\\\operatorname {var} (c)&={\frac {c}{2{\sqrt {2\pi }}\,a}}.\end{aligned}}} For the 2D profile parameters giving the amplitude A {\displaystyle A} , position ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} , and width ( σ X , σ Y ) {\displaystyle (\sigma _{X},\sigma _{Y})} of the profile, the following covariance matrices apply: K Gauss = σ 2 π δ X δ Y Q 2 ( 2 σ X σ Y 0 0 − 1 A σ Y − 1 A σ X 0 2 σ X A 2 σ Y 0 0 0 0 0 2 σ Y A 2 σ X 0 0 − 1 A σ y 0 0 2 σ X A 2 σ y 0 − 1 A σ X 0 0 0 2 σ Y A 2 σ X ) K Poisson = 1 2 π ( 3 A σ X σ Y 0 0 − 1 σ Y − 1 σ X 0 σ X A σ Y 0 0 0 0 0 σ Y A σ X 0 0 − 1 σ Y 0 0 2 σ X 3 A σ Y 1 3 A − 1 σ X 0 0 1 3 A 2 σ Y 3 A σ X ) . 
{\displaystyle {\begin{aligned}\mathbf {K} _{\text{Gauss}}={\frac {\sigma ^{2}}{\pi \delta _{X}\delta _{Y}Q^{2}}}&{\begin{pmatrix}{\frac {2}{\sigma _{X}\sigma _{Y}}}&0&0&{\frac {-1}{A\sigma _{Y}}}&{\frac {-1}{A\sigma _{X}}}\\0&{\frac {2\sigma _{X}}{A^{2}\sigma _{Y}}}&0&0&0\\0&0&{\frac {2\sigma _{Y}}{A^{2}\sigma _{X}}}&0&0\\{\frac {-1}{A\sigma _{y}}}&0&0&{\frac {2\sigma _{X}}{A^{2}\sigma _{y}}}&0\\{\frac {-1}{A\sigma _{X}}}&0&0&0&{\frac {2\sigma _{Y}}{A^{2}\sigma _{X}}}\end{pmatrix}}\\[6pt]\mathbf {K} _{\operatorname {Poisson} }={\frac {1}{2\pi }}&{\begin{pmatrix}{\frac {3A}{\sigma _{X}\sigma _{Y}}}&0&0&{\frac {-1}{\sigma _{Y}}}&{\frac {-1}{\sigma _{X}}}\\0&{\frac {\sigma _{X}}{A\sigma _{Y}}}&0&0&0\\0&0&{\frac {\sigma _{Y}}{A\sigma _{X}}}&0&0\\{\frac {-1}{\sigma _{Y}}}&0&0&{\frac {2\sigma _{X}}{3A\sigma _{Y}}}&{\frac {1}{3A}}\\{\frac {-1}{\sigma _{X}}}&0&0&{\frac {1}{3A}}&{\frac {2\sigma _{Y}}{3A\sigma _{X}}}\end{pmatrix}}.\end{aligned}}} where the individual parameter variances are given by the diagonal elements of the covariance matrix. == Discrete Gaussian == One may ask for a discrete analog to the Gaussian; this is necessary in discrete applications, particularly digital signal processing. A simple answer is to sample the continuous Gaussian, yielding the sampled Gaussian kernel. However, this discrete function does not have the discrete analogs of the properties of the continuous function, and can lead to undesired effects, as described in the article scale space implementation. An alternative approach is to use the discrete Gaussian kernel: T ( n , t ) = e − t I n ( t ) {\displaystyle T(n,t)=e^{-t}I_{n}(t)} where I n ( t ) {\displaystyle I_{n}(t)} denotes the modified Bessel functions of integer order. This is the discrete analog of the continuous Gaussian in that it is the solution to the discrete diffusion equation (discrete space, continuous time), just as the continuous Gaussian is the solution to the continuous diffusion equation. == Applications == Gaussian functions appear in many contexts in the natural sciences, the social sciences, mathematics, and engineering. Some examples include: In statistics and probability theory, Gaussian functions appear as the density function of the normal distribution, which is a limiting probability distribution of complicated sums, according to the central limit theorem. Gaussian functions are the Green's function for the (homogeneous and isotropic) diffusion equation (and to the heat equation, which is the same thing), a partial differential equation that describes the time evolution of a mass-density under diffusion. Specifically, if the mass-density at time t=0 is given by a Dirac delta, which essentially means that the mass is initially concentrated in a single point, then the mass-distribution at time t will be given by a Gaussian function, with the parameter a being linearly related to 1/√t and c being linearly related to √t; this time-varying Gaussian is described by the heat kernel. More generally, if the initial mass-density is φ(x), then the mass-density at later times is obtained by taking the convolution of φ with a Gaussian function. The convolution of a function with a Gaussian is also known as a Weierstrass transform. A Gaussian function is the wave function of the ground state of the quantum harmonic oscillator. The molecular orbitals used in computational chemistry can be linear combinations of Gaussian functions called Gaussian orbitals (see also basis set (chemistry)). 
Mathematically, the derivatives of the Gaussian function can be represented using Hermite functions. For unit variance, the n-th derivative of the Gaussian is the Gaussian function itself multiplied by the n-th Hermite polynomial, up to scale. Consequently, Gaussian functions are also associated with the vacuum state in quantum field theory. Gaussian beams are used in optical systems, microwave systems and lasers. In scale space representation, Gaussian functions are used as smoothing kernels for generating multi-scale representations in computer vision and image processing. Specifically, derivatives of Gaussians (Hermite functions) are used as a basis for defining a large number of types of visual operations. Gaussian functions are used to define some types of artificial neural networks. In fluorescence microscopy a 2D Gaussian function is used to approximate the Airy disk, describing the intensity distribution produced by a point source. In signal processing they serve to define Gaussian filters, such as in image processing where 2D Gaussians are used for Gaussian blurs. In digital signal processing, one uses a discrete Gaussian kernel, which may be approximated by binomial coefficients or by sampling a Gaussian. In geostatistics they have been used for understanding the variability between the patterns of a complex training image. They are used with kernel methods to cluster the patterns in the feature space. == See also == Bell-shaped function Cauchy distribution Normal distribution Radial basis function kernel == References == == Further reading == Haberman, Richard (2013). "10.3.3 Inverse Fourier transform of a Gaussian". Applied Partial Differential Equations. Boston: Pearson. ISBN 978-0-321-79705-6. == External links == Mathworld, includes a proof for the relations between c and FWHM "Integrating The Bell Curve". MathPages.com. Haskell, Erlang and Perl implementation of Gaussian distribution Bensimhoun Michael, N-Dimensional Cumulative Function, And Other Useful Facts About Gaussians and Normal Densities (2009) Code for fitting Gaussians in ImageJ and Fiji.
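As a concrete illustration of the simplest estimation procedure described under Estimation of parameters — taking the logarithm of the data and fitting a parabola — the following minimal sketch in Python (assuming NumPy and noiseless samples; the true parameter values are arbitrary) recovers (a, b, c) from samples of a 1D Gaussian:

import numpy as np

a_true, b_true, c_true = 2.0, 1.0, 0.5
x = np.linspace(-1.0, 3.0, 101)
y = a_true * np.exp(-(x - b_true)**2 / (2 * c_true**2))

# ln y = ln a - (x - b)^2 / (2 c^2) is quadratic in x, so fit a parabola
p2, p1, p0 = np.polyfit(x, np.log(y), 2)
c_est = np.sqrt(-1.0 / (2.0 * p2))
b_est = p1 * c_est**2
a_est = np.exp(p0 + b_est**2 / (2 * c_est**2))
print(a_est, b_est, c_est)                 # recovers (2.0, 1.0, 0.5)

On noisy data the logarithm amplifies errors where y is small, which is precisely the bias that the weighted and iteratively reweighted least squares variants discussed above are designed to counteract.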
Wikipedia/Gaussian_function
Laplace's approximation provides an analytical expression for a posterior probability distribution by fitting a Gaussian distribution with a mean equal to the MAP solution and precision equal to the observed Fisher information. The approximation is justified by the Bernstein–von Mises theorem, which states that, under regularity conditions, the error of the approximation tends to 0 as the number of data points tends to infinity. For example, consider a regression or classification model with data set { x n , y n } n = 1 , … , N {\displaystyle \{x_{n},y_{n}\}_{n=1,\ldots ,N}} comprising inputs x {\displaystyle x} and outputs y {\displaystyle y} with (unknown) parameter vector θ {\displaystyle \theta } of length D {\displaystyle D} . The likelihood is denoted p ( y | x , θ ) {\displaystyle p({\bf {y}}|{\bf {x}},\theta )} and the parameter prior p ( θ ) {\displaystyle p(\theta )} . Suppose one wants to approximate the joint density of outputs and parameters p ( y , θ | x ) {\displaystyle p({\bf {y}},\theta |{\bf {x}})} . Bayes' formula reads: p ( y , θ | x ) = p ( y | x , θ ) p ( θ | x ) = p ( y | x ) p ( θ | y , x ) ≃ q ~ ( θ ) = Z q ( θ ) . {\displaystyle p({\bf {y}},\theta |{\bf {x}})\;=\;p({\bf {y}}|{\bf {x}},\theta )p(\theta |{\bf {x}})\;=\;p({\bf {y}}|{\bf {x}})p(\theta |{\bf {y}},{\bf {x}})\;\simeq \;{\tilde {q}}(\theta )\;=\;Zq(\theta ).} The joint is equal to the product of the likelihood and the prior and by Bayes' rule, equal to the product of the marginal likelihood p ( y | x ) {\displaystyle p({\bf {y}}|{\bf {x}})} and posterior p ( θ | y , x ) {\displaystyle p(\theta |{\bf {y}},{\bf {x}})} . Seen as a function of θ {\displaystyle \theta } the joint is an un-normalised density. In Laplace's approximation, we approximate the joint by an un-normalised Gaussian q ~ ( θ ) = Z q ( θ ) {\displaystyle {\tilde {q}}(\theta )=Zq(\theta )} , where we use q {\displaystyle q} to denote approximate density, q ~ {\displaystyle {\tilde {q}}} for un-normalised density and Z {\displaystyle Z} the normalisation constant of q ~ {\displaystyle {\tilde {q}}} (independent of θ {\displaystyle \theta } ). Since the marginal likelihood p ( y | x ) {\displaystyle p({\bf {y}}|{\bf {x}})} doesn't depend on the parameter θ {\displaystyle \theta } and the posterior p ( θ | y , x ) {\displaystyle p(\theta |{\bf {y}},{\bf {x}})} normalises over θ {\displaystyle \theta } we can immediately identify them with Z {\displaystyle Z} and q ( θ ) {\displaystyle q(\theta )} of our approximation, respectively. 
Laplace's approximation is p ( y , θ | x ) ≃ p ( y , θ ^ | x ) exp ⁡ ( − 1 2 ( θ − θ ^ ) ⊤ S − 1 ( θ − θ ^ ) ) = q ~ ( θ ) , {\displaystyle p({\bf {y}},\theta |{\bf {x}})\;\simeq \;p({\bf {y}},{\hat {\theta }}|{\bf {x}})\exp {\big (}-{\tfrac {1}{2}}(\theta -{\hat {\theta }})^{\top }S^{-1}(\theta -{\hat {\theta }}){\big )}\;=\;{\tilde {q}}(\theta ),} where we have defined θ ^ = argmax θ ⁡ log ⁡ p ( y , θ | x ) , S − 1 = − ∇ θ ∇ θ log ⁡ p ( y , θ | x ) | θ = θ ^ , {\displaystyle {\begin{aligned}{\hat {\theta }}&\;=\;\operatorname {argmax} _{\theta }\log p({\bf {y}},\theta |{\bf {x}}),\\S^{-1}&\;=\;-\left.\nabla _{\theta }\nabla _{\theta }\log p({\bf {y}},\theta |{\bf {x}})\right|_{\theta ={\hat {\theta }}},\end{aligned}}} where θ ^ {\displaystyle {\hat {\theta }}} is the location of a mode of the joint target density, also known as the maximum a posteriori or MAP point and S − 1 {\displaystyle S^{-1}} is the D × D {\displaystyle D\times D} positive definite matrix of second derivatives of the negative log joint target density at the mode θ = θ ^ {\displaystyle \theta ={\hat {\theta }}} . Thus, the Gaussian approximation matches the value and the log-curvature of the un-normalised target density at the mode. The value of θ ^ {\displaystyle {\hat {\theta }}} is usually found using a gradient based method. In summary, we have q ( θ ) = N ( θ | μ = θ ^ , Σ = S ) , log ⁡ Z = log ⁡ p ( y , θ ^ | x ) + 1 2 log ⁡ | S | + D 2 log ⁡ ( 2 π ) , {\displaystyle {\begin{aligned}q(\theta )&\;=\;{\cal {N}}(\theta |\mu ={\hat {\theta }},\Sigma =S),\\\log Z&\;=\;\log p({\bf {y}},{\hat {\theta }}|{\bf {x}})+{\tfrac {1}{2}}\log |S|+{\tfrac {D}{2}}\log(2\pi ),\end{aligned}}} for the approximate posterior over θ {\displaystyle \theta } and the approximate log marginal likelihood respectively. The main weaknesses of Laplace's approximation are that it is symmetric around the mode and that it is very local: the entire approximation is derived from properties at a single point of the target density. Laplace's method is widely used and was pioneered in the context of neural networks by David MacKay, and for Gaussian processes by Williams and Barber. == References == == Further reading == Amaral Turkman, M. Antónia; Paulino, Carlos Daniel; Müller, Peter (2019). "The Classical Laplace Method". Computational Bayesian Statistics : An Introduction. Cambridge: Cambridge University Press. pp. 154–159. ISBN 978-1-108-48103-8. Tanner, Martin A. (1996). "Posterior Moments and Marginalization Based on Laplace's Method". Tools for Statistical Inference. New York: Springer. pp. 44–51. ISBN 0-387-94688-8.
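A one-dimensional sketch can make the recipe concrete. The following is an illustrative Python example (assuming NumPy and SciPy; the Bernoulli model and the values h = 7, N = 10 are arbitrary choices for demonstration): with a uniform prior on θ and likelihood θ^h (1 − θ)^{N−h}, it locates the MAP point numerically, estimates S^{-1} by a finite difference, and compares the approximate log marginal likelihood with the exact value log B(h + 1, N − h + 1):

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import betaln

h, N = 7, 10

def neg_log_joint(theta):
    # negative log of the un-normalised joint theta^h (1 - theta)^(N - h)
    return -(h * np.log(theta) + (N - h) * np.log(1.0 - theta))

# MAP point (here h/N in closed form, but located numerically for illustration)
theta_hat = minimize_scalar(neg_log_joint, bounds=(1e-6, 1 - 1e-6),
                            method='bounded').x

# S^{-1}: curvature of the negative log joint at the mode, via finite differences
eps = 1e-4
S_inv = (neg_log_joint(theta_hat + eps) - 2.0 * neg_log_joint(theta_hat)
         + neg_log_joint(theta_hat - eps)) / eps**2

# log Z ≈ log p(y, theta_hat) + (1/2) log|S| + (D/2) log(2 pi), with D = 1
log_Z = -neg_log_joint(theta_hat) - 0.5 * np.log(S_inv) + 0.5 * np.log(2 * np.pi)
print(theta_hat, log_Z, betaln(h + 1, N - h + 1))   # ≈ 0.7, -7.12, -7.19

The approximate and exact log normalisers differ by less than 0.1 already at N = 10, and, in line with the Bernstein–von Mises theorem, the gap closes as the number of data points grows.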
Wikipedia/Laplace's_approximation
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. While some basic ideas of the theory can be traced to Laplace, the formalization started with insurance mathematics, namely ruin theory with Cramér and Lundberg. A unified formalization of large deviation theory was developed in 1966, in a paper by Varadhan. Large deviations theory formalizes the heuristic ideas of concentration of measures and widely generalizes the notion of convergence of probability measures. Roughly speaking, large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events. == Introductory examples == Any large deviation is done in the least unlikely of all the unlikely ways! === An elementary example === Consider a sequence of independent tosses of a fair coin. The possible outcomes could be heads or tails. Let us denote the possible outcome of the i-th trial by X i {\displaystyle X_{i}} , where we encode head as 1 and tail as 0. Now let M N {\displaystyle M_{N}} denote the mean value after N {\displaystyle N} trials, namely M N = 1 N ∑ i = 1 N X i {\displaystyle M_{N}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}} . Then M N {\displaystyle M_{N}} lies between 0 and 1. From the law of large numbers it follows that as N grows, the distribution of M N {\displaystyle M_{N}} converges to 0.5 = E ⁡ [ X ] {\displaystyle 0.5=\operatorname {E} [X]} (the expected value of a single coin toss). Moreover, by the central limit theorem, it follows that M N {\displaystyle M_{N}} is approximately normally distributed for large N {\displaystyle N} . The central limit theorem can provide more detailed information about the behavior of M N {\displaystyle M_{N}} than the law of large numbers. For example, we can approximately find a tail probability of M N {\displaystyle M_{N}} – the probability that M N {\displaystyle M_{N}} is greater than some value x {\displaystyle x} – for a fixed value of N {\displaystyle N} . However, the approximation by the central limit theorem may not be accurate if x {\displaystyle x} is far from E ⁡ [ X i ] {\displaystyle \operatorname {E} [X_{i}]} and N {\displaystyle N} is not sufficiently large. Also, it does not provide information about the convergence of the tail probabilities as N → ∞ {\displaystyle N\to \infty } . However, the large deviation theory can provide answers for such problems. Let us make this statement more precise. For a given value 0.5 < x < 1 {\displaystyle 0.5<x<1} , let us compute the tail probability P ( M N > x ) {\displaystyle P(M_{N}>x)} . Define I ( x ) = x ln ⁡ x + ( 1 − x ) ln ⁡ ( 1 − x ) + ln ⁡ 2 {\displaystyle I(x)=x\ln {x}+(1-x)\ln(1-x)+\ln {2}} . Note that the function I ( x ) {\displaystyle I(x)} is a convex, nonnegative function that is zero at x = 1 2 {\displaystyle x={\tfrac {1}{2}}} and increases as x {\displaystyle x} approaches 1 {\displaystyle 1} . It is the negative of the Bernoulli entropy with p = 1 2 {\displaystyle p={\tfrac {1}{2}}} ; that it's appropriate for coin tosses follows from the asymptotic equipartition property applied to a Bernoulli trial. Then by Chernoff's inequality, it can be shown that P ( M N > x ) < exp ⁡ ( − N I ( x ) ) {\displaystyle P(M_{N}>x)<\exp(-NI(x))} . This bound is rather sharp, in the sense that I ( x ) {\displaystyle I(x)} cannot be replaced with a larger number which would yield a strict inequality for all positive N {\displaystyle N} . 
(However, the exponential bound can still be reduced by a subexponential factor on the order of 1 / N {\displaystyle 1/{\sqrt {N}}} ; this follows from the Stirling approximation applied to the binomial coefficient appearing in the Bernoulli distribution.) Hence, we obtain the following result: P ( M N > x ) ≈ exp ⁡ ( − N I ( x ) ) {\displaystyle P(M_{N}>x)\approx \exp(-NI(x))} . The probability P ( M N > x ) {\displaystyle P(M_{N}>x)} decays exponentially as N → ∞ {\displaystyle N\to \infty } at a rate depending on x. This formula approximates any tail probability of the sample mean of i.i.d. variables and gives its convergence as the number of samples increases. === Large deviations for sums of independent random variables === In the above example of coin-tossing we explicitly assumed that each toss is an independent trial, and the probability of getting head or tail is always the same. Let X , X 1 , X 2 , … {\displaystyle X,X_{1},X_{2},\ldots } be independent and identically distributed (i.i.d.) random variables whose common distribution satisfies a certain growth condition. Then the following limit exists: lim N → ∞ 1 N ln ⁡ P ( M N > x ) = − I ( x ) {\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\ln P(M_{N}>x)=-I(x)} . Here M N = 1 N ∑ i = 1 N X i {\displaystyle M_{N}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}} , as before. The function I ( ⋅ ) {\displaystyle I(\cdot )} is called the "rate function" or "Cramér function" or sometimes the "entropy function". The above-mentioned limit means that for large N {\displaystyle N} , P ( M N > x ) ≈ exp ⁡ [ − N I ( x ) ] {\displaystyle P(M_{N}>x)\approx \exp[-NI(x)]} , which is the basic result of large deviations theory. If we know the probability distribution of X {\displaystyle X} , an explicit expression for the rate function can be obtained. This is given by a Legendre–Fenchel transformation, I ( x ) = sup θ > 0 [ θ x − λ ( θ ) ] {\displaystyle I(x)=\sup _{\theta >0}[\theta x-\lambda (\theta )]} , where λ ( θ ) = ln ⁡ E ⁡ [ exp ⁡ ( θ X ) ] {\displaystyle \lambda (\theta )=\ln \operatorname {E} [\exp(\theta X)]} is called the cumulant generating function (CGF) and E {\displaystyle \operatorname {E} } denotes the mathematical expectation. If X {\displaystyle X} follows a normal distribution, the rate function becomes a parabola with its apex at the mean of the normal distribution. If { X i } {\displaystyle \{X_{i}\}} is an irreducible and aperiodic Markov chain, a variant of the basic large deviations result stated above may hold. === Moderate deviations for sums of independent random variables === The previous example controlled the probability of the event [ M N > x ] {\displaystyle [M_{N}>x]} , that is, the concentration of the law of M N {\displaystyle M_{N}} on the compact set [ − x , x ] {\displaystyle [-x,x]} . It is also possible to control the probability of the event [ M N > x a N ] {\displaystyle [M_{N}>xa_{N}]} for some sequence a N → 0 {\displaystyle a_{N}\to 0} . Such results are known as moderate deviations principles; in particular, the limit case a N = 1 / N {\displaystyle a_{N}=1/{\sqrt {N}}} recovers the central limit theorem.
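Both the coin-tossing limit and the Legendre–Fenchel recipe can be checked numerically. The following is a minimal illustrative sketch in Python (assuming NumPy and SciPy; the threshold x = 0.6 is an arbitrary choice). It evaluates (1/N) ln P(M_N > x) from the exact binomial tail and compares it with −I(x), where I(x) is computed both from the closed form above and as sup_θ [θx − λ(θ)] with λ(θ) = ln((1 + e^θ)/2) for a fair coin toss:

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

x = 0.6
I_closed = x * np.log(x) + (1 - x) * np.log(1 - x) + np.log(2)

# Rate function via the Legendre-Fenchel transform of the CGF of one fair toss
cgf = lambda th: np.log((1.0 + np.exp(th)) / 2.0)
I_legendre = -minimize_scalar(lambda th: -(th * x - cgf(th))).fun

for N in (100, 1000, 5000):
    tail = binom.sf(np.floor(N * x), N, 0.5)   # P(M_N > x), exactly
    print(N, np.log(tail) / N)                 # tends to -I(x)
print(-I_closed, -I_legendre)                  # both ≈ -0.0201

The finite-N values approach −I(x) ≈ −0.0201 from below; the residual gap is the footprint of the subexponential 1/√N prefactor mentioned above.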
== Formal definition == Given a Polish space X {\displaystyle {\mathcal {X}}} let { P N } {\displaystyle \{\mathbb {P} _{N}\}} be a sequence of Borel probability measures on X {\displaystyle {\mathcal {X}}} , let { a N } {\displaystyle \{a_{N}\}} be a sequence of positive real numbers such that lim N a N = ∞ {\displaystyle \lim _{N}a_{N}=\infty } , and finally let I : X → [ 0 , ∞ ] {\displaystyle I:{\mathcal {X}}\to [0,\infty ]} be a lower semicontinuous functional on X . {\displaystyle {\mathcal {X}}.} The sequence { P N } {\displaystyle \{\mathbb {P} _{N}\}} is said to satisfy a large deviation principle with speed { a n } {\displaystyle \{a_{n}\}} and rate I {\displaystyle I} if, and only if, for each Borel measurable set E ⊂ X {\displaystyle E\subset {\mathcal {X}}} , − inf x ∈ E ∘ I ( x ) ≤ lim _ N ⁡ a N − 1 log ⁡ ( P N ( E ) ) ≤ lim ¯ N ⁡ a N − 1 log ⁡ ( P N ( E ) ) ≤ − inf x ∈ E ¯ I ( x ) {\displaystyle -\inf _{x\in E^{\circ }}I(x)\leq \varliminf _{N}a_{N}^{-1}\log(\mathbb {P} _{N}(E))\leq \varlimsup _{N}a_{N}^{-1}\log(\mathbb {P} _{N}(E))\leq -\inf _{x\in {\overline {E}}}I(x)} , where E ¯ {\displaystyle {\overline {E}}} and E ∘ {\displaystyle E^{\circ }} denote respectively the closure and interior of E {\displaystyle E} . == Brief history == The first rigorous results concerning large deviations are due to the Swedish mathematician Harald Cramér, who applied them to model the insurance business. From the point of view of an insurance company, the earning is at a constant rate per month (the monthly premium) but the claims come randomly. For the company to be successful over a certain period of time (preferably many months), the total earning should exceed the total claim. Thus to estimate the premium you have to ask the following question: "What should we choose as the premium q {\displaystyle q} such that over N {\displaystyle N} months the total claim C = Σ X i {\displaystyle C=\Sigma X_{i}} should be less than N q {\displaystyle Nq} ?" This is clearly the same question asked by the large deviations theory. Cramér gave a solution to this question for i.i.d. random variables, where the rate function is expressed as a power series. A very incomplete list of mathematicians who have made important advances would include Petrov, Sanov, S.R.S. Varadhan (who has won the Abel prize for his contribution to the theory), D. Ruelle, O.E. Lanford, Mark Freidlin, Alexander D. Wentzell, Amir Dembo, and Ofer Zeitouni. == Applications == Principles of large deviations may be effectively applied to gather information out of a probabilistic model. Thus, theory of large deviations finds its applications in information theory and risk management. In physics, the best known application of large deviations theory arise in thermodynamics and statistical mechanics (in connection with relating entropy with rate function). === Large deviations and entropy === The rate function is related to the entropy in statistical mechanics. This can be heuristically seen in the following way. In statistical mechanics the entropy of a particular macro-state is related to the number of micro-states which corresponds to this macro-state. In our coin tossing example the mean value M N {\displaystyle M_{N}} could designate a particular macro-state. And the particular sequence of heads and tails which gives rise to a particular value of M N {\displaystyle M_{N}} constitutes a particular micro-state. Loosely speaking a macro-state having a higher number of micro-states giving rise to it, has higher entropy. 
A state with higher entropy has a higher chance of being realised in actual experiments. The macro-state with mean value of 1/2 (as many heads as tails) has the highest number of micro-states giving rise to it, and it is indeed the state with the highest entropy; in most practical situations this is the macro-state we obtain for large numbers of trials. The "rate function", on the other hand, measures the probability of appearance of a particular macro-state. The smaller the rate function, the higher the chance of a macro-state appearing. In our coin-tossing example the value of the "rate function" for mean value equal to 1/2 is zero. In this way one can see the "rate function" as the negative of the "entropy". There is a relation between the "rate function" in large deviations theory and the Kullback–Leibler divergence; the connection is established by Sanov's theorem (see Sanov and Novak, ch. 14.5). In a special case, large deviations are closely related to the concept of Gromov–Hausdorff limits. == See also == Large deviation principle Cramér's large deviation theorem Chernoff's inequality Sanov's theorem Contraction principle (large deviations theory), a result on how large deviations principles "push forward" Freidlin–Wentzell theorem, a large deviations principle for Itō diffusions Legendre transformation, Ensemble equivalence is based on this transformation. Laplace principle, a large deviations principle in R^d Laplace's method Schilder's theorem, a large deviations principle for Brownian motion Varadhan's lemma Extreme value theory Large deviations of Gaussian random functions == References == == Bibliography == Special invited paper: Large deviations by S. R. S. Varadhan, The Annals of Probability, 2008, Vol. 36, No. 2, 397–419. doi:10.1214/07-AOP348 A basic introduction to large deviations: Theory, applications, simulations, Hugo Touchette, arXiv:1106.4146. Entropy, Large Deviations and Statistical Mechanics by R.S. Ellis, Springer Publication. ISBN 3-540-29059-1 Large Deviations for Performance Analysis by Alan Weiss and Adam Shwartz. Chapman and Hall ISBN 0-412-06311-5 Large Deviations Techniques and Applications by Amir Dembo and Ofer Zeitouni. Springer ISBN 0-387-98406-2 A course on large deviations with an introduction to Gibbs measures by Firas Rassoul-Agha and Timo Seppäläinen. Grad. Stud. Math., 162. American Mathematical Society ISBN 978-0-8218-7578-0 Random Perturbations of Dynamical Systems by M.I. Freidlin and A.D. Wentzell. Springer ISBN 0-387-98362-7 "Large Deviations for Two Dimensional Navier-Stokes Equation with Multiplicative Noise", S. S. Sritharan and P. Sundar, Stochastic Processes and Their Applications, Vol. 116 (2006) 1636–1659. "Large Deviations for the Stochastic Shell Model of Turbulence", U. Manna, S. S. Sritharan and P. Sundar, NoDEA Nonlinear Differential Equations Appl. 16 (2009), no. 4, 493–521.
Wikipedia/Large_deviations_theory
In quantum field theory, partition functions are generating functionals for correlation functions, making them key objects of study in the path integral formalism. They are the imaginary time versions of statistical mechanics partition functions, giving rise to a close connection between these two areas of physics. Partition functions can rarely be solved for exactly, although free theories do admit such solutions. Instead, a perturbative approach is usually implemented, this being equivalent to summing over Feynman diagrams. == Generating functional == === Scalar theories === In a d {\displaystyle d} -dimensional field theory with a real scalar field ϕ {\displaystyle \phi } and action S [ ϕ ] {\displaystyle S[\phi ]} , the partition function is defined in the path integral formalism as the functional Z [ J ] = ∫ D ϕ e i S [ ϕ ] + i ∫ d d x J ( x ) ϕ ( x ) {\displaystyle Z[J]=\int {\mathcal {D}}\phi \ e^{iS[\phi ]+i\int d^{d}xJ(x)\phi (x)}} where J ( x ) {\displaystyle J(x)} is a fictitious source current. It acts as a generating functional for arbitrary n-point correlation functions G n ( x 1 , … , x n ) = ( − 1 ) n 1 Z [ 0 ] δ n Z [ J ] δ J ( x 1 ) ⋯ δ J ( x n ) | J = 0 . {\displaystyle G_{n}(x_{1},\dots ,x_{n})=(-1)^{n}{\frac {1}{Z[0]}}{\frac {\delta ^{n}Z[J]}{\delta J(x_{1})\cdots \delta J(x_{n})}}{\bigg |}_{J=0}.} The derivatives used here are functional derivatives rather than regular derivatives since they act on functionals rather than ordinary functions. From this it follows that an equivalent expression for the partition function, reminiscent of a power series in the source currents, is given by Z [ J ] = ∑ n ≥ 0 1 n ! ∫ ∏ i = 1 n d d x i G ( x 1 , … , x n ) J ( x 1 ) ⋯ J ( x n ) . {\displaystyle Z[J]=\sum _{n\geq 0}{\frac {1}{n!}}\int \prod _{i=1}^{n}d^{d}x_{i}G(x_{1},\dots ,x_{n})J(x_{1})\cdots J(x_{n}).} In curved spacetimes there is an added subtlety that must be dealt with because the initial vacuum state need not be the same as the final vacuum state. Partition functions can also be constructed for composite operators in the same way as they are for fundamental fields. Correlation functions of these operators can then be calculated as functional derivatives of these functionals. For example, the partition function for a composite operator O ( x ) {\displaystyle {\mathcal {O}}(x)} is given by Z O [ J ] = ∫ D ϕ e i S [ ϕ ] + i ∫ d d x J ( x ) O ( x ) . {\displaystyle Z_{\mathcal {O}}[J]=\int {\mathcal {D}}\phi e^{iS[\phi ]+i\int d^{d}xJ(x){\mathcal {O}}(x)}.} Knowing the partition function completely solves the theory since it allows for the direct calculation of all of its correlation functions. However, there are very few cases where the partition function can be calculated exactly. While free theories do admit exact solutions, interacting theories generally do not. Instead the partition function can be evaluated at weak coupling perturbatively, which amounts to regular perturbation theory using Feynman diagrams with J {\displaystyle J} insertions on the external legs. The symmetry factors for these types of diagrams differ from those of correlation functions since all external legs have identical J {\displaystyle J} insertions that can be interchanged, whereas the external legs of correlation functions are pinned to specific coordinates and therefore cannot be interchanged.
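The generating-functional logic can be made concrete in a zero-dimensional toy model, where the "path integral" collapses to an ordinary integral and functional derivatives in J become ordinary derivatives. The following is a minimal illustrative sketch in Python (assuming NumPy and SciPy; it uses the Euclidean weight e^{-S} introduced below, with the toy action S(φ) = φ²/2 + λφ⁴/4! and an arbitrary coupling λ = 0.3):

import numpy as np
from scipy.integrate import quad

lam = 0.3

def Z(J):
    # zero-dimensional partition function Z(J) = ∫ dφ exp(-φ²/2 - λφ⁴/24 + Jφ)
    val, _ = quad(lambda p: np.exp(-p**2 / 2 - lam * p**4 / 24 + J * p), -10, 10)
    return val

# Two-point function G2 = Z''(0)/Z(0), differentiating the source numerically
eps = 1e-3
G2 = (Z(eps) - 2.0 * Z(0.0) + Z(-eps)) / (eps**2 * Z(0.0))

# Direct check: <φ²> computed without any source
num, _ = quad(lambda p: p**2 * np.exp(-p**2 / 2 - lam * p**4 / 24), -10, 10)
print(G2, num / Z(0.0))          # both ≈ <φ²>, slightly below the free value 1

Higher G_n follow from higher derivatives in the same way, which is exactly the sense in which Z[J] generates all correlation functions.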
By performing a Wick rotation, the partition function can be expressed in Euclidean spacetime as Z [ J ] = ∫ D ϕ e − ( S E [ ϕ ] + ∫ d d x E J ϕ ) , {\displaystyle Z[J]=\int {\mathcal {D}}\phi \ e^{-(S_{E}[\phi ]+\int d^{d}x_{E}J\phi )},} where S E {\displaystyle S_{E}} is the Euclidean action and x E {\displaystyle x_{E}} are Euclidean coordinates. This form is closely connected to the partition function in statistical mechanics, especially since the Euclidean Lagrangian is usually bounded from below, in which case it can be interpreted as an energy density. It also allows for the interpretation of the exponential factor as a statistical weight for the field configurations, with larger fluctuations in the gradient or field values leading to greater suppression. This connection with statistical mechanics also lends additional intuition for how correlation functions should behave in a quantum field theory.
=== General theories ===
Most of the same principles from the scalar case carry over to more general theories with additional fields. Each field requires the introduction of its own fictitious current, with antiparticle fields requiring their own separate currents. Acting on the partition function with a derivative with respect to a current brings down its associated field from the exponential, allowing for the construction of arbitrary correlation functions. After differentiation, the currents are set to zero when correlation functions in a vacuum state are desired, but the currents can also be set to take on particular values to yield correlation functions in non-vanishing background fields. For partition functions with Grassmann valued fermion fields, the sources are also Grassmann valued. For example, a theory with a single Dirac fermion ψ ( x ) {\displaystyle \psi (x)} requires the introduction of two Grassmann currents η {\displaystyle \eta } and η ¯ {\displaystyle {\bar {\eta }}} so that the partition function is Z [ η ¯ , η ] = ∫ D ψ ¯ D ψ e i S [ ψ , ψ ¯ ] + i ∫ d d x ( η ¯ ψ + ψ ¯ η ) . {\displaystyle Z[{\bar {\eta }},\eta ]=\int {\mathcal {D}}{\bar {\psi }}{\mathcal {D}}\psi \ e^{iS[\psi ,{\bar {\psi }}]+i\int d^{d}x({\bar {\eta }}\psi +{\bar {\psi }}\eta )}.} Functional derivatives with respect to η ¯ {\displaystyle {\bar {\eta }}} give fermion fields while derivatives with respect to η {\displaystyle \eta } give anti-fermion fields in the correlation functions.
=== Thermal field theories ===
A thermal field theory at temperature T {\displaystyle T} is equivalent in Euclidean formalism to a theory with a compactified temporal direction of length β = 1 / T {\displaystyle \beta =1/T} . Partition functions must be modified appropriately by imposing periodicity conditions on the fields and on the Euclidean spacetime integrals Z [ β , J ] = ∫ D ϕ e − S E , β [ ϕ ] + ∫ β d d x E J ϕ | ϕ ( x , 0 ) = ϕ ( x , β ) . {\displaystyle Z[\beta ,J]=\int {\mathcal {D}}\phi e^{-S_{E,\beta }[\phi ]+\int _{\beta }d^{d}x_{E}J\phi }{\bigg |}_{\phi ({\boldsymbol {x}},0)=\phi ({\boldsymbol {x}},\beta )}.} This partition function can be taken as the definition of the thermal field theory in the imaginary time formalism. Correlation functions are acquired from the partition function through the usual functional derivatives with respect to the currents G n , β ( x 1 , … , x n ) = δ n Z [ β , J ] δ J ( x 1 ) ⋯ δ J ( x n ) | J = 0 . 
{\displaystyle G_{n,\beta }(x_{1},\dots ,x_{n})={\frac {\delta ^{n}Z[\beta ,J]}{\delta J(x_{1})\cdots \delta J(x_{n})}}{\bigg |}_{J=0}.} == Free theories == The partition function can be solved exactly in free theories by completing the square in terms of the fields. Since a shift by a constant does not affect the path integral measure, this allows for separating the partition function into a constant of proportionality N {\displaystyle N} arising from the path integral, and a second term that only depends on the current. For the scalar theory this yields Z 0 [ J ] = N exp ⁡ ( − 1 2 ∫ d d x d d y J ( x ) Δ F ( x − y ) J ( y ) ) , {\displaystyle Z_{0}[J]=N\exp {\bigg (}-{\frac {1}{2}}\int d^{d}xd^{d}y\ J(x)\Delta _{F}(x-y)J(y){\bigg )},} where Δ F ( x − y ) {\displaystyle \Delta _{F}(x-y)} is the position space Feynman propagator Δ F ( x − y ) = ∫ d d p ( 2 π ) d i p 2 − m 2 + i ϵ e − i p ⋅ ( x − y ) . {\displaystyle \Delta _{F}(x-y)=\int {\frac {d^{d}p}{(2\pi )^{d}}}{\frac {i}{p^{2}-m^{2}+i\epsilon }}e^{-ip\cdot (x-y)}.} This partition function fully determines the free field theory. In the case of a theory with a single free Dirac fermion, completing the square yields a partition function of the form Z 0 [ η ¯ , η ] = N exp ⁡ ( ∫ d d x d d y η ¯ ( y ) Δ D ( x − y ) η ( x ) ) , {\displaystyle Z_{0}[{\bar {\eta }},\eta ]=N\exp {\bigg (}\int d^{d}xd^{d}y\ {\bar {\eta }}(y)\Delta _{D}(x-y)\eta (x){\bigg )},} where Δ D ( x − y ) {\displaystyle \Delta _{D}(x-y)} is the position space Dirac propagator Δ D ( x − y ) = ∫ d d p ( 2 π ) d i ( p / + m ) p 2 − m 2 + i ϵ e − i p ⋅ ( x − y ) . {\displaystyle \Delta _{D}(x-y)=\int {\frac {d^{d}p}{(2\pi )^{d}}}{\frac {i({p\!\!\!/}+m)}{p^{2}-m^{2}+i\epsilon }}e^{-ip\cdot (x-y)}.} == References == == Further reading == Ashok Das, Field Theory: A Path Integral Approach, 2nd edition, World Scientific (Singapore, 2006); paperback ISBN 978-9812568489. Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2004); paperback ISBN 981-238-107-4 (also available online: PDF-files). Jean Zinn-Justin (2009), Scholarpedia, 4(2): 8674.
Wikipedia/Partition_function_(quantum_field_theory)
In mathematics, a homogeneous function is a function of several variables such that the following holds: If each of the function's arguments is multiplied by the same scalar, then the function's value is multiplied by some power of this scalar; the power is called the degree of homogeneity, or simply the degree. That is, if k is an integer, a function f of n variables is homogeneous of degree k if f ( s x 1 , … , s x n ) = s k f ( x 1 , … , x n ) {\displaystyle f(sx_{1},\ldots ,sx_{n})=s^{k}f(x_{1},\ldots ,x_{n})} for every x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} and s ≠ 0. {\displaystyle s\neq 0.} This is also referred to as a kth-degree or kth-order homogeneous function. For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k. The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function f : V → W {\displaystyle f:V\to W} between two F-vector spaces is homogeneous of degree k {\displaystyle k} if f ( s v ) = s k f ( v ) {\displaystyle f(s\mathbf {v} )=s^{k}f(\mathbf {v} )} for all nonzero s ∈ F {\displaystyle s\in F} and v ∈ V . {\displaystyle v\in V.} This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that v ∈ C {\displaystyle \mathbf {v} \in C} implies s v ∈ C {\displaystyle s\mathbf {v} \in C} for every nonzero scalar s. In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity called positive homogeneity is often considered, by requiring only that the above identities hold for s > 0 , {\displaystyle s>0,} and allowing any real number k as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point. A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes.
== Definitions ==
The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of the 19th century, the concept has been naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article. There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers. The second one applies when working over the field of real numbers, or, more generally, over an ordered field. This definition restricts the scaling factor that occurs in the definition to positive values, and is therefore called positive homogeneity, the qualifier "positive" often being omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous. The restriction of the scaling factor to real positive values also allows considering homogeneous functions whose degree of homogeneity is any real number.
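The defining identity is easy to check symbolically. The following sketch is an illustration only (the polynomial and the quotient are arbitrary choices); it verifies degree-3 homogeneity, degree-0 homogeneity of a quotient, and, as a preview of Euler's theorem discussed below, the relation k f = x f_x + y f_y + z f_z:

```python
import sympy as sp

x, y, z, s = sp.symbols('x y z s')

# A homogeneous polynomial of degree 3 and a rational function of degree 0.
f = x**3 + 2*x*y**2 - 5*y*z**2
g = x / y

# Defining identity: f(s*x, s*y, s*z) = s^3 f(x, y, z).
assert sp.simplify(f.subs({x: s*x, y: s*y, z: s*z}, simultaneous=True) - s**3 * f) == 0
# Degree-zero homogeneity of a quotient of two degree-1 polynomials.
assert sp.simplify(g.subs({x: s*x, y: s*y}, simultaneous=True) - g) == 0

# Euler's relation for degree k = 3: x f_x + y f_y + z f_z = 3 f.
euler = x*sp.diff(f, x) + y*sp.diff(f, y) + z*sp.diff(f, z) - 3*f
assert sp.expand(euler) == 0
print("homogeneity and Euler's relation verified")
```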
=== General homogeneity ===
Let V and W be two vector spaces over a field F. A linear cone in V is a subset C of V such that s x ∈ C {\displaystyle sx\in C} for all x ∈ C {\displaystyle x\in C} and all nonzero s ∈ F . {\displaystyle s\in F.} A homogeneous function f from V to W is a partial function from V to W that has a linear cone C as its domain, and satisfies f ( s x ) = s k f ( x ) {\displaystyle f(sx)=s^{k}f(x)} for some integer k, every x ∈ C , {\displaystyle x\in C,} and every nonzero s ∈ F . {\displaystyle s\in F.} The integer k is called the degree of homogeneity, or simply the degree of f. A typical example of a homogeneous function of degree k is the function defined by a homogeneous polynomial of degree k. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its cone of definition is the linear cone of the points where the value of the denominator is not zero. Homogeneous functions play a fundamental role in projective geometry since any homogeneous function f from V to W defines a well-defined function between the projectivizations of V and W. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomials of the same degree) play an essential role in the Proj construction of projective schemes.
=== Positive homogeneity ===
When working over the real numbers, or more generally over an ordered field, it is often convenient to consider positive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzero s" replaced by "s > 0" in the definitions of a linear cone and a homogeneous function. This change allows considering (positively) homogeneous functions with any real number as their degrees, since exponentiation with a positive real base is well defined. Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1. They are not homogeneous since | − x | = | x | ≠ − | x | {\displaystyle |-x|=|x|\neq -|x|} if x ≠ 0. {\displaystyle x\neq 0.} This remains true in the complex case, since the field of the complex numbers C {\displaystyle \mathbb {C} } and every complex vector space can be considered as real vector spaces. Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.
== Examples ==
=== Simple example ===
The function f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} is homogeneous of degree 2: f ( t x , t y ) = ( t x ) 2 + ( t y ) 2 = t 2 ( x 2 + y 2 ) = t 2 f ( x , y ) . {\displaystyle f(tx,ty)=(tx)^{2}+(ty)^{2}=t^{2}\left(x^{2}+y^{2}\right)=t^{2}f(x,y).}
=== Absolute value and norms ===
The absolute value of a real number is a positively homogeneous function of degree 1, which is not homogeneous, since | s x | = s | x | {\displaystyle |sx|=s|x|} if s > 0 , {\displaystyle s>0,} and | s x | = − s | x | {\displaystyle |sx|=-s|x|} if s < 0. {\displaystyle s<0.} The absolute value of a complex number is a positively homogeneous function of degree 1 {\displaystyle 1} over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers).
It is not homogeneous over either the real numbers or the complex numbers. More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or semi-norm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for applying the definition of a positively homogeneous function.
=== Linear maps ===
Any linear map f : V → W {\displaystyle f:V\to W} between vector spaces over a field F is homogeneous of degree 1, by the definition of linearity: f ( α v ) = α f ( v ) {\displaystyle f(\alpha \mathbf {v} )=\alpha f(\mathbf {v} )} for all α ∈ F {\displaystyle \alpha \in {F}} and v ∈ V . {\displaystyle v\in V.} Similarly, any multilinear function f : V 1 × V 2 × ⋯ V n → W {\displaystyle f:V_{1}\times V_{2}\times \cdots V_{n}\to W} is homogeneous of degree n , {\displaystyle n,} by the definition of multilinearity: f ( α v 1 , … , α v n ) = α n f ( v 1 , … , v n ) {\displaystyle f\left(\alpha \mathbf {v} _{1},\ldots ,\alpha \mathbf {v} _{n}\right)=\alpha ^{n}f(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})} for all α ∈ F {\displaystyle \alpha \in {F}} and v 1 ∈ V 1 , v 2 ∈ V 2 , … , v n ∈ V n . {\displaystyle v_{1}\in V_{1},v_{2}\in V_{2},\ldots ,v_{n}\in V_{n}.}
=== Homogeneous polynomials ===
Monomials in n {\displaystyle n} variables define homogeneous functions f : F n → F . {\displaystyle f:\mathbb {F} ^{n}\to \mathbb {F} .} For example, f ( x , y , z ) = x 5 y 2 z 3 {\displaystyle f(x,y,z)=x^{5}y^{2}z^{3}\,} is homogeneous of degree 10 since f ( α x , α y , α z ) = ( α x ) 5 ( α y ) 2 ( α z ) 3 = α 10 x 5 y 2 z 3 = α 10 f ( x , y , z ) . {\displaystyle f(\alpha x,\alpha y,\alpha z)=(\alpha x)^{5}(\alpha y)^{2}(\alpha z)^{3}=\alpha ^{10}x^{5}y^{2}z^{3}=\alpha ^{10}f(x,y,z).\,} The degree is the sum of the exponents on the variables; in this example, 10 = 5 + 2 + 3. {\displaystyle 10=5+2+3.} A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example, x 5 + 2 x 3 y 2 + 9 x y 4 {\displaystyle x^{5}+2x^{3}y^{2}+9xy^{4}} is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions. Given a homogeneous polynomial of degree k {\displaystyle k} with real coefficients that takes only positive values, one gets a positively homogeneous function of degree k / d {\displaystyle k/d} by raising it to the power 1 / d {\displaystyle 1/d} for any d > 0. So for example, the following function is positively homogeneous of degree 1 but not homogeneous: ( x 2 + y 2 + z 2 ) 1 2 . {\displaystyle \left(x^{2}+y^{2}+z^{2}\right)^{\frac {1}{2}}.}
=== Min/max ===
For every set of weights w 1 , … , w n , {\displaystyle w_{1},\dots ,w_{n},} the following functions are positively homogeneous of degree 1, but not homogeneous:
min ( x 1 w 1 , … , x n w n ) {\displaystyle \min \left({\frac {x_{1}}{w_{1}}},\dots ,{\frac {x_{n}}{w_{n}}}\right)} (Leontief utilities)
max ( x 1 w 1 , … , x n w n ) {\displaystyle \max \left({\frac {x_{1}}{w_{1}}},\dots ,{\frac {x_{n}}{w_{n}}}\right)}
=== Rational functions ===
Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator.
Thus, if f {\displaystyle f} is homogeneous of degree m {\displaystyle m} and g {\displaystyle g} is homogeneous of degree n , {\displaystyle n,} then f / g {\displaystyle f/g} is homogeneous of degree m − n {\displaystyle m-n} away from the zeros of g . {\displaystyle g.}
=== Non-examples ===
The homogeneous real functions of a single variable have the form x ↦ c x k {\displaystyle x\mapsto cx^{k}} for some constant c. So, the affine function x ↦ x + 5 , {\displaystyle x\mapsto x+5,} the natural logarithm x ↦ ln ⁡ ( x ) , {\displaystyle x\mapsto \ln(x),} and the exponential function x ↦ e x {\displaystyle x\mapsto e^{x}} are not homogeneous.
== Euler's theorem ==
Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solutions of a specific partial differential equation. More precisely: if f {\displaystyle f} is continuously differentiable on an open linear cone in R n ∖ { 0 } , {\displaystyle \mathbb {R} ^{n}\setminus \{0\},} then it is positively homogeneous of degree k {\displaystyle k} if and only if it satisfies Euler's relation k f ( x ) = ∑ i = 1 n x i ∂ f ∂ x i ( x ) . {\displaystyle kf(\mathbf {x} )=\sum _{i=1}^{n}x_{i}{\frac {\partial f}{\partial x_{i}}}(\mathbf {x} ).} As a consequence, if f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is continuously differentiable and homogeneous of degree k , {\displaystyle k,} its first-order partial derivatives ∂ f / ∂ x i {\displaystyle \partial f/\partial x_{i}} are homogeneous of degree k − 1. {\displaystyle k-1.} This results from Euler's theorem by differentiating the partial differential equation with respect to one variable. In the case of a function of a single real variable ( n = 1 {\displaystyle n=1} ), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form f ( x ) = c + x k {\displaystyle f(x)=c_{+}x^{k}} for x > 0 {\displaystyle x>0} and f ( x ) = c − x k {\displaystyle f(x)=c_{-}x^{k}} for x < 0. {\displaystyle x<0.} The constants c + {\displaystyle c_{+}} and c − {\displaystyle c_{-}} are not necessarily the same, as is the case for the absolute value.
== Application to differential equations ==
The substitution v = y / x {\displaystyle v=y/x} converts the ordinary differential equation I ( x , y ) d y d x + J ( x , y ) = 0 , {\displaystyle I(x,y){\frac {\mathrm {d} y}{\mathrm {d} x}}+J(x,y)=0,} where I {\displaystyle I} and J {\displaystyle J} are homogeneous functions of the same degree, into the separable differential equation x d v d x = − J ( 1 , v ) I ( 1 , v ) − v . {\displaystyle x{\frac {\mathrm {d} v}{\mathrm {d} x}}=-{\frac {J(1,v)}{I(1,v)}}-v.}
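A brief symbolic check of this reduction, with I and J chosen arbitrarily (both homogeneous of degree 1); the expected outputs are noted in comments:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# I(x, y) = x and J(x, y) = x - y are both homogeneous of degree 1,
# so I*y' + J = 0 should reduce, via v = y/x, to a separable equation.
ode = sp.Eq(x * y(x).diff(x) + (x - y(x)), 0)
print(sp.dsolve(ode, y(x)))      # expect y(x) = x*(C1 - log(x))

# The reduced equation is x*dv/dx = -J(1, v)/I(1, v) - v = -(1 - v) - v = -1.
v = sp.Function('v')
print(sp.dsolve(sp.Eq(x * v(x).diff(x), -1), v(x)))   # v(x) = C1 - log(x)
# Consistent: y = x*v recovers the solution of the original equation.
```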
== Generalizations ==
=== Homogeneity under a monoid action ===
The definitions given above are all specialized cases of the following more general notion of homogeneity in which X {\displaystyle X} can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid. Let M {\displaystyle M} be a monoid with identity element 1 ∈ M , {\displaystyle 1\in M,} let X {\displaystyle X} and Y {\displaystyle Y} be sets, and suppose that on both X {\displaystyle X} and Y {\displaystyle Y} there are defined monoid actions of M . {\displaystyle M.} Let k {\displaystyle k} be a non-negative integer and let f : X → Y {\displaystyle f:X\to Y} be a map. Then f {\displaystyle f} is said to be homogeneous of degree k {\displaystyle k} over M {\displaystyle M} if for every x ∈ X {\displaystyle x\in X} and m ∈ M , {\displaystyle m\in M,} f ( m x ) = m k f ( x ) . {\displaystyle f(mx)=m^{k}f(x).} If in addition there is a function M → M , {\displaystyle M\to M,} denoted by m ↦ | m | , {\displaystyle m\mapsto |m|,} called an absolute value, then f {\displaystyle f} is said to be absolutely homogeneous of degree k {\displaystyle k} over M {\displaystyle M} if for every x ∈ X {\displaystyle x\in X} and m ∈ M , {\displaystyle m\in M,} f ( m x ) = | m | k f ( x ) . {\displaystyle f(mx)=|m|^{k}f(x).} A function is homogeneous over M {\displaystyle M} (resp. absolutely homogeneous over M {\displaystyle M} ) if it is homogeneous of degree 1 {\displaystyle 1} over M {\displaystyle M} (resp. absolutely homogeneous of degree 1 {\displaystyle 1} over M {\displaystyle M} ). More generally, it is possible for the symbols m k {\displaystyle m^{k}} to be defined for m ∈ M {\displaystyle m\in M} with k {\displaystyle k} being something other than an integer (for example, if M {\displaystyle M} is the real numbers and k {\displaystyle k} is a non-zero real number then m k {\displaystyle m^{k}} is defined even though k {\displaystyle k} is not an integer). If this is the case then f {\displaystyle f} will be called homogeneous of degree k {\displaystyle k} over M {\displaystyle M} if the same equality holds: f ( m x ) = m k f ( x ) for every x ∈ X and m ∈ M . {\displaystyle f(mx)=m^{k}f(x)\quad {\text{ for every }}x\in X{\text{ and }}m\in M.} The notion of being absolutely homogeneous of degree k {\displaystyle k} over M {\displaystyle M} is generalized similarly.
=== Distributions (generalized functions) ===
A continuous function f {\displaystyle f} on R n {\displaystyle \mathbb {R} ^{n}} is homogeneous of degree k {\displaystyle k} if and only if ∫ R n f ( t x ) φ ( x ) d x = t k ∫ R n f ( x ) φ ( x ) d x {\displaystyle \int _{\mathbb {R} ^{n}}f(tx)\varphi (x)\,dx=t^{k}\int _{\mathbb {R} ^{n}}f(x)\varphi (x)\,dx} for all compactly supported test functions φ {\displaystyle \varphi } and all nonzero real t . {\displaystyle t.} Equivalently, making a change of variable y = t x , {\displaystyle y=tx,} f {\displaystyle f} is homogeneous of degree k {\displaystyle k} if and only if t − n ∫ R n f ( y ) φ ( y t ) d y = t k ∫ R n f ( y ) φ ( y ) d y {\displaystyle t^{-n}\int _{\mathbb {R} ^{n}}f(y)\varphi \left({\frac {y}{t}}\right)\,dy=t^{k}\int _{\mathbb {R} ^{n}}f(y)\varphi (y)\,dy} for all t {\displaystyle t} and all test functions φ . {\displaystyle \varphi .} The last display makes it possible to define homogeneity of distributions. A distribution S {\displaystyle S} is homogeneous of degree k {\displaystyle k} if t − n ⟨ S , φ ∘ μ t ⟩ = t k ⟨ S , φ ⟩ {\displaystyle t^{-n}\langle S,\varphi \circ \mu _{t}\rangle =t^{k}\langle S,\varphi \rangle } for all nonzero real t {\displaystyle t} and all test functions φ . {\displaystyle \varphi .} Here the angle brackets denote the pairing between distributions and test functions, and μ t : R n → R n {\displaystyle \mu _{t}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} is the mapping of scalar division by the real number t . {\displaystyle t.}
== Glossary of name variants ==
Let f : X → Y {\displaystyle f:X\to Y} be a map between two vector spaces over a field F {\displaystyle \mathbb {F} } (usually the real numbers R {\displaystyle \mathbb {R} } or complex numbers C {\displaystyle \mathbb {C} } ).
If S {\displaystyle S} is a set of scalars, such as Z , {\displaystyle \mathbb {Z} ,} [ 0 , ∞ ) , {\displaystyle [0,\infty ),} or R {\displaystyle \mathbb {R} } for example, then f {\displaystyle f} is said to be homogeneous over S {\displaystyle S} if f ( s x ) = s f ( x ) {\textstyle f(sx)=sf(x)} for every x ∈ X {\displaystyle x\in X} and scalar s ∈ S . {\displaystyle s\in S.} For instance, every additive map between vector spaces is homogeneous over the rational numbers S := Q {\displaystyle S:=\mathbb {Q} } although it might not be homogeneous over the real numbers S := R . {\displaystyle S:=\mathbb {R} .} The following commonly encountered special cases and variations of this definition have their own terminology:
(Strict) Positive homogeneity: f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} for all x ∈ X {\displaystyle x\in X} and all positive real r > 0. {\displaystyle r>0.} When the function f {\displaystyle f} is valued in a vector space or field, then this property is logically equivalent to nonnegative homogeneity, which by definition means: f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} for all x ∈ X {\displaystyle x\in X} and all non-negative real r ≥ 0. {\displaystyle r\geq 0.} It is for this reason that positive homogeneity is often also called nonnegative homogeneity. However, for functions valued in the extended real numbers [ − ∞ , ∞ ] = R ∪ { ± ∞ } , {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},} which appear in fields like convex analysis, the multiplication 0 ⋅ f ( x ) {\displaystyle 0\cdot f(x)} will be undefined whenever f ( x ) = ± ∞ {\displaystyle f(x)=\pm \infty } and so these statements are not necessarily always interchangeable. This property is used in the definition of a sublinear function. Minkowski functionals are exactly those non-negative extended real-valued functions with this property.
Real homogeneity: f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} for all x ∈ X {\displaystyle x\in X} and all real r . {\displaystyle r.} This property is used in the definition of a real linear functional.
Homogeneity: f ( s x ) = s f ( x ) {\displaystyle f(sx)=sf(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} It is emphasized that this definition depends on the scalar field F {\displaystyle \mathbb {F} } underlying the domain X . {\displaystyle X.} This property is used in the definition of linear functionals and linear maps.
Conjugate homogeneity: f ( s x ) = s ¯ f ( x ) {\displaystyle f(sx)={\overline {s}}f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} If F = C {\displaystyle \mathbb {F} =\mathbb {C} } then s ¯ {\displaystyle {\overline {s}}} typically denotes the complex conjugate of s {\displaystyle s} . But more generally, as with semilinear maps for example, s ¯ {\displaystyle {\overline {s}}} could be the image of s {\displaystyle s} under some distinguished automorphism of F . {\displaystyle \mathbb {F} .} Along with additivity, this property is assumed in the definition of an antilinear map. It is also assumed that one of the two coordinates of a sesquilinear form has this property (such as the inner product of a Hilbert space).
All of the above definitions can be generalized by replacing the condition f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} with f ( r x ) = | r | f ( x ) , {\displaystyle f(rx)=|r|f(x),} in which case that definition is prefixed with the word "absolute" or "absolutely."
For example:
Absolute homogeneity: f ( s x ) = | s | f ( x ) {\displaystyle f(sx)=|s|f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} This property is used in the definition of a seminorm and a norm.
If k {\displaystyle k} is a fixed real number then the above definitions can be further generalized by replacing the condition f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} with f ( r x ) = r k f ( x ) {\displaystyle f(rx)=r^{k}f(x)} (and similarly, by replacing f ( r x ) = | r | f ( x ) {\displaystyle f(rx)=|r|f(x)} with f ( r x ) = | r | k f ( x ) {\displaystyle f(rx)=|r|^{k}f(x)} for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree k {\displaystyle k} " (where in particular, all of the above definitions are "of degree 1 {\displaystyle 1} "). For instance:
Real homogeneity of degree k {\displaystyle k} : f ( r x ) = r k f ( x ) {\displaystyle f(rx)=r^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all real r . {\displaystyle r.}
Homogeneity of degree k {\displaystyle k} : f ( s x ) = s k f ( x ) {\displaystyle f(sx)=s^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .}
Absolute real homogeneity of degree k {\displaystyle k} : f ( r x ) = | r | k f ( x ) {\displaystyle f(rx)=|r|^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all real r . {\displaystyle r.}
Absolute homogeneity of degree k {\displaystyle k} : f ( s x ) = | s | k f ( x ) {\displaystyle f(sx)=|s|^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .}
A nonzero continuous function that is homogeneous of degree k {\displaystyle k} on R n ∖ { 0 } {\displaystyle \mathbb {R} ^{n}\backslash \lbrace 0\rbrace } extends continuously to R n {\displaystyle \mathbb {R} ^{n}} if and only if k > 0. {\displaystyle k>0.}
== See also ==
Homogeneous space
Triangle center function – Point in a triangle that can be seen as its middle under some criteria
== Notes ==
== References ==
== Sources ==
Blatter, Christian (1979). "20. Mehrdimensionale Differentialrechnung, Aufgaben, 1.". Analysis II (in German) (2nd ed.). Springer Verlag. p. 188. ISBN 3-540-09484-9.
Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser. ISBN 978-0-8176-4998-2. OCLC 710154895.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
== External links ==
"Homogeneous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Eric Weisstein. "Euler's Homogeneous Function Theorem". MathWorld.
Wikipedia/Homogeneous_function
In mathematics, a piecewise linear or segmented function is a real-valued function of a real variable, whose graph is composed of straight-line segments.
== Definition ==
A piecewise linear function is a function defined on a (possibly unbounded) interval of real numbers, such that there is a collection of intervals on each of which the function is an affine function. (Thus "piecewise linear" is actually defined to mean "piecewise affine".) If the domain of the function is compact, there needs to be a finite collection of such intervals; if the domain is not compact, the collection may either be required to be finite or to be locally finite in the reals.
== Examples ==
The function defined by f ( x ) = { − x − 3 if x ≤ − 3 x + 3 if − 3 < x < 0 − 2 x + 3 if 0 ≤ x < 3 0.5 x − 4.5 if x ≥ 3 {\displaystyle f(x)={\begin{cases}-x-3&{\text{if }}x\leq -3\\x+3&{\text{if }}-3<x<0\\-2x+3&{\text{if }}0\leq x<3\\0.5x-4.5&{\text{if }}x\geq 3\end{cases}}} is piecewise linear with four pieces. Since the graph of an affine(*) function is a line, the graph of a piecewise linear function consists of line segments and rays. The x values (in the above example −3, 0, and 3) where the slope changes are typically called breakpoints, changepoints, threshold values or knots. As in many applications, this function is also continuous. The graph of a continuous piecewise linear function on a compact interval is a polygonal chain. (*) A linear function satisfies by definition f ( λ x ) = λ f ( x ) {\displaystyle f(\lambda x)=\lambda f(x)} and therefore in particular f ( 0 ) = 0 {\displaystyle f(0)=0} ; functions whose graph is a straight line are affine rather than linear. There are other examples of piecewise linear functions:
Absolute value
Sawtooth function
Floor function
Step function, a function composed of constant sub-functions, so also called a piecewise constant function
Boxcar function
Heaviside step function
Sign function
Triangular function
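The four-piece example above can be evaluated directly, and its continuity at the breakpoints −3, 0 and 3 checked numerically; the following is a minimal sketch using numpy's piecewise evaluation:

```python
import numpy as np

# Evaluate the four-piece example function from above and check that the
# adjacent pieces agree at each breakpoint (i.e. the function is continuous).
def f(x):
    x = np.asarray(x, dtype=float)
    conds = [x <= -3, (-3 < x) & (x < 0), (0 <= x) & (x < 3), x >= 3]
    funcs = [lambda x: -x - 3, lambda x: x + 3,
             lambda x: -2 * x + 3, lambda x: 0.5 * x - 4.5]
    return np.piecewise(x, conds, funcs)

for b in (-3.0, 0.0, 3.0):
    left, right = f(b - 1e-9), f(b + 1e-9)
    assert abs(left - right) < 1e-6   # pieces agree at each breakpoint
print(f([-4, -1, 1, 4]))              # -> [ 1.   2.   1.  -2.5]
```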
== Fitting to a curve ==
An approximation to a known curve can be found by sampling the curve and interpolating linearly between the points. An algorithm for computing the most significant points subject to a given error tolerance has been published.
== Fitting to data ==
If partitions, and then breakpoints, are already known, linear regression can be performed independently on these partitions. However, continuity is not preserved in that case, and also there is no unique reference model underlying the observed data. A stable algorithm for this case has been derived. If partitions are not known, the residual sum of squares can be used to choose optimal separation points. However, efficient computation and joint estimation of all model parameters (including the breakpoints) may be obtained by an iterative procedure currently implemented in the package segmented for the R language. A variant of decision tree learning called model trees learns piecewise linear functions.
== Generalizations ==
The notion of a piecewise linear function makes sense in several different contexts. Piecewise linear functions may be defined on n-dimensional Euclidean space, or more generally any vector space or affine space, as well as on piecewise linear manifolds and simplicial complexes (see simplicial map). In each case, the function may be real-valued, or it may take values from a vector space, an affine space, a piecewise linear manifold, or a simplicial complex. (In these contexts, the term "linear" does not refer solely to linear transformations, but to more general affine linear functions.) In dimensions higher than one, it is common to require the domain of each piece to be a polygon or polytope. This guarantees that the graph of the function will be composed of polygonal or polytopal pieces. Splines generalize piecewise linear functions to higher-order polynomials, which are in turn contained in the category of piecewise-differentiable functions, PDIFF.
== Specializations ==
Important sub-classes of piecewise linear functions include the continuous piecewise linear functions and the convex piecewise linear functions. In general, for every n-dimensional continuous piecewise linear function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } , there is a Π ∈ P ( P ( R n + 1 ) ) {\displaystyle \Pi \in {\mathcal {P}}({\mathcal {P}}(\mathbb {R} ^{n+1}))} such that f ( x → ) = min Σ ∈ Π max ( a → , b ) ∈ Σ a → ⋅ x → + b . {\displaystyle f({\vec {x}})=\min _{\Sigma \in \Pi }\max _{({\vec {a}},b)\in \Sigma }{\vec {a}}\cdot {\vec {x}}+b.} If f {\displaystyle f} is convex and continuous, then there is a Σ ∈ P ( R n + 1 ) {\displaystyle \Sigma \in {\mathcal {P}}(\mathbb {R} ^{n+1})} such that f ( x → ) = max ( a → , b ) ∈ Σ a → ⋅ x → + b . {\displaystyle f({\vec {x}})=\max _{({\vec {a}},b)\in \Sigma }{\vec {a}}\cdot {\vec {x}}+b.}
== Applications ==
In agriculture, piecewise regression analysis of measured data is used to detect the range over which growth factors affect the yield and the range over which the crop is not sensitive to changes in these factors. For example, such an analysis can show that at shallow watertables the yield declines, whereas at deeper (> 7 dm) watertables the yield is unaffected; here the two segments with the best fit are found using the method of least squares. Similarly, an analysis of yield against soil salinity can reveal that crop yields tolerate a soil salinity up to ECe = 8 dS/m (ECe is the electric conductivity of an extract of a saturated soil sample), while beyond that value the crop production reduces; such an analysis uses the method of partial regression to find the longest range of "no effect", i.e. where the line is horizontal. In that approach the two segments need not join at the same point, and the method of least squares is used only for the second segment.
== See also ==
Linear interpolation
Spline interpolation
Tropical geometry
Polygonal chain
== Further reading ==
Apps, P., Long, N., & Rees, R. (2014). Optimal piecewise linear income taxation. Journal of Public Economic Theory, 16(4), 523–545.
== References ==
Wikipedia/Piecewise_linear_function
The governing equations of a mathematical model describe how the values of the unknown variables (i.e. the dependent variables) change when one or more of the known (i.e. independent) variables change. Physical systems can be modeled phenomenologically at various levels of sophistication, with each level capturing a different degree of detail about the system. A governing equation represents the most detailed and fundamental phenomenological model currently available for a given system. For example, at the coarsest level, a beam is just a 1D curve whose torque is a function of local curvature. At a more refined level, the beam is a 2D body whose stress-tensor is a function of the local strain-tensor, and the strain-tensor is a function of its deformation. The equations are then a PDE system. Note that both levels of sophistication are phenomenological, but one is deeper than the other. As another example, in fluid dynamics, the Navier-Stokes equations are more refined than the Euler equations. As the field progresses and our understanding of the underlying mechanisms deepens, governing equations may be replaced or refined by new, more accurate models that better represent the system's behavior. These new governing equations can then be considered the deepest level of phenomenological model at that point in time.
== Mass balance ==
A mass balance, also called a material balance, is an application of conservation of mass to the analysis of physical systems. It is the simplest governing equation, and it is simply a budget (balance calculation) over the quantity in question: the rate of accumulation of mass in a system equals the rate at which mass enters minus the rate at which mass leaves.
== Differential equation ==
=== Physics ===
The governing equations in classical physics that are taught at universities are listed below.
=== Classical continuum mechanics ===
The basic equations in classical continuum mechanics are all balance equations, and as such each of them contains a time-derivative term which calculates how much the dependent variable changes with time. For an isolated, frictionless / inviscid system the first four equations are the familiar conservation equations in classical mechanics. Darcy's law of groundwater flow has the form of a volumetric flux caused by a pressure gradient. A flux in classical mechanics is normally not a governing equation, but usually a defining equation for transport properties. Darcy's law was originally established as an empirical equation, but was later shown to be derivable as an approximation of the Navier-Stokes equation combined with an empirical composite friction force term. This explains the duality in Darcy's law as a governing equation and a defining equation for absolute permeability. The non-linearity of the material derivative in balance equations in general, and the complexities of Cauchy's momentum equation and the Navier-Stokes equation, have invited the establishment of simpler approximations. Some examples of governing differential equations in classical continuum mechanics are the Navier-Stokes equations, Cauchy's momentum equation, and Darcy's law, discussed above.
=== Biology ===
A famous example of governing differential equations within biology is the Lotka–Volterra equations, a prey–predator model.
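A minimal numerical sketch of the Lotka–Volterra system mentioned above; the equations are the classical prey–predator form, while the parameter values, initial populations, and the forward-Euler step are illustrative choices only:

```python
# Lotka-Volterra prey-predator system:
#   dx/dt = alpha*x - beta*x*y   (prey)
#   dy/dt = delta*x*y - gamma*y  (predator)
# Integrated with a simple forward-Euler step; all values are illustrative.

alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4
x, y = 10.0, 5.0
dt = 0.001
for _ in range(int(30 / dt)):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy
print(f"prey = {x:.2f}, predators = {y:.2f} after 30 time units")
```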
== Sequence of states ==
A governing equation may also be a state equation, an equation describing the state of the system, and thus actually be a constitutive equation that has "stepped up the ranks" because the model in question was not meant to include a time-dependent term in the equation. This is the case for a model of an oil production plant which on average operates in a steady state mode. Results from one thermodynamic equilibrium calculation are input data to the next equilibrium calculation together with some new state parameters, and so on. In this case, the algorithm and the sequence of input data form a chain of actions, or calculations, that describes the change of states from the first state (based solely on input data) to the last state that finally comes out of the calculation sequence.
== See also ==
Constitutive equation
Mass balance
Master equation
Mathematical model
Primitive equations
== References ==
Wikipedia/Governing_equation
A Malthusian growth model, sometimes called a simple exponential growth model, is essentially exponential growth based on the idea that the rate at which a population grows is proportional to its current size. The model is named after Thomas Robert Malthus, who wrote An Essay on the Principle of Population (1798), one of the earliest and most influential books on population. Malthusian models have the following form: P ( t ) = P 0 e r t {\displaystyle P(t)=P_{0}e^{rt}} where
P0 = P(0) is the initial population size,
r = the population growth rate, which Ronald Fisher called the Malthusian parameter of population growth in The Genetical Theory of Natural Selection, and Alfred J. Lotka called the intrinsic rate of increase,
t = time.
The model can also be written in the form of a differential equation: d P d t = r P {\displaystyle {\frac {dP}{dt}}=rP} with initial condition P(0) = P0. This model is often referred to as the exponential law. It is widely regarded in the field of population ecology as the first principle of population dynamics, with Malthus as the founder. The exponential law is therefore also sometimes referred to as the Malthusian law. It is now a widely accepted view that Malthusian growth plays a role in ecology analogous to that of Newton's first law of motion in physics. Malthus wrote that all life forms, including humans, have a propensity to exponential population growth when resources are abundant but that actual growth is limited by available resources: "Through the animal and vegetable kingdoms, nature has scattered the seeds of life abroad with the most profuse and liberal hand. ... The germs of existence contained in this spot of earth, with ample food, and ample room to expand in, would fill millions of worlds in the course of a few thousand years. Necessity, that imperious all pervading law of nature, restrains them within the prescribed bounds. The race of plants, and the race of animals shrink under this great restrictive law. And the race of man cannot, by any efforts of reason, escape from it. Among plants and animals its effects are waste of seed, sickness, and premature death. Among mankind, misery and vice." A model of population growth bounded by resource limitations was developed by Pierre François Verhulst in 1838, after he had read Malthus' essay. Verhulst named the model a logistic function.
== See also ==
Logistic function – the mathematical model used in population dynamics that adjusts growth rate based on how close it is to the maximum a system can support
Albert Allen Bartlett – a leading proponent of the Malthusian growth model
Exogenous growth model – related growth model from economics
Growth theory – related ideas from economics
Human overpopulation
Irruptive growth – an extension of the Malthusian model accounting for population explosions and crashes
Malthusian catastrophe
Neo-malthusianism
The Genetical Theory of Natural Selection
== References ==
== External links ==
Malthusian Growth Model, Steve McKelvey, Department of Mathematics, Saint Olaf College, Northfield, Minnesota
Logistic Model, Steve McKelvey, Department of Mathematics, Saint Olaf College, Northfield, Minnesota
Laws Of Population Ecology, Dr. Paul D. Haemig
On principles, laws and theory of population ecology, Professor Alan Berryman (entomology), Washington State University
Introduction to Social Macrodynamics, Professor Andrey Korotayev
Ecological Orbits, Lev Ginzburg and Mark Colyvan
Wikipedia/Malthusian_growth_model
"All models are wrong" is a common aphorism and anapodoton in statistics. It is often expanded as "All models are wrong, but some are useful". The aphorism acknowledges that statistical models always fall short of the complexities of reality but can still be useful nonetheless. The aphorism is generally attributed to George E. P. Box, a British statistician, although the underlying concept predates Box's writings. == History == The phrase "all models are wrong" was attributed to George Box who used the phrase in a 1976 paper to refer to the limitations of models, arguing that while no model is ever completely accurate, simpler models can still provide valuable insights if applied judiciously.: 792  In their 1983 book on generalized linear models, Peter McCullagh and John Nelder stated that while modeling in science is a creative process, some models are better than others, even though none can claim eternal truth. In 1996, an Applied Statistician's Creed was proposed by M.R. Nester, which incorporated the aphorism as a central tenet. Although the aphorism is most commonly associated with George Box, the underlying idea has been historically expressed by various thinkers in the past. Alfred Korzybski noted in 1933, "A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness." In 1939, Walter Shewhart discussed the impossibility of constructing a model that fully characterizes a state of statistical control, noting that no model can exactly represent any specific characteristic of such a state. John von Neumann, in 1947, remarked that "truth is much too complicated to allow anything but approximations." == Discussions == Box used the aphorism again in 1979, where he expanded on the idea by discussing how models serve as useful approximations, despite failing to perfectly describe empirical phenomena. He reiterated this sentiment in his later works, where he discussed how models should be judged based on their utility rather than their absolute correctness. David Cox, in a 1995 commentary, argued that stating all models are wrong is unhelpful, as models by their nature simplify reality. He emphasized that statistical models, like other scientific models, aim to capture important aspects of systems through idealized representations. In their 2002 book on statistical model selection, Burnham and Anderson reiterated Box’s statement, noting that while models are simplifications of reality, they vary in usefulness, from highly useful to essentially useless. J. Michael Steele used the analogy of city maps to explain that models, like maps, serve practical purposes despite their limitations, emphasizing that certain models, though simplified, are not necessarily wrong. In response, Andrew Gelman acknowledged Steele’s point but defended the usefulness of the aphorism, particularly in drawing attention to the inherent imperfections of models. Philosopher Peter Truran, in a 2013 essay, discussed how seemingly incompatible models can make accurate predictions by representing different aspects of the same phenomenon, illustrating the point with an example of two observers viewing a cylindrical object from different angles. In 2014, David Hand reiterated that models are meant to aid in understanding or decision-making about the real world, a point emphasized by Box’s famous remark. 
== See also ==
Anscombe's quartet – Four data sets with the same descriptive statistics, yet very different distributions
Bonini's paradox – As a model of a complex system becomes more complete, it becomes less understandable
Lie-to-children – Teaching a complex subject via simpler models
Map–territory relation – Relationship between an object and a representation of that object
Pragmatism – Philosophical tradition
Reification (fallacy) – Fallacy of treating an abstraction as if it were a real thing
Scientific modelling – Scientific activity that produces models
Statistical model – Type of mathematical model
Statistical model validation – Evaluating whether a chosen statistical model is appropriate or not
Verisimilitude – Resemblance to reality
== Notes ==
== References ==
== Further reading ==
Anderson, C. (23 June 2008), "The end of theory", Wired
Box, G. E. P. (1999), "Statistics as a catalyst to learning by scientific method Part II—A discussion", Journal of Quality Technology, 31: 16–29, doi:10.1080/00224065.1999.11979890
Enderling, H.; Wolkenhauer, O. (2021), "Are all models wrong?", Computational and Systems Oncology, 1 (1): e1008, doi:10.1002/cso2.1008, PMC 7880041, PMID 33585835
Saltelli, A.; Funtowicz, S. (Winter 2014), "When all models are wrong", Issues in Science and Technology, 30
== External links ==
"All Models are Right, Most are Useless" – Andrew Gelman blog
All models are wrong – Peter Coles blog
Wikipedia/All_models_are_wrong
In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models, as opposed to black box models, where no model form is assumed, and white box models, which are purely theoretical. Some models assume a special form such as a linear regression or neural network. These have special analysis methods. In particular, linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use.
== Model form ==
The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with differing theoretical structures need to be evaluated individually, possibly using simulated annealing or genetic algorithms. Within a particular model structure, parameters or variable parameter relations may need to be found. For a particular structure it is assumed that the data consists of sets of feed vectors f, product vectors p, and operating condition vectors c. Typically c will contain values extracted from f, as well as other values. In many cases a model can be converted to a function of the form: m(f,p,q) where the vector function m gives the errors between the data p, and the model predictions. The vector q gives some variable parameters that are the model's unknown parts. The parameters q vary with the operating conditions c in a manner to be determined. This relation can be specified as q = Ac where A is a matrix of unknown coefficients, and c, as in linear regression, includes a constant term and possibly transformed values of the original operating conditions to obtain non-linear relations between the original operating conditions and q. It is then a matter of selecting which terms in A are non-zero and assigning their values. The model completion becomes an optimization problem to determine the non-zero values in A that minimize the error terms m(f,p,Ac) over the data.
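A toy illustration of the structure just described. Everything here is invented for the sketch: the inner model p = q·f, the scalar operating condition c, the coefficient values, and the noise level; the two-step estimation shown anticipates the model completion methods discussed in the next section:

```python
import numpy as np

# Grey-box sketch: the product is p = q*f (so m(f, p, q) = p - q*f), and the
# unknown parameter q depends linearly on a scalar operating condition c,
# i.e. q = A c with c = [1, c_scalar].  All values below are invented.
rng = np.random.default_rng(1)
a0, a1 = 2.0, 0.5                      # "true" coefficients of A

datasets = []
for _ in range(50):
    c = rng.uniform(0.0, 10.0)         # operating condition for this run
    q = a0 + a1 * c
    f = rng.uniform(1.0, 5.0, size=20) # feed measurements
    p = q * f + rng.normal(0, 0.1, 20) # noisy product measurements
    datasets.append((c, f, p))

# Step 1: estimate q separately for each data set (least squares on p = q*f).
cs = np.array([c for c, _, _ in datasets])
qs = np.array([(f @ p) / (f @ f) for _, f, p in datasets])

# Step 2: linear regression of the estimated q against [1, c] recovers A.
C = np.column_stack([np.ones_like(cs), cs])
A_hat, *_ = np.linalg.lstsq(C, qs, rcond=None)
print(f"estimated A = {A_hat}")        # close to [2.0, 0.5]
```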
== Model completion ==
Once a selection of non-zero values is made, the remaining coefficients in A can be determined by minimizing m(f,p,Ac) over the data with respect to the non-zero values in A, typically by non-linear least squares. Selection of the non-zero terms can be done by optimization methods such as simulated annealing and evolutionary algorithms. Also, non-linear least squares can provide accuracy estimates for the elements of A that can be used to determine if they are significantly different from zero, thus providing a method of term selection. It is sometimes possible to calculate values of q for each data set, directly or by non-linear least squares. Then the more efficient linear regression can be used to predict q using c, thus selecting the non-zero values in A and estimating their values. Once the non-zero values are located, non-linear least squares can be used on the original model m(f,p,Ac) to refine these values. A third method is model inversion, which converts the non-linear m(f,p,Ac) into an approximate linear form in the elements of A, which can be examined using efficient term selection and evaluation of the linear regression. For the simple case of a single q value (q = aTc) and an estimate q* of q, putting dq = aTc − q* gives m(f,p,aTc) = m(f,p,q* + dq) ≈ m(f,p,q*) + dq m’(f,p,q*) = m(f,p,q*) + (aTc − q*) m’(f,p,q*) so that aT now appears linearly with all other terms known, and can therefore be analyzed by linear regression techniques. For more than one parameter the method extends in a direct manner. After checking that the model has been improved, this process can be repeated until convergence. This approach has the advantages that it does not need the parameters q to be determinable from an individual data set, and that the linear regression is on the original error terms.
== Model validation ==
Where sufficient data is available, division of the data into a separate model construction set and one or two evaluation sets is recommended. This can be repeated using multiple selections of the construction set and the resulting models averaged or used to evaluate prediction differences. A statistical test such as chi-squared on the residuals is not particularly useful. The chi-squared test requires known standard deviations which are seldom available, and failed tests give no indication of how to improve the model. There are a range of methods to compare both nested and non-nested models. These include comparison of model predictions with repeated data. An attempt to predict the residuals m(f,p,Ac) with the operating conditions c using linear regression will show if the residuals can be predicted. Residuals that cannot be predicted offer little prospect of improving the model using the current operating conditions. Terms that do predict the residuals are prospective terms to incorporate into the model to improve its performance. The model inversion technique above can be used as a method of determining whether a model can be improved. In this case, selection of non-zero terms is not so important and linear prediction can be done using the significant eigenvectors of the regression matrix. The values in A determined in this manner need to be substituted into the non-linear model to assess improvements in the model errors. The absence of a significant improvement indicates the available data is not able to improve the current model form using the defined parameters. Extra parameters can be inserted into the model to make this test more comprehensive.
== See also ==
== References ==
Wikipedia/Grey_box_model
Mathematical models can project how infectious diseases progress to show the likely outcome of an epidemic (including in plants) and help inform public health and plant health interventions. Models use basic assumptions or collected statistics along with mathematics to find parameters for various infectious diseases and use those parameters to calculate the effects of different interventions, like mass vaccination programs. The modelling can help decide which intervention(s) to avoid and which to trial, or can predict future growth patterns, etc.
== History ==
The modelling of infectious diseases is a tool that has been used to study the mechanisms by which diseases spread, to predict the future course of an outbreak and to evaluate strategies to control an epidemic. The first scientist who systematically tried to quantify causes of death was John Graunt in his book Natural and Political Observations made upon the Bills of Mortality, in 1662. The bills he studied were listings of numbers and causes of deaths published weekly. Graunt's analysis of causes of death is considered the beginning of the "theory of competing risks" which according to Daley and Gani is "a theory that is now well established among modern epidemiologists". The earliest account of mathematical modelling of spread of disease was carried out in 1760 by Daniel Bernoulli. Trained as a physician, Bernoulli created a mathematical model to defend the practice of inoculating against smallpox. The calculations from this model showed that universal inoculation against smallpox would increase the life expectancy from 26 years 7 months to 29 years 9 months. Daniel Bernoulli's work preceded the modern understanding of germ theory. In the early 20th century, William Hamer and Ronald Ross applied the law of mass action to explain epidemic behaviour. The 1920s saw the emergence of compartmental models. The Kermack–McKendrick epidemic model (1927) and the Reed–Frost epidemic model (1928) both describe the relationship between susceptible, infected and immune individuals in a population. The Kermack–McKendrick epidemic model was successful in predicting the behavior of outbreaks very similar to that observed in many recorded epidemics. Recently, agent-based models (ABMs) have been used in place of simpler compartmental models. For example, epidemiological ABMs have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs, in spite of their complexity and high computational demands, have been criticized for their simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures in cases when ABMs are accurately calibrated.
== Assumptions ==
Models are only as good as the assumptions on which they are based. If a model makes predictions that are out of line with observed results and the mathematics is correct, the initial assumptions must change to make the model useful. Two common assumptions are:
Rectangular and stationary age distribution, i.e., everybody in the population lives to age L and then dies, and for each age (up to L) there is the same number of people in the population. This is often well-justified for developed countries where there is a low infant mortality and much of the population lives to the life expectancy.
Homogeneous mixing of the population, i.e., individuals of the population under scrutiny assort and make contact at random and do not mix mostly in a smaller subgroup.
This assumption is rarely justified because social structure is widespread. For example, most people in London only make contact with other Londoners. Further, within London there are smaller subgroups, such as the Turkish community or teenagers (just to give two examples), who mix with each other more than people outside their group. However, homogeneous mixing is a standard assumption to make the mathematics tractable.
== Types of epidemic models ==
=== Stochastic ===
"Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. Stochastic models depend on the chance variations in risk of exposure, disease and other illness dynamics. Statistical agent-level disease dissemination in small or large populations can be determined by stochastic methods.
=== Deterministic ===
When dealing with large populations, as in the case of tuberculosis, deterministic or compartmental mathematical models are often used. In a deterministic model, individuals in the population are assigned to different subgroups or compartments, each representing a specific stage of the epidemic. The transition rates from one class to another are mathematically expressed as derivatives, hence the model is formulated using differential equations. While building such models, it must be assumed that the population size in a compartment is differentiable with respect to time and that the epidemic process is deterministic. In other words, the changes in population of a compartment can be calculated using only the history that was used to develop the model.
=== Kinetic and mean-field ===
Formally, these models belong to the class of deterministic models; however, they incorporate heterogeneous social features into the dynamics, such as individuals' levels of sociality, opinion, wealth, or geographic location, which profoundly influence disease propagation. These models are typically represented by partial differential equations, in contrast to classical models described as systems of ordinary differential equations. Following the derivation principles of kinetic theory, they provide a more rigorous description of epidemic dynamics by starting from agent-based interactions.
== Sub-exponential growth ==
A common explanation for the growth of epidemics holds that 1 person infects 2, those 2 infect 4, and so on, with the number of infected doubling every generation. It is analogous to a game of tag where 1 person tags 2, those 2 tag 4 others who've never been tagged, and so on. As this game progresses it becomes increasingly frenetic as the tagged run past the previously tagged to hunt down those who have never been tagged. Thus this model of an epidemic leads to a curve that grows exponentially until it crashes to zero as all the population have been infected; that is, it exhibits no herd immunity and no peak followed by a gradual decline, both of which are seen in real epidemics.
== Epidemic models on networks ==
Epidemics can be modeled as diseases spreading over networks of contact between people. Such a network can be represented mathematically with a graph and is called the contact network. Every node in a contact network is a representation of an individual and each link (edge) between a pair of nodes represents the contact between them. Links in the contact networks may be used to transmit the disease between the individuals and each disease has its own dynamics on top of its contact network.
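A small numerical sketch of a contact network follows. The population size and edge probability are arbitrary choices, and the degree-based expression for R0 evaluated here is the one derived for transmission networks just below:

```python
import numpy as np

# Build an Erdos-Renyi contact network of n individuals, each pair connected
# with probability p, then estimate the mean degree <k>, the second moment
# <k^2>, and the average excess degree <k^2>/<k> - 1, which (see the formula
# below) gives R0 on a locally tree-like transmission network.

rng = np.random.default_rng(42)
n, p = 2000, 0.004                  # ~8 contacts per person on average

upper = np.triu(rng.random((n, n)) < p, k=1)
adj = upper | upper.T               # symmetrize: contact is mutual

k = adj.sum(axis=1)                 # degree of each node
k_mean = k.mean()
k2_mean = (k**2).mean()
R0_net = k2_mean / k_mean - 1

# The degree distribution here is approximately Poisson, for which
# <k^2> - <k>^2 = <k>, so the excess-degree formula reduces to R0 ~ <k>.
print(f"<k> = {k_mean:.2f}, <k^2> = {k2_mean:.2f}, R0 = {R0_net:.2f}")
```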
The combination of disease dynamics under the influence of interventions, if any, on a contact network may be modeled with another network, known as a transmission network. In a transmission network, all the links are responsible for transmitting the disease. If such a network is a locally tree-like network, meaning that any local neighborhood in such a network takes the form of a tree, then the basic reproduction number can be written in terms of the average excess degree of the transmission network such that: R 0 = ⟨ k 2 ⟩ ⟨ k ⟩ − 1 , {\displaystyle R_{0}={\frac {\langle k^{2}\rangle }{\langle k\rangle }}-1,} where ⟨ k ⟩ {\displaystyle {\langle k\rangle }} is the mean-degree (average degree) of the network and ⟨ k 2 ⟩ {\displaystyle {\langle k^{2}\rangle }} is the second moment of the transmission network degree distribution. It is, however, not always straightforward to find the transmission network out of the contact network and the disease dynamics. For example, if a contact network can be approximated with an Erdős–Rényi graph with a Poissonian degree distribution, and the disease spreading parameters are as defined in the example above, such that β {\displaystyle \beta } is the transmission rate per person and the disease has a mean infectious period of 1 γ {\displaystyle {\dfrac {1}{\gamma }}} , then the basic reproduction number is R 0 = β γ ⟨ k ⟩ {\displaystyle R_{0}={\dfrac {\beta }{\gamma }}{\langle k\rangle }} since ⟨ k 2 ⟩ − ⟨ k ⟩ 2 = ⟨ k ⟩ {\displaystyle {\langle k^{2}\rangle }-{\langle k\rangle }^{2}={\langle k\rangle }} for a Poisson distribution. == Reproduction number == The basic reproduction number (denoted by R0) is a measure of how transferable a disease is. It is the average number of people that a single infectious person will infect over the course of their infection. This quantity determines whether the infection will spread exponentially, die out, or remain constant: if R0 > 1, then each person on average infects more than one other person so the disease will spread; if R0 < 1, then each person infects fewer than one person on average so the disease will die out; and if R0 = 1, then each person will infect on average exactly one other person, so the disease will become endemic: it will move throughout the population but not increase or decrease. == Endemic steady state == An infectious disease is said to be endemic when it can be sustained in a population without the need for external inputs. This means that, on average, each infected person is infecting exactly one other person (any more and the number of people infected will grow exponentially and there will be an epidemic, any less and the disease will die out). In mathematical terms, that is: R 0 S = 1. {\displaystyle \ R_{0}S\ =1.} The basic reproduction number (R0) of the disease, assuming everyone is susceptible, multiplied by the proportion of the population that is actually susceptible (S) must be one (since those who are not susceptible do not feature in our calculations as they cannot contract the disease). Notice that this relation means that for a disease to be in the endemic steady state, the higher the basic reproduction number, the lower the proportion of the population susceptible must be, and vice versa. This expression has limitations concerning the proportion susceptible: e.g., R0 = 0.5 would imply that S has to be 2; however, this proportion exceeds the population size.
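As a minimal illustrative sketch (not from the article, with a synthetic degree sequence), the degree-moment formula above for R0 on a locally tree-like transmission network can be evaluated numerically; the Poisson case reproduces the Erdős–Rényi simplification just noted.

import random

def r0_from_degrees(degrees):
    # R0 = <k^2>/<k> - 1 on a locally tree-like transmission network,
    # where <k> and <k^2> are the first two moments of the degree distribution.
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k2 / k1 - 1

random.seed(1)
# Approximately Poisson(5)-distributed degrees, as in an Erdős–Rényi network;
# here <k^2> - <k>^2 = <k>, so R0 reduces to the mean degree.
degrees = [sum(random.random() < 0.01 for _ in range(500)) for _ in range(10000)]
print(round(r0_from_degrees(degrees), 2))  # close to the mean degree, about 5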
Assume the rectangular stationary age distribution and let also the ages of infection have the same distribution for each birth year. Let the average age of infection be A, for instance when individuals younger than A are susceptible and those older than A are immune (or infectious). Then it can be shown by an easy argument that the proportion of the population that is susceptible is given by: S = A L . {\displaystyle S={\frac {A}{L}}.} We reiterate that L is the age at which in this model every individual is assumed to die. But the mathematical definition of the endemic steady state can be rearranged to give: S = 1 R 0 . {\displaystyle S={\frac {1}{R_{0}}}.} Therefore, due to the transitive property: 1 R 0 = A L ⇒ R 0 = L A . {\displaystyle {\frac {1}{R_{0}}}={\frac {A}{L}}\Rightarrow R_{0}={\frac {L}{A}}.} This provides a simple way to estimate the parameter R0 using easily available data. For a population with an exponential age distribution, R 0 = 1 + L A . {\displaystyle R_{0}=1+{\frac {L}{A}}.} This allows the basic reproduction number of a disease to be estimated given A and L in either type of population distribution. == Compartmental models in epidemiology == Compartmental models are formulated as Markov chains. A classic compartmental model in epidemiology is the SIR model, which may be used as a simple model for modelling epidemics. Multiple other types of compartmental models are also employed. === The SIR model === In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S ( t ) {\displaystyle S(t)} ; infected, I ( t ) {\displaystyle I(t)} ; and recovered, R ( t ) {\displaystyle R(t)} . The compartments used for this model consist of three classes: S ( t ) {\displaystyle S(t)} represents those of the population susceptible to the disease. I ( t ) {\displaystyle I(t)} denotes the individuals of the population who have been infected with the disease and are capable of spreading the disease to those in the susceptible category. R ( t ) {\displaystyle R(t)} is the compartment used for the individuals of the population who have been infected and then removed from the disease, either due to immunization or due to death. Those in this category are not able to be infected again or to transmit the infection to others. === Other compartmental models === There are many modifications of the SIR model, including those that include births and deaths, where upon recovery there is no immunity (SIS model), where immunity lasts only for a short period of time (SIRS), where there is a latent period of the disease where the person is not infectious (SEIS and SEIR), and where infants can be born with immunity (MSIR). == Infectious disease dynamics == Mathematical models need to integrate the increasing volume of data being generated on host-pathogen interactions. Many theoretical studies of the population dynamics, structure and evolution of infectious diseases of plants and animals, including humans, are concerned with this problem.
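A minimal numerical sketch of the SIR model just described (the transmission rate beta and recovery rate gamma are illustrative assumptions, not values from the article):

def simulate_sir(s=0.99, i=0.01, r=0.0, beta=0.3, gamma=0.1, dt=0.1, days=200):
    # Forward-Euler integration of the Kermack–McKendrick equations
    # dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I,
    # with S, I, R expressed as population fractions.
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
    return s, i, r

s, i, r = simulate_sir()
print(round(s, 3), round(i, 5), round(r, 3))
# With beta/gamma = 3 (> 1), a large epidemic leaves only a small susceptible
# fraction behind, consistent with the R0 threshold discussed above.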
Research topics include: antigenic shift; epidemiological networks; evolution and spread of resistance; immuno-epidemiology; intra-host dynamics; pandemics; pathogen population genetics; persistence of pathogens within hosts; phylodynamics; role and identification of infection reservoirs; role of host genetic factors; spatial epidemiology; statistical and mathematical tools and innovations; strain structure and interactions; transmission, spread and control of infection; and virulence. == Mathematics of mass vaccination == If the proportion of the population that is immune exceeds the herd immunity level for the disease, then the disease can no longer persist in the population and its transmission dies out. Thus, a disease can be eliminated from a population if enough individuals are immune due to either vaccination or recovery from prior exposure to disease. Examples include smallpox eradication, with the last wild case in 1977, and certification of the eradication of indigenous transmission of 2 of the 3 types of wild poliovirus (type 2 in 2015, after the last reported case in 1999, and type 3 in 2019, after the last reported case in 2012). The herd immunity level will be denoted q. Recall that, for a stable state: R 0 ⋅ S = 1. {\displaystyle R_{0}\cdot S=1.} In turn, R 0 = N S = μ N E ⁡ ( T L ) μ N E ⁡ [ min ( T L , T S ) ] = E ⁡ ( T L ) E ⁡ [ min ( T L , T S ) ] , {\displaystyle R_{0}={\frac {N}{S}}={\frac {\mu N\operatorname {E} (T_{L})}{\mu N\operatorname {E} [\min(T_{L},T_{S})]}}={\frac {\operatorname {E} (T_{L})}{\operatorname {E} [\min(T_{L},T_{S})]}},} which is approximately: E ⁡ ( T L ) E ⁡ ( T S ) = 1 + λ μ = β N v . {\displaystyle {\frac {\operatorname {\operatorname {E} } (T_{L})}{\operatorname {\operatorname {E} } (T_{S})}}=1+{\frac {\lambda }{\mu }}={\frac {\beta N}{v}}.} S will be (1 − q), since q is the proportion of the population that is immune and q + S must equal one (since in this simplified model, everyone is either susceptible or immune). Then: R 0 ⋅ ( 1 − q ) = 1 , 1 − q = 1 R 0 , q = 1 − 1 R 0 . {\displaystyle {\begin{aligned}&R_{0}\cdot (1-q)=1,\\[6pt]&1-q={\frac {1}{R_{0}}},\\[6pt]&q=1-{\frac {1}{R_{0}}}.\end{aligned}}} Remember that this is the threshold level. Transmission will only die out if the proportion of immune individuals exceeds this level as a result of a mass vaccination programme. We have just calculated the critical immunization threshold (denoted qc). It is the minimum proportion of the population that must be immunized at birth (or close to birth) in order for the infection to die out in the population. q c = 1 − 1 R 0 . {\displaystyle q_{c}=1-{\frac {1}{R_{0}}}.} Because the fraction of the final size of the population p that is never infected can be defined as: lim t → ∞ S ( t ) = e − ∫ 0 ∞ λ ( t ) d t = 1 − p . {\displaystyle \lim _{t\to \infty }S(t)=e^{-\int _{0}^{\infty }\lambda (t)\,dt}=1-p.} Hence, p = 1 − e − ∫ 0 ∞ β I ( t ) d t = 1 − e − R 0 p . {\displaystyle p=1-e^{-\int _{0}^{\infty }\beta I(t)\,dt}=1-e^{-R_{0}p}.} Solving for R 0 {\displaystyle R_{0}} , we obtain: R 0 = − ln ⁡ ( 1 − p ) p . {\displaystyle R_{0}={\frac {-\ln(1-p)}{p}}.} === When mass vaccination cannot exceed the herd immunity === If the vaccine used is insufficiently effective or the required coverage cannot be reached, the program may fail to exceed qc. Such a program will protect vaccinated individuals from disease, but may change the dynamics of transmission. Suppose that a proportion of the population q (where q < qc) is immunised at birth against an infection with R0 > 1.
The vaccination programme changes R0 to Rq where R q = R 0 ( 1 − q ) {\displaystyle R_{q}=R_{0}(1-q)} This change occurs simply because there are now fewer susceptibles in the population who can be infected. Rq is simply R0 minus those that would normally be infected but that cannot be now since they are immune. As a consequence of this lower basic reproduction number, the average age of infection A will also change to some new value Aq in those who have been left unvaccinated. Recall the relation that linked R0, A and L. Assuming that life expectancy has not changed, now: R q = L A q , {\displaystyle R_{q}={\frac {L}{A_{q}}},} A q = L R q = L R 0 ( 1 − q ) . {\displaystyle A_{q}={\frac {L}{R_{q}}}={\frac {L}{R_{0}(1-q)}}.} But R0 = L/A so: A q = L ( L / A ) ( 1 − q ) = A L L ( 1 − q ) = A 1 − q . {\displaystyle A_{q}={\frac {L}{(L/A)(1-q)}}={\frac {AL}{L(1-q)}}={\frac {A}{1-q}}.} Thus, the vaccination program may raise the average age of infection, and unvaccinated individuals will experience a reduced force of infection due to the presence of the vaccinated group. For a disease that leads to greater clinical severity in older populations, the unvaccinated proportion of the population may experience the disease relatively later in life than would occur in the absence of vaccine. === When mass vaccination exceeds the herd immunity === If a vaccination program causes the proportion of immune individuals in a population to exceed the critical threshold for a significant length of time, transmission of the infectious disease in that population will stop. If elimination occurs everywhere at the same time, then this can lead to eradication. Elimination: interruption of endemic transmission of an infectious disease, which occurs if each infected individual infects less than one other; it is achieved by maintaining vaccination coverage to keep the proportion of immune individuals above the critical immunization threshold. Eradication: elimination everywhere at the same time, such that the infectious agent dies out (for example, smallpox and rinderpest). == Reliability == Models have the advantage of examining multiple outcomes simultaneously, rather than making a single forecast. Models have shown varying degrees of reliability in past pandemics, such as SARS, SARS-CoV-2, Swine flu, MERS and Ebola. == See also == == References == == Sources == Barabási AL (2016). Network Science. Cambridge University Press. ISBN 978-1-107-07626-6. Brauer F, Castillo-Chavez C (2012). Mathematical Models in Population Biology and Epidemiology. Texts in Applied Mathematics. Vol. 40. doi:10.1007/978-1-4614-1686-9. ISBN 978-1-4614-1685-2. Daley DJ, Gani JM (1999). Epidemic Modelling: An Introduction. Cambridge University Press. ISBN 978-0-521-01467-0. Hamer WH (1929). Epidemiology, Old and New. Macmillan. hdl:2027/mdp.39015006657475. OCLC 609575950. Ross R (1910). The Prevention of Malaria. Dutton. hdl:2027/uc2.ark:/13960/t02z1ds0q. OCLC 610268760. == Further reading == == External links == Software Model-Builder: Interactive (GUI-based) software to build, simulate, and analyze ODE models. GLEaMviz Simulator: Enables simulation of emerging infectious diseases spreading across the world. STEM: Open source framework for Epidemiological Modeling available through the Eclipse Foundation. R package surveillance: Temporal and Spatio-Temporal Modeling and Monitoring of Epidemic Phenomena
Wikipedia/Mathematical_modelling_of_infectious_disease
A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with the equation f ( x ) = L 1 + e − k ( x − x 0 ) {\displaystyle f(x)={\frac {L}{1+e^{-k(x-x_{0})}}}} where L is the supremum of the values of the function, k is the logistic growth rate (the steepness of the curve), and x0 is the x value of the function's midpoint. The logistic function has domain the real numbers, the limit as x → − ∞ {\displaystyle x\to -\infty } is 0, and the limit as x → + ∞ {\displaystyle x\to +\infty } is L {\displaystyle L} . The standard logistic function, where L = 1 , k = 1 , x 0 = 0 {\displaystyle L=1,k=1,x_{0}=0} , has the equation f ( x ) = 1 1 + e − x {\displaystyle f(x)={\frac {1}{1+e^{-x}}}} and is sometimes simply called the sigmoid. It is also sometimes called the expit, being the inverse function of the logit. The logistic function finds applications in a range of fields, including biology (especially ecology), biomathematics, chemistry, demography, economics, geoscience, mathematical psychology, probability, sociology, political science, linguistics, statistics, and artificial neural networks. There are various generalizations, depending on the field. == History == The logistic function was introduced in a series of three papers by Pierre François Verhulst between 1838 and 1847, who devised it as a model of population growth by adjusting the exponential growth model, under the guidance of Adolphe Quetelet. Verhulst first devised the function in the mid-1830s, publishing a brief note in 1838, then presented an expanded analysis and named the function in 1844 (published 1845); the third paper adjusted the correction term in his model of Belgian population growth. The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth approaches the limit with an exponentially decaying gap, like the initial stage in reverse. Verhulst did not explain the choice of the term "logistic" (French: logistique), but it is presumably in contrast to the logarithmic curve, and by analogy with arithmetic and geometric. His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus "logistic growth" is presumably named by analogy, logistic being from Ancient Greek: λογιστικός, romanized: logistikós, a traditional division of Greek mathematics. As a word derived from ancient Greek mathematical terms, the name of this function is unrelated to the military and management term logistics, which is instead from French: logis "lodgings", though some believe the Greek term also influenced logistics; see Logistics § Origin for details. == Mathematical properties == The standard logistic function is the logistic function with parameters k = 1 {\displaystyle k=1} , x 0 = 0 {\displaystyle x_{0}=0} , L = 1 {\displaystyle L=1} , which yields f ( x ) = 1 1 + e − x = e x e x + 1 = e x / 2 e x / 2 + e − x / 2 . {\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}={\frac {e^{x/2}}{e^{x/2}+e^{-x/2}}}.} In practice, due to the nature of the exponential function e − x {\displaystyle e^{-x}} , it is often sufficient to compute the standard logistic function for x {\displaystyle x} over a small range of real numbers, such as a range contained in [−6, +6], as it quickly converges very close to its saturation values of 0 and 1. === Symmetries === The logistic function has the symmetry property that 1 − f ( x ) = f ( − x ) .
{\displaystyle 1-f(x)=f(-x).} This reflects that the growth from 0 when x {\displaystyle x} is small is symmetric with the decay of the gap to the limit (1) when x {\displaystyle x} is large. Further, x ↦ f ( x ) − 1 / 2 {\displaystyle x\mapsto f(x)-1/2} is an odd function. The sum of the logistic function and its reflection about the vertical axis, f ( − x ) {\displaystyle f(-x)} , is 1 1 + e − x + 1 1 + e − ( − x ) = e x e x + 1 + 1 e x + 1 = 1. {\displaystyle {\frac {1}{1+e^{-x}}}+{\frac {1}{1+e^{-(-x)}}}={\frac {e^{x}}{e^{x}+1}}+{\frac {1}{e^{x}+1}}=1.} The logistic function is thus rotationally symmetrical about the point (0, 1/2). === Inverse function === The logistic function is the inverse of the natural logit function logit ⁡ p = log ⁡ p 1 − p for 0 < p < 1 {\displaystyle \operatorname {logit} p=\log {\frac {p}{1-p}}\quad {\text{ for }}\,0<p<1} and so converts the logarithm of odds into a probability. The conversion from the log-likelihood ratio of two alternatives also takes the form of a logistic curve. === Hyperbolic tangent === The logistic function is an offset and scaled hyperbolic tangent function: f ( x ) = 1 2 + 1 2 tanh ⁡ ( x 2 ) , {\displaystyle f(x)={\frac {1}{2}}+{\frac {1}{2}}\tanh \left({\frac {x}{2}}\right),} or tanh ⁡ ( x ) = 2 f ( 2 x ) − 1. {\displaystyle \tanh(x)=2f(2x)-1.} This follows from tanh ⁡ ( x ) = e x − e − x e x + e − x = e x ⋅ ( 1 − e − 2 x ) e x ⋅ ( 1 + e − 2 x ) = f ( 2 x ) − e − 2 x 1 + e − 2 x = f ( 2 x ) − e − 2 x + 1 − 1 1 + e − 2 x = 2 f ( 2 x ) − 1. {\displaystyle {\begin{aligned}\tanh(x)&={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{x}\cdot \left(1-e^{-2x}\right)}{e^{x}\cdot \left(1+e^{-2x}\right)}}\\&=f(2x)-{\frac {e^{-2x}}{1+e^{-2x}}}=f(2x)-{\frac {e^{-2x}+1-1}{1+e^{-2x}}}=2f(2x)-1.\end{aligned}}} The hyperbolic-tangent relationship leads to another form for the logistic function's derivative: d d x f ( x ) = 1 4 sech 2 ⁡ ( x 2 ) , {\displaystyle {\frac {d}{dx}}f(x)={\frac {1}{4}}\operatorname {sech} ^{2}\left({\frac {x}{2}}\right),} which ties the logistic function into the logistic distribution. Geometrically, the hyperbolic tangent function is the hyperbolic angle on the unit hyperbola x 2 − y 2 = 1 {\displaystyle x^{2}-y^{2}=1} , which factors as ( x + y ) ( x − y ) = 1 {\displaystyle (x+y)(x-y)=1} , and thus has asymptotes the lines through the origin with slope − 1 {\displaystyle -1} and with slope 1 {\displaystyle 1} , and vertex at ( 1 , 0 ) {\displaystyle (1,0)} corresponding to the range and midpoint ( 0 {\displaystyle {0}} ) of tanh. Analogously, the logistic function can be viewed as the hyperbolic angle on the hyperbola x y − y 2 = 1 {\displaystyle xy-y^{2}=1} , which factors as y ( x − y ) = 1 {\displaystyle y(x-y)=1} , and thus has asymptotes the lines through the origin with slope 0 {\displaystyle 0} and with slope 1 {\displaystyle 1} , and vertex at ( 2 , 1 ) {\displaystyle (2,1)} , corresponding to the range and midpoint ( 1 / 2 {\displaystyle 1/2} ) of the logistic function. Parametrically, hyperbolic cosine and hyperbolic sine give coordinates on the unit hyperbola: ( ( e t + e − t ) / 2 , ( e t − e − t ) / 2 ) {\displaystyle \left((e^{t}+e^{-t})/2,(e^{t}-e^{-t})/2\right)} , with quotient the hyperbolic tangent. Similarly, ( e t / 2 + e − t / 2 , e t / 2 ) {\displaystyle {\bigl (}e^{t/2}+e^{-t/2},e^{t/2}{\bigr )}} parametrizes the hyperbola x y − y 2 = 1 {\displaystyle xy-y^{2}=1} , with quotient the logistic function.
These correspond to linear transformations (and rescaling the parametrization) of the hyperbola x y = 1 {\displaystyle xy=1} , with parametrization ( e − t , e t ) {\displaystyle (e^{-t},e^{t})} : the parametrization of the hyperbola for the logistic function corresponds to t / 2 {\displaystyle t/2} and the linear transformation ( 1 1 0 1 ) {\displaystyle {\bigl (}{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}{\bigr )}} , while the parametrization of the unit hyperbola (for the hyperbolic tangent) corresponds to the linear transformation 1 2 ( 1 1 − 1 1 ) {\displaystyle {\tfrac {1}{2}}{\bigl (}{\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}}{\bigr )}} . === Derivative === The standard logistic function has an easily calculated derivative. The derivative is known as the density of the logistic distribution: f ( x ) = 1 1 + e − x = e x 1 + e x , {\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}},} d d x f ( x ) = e x ⋅ ( 1 + e x ) − e x ⋅ e x ( 1 + e x ) 2 = e x ( 1 + e x ) 2 = ( e x 1 + e x ) ( 1 1 + e x ) = ( e x 1 + e x ) ( 1 − e x 1 + e x ) = f ( x ) ( 1 − f ( x ) ) {\displaystyle {\begin{aligned}{\frac {d}{dx}}f(x)&={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{{\left(1+e^{x}\right)}^{2}}}\\[1ex]&={\frac {e^{x}}{{\left(1+e^{x}\right)}^{2}}}\\[1ex]&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left({\frac {1}{1+e^{x}}}\right)\\[1ex]&=\left({\frac {e^{x}}{1+e^{x}}}\right)\left(1-{\frac {e^{x}}{1+e^{x}}}\right)\\[1.2ex]&=f(x)\left(1-f(x)\right)\end{aligned}}} from which all higher derivatives can be derived algebraically. For example, f ″ = ( 1 − 2 f ) ( 1 − f ) f {\displaystyle f''=(1-2f)(1-f)f} . The logistic distribution is a location–scale family, which corresponds to parameters of the logistic function. If ⁠ L = 1 {\displaystyle L=1} ⁠ is fixed, then the midpoint ⁠ x 0 {\displaystyle x_{0}} ⁠ is the location and the slope ⁠ k {\displaystyle k} ⁠ is the scale. === Integral === Conversely, its antiderivative can be computed by the substitution u = 1 + e x {\displaystyle u=1+e^{x}} , since f ( x ) = e x 1 + e x = u ′ u , {\displaystyle f(x)={\frac {e^{x}}{1+e^{x}}}={\frac {u'}{u}},} so (dropping the constant of integration) ∫ e x 1 + e x d x = ∫ 1 u d u = ln ⁡ u = ln ⁡ ( 1 + e x ) . {\displaystyle \int {\frac {e^{x}}{1+e^{x}}}\,dx=\int {\frac {1}{u}}\,du=\ln u=\ln(1+e^{x}).} In artificial neural networks, this is known as the softplus function and (with scaling) is a smooth approximation of the ramp function, just as the logistic function (with scaling) is a smooth approximation of the Heaviside step function. === Taylor series === The standard logistic function is analytic on the whole real line since f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } , f ( x ) = 1 1 + e − x = h ( g ( x ) ) {\displaystyle f(x)={\frac {1}{1+e^{-x}}}=h(g(x))} where g : R → R {\displaystyle g:\mathbb {R} \to \mathbb {R} } , g ( x ) = 1 + e − x {\displaystyle g(x)=1+e^{-x}} and h : ( 0 , ∞ ) → ( 0 , ∞ ) {\displaystyle h:(0,\infty )\to (0,\infty )} , h ( x ) = 1 x {\displaystyle h(x)={\frac {1}{x}}} are analytic on their domains, and the composition of analytic functions is again analytic. 
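A quick numerical check (a sketch, not part of the article) of the two identities just derived, the derivative f′ = f(1 − f) and the softplus antiderivative ln(1 + e^x):

import math

def f(x):
    # standard logistic function
    return 1.0 / (1.0 + math.exp(-x))

x, h = 0.7, 1e-6
# Central-difference approximation of f'(x) versus f(x)(1 - f(x)).
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - f(x) * (1 - f(x))) < 1e-8)  # True

# The slope of softplus ln(1 + e^x) recovers the logistic function itself.
softplus_slope = (math.log1p(math.exp(x + h)) - math.log1p(math.exp(x - h))) / (2 * h)
print(abs(softplus_slope - f(x)) < 1e-8)  # True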
A formula for the nth derivative of the standard logistic function is d n f d x n = ∑ i = 1 n ( ∑ j = 1 n ( − 1 ) i + j ( i j ) j n ) e − i x ( 1 + e − x ) i + 1 {\displaystyle {\frac {d^{n}f}{dx^{n}}}=\sum _{i=1}^{n}{\frac {\left(\sum _{j=1}^{n}{\left(-1\right)}^{i+j}{\binom {i}{j}}j^{n}\right)e^{-ix}}{{\left(1+e^{-x}\right)}^{i+1}}}} therefore its Taylor series about the point a {\displaystyle a} is f ( x ) = f ( a ) + ∑ n = 1 ∞ ∑ i = 1 n ( ∑ j = 1 n ( − 1 ) i + j ( i j ) j n ) e − i a ( 1 + e − a ) i + 1 ( x − a ) n n ! . {\displaystyle f(x)=f(a)+\sum _{n=1}^{\infty }\sum _{i=1}^{n}{\frac {\left(\sum _{j=1}^{n}{\left(-1\right)}^{i+j}{\binom {i}{j}}j^{n}\right)e^{-ia}}{{\left(1+e^{-a}\right)}^{i+1}}}{\frac {{\left(x-a\right)}^{n}}{n!}}.} === Logistic differential equation === The unique standard logistic function is the solution of the simple first-order non-linear ordinary differential equation d d x f ( x ) = f ( x ) ( 1 − f ( x ) ) {\displaystyle {\frac {d}{dx}}f(x)=f(x){\big (}1-f(x){\big )}} with boundary condition f ( 0 ) = 1 / 2 {\displaystyle f(0)=1/2} . This equation is the continuous version of the logistic map. Note that the reciprocal logistic function is a solution to a simple first-order linear ordinary differential equation. The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 when the function is 0 or 1; and the derivative is positive for f {\displaystyle f} between 0 and 1, and negative for f {\displaystyle f} above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0 and a stable equilibrium at 1, and thus for any function value greater than 0 and less than 1, it grows to 1. The logistic equation is a special case of the Bernoulli differential equation and has the following solution: f ( x ) = e x e x + C . {\displaystyle f(x)={\frac {e^{x}}{e^{x}+C}}.} Choosing the constant of integration C = 1 {\displaystyle C=1} gives the other well known form of the definition of the logistic curve: f ( x ) = e x e x + 1 = 1 1 + e − x . {\displaystyle f(x)={\frac {e^{x}}{e^{x}+1}}={\frac {1}{1+e^{-x}}}.} More quantitatively, as can be seen from the analytical solution, the logistic curve shows early exponential growth for negative argument, which slows to linear growth of slope 1/4 for an argument near 0, then approaches 1 with an exponentially decaying gap. The differential equation derived above is a special case of a general differential equation that only models the sigmoid function for x > 0 {\displaystyle x>0} . In many modeling applications, the more general form d f ( x ) d x = k L f ( x ) ( L − f ( x ) ) , f ( 0 ) = L 1 + e k x 0 {\displaystyle {\frac {df(x)}{dx}}={\frac {k}{L}}f(x){\big (}L-f(x){\big )},\quad f(0)={\frac {L}{1+e^{kx_{0}}}}} can be desirable. Its solution is the shifted and scaled sigmoid function L σ ( k ( x − x 0 ) ) = L 1 + e − k ( x − x 0 ) {\displaystyle L\sigma {\big (}k(x-x_{0}){\big )}={\frac {L}{1+e^{-k(x-x_{0})}}}} . == Probabilistic interpretation == When the capacity L = 1 {\displaystyle L=1} , the value of the logistic function is in the range ( 0 , 1 ) {\displaystyle (0,1)} and can be interpreted as a probability p. In more detail, p can be interpreted as the probability of one of two alternatives (the parameter of a Bernoulli distribution); the two alternatives are complementary, so the probability of the other alternative is q = 1 − p {\displaystyle q=1-p} and p + q = 1 {\displaystyle p+q=1} .
The two alternatives are coded as 1 and 0, corresponding to the limiting values as x → ± ∞ {\displaystyle x\to \pm \infty } . In this interpretation the input x is the log-odds for the first alternative (relative to the other alternative), measured in "logistic units" (or logits), ⁠ e x {\displaystyle e^{x}} ⁠ is the odds for the first event (relative to the second), and, recalling that given odds of O = O : 1 {\displaystyle O=O:1} for (⁠ O {\displaystyle O} ⁠ against 1), the probability is the ratio of for over (for plus against), O / ( O + 1 ) {\displaystyle O/(O+1)} , we see that e x / ( e x + 1 ) = 1 / ( 1 + e − x ) = p {\displaystyle e^{x}/(e^{x}+1)=1/(1+e^{-x})=p} is the probability of the first alternative. Conversely, x is the log-odds against the second alternative, ⁠ − x {\displaystyle -x} ⁠ is the log-odds for the second alternative, e − x {\displaystyle e^{-x}} is the odds for the second alternative, and e − x / ( e − x + 1 ) = 1 / ( 1 + e x ) = q {\displaystyle e^{-x}/(e^{-x}+1)=1/(1+e^{x})=q} is the probability of the second alternative. This can be framed more symmetrically in terms of two inputs, ⁠ x 0 {\displaystyle x_{0}} ⁠ and ⁠ x 1 {\displaystyle x_{1}} ⁠, which then generalizes naturally to more than two alternatives. Given two real number inputs, ⁠ x 0 {\displaystyle x_{0}} ⁠ and ⁠ x 1 {\displaystyle x_{1}} ⁠, interpreted as logits, their difference x 1 − x 0 {\displaystyle x_{1}-x_{0}} is the log-odds for option 1 (the log-odds against option 0), e x 1 − x 0 {\displaystyle e^{x_{1}-x_{0}}} is the odds, e x 1 − x 0 / ( e x 1 − x 0 + 1 ) = 1 / ( 1 + e − ( x 1 − x 0 ) ) = e x 1 / ( e x 0 + e x 1 ) {\displaystyle e^{x_{1}-x_{0}}/(e^{x_{1}-x_{0}}+1)=1/\left(1+e^{-(x_{1}-x_{0})}\right)=e^{x_{1}}/(e^{x_{0}}+e^{x_{1}})} is the probability of option 1, and similarly e x 0 / ( e x 0 + e x 1 ) {\displaystyle e^{x_{0}}/(e^{x_{0}}+e^{x_{1}})} is the probability of option 0. This form immediately generalizes to more alternatives as the softmax function, which is a vector-valued function whose i-th coordinate is e x i / ∑ i = 0 n e x i {\textstyle e^{x_{i}}/\sum _{i=0}^{n}e^{x_{i}}} . More subtly, the symmetric form emphasizes interpreting the input x as x 1 − x 0 {\displaystyle x_{1}-x_{0}} and thus relative to some reference point, implicitly to x 0 = 0 {\displaystyle x_{0}=0} . Notably, the softmax function is invariant under adding a constant to all the logits x i {\displaystyle x_{i}} , which corresponds to the difference x j − x i {\displaystyle x_{j}-x_{i}} being the log-odds for option j against option i, but the individual logits x i {\displaystyle x_{i}} not being log-odds on their own. Often one of the options is used as a reference ("pivot"), and its value fixed as 0, so the other logits are interpreted as odds versus this reference. This is generally done with the first alternative, hence the choice of numbering: x 0 = 0 {\displaystyle x_{0}=0} , and then x i = x i − x 0 {\displaystyle x_{i}=x_{i}-x_{0}} is the log-odds for option i against option 0. Since e 0 = 1 {\displaystyle e^{0}=1} , this yields the + 1 {\displaystyle +1} term in many expressions for the logistic function and generalizations. == Generalizations == In growth modeling, numerous generalizations exist, including the generalized logistic curve, the Gompertz function, the cumulative distribution function of the shifted Gompertz distribution, and the hyperbolastic function of type I. 
In statistics, where the logistic function is interpreted as the probability of one of two alternatives, the generalization to three or more alternatives is the softmax function, which is vector-valued, as it gives the probability of each alternative. == Applications == === In ecology: modeling population growth === A typical application of the logistic equation is a common model of population growth (see also population dynamics), originally due to Pierre-François Verhulst in 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had read Thomas Malthus' An Essay on the Principle of Population, which describes the Malthusian growth model of simple (unconstrained) exponential growth. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population. The equation was rediscovered in 1911 by A. G. McKendrick for the growth of bacteria in broth and experimentally tested using a technique for nonlinear parameter estimation. The equation is also sometimes called the Verhulst-Pearl equation following its rediscovery in 1920 by Raymond Pearl (1879–1940) and Lowell Reed (1888–1966) of the Johns Hopkins University. Another scientist, Alfred J. Lotka derived the equation again in 1925, calling it the law of population growth. Letting P {\displaystyle P} represent population size ( N {\displaystyle N} is often used in ecology instead) and t {\displaystyle t} represent time, this model is formalized by the differential equation: d P d t = r P ( 1 − P K ) , {\displaystyle {\frac {dP}{dt}}=rP\left(1-{\frac {P}{K}}\right),} where the constant r {\displaystyle r} defines the growth rate and K {\displaystyle K} is the carrying capacity. In the equation, the early, unimpeded growth rate is modeled by the first term + r P {\displaystyle +rP} . The value of the rate r {\displaystyle r} represents the proportional increase of the population P {\displaystyle P} in one unit of time. Later, as the population grows, the modulus of the second term (which multiplied out is − r P 2 / K {\displaystyle -rP^{2}/K} ) becomes almost as large as the first, as some members of the population P {\displaystyle P} interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called the bottleneck, and is modeled by the value of the parameter K {\displaystyle K} . The competition diminishes the combined growth rate, until the value of P {\displaystyle P} ceases to grow (this is called maturity of the population). The solution to the equation (with P 0 {\displaystyle P_{0}} being the initial population) is P ( t ) = K P 0 e r t K + P 0 ( e r t − 1 ) = K 1 + ( K − P 0 P 0 ) e − r t , {\displaystyle P(t)={\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}={\frac {K}{1+\left({\frac {K-P_{0}}{P_{0}}}\right)e^{-rt}}},} where lim t → ∞ P ( t ) = K , {\displaystyle \lim _{t\to \infty }P(t)=K,} where K {\displaystyle K} is the limiting value of P {\displaystyle P} , the highest value that the population can reach given infinite time (or come close to reaching in finite time). The carrying capacity is asymptotically reached independently of the initial value P ( 0 ) > 0 {\displaystyle P(0)>0} , and also in the case that P ( 0 ) > K {\displaystyle P(0)>K} . 
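As a sketch with made-up parameter values (not from the article), the closed-form Verhulst solution above can be cross-checked against direct numerical integration of dP/dt = rP(1 − P/K):

import math

def verhulst(t, p0, r, k):
    # Closed-form solution P(t) = K P0 e^{rt} / (K + P0 (e^{rt} - 1)).
    return k * p0 * math.exp(r * t) / (k + p0 * (math.exp(r * t) - 1))

p0, r, k, dt, t_end = 10.0, 0.5, 1000.0, 0.001, 20.0
p = p0
for _ in range(int(t_end / dt)):
    p += r * p * (1 - p / k) * dt  # forward-Euler step of the logistic ODE

print(round(p, 2), round(verhulst(t_end, p0, r, k), 2))
# Both values approach the carrying capacity K = 1000.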
In ecology, species are sometimes referred to as r {\displaystyle r} -strategist or K {\displaystyle K} -strategist depending upon the selective processes that have shaped their life history strategies. Choosing the variable dimensions so that n {\displaystyle n} measures the population in units of carrying capacity, and τ {\displaystyle \tau } measures time in units of 1 / r {\displaystyle 1/r} , gives the dimensionless differential equation d n d τ = n ( 1 − n ) . {\displaystyle {\frac {dn}{d\tau }}=n(1-n).} ==== Integral ==== The antiderivative of the ecological form of the logistic function can be computed by the substitution u = K + P 0 ( e r t − 1 ) {\displaystyle u=K+P_{0}\left(e^{rt}-1\right)} , since d u = r P 0 e r t d t {\displaystyle du=rP_{0}e^{rt}dt} ∫ K P 0 e r t K + P 0 ( e r t − 1 ) d t = ∫ K r 1 u d u = K r ln ⁡ u + C = K r ln ⁡ ( K + P 0 ( e r t − 1 ) ) + C {\displaystyle \int {\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}\,dt=\int {\frac {K}{r}}{\frac {1}{u}}\,du={\frac {K}{r}}\ln u+C={\frac {K}{r}}\ln \left(K+P_{0}(e^{rt}-1)\right)+C} ==== Time-varying carrying capacity ==== Since the environmental conditions influence the carrying capacity, as a consequence it can be time-varying, with K ( t ) > 0 {\displaystyle K(t)>0} , leading to the following mathematical model: d P d t = r P ⋅ ( 1 − P K ( t ) ) . {\displaystyle {\frac {dP}{dt}}=rP\cdot \left(1-{\frac {P}{K(t)}}\right).} A particularly important case is that of carrying capacity that varies periodically with period T {\displaystyle T} : K ( t + T ) = K ( t ) . {\displaystyle K(t+T)=K(t).} It can be shown that in such a case, independently from the initial value P ( 0 ) > 0 {\displaystyle P(0)>0} , P ( t ) {\displaystyle P(t)} will tend to a unique periodic solution P ∗ ( t ) {\displaystyle P_{*}(t)} , whose period is T {\displaystyle T} . A typical value of T {\displaystyle T} is one year: In such case K ( t ) {\displaystyle K(t)} may reflect periodical variations of weather conditions. Another interesting generalization is to consider that the carrying capacity K ( t ) {\displaystyle K(t)} is a function of the population at an earlier time, capturing a delay in the way population modifies its environment. This leads to a logistic delay equation, which has a very rich behavior, with bistability in some parameter range, as well as a monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustainable oscillations, finite-time singularities as well as finite-time death. === In statistics and machine learning === Logistic functions are used in several roles in statistics. For example, they are the cumulative distribution function of the logistic family of distributions, and they are, a bit simplified, used to model the chance a chess player has to beat their opponent in the Elo rating system. More specific examples now follow. ==== Logistic regression ==== Logistic functions are used in logistic regression to model how the probability p {\displaystyle p} of an event may be affected by one or more explanatory variables: an example would be to have the model p = f ( a + b x ) , {\displaystyle p=f(a+bx),} where x {\displaystyle x} is the explanatory variable, a {\displaystyle a} and b {\displaystyle b} are model parameters to be fitted, and f {\displaystyle f} is the standard logistic function. 
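A minimal sketch of fitting the model p = f(a + bx) by gradient ascent on the Bernoulli log-likelihood; the data are synthetic and the true parameters (a, b) = (−1, 2) are an arbitrary choice for illustration:

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
xs = [random.uniform(-3, 3) for _ in range(2000)]
ys = [1 if random.random() < sigmoid(-1 + 2 * x) else 0 for x in xs]

a, b, lr = 0.0, 0.0, 0.5
for _ in range(1000):
    # Gradient of the average log-likelihood: residuals (y - p) drive the update.
    resid = [y - sigmoid(a + b * x) for x, y in zip(xs, ys)]
    a += lr * sum(resid) / len(xs)
    b += lr * sum(rr * x for rr, x in zip(resid, xs)) / len(xs)

print(round(a, 2), round(b, 2))  # approximately recovers (-1, 2)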
Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression. Another application of the logistic function is in the Rasch model, used in item response theory. In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect. ==== Neural networks ==== Logistic functions are often used in artificial neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron. A common choice for the activation or "squashing" functions, used to clip large magnitudes to keep the response of the neural network bounded, is g ( h ) = 1 1 + e − 2 β h , {\displaystyle g(h)={\frac {1}{1+e^{-2\beta h}}},} which is a logistic function. These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners caution that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation. The logistic function is itself the derivative of another proposed activation function, the softplus. === In medicine: modeling of growth of tumors === Another application of logistic curve is in medicine, where the logistic differential equation can be used to model the growth of tumors. This application can be considered an extension of the above-mentioned use in the framework of ecology (see also the Generalized logistic curve, allowing for more parameters). Denoting with X ( t ) {\displaystyle X(t)} the size of the tumor at time t {\displaystyle t} , its dynamics are governed by X ′ = r ( 1 − X K ) X , {\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X,} which is of the type X ′ = F ( X ) X , F ′ ( X ) ≤ 0 , {\displaystyle X'=F(X)X,\quad F'(X)\leq 0,} where F ( X ) {\displaystyle F(X)} is the proliferation rate of the tumor. If a course of chemotherapy is started with a log-kill effect, the equation may be revised to be X ′ = r ( 1 − X K ) X − c ( t ) X , {\displaystyle X'=r\left(1-{\frac {X}{K}}\right)X-c(t)X,} where c ( t ) {\displaystyle c(t)} is the therapy-induced death rate. In the idealized case of very long therapy, c ( t ) {\displaystyle c(t)} can be modeled as a periodic function (of period T {\displaystyle T} ) or (in case of continuous infusion therapy) as a constant function, and one has that 1 T ∫ 0 T c ( t ) d t > r → lim t → + ∞ x ( t ) = 0 , {\displaystyle {\frac {1}{T}}\int _{0}^{T}c(t)\,dt>r\to \lim _{t\to +\infty }x(t)=0,} i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate, then there is the eradication of the disease. Of course, this is an oversimplified model of both the growth and the therapy. For example, it does not take into account the evolution of clonal resistance, or the side-effects of the therapy on the patient. These factors can result in the eventual failure of chemotherapy, or its discontinuation. 
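A sketch of the tumor model with a constant-infusion therapy term (parameter values are illustrative assumptions), checking the eradication condition that the average kill rate exceed the baseline proliferation rate:

def tumor_size(r=0.1, k=100.0, c=0.15, x=10.0, dt=0.01, t_end=500.0):
    # Forward-Euler integration of X' = r(1 - X/K)X - cX with constant c.
    for _ in range(int(t_end / dt)):
        x += (r * (1 - x / k) * x - c * x) * dt
    return x

print(round(tumor_size(c=0.15), 4))  # c > r: the tumor decays toward 0
print(round(tumor_size(c=0.05), 4))  # c < r: settles near K(1 - c/r) = 50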
=== In medicine: modeling of a pandemic === A novel infectious pathogen to which a population has no immunity will generally spread exponentially in the early stages, while the supply of susceptible individuals is plentiful. The SARS-CoV-2 virus that causes COVID-19 exhibited exponential growth early in the course of infection in several countries in early 2020. Factors including a lack of susceptible hosts (through the continued spread of infection until it passes the threshold for herd immunity) or reduction in the accessibility of potential hosts through physical distancing measures, may result in exponential-looking epidemic curves first linearizing (replicating the "logarithmic" to "logistic" transition first noted by Pierre-François Verhulst, as noted above) and then reaching a maximal limit. A logistic function, or related functions (e.g. the Gompertz function) are usually used in a descriptive or phenomenological manner because they fit well not only to the early exponential rise, but to the eventual levelling off of the pandemic as the population develops a herd immunity. This is in contrast to actual models of pandemics which attempt to formulate a description based on the dynamics of the pandemic (e.g. contact rates, incubation times, social distancing, etc.). Some simple models have been developed, however, which yield a logistic solution. ==== Modeling early COVID-19 cases ==== A generalized logistic function, also called the Richards growth curve, has been applied to model the early phase of the COVID-19 outbreak. The authors fit the generalized logistic function to the cumulative number of infected cases, here referred to as infection trajectory. There are different parameterizations of the generalized logistic function in the literature. One frequently used form is f ( t ; θ 1 , θ 2 , θ 3 , ξ ) = θ 1 [ 1 + ξ exp ⁡ ( − θ 2 ⋅ ( t − θ 3 ) ) ] 1 / ξ {\displaystyle f(t;\theta _{1},\theta _{2},\theta _{3},\xi )={\frac {\theta _{1}}{{\left[1+\xi \exp \left(-\theta _{2}\cdot (t-\theta _{3})\right)\right]}^{1/\xi }}}} where θ 1 , θ 2 , θ 3 {\displaystyle \theta _{1},\theta _{2},\theta _{3}} are real numbers, and ξ {\displaystyle \xi } is a positive real number. The flexibility of the curve f {\displaystyle f} is due to the parameter ξ {\displaystyle \xi } : (i) if ξ = 1 {\displaystyle \xi =1} then the curve reduces to the logistic function, and (ii) as ξ {\displaystyle \xi } approaches zero, the curve converges to the Gompertz function. In epidemiological modeling, θ 1 {\displaystyle \theta _{1}} , θ 2 {\displaystyle \theta _{2}} , and θ 3 {\displaystyle \theta _{3}} represent the final epidemic size, infection rate, and lag phase, respectively. Consider, for example, an infection trajectory when ( θ 1 , θ 2 , θ 3 ) {\displaystyle (\theta _{1},\theta _{2},\theta _{3})} is set to ( 10000 , 0.2 , 40 ) {\displaystyle (10000,0.2,40)} . One of the benefits of using a growth function such as the generalized logistic function in epidemiological modeling is its relatively easy application to the multilevel model framework, where information from different geographic regions can be pooled together. === In chemistry: reaction models === The concentration of reactants and products in autocatalytic reactions follow the logistic function. The degradation of Platinum group metal-free (PGM-free) oxygen reduction reaction (ORR) catalyst in fuel cell cathodes follows the logistic decay function, suggesting an autocatalytic degradation mechanism.
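A sketch of the Richards parameterization given above, evaluated at the example values (θ1, θ2, θ3) = (10000, 0.2, 40) quoted in the text, with ξ = 1 so that the curve reduces to the ordinary logistic function:

import math

def richards(t, theta1, theta2, theta3, xi):
    # f(t) = theta1 / (1 + xi * exp(-theta2 * (t - theta3)))**(1/xi)
    return theta1 / (1 + xi * math.exp(-theta2 * (t - theta3))) ** (1 / xi)

for t in (0, 40, 80, 120):
    print(t, round(richards(t, 10000, 0.2, 40, 1.0)))
# The trajectory passes through half the final epidemic size (5000) at
# t = theta3 = 40 and saturates at theta1 = 10000.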
=== In physics: Fermi–Dirac distribution === The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according to Fermi–Dirac statistics. === In optics: mirage === The logistic function also finds applications in optics, particularly in modelling phenomena such as mirages. Under certain conditions, such as the presence of a temperature or concentration gradient due to diffusion and balancing with gravity, logistic curve behaviours can emerge. A mirage, resulting from a temperature gradient that modifies the refractive index related to the density/concentration of the material over distance, can be modelled using a fluid with a refractive index gradient due to the concentration gradient. This mechanism can be equated to a limiting population growth model, where the concentrated region attempts to diffuse into the lower concentration region, while seeking equilibrium with gravity, thus yielding a logistic function curve. === In material science: phase diagrams === See Diffusion bonding. === In linguistics: language change === In linguistics, the logistic function can be used to model language change: an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted. === In agriculture: modeling crop response === The logistic S-curve can be used for modeling the crop response to changes in growth factors. There are two types of response functions: positive and negative growth curves. For example, the crop yield may increase with increasing value of the growth factor up to a certain level (positive function), or it may decrease with increasing growth factor values (negative function owing to a negative growth factor), a situation that requires an inverted S-curve. === In economics and sociology: diffusion of innovations === The logistic function can be used to illustrate the progress of the diffusion of an innovation through its life cycle. In The Laws of Imitation (1890), Gabriel Tarde describes the rise and spread of new ideas through imitative chains. In particular, Tarde identifies three main stages through which innovations spread: the first one corresponds to the difficult beginnings, during which the idea has to struggle within a hostile environment full of opposing habits and beliefs; the second one corresponds to the properly exponential take-off of the idea, with f ( x ) = 2 x {\displaystyle f(x)=2^{x}} ; finally, the third stage is logarithmic, with f ( x ) = log ⁡ ( x ) {\displaystyle f(x)=\log(x)} , and corresponds to the time when the impulse of the idea gradually slows down while, simultaneously, new opposing ideas appear. The ensuing situation halts or stabilizes the progress of the innovation, which approaches an asymptote. In a sovereign state, the subnational units (constituent states or cities) may use loans to finance their projects. However, this funding source is usually subject to strict legal rules as well as to economic scarcity constraints, especially the resources the banks can lend (due to their equity or Basel limits). These restrictions, which represent a saturation level, along with an exponential rush in an economic competition for money, create a public finance diffusion of credit pleas, and the aggregate national response is a sigmoid curve.
Historically, when new products are introduced there is an intense amount of research and development which leads to dramatic improvements in quality and reductions in cost. This leads to a period of rapid industry growth. Some of the more famous examples are: railroads, incandescent light bulbs, electrification, cars and air travel. Eventually, dramatic improvement and cost reduction opportunities are exhausted, the product or process are in widespread use with few remaining potential new customers, and markets become saturated. Logistic analysis was used in papers by several researchers at the International Institute of Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures and energy source substitutions and the role of work in the economy as well as with the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989). Cesare Marchetti published on long economic cycles and on diffusion of innovations. Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures including canals, railroads, highways and airlines, showing that their diffusion followed logistic shaped curves. Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: beginning of a technological era as irruption, the ascent as frenzy, the rapid build out as synergy and the completion as maturity. === Inflection point determination in logistic growth regression === Logistic growth regressions carry significant uncertainty when data is available only up to around the inflection point of the growth process. Under these conditions, estimating the height at which the inflection point will occur may have uncertainties comparable to the carrying capacity (K) of the system. A method to mitigate this uncertainty involves using the carrying capacity from a surrogate logistic growth process as a reference point. By incorporating this constraint, even if K is only an estimate within a factor of two, the regression is stabilized, which improves accuracy and reduces uncertainty in the prediction parameters. This approach can be applied in fields such as economics and biology, where analogous surrogate systems or populations are available to inform the analysis. === Sequential analysis === Link created an extension of Wald's theory of sequential analysis to a distribution-free accumulation of random variables until either a positive or negative bound is first equaled or exceeded. Link derives the probability of first equaling or exceeding the positive boundary as 1 / ( 1 + e − θ A ) {\displaystyle 1/(1+e^{-\theta A})} , the logistic function. This is the first proof that the logistic function may have a stochastic process as its basis. Link provides a century of examples of "logistic" experimental results and a newly derived relation between this probability and the time of absorption at the boundaries. == See also == == Notes == == References == == External links == L.J. Linacre, Why logistic ogive and not autocatalytic curve?, accessed 2009-09-12. https://web.archive.org/web/20060914155939/http://luna.cas.usf.edu/~mbrannic/files/regression/Logistic.html Weisstein, Eric W. "Sigmoid Function". MathWorld. Online experiments with JSXGraph Esses are everywhere. Seeing the s-curve in everything. Restricted Logarithmic Growth with Injection
Wikipedia/Logistic_function
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. == Examples == === Regret === Leonard J. Savage argued that using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact taken before they were known. === Quadratic loss function === The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ ( x ) = C ( t − x ) 2 {\displaystyle \lambda (x)=C(t-x)^{2}\;} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers.
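A small sketch comparing the quadratic loss with the Huber loss mentioned above (the threshold delta is an illustrative choice):

def quadratic_loss(error):
    # Squared-error loss with C = 1.
    return error ** 2

def huber_loss(error, delta=1.0):
    # Quadratic near zero, linear in the tails, so outliers weigh less.
    a = abs(error)
    return 0.5 * a ** 2 if a <= delta else delta * (a - 0.5 * delta)

for e in (0.5, 1.0, 5.0):
    print(e, quadratic_loss(e), huber_loss(e))
# At e = 5 the quadratic loss is 25 while the Huber loss is only 4.5,
# illustrating why Huber-type losses are preferred for heavy-tailed errors.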
=== 0-1 loss function === In statistics and decision theory, a frequently used loss function is the 0-1 loss function L ( y ^ , y ) = [ y ^ ≠ y ] {\displaystyle L({\hat {y}},y)=\left[{\hat {y}}\neq y\right]} using Iverson bracket notation, i.e. it evaluates to 1 when y ^ ≠ y {\displaystyle {\hat {y}}\neq y} , and 0 otherwise. == Constructing loss and objective functions == In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (called also utility function) in a form suitable for optimization — the problem that Ragnar Frisch has highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. == Expected loss == In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. === Statistics === Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. ==== Frequentist expected loss ==== We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by: R ( θ , δ ) = E θ ⁡ L ( θ , δ ( X ) ) = ∫ X L ( θ , δ ( x ) ) d P θ ( x ) . {\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x).} Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E θ {\displaystyle \operatorname {E} _{\theta }} is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ) and the integral is evaluated over the entire support of X. ==== Bayes Risk ==== In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: ρ ( π ∗ , a ) = ∫ Θ ∫ X L ( θ , a ( x ) ) d P ( x | θ ) d π ∗ ( θ ) = ∫ X ∫ Θ L ( θ , a ( x ) ) d π ∗ ( θ | x ) d M ( x ) {\displaystyle \rho (\pi ^{*},a)=\int _{\Theta }\int _{\mathbf {X}}L(\theta ,a({\mathbf {x}}))\,\mathrm {d} P({\mathbf {x}}\vert \theta )\,\mathrm {d} \pi ^{*}(\theta )=\int _{\mathbf {X}}\int _{\Theta }L(\theta ,a({\mathbf {x}}))\,\mathrm {d} \pi ^{*}(\theta \vert {\mathbf {x}})\,\mathrm {d} M({\mathbf {x}})} where m(x) is known as the predictive likelihood wherein θ has been "integrated out," π* (θ | x) is the posterior distribution, and the order of integration has been changed. 
One should then choose the action a* which minimizes this expected loss, which is referred to as the Bayes risk. In the latter equation, the inner integral over Θ is known as the posterior risk, and minimizing it with respect to the decision a also minimizes the overall Bayes risk. This optimal decision, a*, is known as the Bayes (decision) rule – it minimizes the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes rule reflects consideration of loss outcomes under different states of nature, θ. ==== Examples in statistics ==== For a scalar parameter θ, a decision function whose output θ ^ {\displaystyle {\hat {\theta }}} is an estimate of θ, and a quadratic loss function (squared error loss) L ( θ , θ ^ ) = ( θ − θ ^ ) 2 , {\displaystyle L(\theta ,{\hat {\theta }})=(\theta -{\hat {\theta }})^{2},} the risk function becomes the mean squared error of the estimate, R ( θ , θ ^ ) = E θ ⁡ [ ( θ − θ ^ ) 2 ] . {\displaystyle R(\theta ,{\hat {\theta }})=\operatorname {E} _{\theta }\left[(\theta -{\hat {\theta }})^{2}\right].} An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, L ( f , f ^ ) = ‖ f − f ^ ‖ 2 2 , {\displaystyle L(f,{\hat {f}})=\|f-{\hat {f}}\|_{2}^{2}\,,} the risk function becomes the mean integrated squared error R ( f , f ^ ) = E ⁡ ( ‖ f − f ^ ‖ 2 ) . {\displaystyle R(f,{\hat {f}})=\operatorname {E} \left(\|f-{\hat {f}}\|^{2}\right).\,} === Economic choice under uncertainty === In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. == Decision rules == A decision rule makes a choice using an optimality criterion. Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: a r g m i n δ max θ ∈ Θ R ( θ , δ ) . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\ \max _{\theta \in \Theta }\ R(\theta ,\delta ).} Invariance: Choose the decision rule which satisfies an invariance requirement. Bayes: Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): a r g m i n δ E θ ∈ Θ ⁡ [ R ( θ , δ ) ] = a r g m i n δ ∫ θ ∈ Θ R ( θ , δ ) p ( θ ) d θ . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\operatorname {E} _{\theta \in \Theta }[R(\theta ,\delta )]={\underset {\delta }{\operatorname {arg\,min} }}\ \int _{\theta \in \Theta }R(\theta ,\delta )\,p(\theta )\,d\theta .} == Selecting a loss function == Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem.
Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, L ( a ) = a 2 {\displaystyle L(a)=a^{2}} , and the absolute loss, L ( a ) = | a | {\displaystyle L(a)=|a|} . However, the absolute loss has the disadvantage that it is not differentiable at a = 0 {\displaystyle a=0} . The squared loss has the disadvantage that it has the tendency to be dominated by outliers—when summing over a set of a {\displaystyle a} 's (as in ∑ i = 1 n L ( a i ) {\textstyle \sum _{i=1}^{n}L(a_{i})} ), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases. == See also == Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk == References == == Further reading == Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011).
"Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
Wikipedia/Loss_function
Cliodynamics is a transdisciplinary area of research that integrates cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée, and the construction and analysis of historical databases. Cliodynamics treats history as science. Its practitioners develop theories that explain such dynamical processes as the rise and fall of empires, population booms and busts, and the spread and disappearance of religions. These theories are translated into mathematical models. Finally, model predictions are tested against data. Thus, building and analyzing massive databases of historical and archaeological information is one of the most important goals of cliodynamics. == Etymology == The word cliodynamics is composed of clio- and -dynamics. In Greek mythology, Clio is the muse of history. Dynamics, most broadly, is the study of how and why phenomena change with time. The term was originally coined by Peter Turchin in 2003; the approach can be traced to the work of such figures as Ibn Khaldun, Alexandre Deulofeu, Jack Goldstone, Sergey Kapitsa, Randall Collins, John Komlos, and Andrey Korotayev. == Mathematical modeling of historical dynamics == Many historical processes are dynamic, in that they change with time: populations increase and decline, economies expand and contract, states grow and collapse, and so on. As such, practitioners of cliodynamics apply mathematical models to explain macrohistorical patterns—things like the rise of empires, social discontent, civil wars, and state collapse. Cliodynamics is the application of a dynamical systems approach to the social sciences in general and to the study of historical dynamics in particular. More broadly, this approach is quite common and has proved its worth in innumerable applications (particularly in the natural sciences). The dynamical systems approach is so called because the whole phenomenon is represented as a system consisting of several elements (or subsystems) that interact and change dynamically (i.e., over time). More simply, it consists of taking a holistic phenomenon and splitting it up into separate parts that are assumed to interact with each other. In the dynamical systems approach, one sets out explicitly with mathematical formulae how different subsystems interact with each other. This mathematical description is the model of the system, and one can use a variety of methods to study the dynamics predicted by the model, as well as attempt to test the model by comparing its predictions with observed empirical, dynamic evidence. Although the focus is usually on the dynamics of large conglomerates of people, the approach of cliodynamics does not preclude the inclusion of human agency in its explanatory theories. Such questions can be explored with agent-based computer simulations. == Databases and data sources == Cliodynamics relies on large bodies of evidence to test competing theories on a wide range of historical processes. This typically involves building massive stores of evidence. The rise of digital history and various research technologies have allowed huge databases to be constructed in recent years. Some prominent databases utilized by cliodynamics practitioners include: The Seshat: Global History Databank, which systematically collects state-of-the-art accounts of the political and social organization of human groups and how societies have evolved through time into an authoritative databank.
Seshat is also affiliated with the Evolution Institute, a non-profit think-tank that "uses evolutionary science to solve real-world problems." D-PLACE (Database of Places, Languages, Culture and Environment), which provides data on over 1,400 human social formations. The Atlas of Cultural Evolution, an archaeological database created by Peter N. Peregrine. CHIA (Collaborative for Historical Information and Analysis), a multidisciplinary collaborative endeavor hosted by the University of Pittsburgh with the goal of archiving historical information and linking data as well as academic/research institutions around the globe. International Institute of Social History, which collects data on the global social history of labour relations, workers, and labour. Human Relations Area Files (eHRAF) Archaeology World Cultures Clio-Infra, a database of measures of economic performance and other aspects of societal well-being on a global sample of societies from 1800 CE to the present. The Google Ngram Viewer, an online search engine that charts frequencies of sets of comma-delimited search strings using a yearly count of n-grams as found in the largest online body of human knowledge, the Google Books corpus. == Research == === Areas of study === As of 2016, the main directions of academic study in cliodynamics are: The coevolutionary model of social complexity and warfare, based on the theoretical framework of cultural multilevel selection The study of revolutions and rebellions Structural-demographic theory and secular cycles Explanations of the global distribution of languages, which have benefitted from the empirical finding that the geographic area in which a language is spoken is more closely associated with the political complexity of the speakers than with all other variables under analysis Mathematical modeling of the long-term ("millennial") trends of world-systems analysis Structural-demographic models of the Modern Age revolutions, including the Arab revolutions of 2011 The analysis of vast quantities of historical newspaper content, which shows how periodic structures can be automatically discovered in historical newspapers; a similar analysis was performed on social media, again revealing strongly periodic structures === Organizations === There are several established venues of peer-reviewed cliodynamics research: Cliodynamics: The Journal of Quantitative History and Cultural Evolution is a peer-reviewed web-based (open-access) journal that publishes on the transdisciplinary area of cliodynamics. It seeks to integrate historical models with data to facilitate theoretical progress. The first issue was published in December 2010. Cliodynamics is a member of Scopus and the Directory of Open Access Journals (DOAJ). The University of Hertfordshire's Cliodynamics Lab is the first lab in the world dedicated explicitly to the new research area of cliodynamics. It is directed by Pieter François, who founded the Lab in 2015. The Santa Fe Institute is a private, not-for-profit research and education center where leading scientists grapple with compelling and complex problems. The institute supports work in complex modeling of networks and dynamical systems. One of the areas of SFI research is cliodynamics. In the past the institute has sponsored a series of conversations and meetings on theoretical history.
== Criticism == Critics of cliodynamics argue that the complex social formations of the past cannot and should not be reduced to quantifiable, analyzable points in phase space, for doing so overlooks each historical society's particular circumstances and dynamics. Many historians and social scientists contend that there are no generalisable causal factors that can explain large numbers of cases, and that historical investigation should focus on the unique trajectories through phase space of each case, highlighting commonalities in outcomes where they exist. As Zhao notes, "most historians believe that the importance of any mechanism in history changes, and more importantly, that there is no time-invariant structure that can organise all historical mechanisms into a system." == Fiction == Starting in the 1940s, Isaac Asimov invented the fictional precursor to this discipline, in what he called psychohistory, as a major plot device in his Foundation series of science fiction novels. Robert Heinlein wrote a 1952 short story, The Year of the Jackpot, with a similar plot device about tracking the cycles of history and using them to predict the future. == See also == Critical juncture theory Generations (book) Historical geographic information system Sociocultural evolution Historical dynamics Complex system approach to peace and armed conflict == References == === Bibliography === == Further reading == Wood, Graeme (December 2020). "The Next Decade Could Be Even Worse". The Atlantic. Retrieved 12 November 2020. == External links == Cliodynamics: The Journal of Quantitative History and Cultural Evolution Seshat: Global History Databank Peter Turchin's Cliodynamics Page Historical Dynamics in a time of Crisis: Late Byzantium, 1204-1453 (a discussion of some concepts of cliodynamics from the point of view of medieval studies) "Nature" article (August 2012): Human cycles: History as science Evolution Institute
Wikipedia/Cliodynamics
In physics and engineering, a constitutive equation or constitutive relation is a relation between two or more physical quantities (especially kinetic quantities as related to kinematic quantities) that is specific to a material or substance or field, and approximates its response to external stimuli, usually as applied fields or forces. They are combined with other equations governing physical laws to solve physical problems; for example, in fluid mechanics the flow of a fluid in a pipe, in solid state physics the response of a crystal to an electric field, or in structural analysis, the connection between applied stresses or loads and strains or deformations. Some constitutive equations are simply phenomenological; others are derived from first principles. A common approximate constitutive equation frequently is expressed as a simple proportionality using a parameter taken to be a property of the material, such as electrical conductivity or a spring constant. However, it is often necessary to account for the directional dependence of the material, and the scalar parameter is generalized to a tensor. Constitutive relations are also modified to account for the rate of response of materials and their non-linear behavior. See the article Linear response function. == Mechanical properties of matter == The first constitutive equation (constitutive law) was developed by Robert Hooke and is known as Hooke's law. It deals with the case of linear elastic materials. Following this discovery, this type of equation, often called a "stress-strain relation" in this example, but also called a "constitutive assumption" or an "equation of state", was commonly used. Walter Noll advanced the use of constitutive equations, clarifying their classification and the role of invariance requirements, constraints, and definitions of terms like "material", "isotropic", "aeolotropic", etc. The class of "constitutive relations" of the form stress rate = f (velocity gradient, stress, density) was the subject of Walter Noll's dissertation in 1954 under Clifford Truesdell. In modern condensed matter physics, the constitutive equation plays a major role. See Linear constitutive equations and Nonlinear correlation functions. === Definitions === === Deformation of solids === ==== Friction ==== Friction is a complicated phenomenon. Macroscopically, the friction force F at the interface of two materials can be modelled as proportional to the reaction force R at a point of contact between two interfaces through a dimensionless coefficient of friction μf, which depends on the pair of materials: F = μ f R . {\displaystyle F=\mu _{\text{f}}R.} This can be applied to static friction (friction preventing two stationary objects from slipping on their own), kinetic friction (friction between two objects scraping/sliding past each other), or rolling (frictional force which prevents slipping but causes a torque to be exerted on a round object). ==== Stress and strain ==== The stress-strain constitutive relation for linear materials is commonly known as Hooke's law. In its simplest form, the law defines the spring constant (or elasticity constant) k in a scalar equation, stating that the tensile/compressive force is proportional to the extended (or contracted) displacement x: F i = − k x i {\displaystyle F_{i}=-kx_{i}} meaning the material responds linearly.
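As a minimal illustration of these two constitutive laws, the sketch below evaluates the friction model F = μfR and the scalar Hooke's law F = −kx; the numerical values of μf and k are arbitrary assumptions chosen for the demonstration:

```python
# Illustrative constants; the values are assumptions chosen for the demo.
MU_F = 0.4   # dimensionless coefficient of friction for a material pair
K = 250.0    # spring (elasticity) constant, N/m

def friction_force(reaction_force: float) -> float:
    """Macroscopic friction model F = mu_f * R."""
    return MU_F * reaction_force

def spring_force(displacement: float) -> float:
    """Scalar Hooke's law F = -k x (the restoring force opposes displacement)."""
    return -K * displacement

print(friction_force(100.0))  # 40.0 N of resistance at a 100 N contact force
print(spring_force(0.02))     # -5.0 N for a 2 cm extension
```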
Equivalently, in terms of the stress σ, Young's modulus E, and strain ε (dimensionless): σ = E ε {\displaystyle \sigma =E\,\varepsilon } In general, forces which deform solids can be normal to a surface of the material (normal forces), or tangential (shear forces); this can be described mathematically using the stress tensor: σ i j = C i j k l ε k l ⇌ ε i j = S i j k l σ k l {\displaystyle \sigma _{ij}=C_{ijkl}\,\varepsilon _{kl}\,\rightleftharpoons \,\varepsilon _{ij}=S_{ijkl}\,\sigma _{kl}} where C is the elasticity tensor and S is the compliance tensor. ==== Solid-state deformation ==== Several classes of deformation in elastic materials are the following: Plastic: the applied force induces non-recoverable deformation in the material when the stress (or elastic strain) reaches a critical magnitude, called the yield point. Elastic: the material recovers its initial shape after deformation. Viscoelastic: the time-dependent resistive contributions are large and cannot be neglected. Rubbers and plastics have this property, and certainly do not satisfy Hooke's law. In fact, elastic hysteresis occurs. Anelastic: the material is close to elastic, but the applied force induces additional time-dependent resistive forces (i.e. forces which depend on the rate of change of extension/compression, in addition to the extension/compression itself). Metals and ceramics have this characteristic, but it is usually negligible, although not so much when heating due to friction occurs (such as vibrations or shear stresses in machines). Hyperelastic: the applied force induces displacements in the material following a strain energy density function. ==== Collisions ==== The relative speed of separation vseparation of an object A after a collision with another object B is related to the relative speed of approach vapproach by the coefficient of restitution, defined by Newton's experimental impact law: e = | v | separation | v | approach {\displaystyle e={\frac {|\mathbf {v} |_{\text{separation}}}{|\mathbf {v} |_{\text{approach}}}}} which depends on the materials that A and B are made from, since the collision involves interactions at the surfaces of A and B. Usually 0 ≤ e ≤ 1, where e = 1 for completely elastic collisions and e = 0 for completely inelastic collisions. It is possible for e > 1 to occur – for superelastic (or explosive) collisions. === Deformation of fluids === The drag equation gives the drag force D on an object of cross-section area A moving through a fluid of density ρ at velocity v (relative to the fluid) D = 1 2 c d ρ A v 2 {\displaystyle D={\frac {1}{2}}c_{d}\rho Av^{2}} where the drag coefficient (dimensionless) cd depends on the geometry of the object and the drag forces at the interface between the fluid and object. For a Newtonian fluid of viscosity μ, the shear stress τ is linearly related to the strain rate (transverse flow velocity gradient) ∂u/∂y (units s−1). In a uniform shear flow: τ = μ ∂ u ∂ y , {\displaystyle \tau =\mu {\frac {\partial u}{\partial y}},} with u(y) the variation of the flow velocity u in the cross-flow (transverse) direction y.
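A short sketch of the two fluid relations just given, with assumed, water-like property values (the viscosity, density, and drag coefficient are illustrative, not taken from the article):

```python
# Assumed, water-like property values; purely illustrative.
MU = 1.0e-3   # dynamic viscosity, Pa*s
RHO = 1000.0  # fluid density, kg/m^3

def shear_stress(velocity_gradient: float) -> float:
    """Newtonian relation tau = mu * du/dy for a uniform shear flow."""
    return MU * velocity_gradient

def drag_force(c_d: float, area: float, speed: float) -> float:
    """Drag equation D = 0.5 * c_d * rho * A * v^2."""
    return 0.5 * c_d * RHO * area * speed ** 2

print(shear_stress(50.0))           # Pa, for du/dy = 50 s^-1
print(drag_force(0.47, 0.01, 2.0))  # ~9.4 N for a sphere-like body
```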
In general, for a Newtonian fluid, the relationship between the elements τij of the shear stress tensor and the deformation of the fluid is given by τ i j = 2 μ ( e i j − 1 3 Δ δ i j ) {\displaystyle \tau _{ij}=2\mu \left(e_{ij}-{\frac {1}{3}}\Delta \delta _{ij}\right)} with e i j = 1 2 ( ∂ v i ∂ x j + ∂ v j ∂ x i ) {\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial v_{i}}{\partial x_{j}}}+{\frac {\partial v_{j}}{\partial x_{i}}}\right)} and Δ = ∑ k e k k = div v , {\displaystyle \Delta =\sum _{k}e_{kk}={\text{div}}\;\mathbf {v} ,} where vi are the components of the flow velocity vector in the corresponding xi coordinate directions, eij are the components of the strain rate tensor, Δ is the volumetric strain rate (or dilatation rate) and δij is the Kronecker delta. The ideal gas law is a constitutive relation in the sense that the pressure p and volume V are related to the temperature T via the number of moles n of gas: p V = n R T {\displaystyle pV=nRT} where R is the gas constant (J⋅K−1⋅mol−1). == Electromagnetism == === Constitutive equations in electromagnetism and related areas === In both classical and quantum physics, the precise dynamics of a system form a set of coupled differential equations, which are almost always too complicated to be solved exactly, even at the level of statistical mechanics. In the context of electromagnetism, this remark applies not only to the dynamics of free charges and currents (which enter Maxwell's equations directly), but also to the dynamics of bound charges and currents (which enter Maxwell's equations through the constitutive relations). As a result, various approximation schemes are typically used. For example, in real materials, complex transport equations must be solved to determine the time and spatial response of charges; examples include the Boltzmann equation, the Fokker–Planck equation and the Navier–Stokes equations. See, for example, magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, and plasma modeling. An entire physical apparatus for dealing with these matters has developed. See, for example, linear response theory, Green–Kubo relations and Green's function (many-body theory). These complex theories provide detailed formulas for the constitutive relations describing the electrical response of various materials, such as permittivities, permeabilities, conductivities and so forth. It is necessary to specify the relations between the displacement field D and the electric field E, and between the magnetizing field H and the magnetic flux density B, before doing calculations in electromagnetism (i.e. applying Maxwell's macroscopic equations). These equations specify the response of bound charge and current to the applied fields and are called constitutive relations. Determining the constitutive relationship between the auxiliary fields D and H and the E and B fields starts with the definition of the auxiliary fields themselves: D ( r , t ) = ε 0 E ( r , t ) + P ( r , t ) H ( r , t ) = 1 μ 0 B ( r , t ) − M ( r , t ) , {\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t)\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}} where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. Before getting to how to calculate M and P it is useful to examine the following special cases.
==== Without magnetic or dielectric materials ==== In the absence of magnetic or dielectric materials, the constitutive relations are simple: D = ε 0 E , H = B / μ 0 {\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} =\mathbf {B} /\mu _{0}} where ε0 and μ0 are two universal constants, called the permittivity of free space and permeability of free space, respectively. ==== Isotropic linear materials ==== In an (isotropic) linear material, where P is proportional to E, and M is proportional to B, the constitutive relations are also straightforward. In terms of the polarization P and the magnetization M they are: P = ε 0 χ e E , M = χ m H , {\displaystyle \mathbf {P} =\varepsilon _{0}\chi _{e}\mathbf {E} ,\quad \mathbf {M} =\chi _{m}\mathbf {H} ,} where χe and χm are the electric and magnetic susceptibilities of a given material respectively. In terms of D and H the constitutive relations are: D = ε E , H = B / μ , {\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} =\mathbf {B} /\mu ,} where ε and μ are constants (which depend on the material), called the permittivity and permeability, respectively, of the material. These are related to the susceptibilities by: ε / ε 0 = ε r = χ e + 1 , μ / μ 0 = μ r = χ m + 1 {\displaystyle \varepsilon /\varepsilon _{0}=\varepsilon _{r}=\chi _{e}+1,\quad \mu /\mu _{0}=\mu _{r}=\chi _{m}+1} ==== General case ==== For real-world materials, the constitutive relations are not linear, except approximately. Calculating the constitutive relations from first principles involves determining how P and M are created from a given E and B. These relations may be empirical (based directly upon measurements), or theoretical (based upon statistical mechanics, transport theory or other tools of condensed matter physics). The detail employed may be macroscopic or microscopic, depending upon the level necessary to the problem under scrutiny. In general, the constitutive relations can usually still be written: D = ε E , H = μ − 1 B {\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} =\mu ^{-1}\mathbf {B} } but ε and μ are not, in general, simple constants, but rather functions of E, B, position and time, and tensorial in nature. Examples are: dispersion and absorption, where ε and μ are functions of frequency; nonlinearity, where ε and μ are functions of E and B; anisotropy, where ε and μ are tensors; and dependence of P and M on E and B at other locations and times, as in spatial inhomogeneity and hysteresis. As a variation of these examples, in general materials are bianisotropic where D and B depend on both E and H, through the additional coupling constants ξ and ζ: D = ε E + ξ H , B = μ H + ζ E . {\displaystyle \mathbf {D} =\varepsilon \mathbf {E} +\xi \mathbf {H} \,,\quad \mathbf {B} =\mu \mathbf {H} +\zeta \mathbf {E} .} In practice, some material properties have a negligible impact in particular circumstances, permitting neglect of small effects. For example: optical nonlinearities can be neglected for low field strengths; material dispersion is unimportant when frequency is limited to a narrow bandwidth; material absorption can be neglected for wavelengths for which a material is transparent; and metals with finite conductivity often are approximated at microwave or longer wavelengths as perfect metals with infinite conductivity (forming hard barriers with zero skin depth of field penetration). Some man-made materials such as metamaterials and photonic crystals are designed to have customized permittivity and permeability. ==== Calculation of constitutive relations ==== The theoretical calculation of a material's constitutive equations is a common, important, and sometimes difficult task in theoretical condensed-matter physics and materials science.
In general, the constitutive equations are theoretically determined by calculating how a molecule responds to the local fields through the Lorentz force. Other forces may need to be modeled as well, such as lattice vibrations in crystals or bond forces. Including all of the forces leads to changes in the molecule which are used to calculate P and M as a function of the local fields. The local fields differ from the applied fields due to the fields produced by the polarization and magnetization of nearby material, an effect which also needs to be modeled. Further, real materials are not continuous media; the local fields of real materials vary wildly on the atomic scale. The fields need to be averaged over a suitable volume to form a continuum approximation. These continuum approximations often require some type of quantum mechanical analysis such as quantum field theory as applied to condensed matter physics. See, for example, density functional theory, Green–Kubo relations and Green's function. A different set of homogenization methods (evolving from a tradition in treating materials such as conglomerates and laminates) is based upon approximation of an inhomogeneous material by a homogeneous effective medium (valid for excitations with wavelengths much larger than the scale of the inhomogeneity). The theoretical modeling of the continuum-approximation properties of many real materials often relies upon experimental measurement as well. For example, ε of an insulator at low frequencies can be measured by making it into a parallel-plate capacitor, and ε at optical-light frequencies is often measured by ellipsometry. === Thermoelectric and electromagnetic properties of matter === These constitutive equations are often used in crystallography, a field of solid-state physics. == Photonics == === Refractive index === The (absolute) refractive index of a medium n (dimensionless) is an inherently important property of geometric and physical optics defined as the ratio of the luminal speed in vacuum c0 to that in the medium c: n = c 0 c = ε μ ε 0 μ 0 = ε r μ r {\displaystyle n={\frac {c_{0}}{c}}={\sqrt {\frac {\varepsilon \mu }{\varepsilon _{0}\mu _{0}}}}={\sqrt {\varepsilon _{r}\mu _{r}}}} where ε is the permittivity and εr the relative permittivity of the medium; likewise μ is the permeability and μr is the relative permeability of the medium. The vacuum permittivity is ε0 and the vacuum permeability is μ0. In general, n (like εr) is a complex number. The relative refractive index is defined as the ratio of two refractive indices. An absolute index is a property of a single material; a relative index applies to every possible pair of interfaces: n A B = n A n B {\displaystyle n_{AB}={\frac {n_{A}}{n_{B}}}} === Speed of light in matter === As a consequence of the definition, the speed of light in matter is c = 1 ε μ {\displaystyle c={\frac {1}{\sqrt {\varepsilon \mu }}}} For the special case of vacuum, where ε = ε0 and μ = μ0, c 0 = 1 ε 0 μ 0 {\displaystyle c_{0}={\frac {1}{\sqrt {\varepsilon _{0}\mu _{0}}}}} === Piezooptic effect === The piezooptic effect relates the stresses in solids σ to the dielectric impermeability a, which are coupled by a fourth-rank tensor called the piezooptic coefficient Π (units Pa−1): a i j = Π i j p q σ p q {\displaystyle a_{ij}=\Pi _{ijpq}\sigma _{pq}} == Transport phenomena == === Definitions === === Definitive laws === There are several laws which describe the transport of matter, or properties of it, in an almost identical way.
In every case, in words they read: flux (density) is proportional to a gradient; the constant of proportionality is a characteristic of the material. In general, the constant must be replaced by a second-rank tensor to account for directional dependences of the material. == See also == Defining equation (physical chemistry) Governing equation Principle of material objectivity Rheology == Notes == == References ==
Wikipedia/Constitutive_equation
Optimal foraging theory (OFT) is a behavioral ecology model that helps predict how an animal behaves when searching for food. Although obtaining food provides the animal with energy, searching for and capturing the food require both energy and time. To maximize fitness, an animal adopts a foraging strategy that provides the most benefit (energy) for the lowest cost, maximizing the net energy gained. OFT helps predict the best strategy that an animal can use to achieve this goal. OFT is an ecological application of the optimality model. This theory assumes that the most economically advantageous foraging pattern will be selected for in a species through natural selection. When using OFT to model foraging behavior, organisms are said to be maximizing a variable known as the currency, such as the most food per unit time. In addition, the constraints of the environment are other variables that must be considered. Constraints are defined as factors that can limit the forager's ability to maximize the currency. The optimal decision rule, or the organism's best foraging strategy, is defined as the decision that maximizes the currency under the constraints of the environment. Identifying the optimal decision rule is the primary goal of the OFT. The connection between OFT and biological evolution has garnered interest over the past decades. Studies on optimal foraging behaviors at the population level have utilized evolutionary birth-death dynamics models. While these models confirm the existence of objective functions, such as "currency" in certain scenarios, they also prompt questions regarding their applicability in other limits such as high population interactions. == Building an optimal foraging model == An optimal foraging model generates quantitative predictions of how animals maximize their fitness while they forage. The model building process involves identifying the currency, constraints, and appropriate decision rule for the forager. Currency is defined as the unit that is optimized by the animal. It is also a hypothesis of the costs and benefits that are imposed on that animal. For example, a certain forager gains energy from food, but incurs the cost of searching for the food: the time and energy spent searching could have been used instead on other endeavors, such as finding mates or protecting young. It would be in the animal's best interest to maximize its benefits at the lowest cost. Thus, the currency in this situation could be defined as net energy gain per unit time. However, for a different forager, the time it takes to digest the food after eating could be a more significant cost than the time and energy spent looking for food. In this case, the currency could be defined as net energy gain per digestive turnover time instead of net energy gain per unit time. Furthermore, benefits and costs can depend on a forager's community. For example, a forager living in a hive would most likely forage in a manner that would maximize efficiency for its colony rather than itself. By identifying the currency, one can construct a hypothesis about which benefits and costs are important to the forager in question. Constraints are hypotheses about the limitations that are placed on an animal. These limitations can be due to features of the environment or the physiology of the animal and could limit their foraging efficiency. The time that it takes for the forager to travel from the nesting site to the foraging site is an example of a constraint. 
The maximum number of food items a forager is able to carry back to its nesting site is another example of a constraint. There could also be cognitive constraints on animals, such as limits to learning and memory. The more constraints that one is able to identify in a given system, the more predictive power the model will have. Given the hypotheses about the currency and the constraints, the optimal decision rule is the model's prediction of what the animal's best foraging strategy should be. Examples of optimal decision rules include the optimal number of food items that an animal should carry back to its nesting site or the optimal size of a food item that an animal should feed on. Figure 1 shows an example of how an optimal decision rule could be determined from a graphical model. The curve represents the energy gain per cost (E) for adopting foraging strategy x. Energy gain per cost is the currency being optimized. The constraints of the system determine the shape of this curve. The optimal decision rule (x*) is the strategy for which the currency, energy gain per costs, is the greatest. Optimal foraging models can look very different and become very complex, depending on the nature of the currency and the number of constraints considered. However, the general principles of currency, constraints, and optimal decision rule remain the same for all models. To test a model, one can compare the predicted strategy to the animal's actual foraging behavior. If the model fits the observed data well, then the hypotheses about the currency and constraints are supported. If the model does not fit the data well, then it is possible that either the currency or a particular constraint has been incorrectly identified. == Different feeding systems and classes of predators == Optimal foraging theory is widely applicable to feeding systems throughout the animal kingdom. Under the OFT, any organism of interest can be viewed as a predator that forages on prey. There are different classes of predators that organisms fall into, and each class has distinct foraging and predation strategies. True predators attack large numbers of prey throughout their life. They kill their prey either immediately or shortly after the attack. They may eat all or only part of their prey. True predators include tigers, lions, whales, sharks and ants. Grazers eat only a portion of their prey. They harm the prey, but rarely kill it. Grazers include antelope, cattle, and mosquitoes. Parasites, like grazers, eat only a part of their prey (host), but rarely the entire organism. They spend all or large portions of their life cycle living in/on a single host. This intimate relationship is typical of tapeworms, liver flukes, and plant parasites, such as the potato blight. Parasitoids are mainly typical of wasps (order Hymenoptera) and some flies (order Diptera). Eggs are laid inside the larvae of other arthropods which hatch and consume the host from the inside, killing it. This unusual predator–host relationship is typical of about 10% of all insects. Many viruses that attack single-celled organisms (such as bacteriophages) are also parasitoids; they reproduce inside a single host that is inevitably killed by the association. The optimization of these different foraging and predation strategies can be explained by the optimal foraging theory. In each case, there are costs, benefits, and limitations that ultimately determine the optimal decision rule that the predator should follow.
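As a toy version of this framework, the hedged sketch below searches a grid of candidate strategies for the optimal decision rule x*; the concave currency curve E(x) is purely an assumption standing in for the curve described in Figure 1:

```python
import numpy as np

# Hypothetical currency: energy gain per cost E(x) for foraging strategy x.
# The concave quadratic is an assumed stand-in for the curve in Figure 1;
# in a real model, the constraints of the system would set its shape.
def currency(x):
    return -(x - 3.0) ** 2 + 9.0

strategies = np.linspace(0.0, 6.0, 601)  # candidate strategies
x_star = strategies[np.argmax(currency(strategies))]
print(f"optimal decision rule x* = {x_star:.2f}")  # ~3.00, the curve's peak
```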
== Optimal diet model == One classical version of the optimal foraging theory is the optimal diet model, which is also known as the prey choice model or the contingency model. In this model, the predator encounters different prey items and decides whether to eat what it has or search for a more profitable prey item. The model predicts that foragers should ignore low profitability prey items when more profitable items are present and abundant. The profitability of a prey item is dependent on several ecological variables. E is the amount of energy (calories) that a prey item provides the predator. Handling time (h) is the amount of time it takes the predator to handle the food, beginning from the time the predator finds the prey item to the time the prey item is eaten. The profitability of a prey item is then defined as E/h. Additionally, search time (S) is the amount of time it takes the predator to find a prey item and is dependent on the abundance of the food and the ease of locating it. In this model, the currency is energy intake per unit time and the constraints include the actual values of E, h, and S, as well as the fact that prey items are encountered sequentially. === Model of choice between big and small prey === Using these variables, the optimal diet model can predict how predators choose between two prey types: big prey1 with energy value E1 and handling time h1, and small prey2 with energy value E2 and handling time h2. In order to maximize its overall rate of energy gain, a predator must consider the profitability of the two prey types. If it is assumed that big prey1 is more profitable than small prey2, then E1/h1 > E2/h2. Thus, if the predator encounters prey1, it should always choose to eat it, because of its higher profitability. It should never bother to go searching for prey2. However, if the animal encounters prey2, it should reject it to look for a more profitable prey1, unless the time it would take to find prey1 is too long and costly for it to be worth it. Thus, the animal should eat prey2 only if E2/h2 > E1/(h1+S1), where S1 is the search time for prey1. Since it is always favorable to choose to eat prey1, the choice to eat prey1 is not dependent on the abundance of prey2. But since the length of S1 (i.e. how difficult it is to find prey1) is logically dependent on the density of prey1, the choice to eat prey2 is dependent on the abundance of prey1. === Generalist and specialist diets === The optimal diet model also predicts that different types of animals should adopt different diets based on variations in search time. This idea is an extension of the model of prey choice that was discussed above. The equation, E2/h2 > E1/(h1+S1), can be rearranged to give: S1 > [(E1h2)/E2] – h1. This rearranged form gives the threshold for how long S1 must be for an animal to choose to eat both prey1 and prey2. Animals that have S1s that reach the threshold are defined as generalists. In nature, generalists include a wide range of prey items in their diet. An example of a generalist is a mouse, which consumes a large variety of seeds, grains, and nuts. In contrast, predators with relatively short S1s are still better off choosing to eat only prey1. These types of animals are defined as specialists and have very exclusive diets in nature. An example of a specialist is the koala, which solely consumes eucalyptus leaves. 
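The acceptance rule of this contingency model is simple enough to state directly in code. The sketch below (with hypothetical energy values, handling times, and search times) reproduces the threshold E2/h2 > E1/(h1 + S1) and shows how lengthening the search time for prey1 flips a specialist into a generalist:

```python
def take_small_prey(e1, h1, s1, e2, h2):
    """Prey choice rule of the contingency model: accept prey2 on encounter
    only if E2/h2 > E1/(h1 + S1), i.e. only if eating it now beats searching
    out and handling a big prey1 instead."""
    return e2 / h2 > e1 / (h1 + s1)

# Hypothetical values: big prey1 (E1=10, h1=2) and small prey2 (E2=3, h2=1).
for s1 in (0.5, 5.0):  # short vs long search time for prey1
    print(s1, take_small_prey(10.0, 2.0, s1, 3.0, 1.0))
# S1=0.5: 3.0 > 10/2.5 = 4.0 is False -> reject prey2 (specialist diet)
# S1=5.0: 3.0 > 10/7 ~ 1.43 is True  -> accept prey2 (generalist diet)
```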
In general, different animals across the four functional classes of predators exhibit strategies ranging across a continuum between being a generalist and a specialist. Additionally, since the choice to eat prey2 is dependent on the abundance of prey1 (as discussed earlier), if prey1 becomes so scarce that S1 reaches the threshold, then the animal should switch from exclusively eating prey1 to eating both prey1 and prey2. In other words, if the food within a specialist's diet becomes very scarce, a specialist can sometimes switch to being a generalist. === Functional response curves === As previously mentioned, the amount of time it takes to search for a prey item depends on the density of the prey. Functional response curves show the rate of prey capture as a function of food density and can be used in conjunction with the optimal diet theory to predict the foraging behavior of predators. There are three different types of functional response curves. For a Type I functional response curve, the rate of prey capture increases linearly with food density. At low prey densities, the search time is long. Since the predator spends most of its time searching, it eats every prey item it finds. As prey density increases, the predator is able to capture the prey faster and faster. At a certain point, the rate of prey capture is so high that the predator does not have to eat every prey item it encounters. After this point, the predator should choose only the prey items with the highest E/h. For a Type II functional response curve, the rate of prey capture negatively accelerates as it increases with food density. This is because the model assumes that the predator is limited by its capacity to process food. In other words, as the food density increases, handling time increases. At the beginning of the curve, the rate of prey capture increases nearly linearly with prey density and there is almost no handling time. As prey density increases, the predator spends less and less time searching for prey and more and more time handling the prey. The rate of prey capture increases less and less, until it finally plateaus. The high number of prey essentially "swamps" the predator. A Type III functional response curve is a sigmoid curve. The rate of prey capture increases at first with prey density at a positively accelerated rate, but then at high densities changes to the negatively accelerated form, similar to that of the Type II curve. At high prey densities (the top of the curve), each new prey item is caught almost immediately. The predator is able to be choosy and does not eat every item it finds. So, assuming that there are two prey types with different profitabilities that are both at high abundance, the predator will choose the item with the higher E/h. However, at low prey densities (the bottom of the curve) the rate of prey capture increases faster than linearly. This means that as the predator feeds and the prey type with the higher E/h becomes less abundant, the predator will start to switch its preference to the prey type with the lower E/h, because that type will be relatively more abundant. This phenomenon is known as prey switching. === Predator–prey interaction === Predator–prey coevolution often makes it unfavorable for a predator to consume certain prey items, since many anti-predator defenses increase handling time. Examples include porcupine quills, the palatability and digestibility of the poison dart frog, crypsis, and other predator avoidance behaviors.
In addition, because toxins may be present in many prey types, predators include a lot of variability in their diets to prevent any one toxin from reaching dangerous levels. Thus, it is possible that an approach focusing only on energy intake may not fully explain an animal's foraging behavior in these situations. == Marginal value theorem and optimal foraging == The marginal value theorem is a type of optimality model that is often applied to optimal foraging. This theorem is used to describe a situation in which an organism searching for food in a patch must decide when it is economically favorable to leave. While the animal is within a patch, it experiences the law of diminishing returns, where it becomes harder and harder to find prey as time goes on. This may be because the prey is being depleted, the prey begins to take evasive action and becomes harder to catch, or the predator starts crossing its own path more as it searches. This law of diminishing returns can be shown as a curve of energy gain per time spent in a patch (Figure 3). The curve starts off with a steep slope and gradually levels off as prey becomes harder to find. Another important cost to consider is the traveling time between different patches and the nesting site. An animal loses foraging time while it travels and expends energy through its locomotion. In this model, the currency being optimized is usually net energy gain per unit time. The constraints are the travel time and the shape of the curve of diminishing returns. Graphically, the currency (net energy gain per unit time) is given by the slope of a diagonal line that starts at the beginning of traveling time and intersects the curve of diminishing returns (Figure 3). In order to maximize the currency, one wants the line with the greatest slope that still touches the curve (the tangent line). The place where this line touches the curve provides the optimal decision rule for the amount of time that the animal should spend in a patch before leaving. == Examples of optimal foraging models in animals == === Optimal foraging of oystercatchers === Oystercatcher mussel feeding provides an example of how the optimal diet model can be utilized. Oystercatchers forage on mussels and crack them open with their bills. The constraints on these birds are the characteristics of the different mussel sizes. While large mussels provide more energy than small mussels, large mussels are harder to crack open due to their thicker shells. This means that while large mussels have a higher energy content (E), they also have a longer handling time (h). The profitability of any mussel is calculated as E/h. The oystercatchers must decide which mussel size will provide enough nutrition to outweigh the cost and energy required to open it. In their study, Meire and Ervynck tried to model this decision by graphing the relative profitabilities of different-sized mussels. They came up with a bell-shaped curve, indicating that moderately sized mussels were the most profitable. However, they observed that if an oystercatcher rejected too many small mussels, the time it took to search for the next suitable mussel greatly increased. This observation shifted their bell curve to the right (Figure 4). Yet while this model predicted that oystercatchers should prefer mussels of 50–55 mm, the observed data showed that oystercatchers actually prefer mussels of 30–45 mm.
Meire and Ervynck then realized that the preference for mussel size did not depend only on the profitability of the prey, but also on the prey density. After this was accounted for, they found good agreement between the model's prediction and the observed data. === Optimal foraging in starlings === The foraging behavior of the European starling, Sturnus vulgaris, provides an example of how marginal value theorem is used to model optimal foraging. Starlings leave their nests and travel to food patches in search of larval leatherjackets to bring back to their young. The starlings must determine the optimal number of prey items to take back in one trip (i.e. the optimal load size). While the starlings forage within a patch, they experience diminishing returns: the starling is able to hold only so many leatherjackets in its bill, so the speed with which the parent picks up larvae decreases with the number of larvae that it already has in its bill. Thus, the constraints are the shape of the curve of diminishing returns and the travel time (the time it takes to make a round trip from the nest to a patch and back). In addition, the currency is hypothesized to be net energy gain per unit time. Using this currency and the constraints, the optimal load can be predicted by drawing a line tangent to the curve of diminishing returns, as discussed previously (Figure 3). Kacelnik et al. wanted to determine if this species does indeed optimize net energy gain per unit time as hypothesized. They designed an experiment in which the starlings were trained to collect mealworms from an artificial feeder at different distances from the nest. The researchers artificially generated a fixed curve of diminishing returns for the birds by dropping mealworms at successively longer and longer intervals. The birds continued to collect mealworms as they were presented, until they reached an "optimal" load and flew home. As Figure 5 shows, if the starlings were maximizing net energy gain per unit time, a short travel time would predict a small optimal load and a long travel time would predict a larger optimal load. In agreement with these predictions, Kacelnik found that the longer the distance between the nest and the artificial feeder, the larger the load size. In addition, the observed load sizes quantitatively corresponded very closely to the model's predictions. Other models based on different currencies, such as energy gained per energy spent (i.e. energy efficiency), failed to predict the observed load sizes as accurately. Thus, Kacelnik concluded that starlings maximize net energy gain per unit time. This conclusion was not disproved in later experiments. === Optimal foraging in bees === Worker bees provide another example of the use of marginal value theorem in modeling optimal foraging behavior. Bees forage from flower to flower collecting nectar to carry back to the hive. While this situation is similar to that of the starlings, both the constraints and currency are actually different for the bees. A bee does not experience diminishing returns because of nectar depletion or any other characteristic of the flowers themselves. The total amount of nectar foraged increases linearly with time spent in a patch. However, the weight of the nectar adds a significant cost to the bee's flight between flowers and its trip back to the hive. Wolf and Schmid-Hempel showed, by experimentally placing varying weights on the backs of bees, that the cost of heavy nectar is so great that it shortens the bees' lifespan.
The shorter the lifespan of a worker bee, the less overall time it has to contribute to its colony. Thus, there is a curve of diminishing returns for the net yield of energy that the hive receives as the bee gathers more nectar during one trip. The cost of heavy nectar also impacts the currency used by the bees. Unlike the starlings in the previous example, bees maximize energy efficiency (energy gained per energy spent) rather than net rate of energy gain (net energy gained per time). This is because the optimal load predicted by maximizing net rate of energy gain is too heavy for the bees and shortens their lifespan, decreasing their overall productivity for the hive, as explained earlier. By maximizing energy efficiency, the bees are able to avoid expending too much energy per trip and are able to live long enough to maximize their lifetime productivity for their hive. In a different paper, Schmid-Hempel showed that the observed relationship between load size and flight time is well correlated with the predictions based on maximizing energy efficiency, but very poorly correlated with the predictions based on maximizing net rate of energy gain. === Optimal foraging in centrarchid fishes === The nature of prey selection by two centrarchids (white crappie and bluegill) has been presented by Manatunge and Asaeda as a model incorporating optimal foraging strategies. The visual field of the foraging fish, as represented by the reactive distance, was analysed in detail to estimate the number of prey encounters per search bout. The predicted reactive distances were compared with experimental data. The energetic cost associated with fish foraging behaviour was calculated based on the sequence of events that takes place for each prey consumed. Comparisons of the relative abundance of prey species and size categories in the stomach with those in the lake environment indicated that both white crappie and bluegill (length < 100 mm) strongly select prey utilizing an energy optimization strategy. In most cases, the fish exclusively selected large Daphnia, ignoring evasive prey types (Cyclops, diaptomids) and small cladocera. This selectivity is the result of the fish actively avoiding prey with high evasion capabilities, even though such prey appear to be high in energetic content, and of capture success rates translating this avoidance into optimal selectivity. The energy consideration and the visual system, apart from the forager's ability to capture prey, are the major determinants of prey selectivity for large-sized bluegill and white crappie still at planktivorous stages. == Criticism and limitations == Although many studies, such as the ones cited in the examples above, provide quantitative support for optimal foraging theory and demonstrate its usefulness, the model has received criticism regarding its validity and limitations. First, optimal foraging theory relies on the assumption that natural selection will optimize foraging strategies of organisms. However, natural selection is not an all-powerful force that produces perfect designs, but rather a passive process of selection for genetically based traits that increase organisms' reproductive success. Given that genetics involves interactions between loci, recombination, and other complexities, there is no guarantee that natural selection can optimize a specific behavioral parameter. In addition, OFT also assumes that foraging behaviors are able to be freely shaped by natural selection, because these behaviors are independent from other activities of the organism.
=== Optimal foraging in centrarchid fishes === Manatunge and Asaeda presented the nature of prey selection by two centrarchids (white crappie and bluegill) as a model incorporating optimal foraging strategies. The visual field of the foraging fish, as represented by the reactive distance, was analysed in detail to estimate the number of prey encounters per search bout. The predicted reactive distances were compared with experimental data. The energetic cost associated with fish foraging behaviour was calculated based on the sequence of events that takes place for each prey consumed. Comparisons of the relative abundance of prey species and size categories in the stomach to the lake environment indicated that both white crappie and bluegill (length < 100 mm) strongly select prey utilizing an energy optimization strategy. In most cases, the fish exclusively selected large Daphnia, ignoring evasive prey types (Cyclops, diaptomids) and small cladocera. This selectivity results from the fish actively avoiding prey with high evasion capabilities, even though such prey appear high in energetic content, and translating this avoidance into optimal selectivity through capture success rates. The energy consideration and visual system, apart from the forager's ability to capture prey, are the major determinants of prey selectivity for large-sized bluegill and white crappie still at planktivorous stages. == Criticism and limitations == Although many studies, such as the ones cited in the examples above, provide quantitative support for optimal foraging theory and demonstrate its usefulness, the model has received criticism regarding its validity and limitations. First, optimal foraging theory relies on the assumption that natural selection will optimize foraging strategies of organisms. However, natural selection is not an all-powerful force that produces perfect designs, but rather a passive process of selection for genetically based traits that increase organisms' reproductive success. Given that genetics involves interactions between loci, recombination, and other complexities, there is no guarantee that natural selection can optimize a specific behavioral parameter. In addition, OFT also assumes that foraging behaviors are able to be freely shaped by natural selection, because these behaviors are independent of other activities of the organism. However, given that organisms are integrated systems, rather than mechanical aggregates of parts, this is not always the case. For example, the need to avoid predators may constrain foragers to feed less than the optimal rate. Thus, an organism's foraging behaviors may not be optimized as OFT would predict, because they are not independent of other behaviors. Another limitation of OFT is that it lacks precision in practice. In theory, an optimal foraging model gives researchers specific, quantitative predictions about a predator's optimal decision rule based on the hypotheses about the currency and constraints of the system. However, in reality, it is difficult to define basic concepts like prey type, encounter rates, or even a patch as the forager perceives them. Thus, while the variables of OFT can seem consistent theoretically, in practice, they can be arbitrary and difficult to measure. Furthermore, although the premise of OFT is to maximize an organism's fitness, many studies show only correlations between observed and predicted foraging behavior and stop short of testing whether the animal's behavior actually increases its reproductive fitness. It is possible that in certain cases, there is no correlation between foraging returns and reproductive success at all. Without accounting for this possibility, many studies using the OFT remain incomplete and fail to address and test the main point of the theory. One of the most important critiques of OFT is that it may not be truly testable. This issue arises whenever there is a discrepancy between the model's predictions and the actual observations. It is difficult to tell whether the model is fundamentally wrong or whether a specific variable has been inaccurately identified or left out. Because it is possible to add endless plausible modifications to the model, the model of optimality may never be rejected. This creates the problem of researchers shaping their model to fit their observations, rather than rigorously testing their hypotheses about the animal's foraging behavior. == In archaeology == Optimal foraging theory has been used to predict animal behaviour when searching for food, but it can also be applied to humans (specifically hunter-gatherers). Food provides energy but costs energy to obtain. Foraging strategy must provide the most benefit for the lowest cost – it is a balance between nutritional value and energy required. The currency of optimal foraging theory is energy because it is an essential component for organisms, but it is also the downfall of optimal foraging theory in regard to archaeology. Optimal foraging theory assumes that behaviour is to some extent influenced by genetic makeup. However, this can be hard to accept in relation to complex animals with high behavioral flexibility. Human behaviour is not always predictable under the premises of optimal foraging theory – hunter-gatherers could, for ritual or feasting purposes, choose game that yields little energetic benefit but serves other needs. The goals and currencies of humans can often change due to behavioral flexibility. These changes can occur over a long time, over a season or during a hunt. Optimal foraging theory must therefore become more complex and introduce more goals and constraints to match the complex decision-making processes employed by humans. These decision-making processes can furthermore be shaped by long-term and short-term experiences, and these experiences can in turn influence later decisions.
But optimal foraging theory has helped archaeology generate new interpretations of patterns in the archaeological record and think about human behaviours in greater detail. Optimal foraging theory has also given archaeology a better insight into the costs and benefits of different resources. == References == == Further reading == Optimal Foraging Theory by Barry Sinervo (1997), from the course "Behavioral Ecology", Department of Ecology and Evolutionary Biology, UCSC. Covers OFT and adaptational hypotheses (guided trial and error, instinct), along with prey size, patch residence time, patch quality and competitors, search strategies, risk-averse behavior, and foraging under food limitation.
Wikipedia/Optimal_foraging_theory
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fitted to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data. For linear-algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g., ordinary least squares). However, for graphical and image applications, geometric fitting seeks to provide the best visual fit, which usually means trying to minimize the orthogonal distance to the curve (e.g., total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result. == Algebraic fitting of functions to data points == Most commonly, one fits a function of the form y = f(x). === Fitting lines and polynomial functions to data points === The first degree polynomial equation y = ax + b is a line with slope a. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates. If the order of the equation is increased to a second degree polynomial, the following results: y = ax2 + bx + c. This will exactly fit a simple curve to three points. If the order of the equation is increased to a third degree polynomial, the following is obtained: y = ax3 + bx2 + cx + d. This will exactly fit four points. A more general statement would be to say it will exactly fit four constraints. Each constraint can be a point, angle, or curvature (which is the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highway cloverleaf design to understand the rate of change of the forces applied to a car (see jerk), as it follows the cloverleaf, and to set reasonable speed limits, accordingly. The first degree polynomial equation could also be an exact fit for a single point and an angle while the third degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher order polynomial equations.
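This exact-fit property is easy to verify numerically. The following sketch (Python, with made-up points) solves the Vandermonde system for the unique parabola through three points with distinct x coordinates.

```python
import numpy as np

# Three points with distinct x coordinates determine a unique parabola:
# solve the Vandermonde system [x^2 x 1][a b c]^T = y exactly.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 11.0])

V = np.vander(x, 3)            # columns: x^2, x, 1
a, b, c = np.linalg.solve(V, y)
print(f"y = {a:g} x^2 + {b:g} x + {c:g}")   # passes exactly through all three points
```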
If there are more than n + 1 constraints (n being the degree of the polynomial), the polynomial curve can still be run through those constraints. An exact fit to all constraints is not certain (but might happen, for example, in the case of a first degree polynomial exactly fitting three collinear points). In general, however, some method is then needed to evaluate each approximation. The least squares method is one way to compare the deviations. There are several reasons to settle for an approximate fit when it is possible to simply increase the degree of the polynomial equation and get an exact match: Even if an exact match exists, it does not necessarily follow that it can be readily discovered. Depending on the algorithm used there may be a divergent case, where the exact fit cannot be calculated, or it might take too much computer time to find the solution. This situation might require an approximate solution. The effect of averaging out questionable data points in a sample, rather than distorting the curve to fit them exactly, may be desirable. Runge's phenomenon: high order polynomials can be highly oscillatory. If a curve runs through two points A and B, it would be expected that the curve would run somewhat near the midpoint of A and B, as well. This may not happen with high-order polynomial curves; they may even have values that are very large in positive or negative magnitude. With low-order polynomials, the curve is more likely to fall near the midpoint (it's even guaranteed to exactly run through the midpoint on a first degree polynomial). Low-order polynomials tend to be smooth and high order polynomial curves tend to be "lumpy". To define this more precisely, the maximum number of inflection points possible in a polynomial curve is n − 2, where n is the order of the polynomial equation. An inflection point is a location on the curve where it switches from positive curvature to negative (from a positive radius of curvature to a negative one). We can also say this is where it transitions from "holding water" to "shedding water". Note that it is only "possible" that high order polynomials will be lumpy; they could also be smooth, but there is no guarantee of this, unlike with low order polynomial curves. A fifteenth degree polynomial could have, at most, thirteen inflection points, but could also have eleven, or nine or any odd number down to one. (Polynomials with even numbered degree could have any even number of inflection points from n − 2 down to zero.) The degree of the polynomial curve being higher than needed for an exact fit is undesirable for all the reasons listed previously for high order polynomials, but also leads to a case where there are an infinite number of solutions. For example, a first degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for both software and humans. Because of this, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree, if an approximate fit is acceptable.
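The following sketch (Python; the noisy line is synthetic) contrasts a low-degree least-squares fit with an exact high-degree fit through the same 21 points. The high-degree interpolant reproduces every data point but can swing far from the underlying line between samples, which is Runge's phenomenon in miniature (NumPy may also warn that the high-degree fit is poorly conditioned).

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 21)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # noisy samples of a line

line = Polynomial.fit(x, y, deg=1)    # least squares: averages the noise out
exact = Polynomial.fit(x, y, deg=20)  # 21 coefficients, 21 points: exact fit

mid = 0.25  # between the first two sample points
print("degree 1 at x=0.25: ", line(mid))   # close to the true value 1.5
print("degree 20 at x=0.25:", exact(mid))  # can deviate substantially
```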
=== Fitting other functions to data points === Other types of curves, such as trigonometric functions (such as sine and cosine), may also be used, in certain cases. In spectroscopy, data may be fitted with Gaussian, Lorentzian, Voigt and related functions. In biology, ecology, demography, epidemiology, and many other disciplines, the growth of a population, the spread of infectious disease, etc. can be fitted using the logistic function. In agriculture the inverted logistic sigmoid function (S-curve) is used to describe the relation between crop yield and growth factors. For example, a sigmoid regression of data measured in farm lands shows that initially, i.e. at low soil salinity, the crop yield declines slowly with increasing soil salinity, while thereafter the decrease progresses faster. == Geometric fitting of plane curves to data points == If a function of the form y = f(x) cannot be postulated, one can still try to fit a plane curve. Other types of curves, such as conic sections (circular, elliptical, parabolic, and hyperbolic arcs) or trigonometric functions (such as sine and cosine), may also be used, in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path, when air resistance is ignored. Hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, hence tidal data points should be matched to a sine wave, or the sum of two sine waves of different periods, if the effects of the Moon and Sun are both considered. For a parametric curve, it is effective to fit each of its coordinates as a separate function of arc length; assuming that data points can be ordered, the chord distance may be used. === Fitting a circle by geometric fit === Coope approaches the problem of trying to find the best visual fit of a circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques.
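A sketch of this linearization is given below (Python, synthetic data). Writing the circle (x − a)² + (y − b)² = r² as 2ax + 2by + c = x² + y² with c = r² − a² − b² makes the problem linear in (a, b, c), which is the essence of Coope's transformation; the exact algebraic details here are a reconstruction in that spirit, not a transcription of the paper.

```python
import numpy as np

def fit_circle_coope(x, y):
    """Linearized least-squares circle fit in the spirit of Coope (1993):
    2a x + 2b y + c = x^2 + y^2, with c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

# Noisy points on a circle of radius 2 centred at (1, -1) (synthetic data).
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 50)
x = 1.0 + 2.0 * np.cos(theta) + rng.normal(scale=0.05, size=50)
y = -1.0 + 2.0 * np.sin(theta) + rng.normal(scale=0.05, size=50)
print(fit_circle_coope(x, y))   # approximately (1.0, -1.0, 2.0)
```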
=== Fitting an ellipse by geometric fit === The above technique is extended to general ellipses by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement. == Fitting surfaces == Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically called u and v. A surface may be composed of one or more surface patches in each direction. == Software == Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, Igor Pro, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software. == See also == == References == == Further reading == N. Chernov (2010), Circular and linear regression: Fitting circles and lines by least squares, Chapman & Hall/CRC, Monographs on Statistics and Applied Probability, Volume 117 (256 pp.).
Wikipedia/Model_fitting
In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965. Newton's method for solving f(x) = 0 uses the Jacobian matrix, J, at every iteration. However, computing this Jacobian can be a difficult and expensive operation; for large problems such as those involving solving the Kohn–Sham equations in quantum mechanics the number of variables can be in the hundreds of thousands. The idea behind Broyden's method is to compute the whole Jacobian at most only at the first iteration, and to do rank-one updates at the other iterations. In 1979 Gay proved that when Broyden's method is applied to a linear system of size n × n, it terminates in at most 2n steps, although like all quasi-Newton methods, it may not converge for nonlinear systems. == Description of the method == === Solving single-variable nonlinear equation === In the secant method, we replace the first derivative f′ at xn with the finite-difference approximation: {\displaystyle f'(x_{n})\simeq {\frac {f(x_{n})-f(x_{n-1})}{x_{n}-x_{n-1}}},} and proceed similarly to Newton's method: {\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}} where n is the iteration index. === Solving a system of nonlinear equations === Consider a system of k nonlinear equations in k unknowns f(x) = 0, where f is a vector-valued function of the vector x: x = (x1, x2, x3, …, xk), f(x) = (f1(x1, x2, …, xk), f2(x1, x2, …, xk), …, fk(x1, x2, …, xk)). For such problems, Broyden gives a variation of the one-dimensional Newton's method, replacing the derivative with an approximate Jacobian J. The approximate Jacobian matrix is determined iteratively based on the secant equation, a finite-difference approximation: Jn(xn − xn−1) ≃ f(xn) − f(xn−1), where n is the iteration index. For clarity, define fn = f(xn), Δxn = xn − xn−1, Δfn = fn − fn−1, so the above may be rewritten as Jn Δxn ≃ Δfn. The above equation is underdetermined when k is greater than one. Broyden suggested using the most recent estimate of the Jacobian matrix, Jn−1, and then improving upon it by requiring that the new form is a solution to the most recent secant equation, and that there is minimal modification to Jn−1: {\displaystyle \mathbf {J} _{n}=\mathbf {J} _{n-1}+{\frac {\Delta \mathbf {f} _{n}-\mathbf {J} _{n-1}\Delta \mathbf {x} _{n}}{\|\Delta \mathbf {x} _{n}\|^{2}}}\Delta \mathbf {x} _{n}^{\mathrm {T} }.} This minimizes the Frobenius norm ‖Jn − Jn−1‖F.
One then updates the variables using the approximate Jacobian, in what is called a quasi-Newton approach: {\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\alpha \mathbf {J} _{n}^{-1}\mathbf {f} (\mathbf {x} _{n}).} If α = 1 this is the full Newton step; commonly a line search or trust region method is used to control α. The initial Jacobian can be taken as the identity matrix (or a diagonal matrix), although it is more common to scale it based upon the first step. Broyden also suggested using the Sherman–Morrison formula to directly update the inverse of the approximate Jacobian matrix: {\displaystyle \mathbf {J} _{n}^{-1}=\mathbf {J} _{n-1}^{-1}+{\frac {\Delta \mathbf {x} _{n}-\mathbf {J} _{n-1}^{-1}\Delta \mathbf {f} _{n}}{\Delta \mathbf {x} _{n}^{\mathrm {T} }\mathbf {J} _{n-1}^{-1}\Delta \mathbf {f} _{n}}}\Delta \mathbf {x} _{n}^{\mathrm {T} }\mathbf {J} _{n-1}^{-1}.} This first method is commonly known as the "good Broyden's method". A similar technique can be derived by using a slightly different modification to Jn−1. This yields a second method, the so-called "bad Broyden's method": {\displaystyle \mathbf {J} _{n}^{-1}=\mathbf {J} _{n-1}^{-1}+{\frac {\Delta \mathbf {x} _{n}-\mathbf {J} _{n-1}^{-1}\Delta \mathbf {f} _{n}}{\|\Delta \mathbf {f} _{n}\|^{2}}}\Delta \mathbf {f} _{n}^{\mathrm {T} }.} This minimizes a different Frobenius norm ‖Jn−1 − Jn−1−1‖F. In his original paper Broyden could not get the bad method to work, but there are cases where it does, for which several explanations have been proposed. Many other quasi-Newton schemes have been suggested in optimization, such as the BFGS, where one seeks a maximum or minimum by finding zeros of the first derivatives (zeros of the gradient in multiple dimensions). The Jacobian of the gradient is called the Hessian and is symmetric, adding further constraints to its approximation.
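A minimal sketch of the "good" method is given below (Python). It computes a finite-difference Jacobian once at the starting point, then applies only rank-one secant updates; full Newton steps (α = 1) are taken, so unlike a production implementation there is no line search or trust region to safeguard convergence.

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """One-time finite-difference Jacobian, used only at the first iterate."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

def broyden_good(f, x0, tol=1e-10, max_iter=100):
    """Solve f(x) = 0 by Broyden's 'good' method with full Newton steps."""
    x = np.asarray(x0, dtype=float)
    J = fd_jacobian(f, x)                  # Jacobian computed once, then updated
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)       # quasi-Newton step
        x = x + dx
        fx_new = f(x)
        if np.linalg.norm(fx_new) < tol:
            return x
        # Rank-one secant update: J += (df - J dx) dx^T / ||dx||^2
        J += np.outer(fx_new - fx - J @ dx, dx) / (dx @ dx)
        fx = fx_new
    return x

# Example: intersection of a circle and a parabola, root near (1, 1).
f = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0]**2 - v[1]])
print(broyden_good(f, np.array([1.5, 0.5])))
```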
== The Broyden Class of Methods == In addition to the two methods described above, Broyden defined a wider class of related methods. In general, methods in the Broyden class are given in the form {\displaystyle \mathbf {J} _{k+1}=\mathbf {J} _{k}-{\frac {\mathbf {J} _{k}s_{k}s_{k}^{T}\mathbf {J} _{k}}{s_{k}^{T}\mathbf {J} _{k}s_{k}}}+{\frac {y_{k}y_{k}^{T}}{y_{k}^{T}s_{k}}}+\phi _{k}\left(s_{k}^{T}\mathbf {J} _{k}s_{k}\right)v_{k}v_{k}^{T},} where {\displaystyle y_{k}:=\mathbf {f} (\mathbf {x} _{k+1})-\mathbf {f} (\mathbf {x} _{k}),} {\displaystyle s_{k}:=\mathbf {x} _{k+1}-\mathbf {x} _{k},} and {\displaystyle v_{k}=\left[{\frac {y_{k}}{y_{k}^{T}s_{k}}}-{\frac {\mathbf {J} _{k}s_{k}}{s_{k}^{T}\mathbf {J} _{k}s_{k}}}\right],} with φk ∈ ℝ for each k = 1, 2, .... The choice of φk determines the method. Other methods in the Broyden class have been introduced by other authors. The Davidon–Fletcher–Powell (DFP) method is the only member of this class published before the two methods defined by Broyden; for the DFP method, φk = 1. Anderson's iterative method, which uses a least squares approach to the Jacobian. Schubert's or sparse Broyden algorithm – a modification for sparse Jacobian matrices. The Pulay approach, often used in density functional theory. A limited-memory method by Srivastava for the root finding problem which only uses a few recent iterations. Klement (2014) – uses fewer iterations to solve some systems. Multisecant methods for density functional theory problems == See also == Secant method Newton's method Quasi-Newton method Newton's method in optimization Davidon–Fletcher–Powell formula Broyden–Fletcher–Goldfarb–Shanno (BFGS) method == References == == Further reading == Dennis, J. E.; Schnabel, Robert B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Englewood Cliffs: Prentice Hall. pp. 168–193. ISBN 0-13-627216-9. Fletcher, R. (1987). Practical Methods of Optimization (Second ed.). New York: John Wiley & Sons. pp. 44–79. ISBN 0-471-91547-5. Kelley, C. T. (1995). Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611970944. ISBN 978-0-89871-352-7. == External links == Simple basic explanation: The story of the blind archer
Wikipedia/Broyden's_method
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data (and similar data from a larger population). A statistical model represents, often in considerably idealized form, the data-generating process. When referring specifically to probabilities, the corresponding term is probabilistic model. All statistical hypothesis tests and all statistical estimators are derived via statistical models. More generally, statistical models are part of the foundation of statistical inference. A statistical model is usually specified as a mathematical relationship between one or more random variables and other non-random variables. As such, a statistical model is "a formal representation of a theory" (Herman Adèr quoting Kenneth Bollen). == Introduction == Informally, a statistical model can be thought of as a statistical assumption (or set of statistical assumptions) with a certain property: that the assumption allows us to calculate the probability of any event. As an example, consider a pair of ordinary six-sided dice. We will study two different statistical assumptions about the dice. The first statistical assumption is this: for each of the dice, the probability of each face (1, 2, 3, 4, 5, and 6) coming up is 1/6. From that assumption, we can calculate the probability of both dice coming up 5: 1/6 × 1/6 = 1/36. More generally, we can calculate the probability of any event: e.g. (1 and 2) or (3 and 3) or (5 and 6). The alternative statistical assumption is this: for each of the dice, the probability of the face 5 coming up is 1/8 (because the dice are weighted). From that assumption, we can calculate the probability of both dice coming up 5: 1/8 × 1/8 = 1/64. We cannot, however, calculate the probability of any other nontrivial event, as the probabilities of the other faces are unknown. The first statistical assumption constitutes a statistical model: because with the assumption alone, we can calculate the probability of any event. The alternative statistical assumption does not constitute a statistical model: because with the assumption alone, we cannot calculate the probability of every event. In the example above, with the first assumption, calculating the probability of an event is easy. With some other examples, though, the calculation can be difficult, or even impractical (e.g. it might require millions of years of computation). For an assumption to constitute a statistical model, such difficulty is acceptable: doing the calculation does not need to be practicable, just theoretically possible. == Formal definition == In mathematical terms, a statistical model is a pair (S, P), where S is the set of possible observations, i.e. the sample space, and P is a set of probability distributions on S. The set P represents all of the models that are considered possible. This set is typically parameterized: P = {Fθ : θ ∈ Θ}. The set Θ defines the parameters of the model. If a parameterization is such that distinct parameter values give rise to distinct distributions, i.e. Fθ1 = Fθ2 ⇒ θ1 = θ2 (in other words, the mapping is injective), it is said to be identifiable.
In some cases, the model can be more complex. In Bayesian statistics, the model is extended by adding a probability distribution over the parameter space Θ. A statistical model can sometimes distinguish two sets of probability distributions. The first set Q = {Fθ : θ ∈ Θ} is the set of models considered for inference. The second set P = {Fλ : λ ∈ Λ} is the set of models that could have generated the data, which is much larger than Q. Such statistical models are key in checking that a given procedure is robust, i.e. that it does not produce catastrophic errors when its assumptions about the data are incorrect. == An example == Suppose that we have a population of children, with the ages of the children distributed uniformly, in the population. The height of a child will be stochastically related to the age: e.g. when we know that a child is of age 7, this influences the chance of the child being 1.5 meters tall. We could formalize that relationship in a linear regression model, like this: heighti = b0 + b1agei + εi, where b0 is the intercept, b1 is a parameter that age is multiplied by to obtain a prediction of height, εi is the error term, and i identifies the child. This implies that height is predicted by age, with some error. An admissible model must be consistent with all the data points. Thus, a straight line (heighti = b0 + b1agei) cannot be admissible for a model of the data—unless it exactly fits all the data points, i.e. all the data points lie perfectly on the line. The error term, εi, must be included in the equation, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εi distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution. We can formally specify the model in the form (S, P) as follows. The sample space, S, of our model comprises the set of all possible pairs (age, height). Each possible value of θ = (b0, b1, σ2) determines a distribution on S; denote that distribution by Fθ. If Θ is the set of all possible values of θ, then P = {Fθ : θ ∈ Θ}. (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying S and (2) making some assumptions relevant to P. There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify P—as they are required to do.
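A short sketch (Python, with synthetic ages and heights; the numbers are invented) shows the three parameters being estimated: the two regression coefficients by least squares, which is the maximum likelihood solution under the i.i.d. Gaussian assumption, and the error variance from the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(2.0, 12.0, 200)                               # ages roughly uniform
height = 0.75 + 0.06 * age + rng.normal(scale=0.05, size=200)   # synthetic truth

# Fit the two regression parameters by least squares (the Gaussian MLE) ...
X = np.column_stack([np.ones_like(age), age])
b0, b1 = np.linalg.lstsq(X, height, rcond=None)[0]

# ... and the third parameter, the error variance, from the residuals.
resid = height - (b0 + b1 * age)
sigma2 = resid @ resid / resid.size                             # MLE of the variance

print(b0, b1, sigma2)   # close to 0.75, 0.06, 0.0025
```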
== General remarks == A statistical model is a special class of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables are stochastic. In the above example with children's heights, ε is a stochastic variable; without that stochastic variable, the model would be deterministic. Statistical models are often used even when the data-generating process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process). Choosing an appropriate statistical model to represent a given data-generating process is sometimes extremely difficult, and may require knowledge of both the process and relevant statistical analyses. Relatedly, the statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". There are three purposes for a statistical model, according to Konishi & Kitagawa: Predictions Extraction of information Description of stochastic structures Those three purposes are essentially the same as the three purposes indicated by Friendly & Meyer: prediction, estimation, description. == Dimension of a model == Suppose that we have a statistical model (S, P) with P = {Fθ : θ ∈ Θ}. In notation, we write Θ ⊆ ℝk where k is a positive integer (ℝ denotes the real numbers; other sets can be used, in principle). Here, k is called the dimension of the model. The model is said to be parametric if Θ has finite dimension. As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that {\displaystyle {\mathcal {P}}=\left\{F_{\mu ,\sigma }(x)\equiv {\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right):\mu \in \mathbb {R} ,\sigma >0\right\}} . In this example, the dimension, k, equals 2. As another example, suppose that the data consists of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean): this leads to the same statistical model as was used in the example with children's heights. The dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note the set of all possible lines has dimension 2, even though geometrically, a line has dimension 1.) Although formally θ ∈ Θ is a single parameter that has dimension k, it is sometimes regarded as comprising k separate parameters. For example, with the univariate Gaussian distribution, θ is formally a single parameter with dimension 2, but it is often regarded as comprising 2 separate parameters—the mean and the standard deviation. A statistical model is nonparametric if the parameter set Θ is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if k is the dimension of Θ and n is the number of samples, both semiparametric and nonparametric models have k → ∞ as n → ∞.
If k/n → 0 as n → ∞, then the model is semiparametric; otherwise, the model is nonparametric. Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies". == Nested models == Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. As an example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. As a second example, the quadratic model y = b0 + b1x + b2x2 + ε, ε ~ 𝒩(0, σ2) has, nested within it, the linear model y = b0 + b1x + ε, ε ~ 𝒩(0, σ2)—we constrain the parameter b2 to equal 0. In both those examples, the first model has a higher dimension than the second model (for the first example, the zero-mean model has dimension 1). Such is often, but not always, the case. As an example where they have the same dimension, the set of positive-mean Gaussian distributions is nested within the set of all Gaussian distributions; they both have dimension 2. == Comparing models == Comparing statistical models is fundamental for much of statistical inference. Konishi & Kitagawa (2008, p. 75) state: "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models." Common criteria for comparing models include the following: R2, Bayes factor, Akaike information criterion, and the likelihood-ratio test together with its generalization, the relative likelihood.
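For instance, a nested comparison of the linear and quadratic models above can be run with the Akaike information criterion. The sketch below (Python, synthetic data generated from a straight line) uses the standard Gaussian-error form of AIC up to an additive constant; the helper name and data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 60)
y = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)   # data truly linear

def gaussian_aic(y, y_hat, k):
    """AIC for i.i.d. Gaussian errors, up to a constant; k counts the
    regression coefficients plus the variance parameter."""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

for deg, name in [(1, "linear"), (2, "quadratic")]:
    coeffs = np.polyfit(x, y, deg)
    print(name, "AIC:", gaussian_aic(y, np.polyval(coeffs, x), deg + 2))
```

Because the data were generated from a line, the quadratic model usually gains too little in fit to offset its extra parameter, and the linear model attains the lower AIC.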
Another way of comparing two statistical models is through the notion of deficiency introduced by Lucien Le Cam. == See also == == Notes == == References == == Further reading == Davison, A. C. (2008), Statistical Models, Cambridge University Press Drton, M.; Sullivant, S. (2007), "Algebraic statistical models" (PDF), Statistica Sinica, 17: 1273–1297 Freedman, D. A. (2009), Statistical Models, Cambridge University Press Helland, I. S. (2010), Steps Towards a Unified Basis for Scientific Models and Methods, World Scientific Kroese, D. P.; Chan, J. C. C. (2014), Statistical Modeling and Computation, Springer Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, S2CID 15900983
Wikipedia/Statistical_model
An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in X-ray computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field. It is called an inverse problem because it starts with the effects and then calculates the causes. It is the inverse of a forward problem, which starts with the causes and then calculates the effects. Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They can be found in system identification, optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, slope stability analysis and many other fields. == History == Starting with the effects to discover the causes has concerned physicists for centuries. A historical example is the calculations of Adams and Le Verrier which led to the discovery of Neptune from the perturbed trajectory of Uranus. However, a formal study of inverse problems was not initiated until the 20th century. One of the earliest examples of a solution to an inverse problem was discovered by Hermann Weyl and published in 1911, describing the asymptotic behavior of eigenvalues of the Laplace–Beltrami operator. Today known as Weyl's law, it is perhaps most easily understood as an answer to the question of whether it is possible to hear the shape of a drum. Weyl conjectured that the eigenfrequencies of a drum would be related to the area and perimeter of the drum by a particular equation, a result improved upon by later mathematicians. The field of inverse problems was later touched on by the Soviet-Armenian physicist Viktor Ambartsumian. While still a student, Ambartsumian thoroughly studied the theory of atomic structure, the formation of energy levels, and the Schrödinger equation and its properties, and when he mastered the theory of eigenvalues of differential equations, he pointed out the apparent analogy between discrete energy levels and the eigenvalues of differential equations. He then asked: given a family of eigenvalues, is it possible to find the form of the equations whose eigenvalues they are? Essentially Ambartsumian was examining the inverse Sturm–Liouville problem, which dealt with determining the equations of a vibrating string. This paper was published in 1929 in the German physics journal Zeitschrift für Physik and remained in obscurity for a rather long time. Describing this situation after many decades, Ambartsumian said, "If an astronomer publishes an article with a mathematical content in a physics journal, then the most likely thing that will happen to it is oblivion." Nonetheless, toward the end of the Second World War, this article, written by the 20-year-old Ambartsumian, was found by Swedish mathematicians and formed the starting point for a whole area of research on inverse problems, becoming the foundation of an entire discipline. Important efforts were then devoted to a "direct solution" of the inverse scattering problem, especially by Gelfand and Levitan in the Soviet Union. They proposed an analytic constructive method for determining the solution.
When computers became available, some authors investigated the possibility of applying their approach to similar problems such as the inverse problem in the 1D wave equation. But it rapidly turned out that the inversion is an unstable process: noise and errors can be tremendously amplified, making a direct solution hardly practicable. Then, around the seventies, the least-squares and probabilistic approaches came in and turned out to be very helpful for the determination of parameters involved in various physical systems. This approach met with a lot of success. Nowadays inverse problems are also investigated in fields outside physics, such as chemistry, economics, and computer science. Eventually, as numerical models become prevalent in many parts of society, we may expect an inverse problem associated with each of these numerical models. == Conceptual understanding == Since Newton, scientists have extensively attempted to model the world. In particular, when a mathematical model is available (for instance, Newton's gravitational law or Coulomb's equation for electrostatics), we can foresee, given some parameters that describe a physical system (such as a distribution of mass or a distribution of electric charges), the behavior of the system. This approach is known as mathematical modeling and the above-mentioned physical parameters are called the model parameters or simply the model. To be precise, we introduce the notion of state of the physical system: it is the solution of the mathematical model's equation. In optimal control theory, these equations are referred to as the state equations. In many situations we are not truly interested in knowing the physical state but just its effects on some objects (for instance, the effects the gravitational field has on a specific planet). Hence we have to introduce another operator, called the observation operator, which converts the state of the physical system (here the predicted gravitational field) into what we want to observe (here the movements of the considered planet). We can now introduce the so-called forward problem, which consists of two steps: determination of the state of the system from the physical parameters that describe it; application of the observation operator to the estimated state of the system so as to predict the behavior of what we want to observe. This leads us to introduce another operator F (F stands for "forward") which maps model parameters p into F(p), the data that model p predicts as the result of this two-step procedure. Operator F is called the forward operator or forward map. In this approach we basically attempt to predict the effects knowing the causes. The table below shows, for different physical phenomena with the Earth considered as the physical system, the model parameters that describe the system, the physical quantity that describes the state of the physical system, and the observations commonly made on the state of the system. In the inverse problem approach we, roughly speaking, try to know the causes given the effects. == General statement of the inverse problem == The inverse problem is the "inverse" of the forward problem: instead of determining the data produced by particular model parameters, we want to determine the model parameters that produce the data {\displaystyle d_{\text{obs}}}, that is, the observation we have recorded (the subscript obs stands for observed).
Our goal, in other words, is to determine the model parameters p such that (at least approximately) {\displaystyle d_{\text{obs}}=F(p)} where F is the forward map. We denote by M the (possibly infinite) number of model parameters, and by N the number of recorded data. We introduce some useful concepts and the associated notations that will be used below: The space of models, denoted by P: the vector space spanned by model parameters; it has M dimensions; The space of data, denoted by D: {\displaystyle D=\mathbb {R} ^{N}} if we organize the measured samples in a vector with N components (if our measurements consist of functions, D is a vector space with infinite dimensions); F(p): the response of model p; it consists of the data predicted by model p; F(P): the image of P by the forward map, it is a subset of D (but not a subspace unless F is linear) made of responses of all models; {\displaystyle d_{\text{obs}}-F(p)}: the data misfits (or residuals) associated with model p: they can be arranged as a vector, an element of D. The concept of residuals is very important: in the scope of finding a model that matches the data, their analysis reveals if the considered model can be considered as realistic or not. Systematic unrealistic discrepancies between the data and the model responses also reveal that the forward map is inadequate and may give insights about an improved forward map. When operator F is linear, the inverse problem is linear. Otherwise, which is most often the case, the inverse problem is nonlinear. Also, models cannot always be described by a finite number of parameters. It is the case when we look for distributed parameters (a distribution of wave-speeds for instance): in such cases the goal of the inverse problem is to retrieve one or several functions. Such inverse problems are inverse problems with infinite dimension. == Linear inverse problems == In the case of a linear forward map and when we deal with a finite number of model parameters, the forward map can be written as a linear system {\displaystyle d=Fp} where F is the matrix that characterizes the forward map. The linear system can be solved by means of both regularization and Bayesian methods. === An elementary example: Earth's gravitational field === Only a few physical systems are actually linear with respect to the model parameters. One such system from geophysics is that of the Earth's gravitational field. The Earth's gravitational field is determined by the density distribution of the Earth in the subsurface. Because the lithology of the Earth changes quite significantly, we are able to observe minute differences in the Earth's gravitational field on the surface of the Earth.
From our understanding of gravity (Newton's Law of Gravitation), we know that the mathematical expression for gravity is: {\displaystyle d={\frac {Gp}{r^{2}}},} where d is a measure of the local gravitational acceleration, G is the universal gravitational constant, p is the local mass (which is related to density) of the rock in the subsurface and r is the distance from the mass to the observation point. By discretizing the above expression, we are able to relate the discrete data observations on the surface of the Earth to the discrete model parameters (density) in the subsurface that we wish to know more about. For example, consider the case where we have measurements carried out at 5 locations on the surface of the Earth. In this case, our data vector d is a column vector of dimension (5×1): its i-th component is associated with the i-th observation location. We also know that we only have five unknown masses pj in the subsurface (unrealistic but used to demonstrate the concept) with known location: we denote by rij the distance between the i-th observation location and the j-th mass. Thus, we can construct the linear system relating the five unknown masses to the five data points as follows: {\displaystyle d=Fp,} {\displaystyle d={\begin{bmatrix}d_{1}\\d_{2}\\d_{3}\\d_{4}\\d_{5}\end{bmatrix}},\quad p={\begin{bmatrix}p_{1}\\p_{2}\\p_{3}\\p_{4}\\p_{5}\end{bmatrix}},} {\displaystyle F={\begin{bmatrix}{\frac {G}{r_{11}^{2}}}&{\frac {G}{r_{12}^{2}}}&{\frac {G}{r_{13}^{2}}}&{\frac {G}{r_{14}^{2}}}&{\frac {G}{r_{15}^{2}}}\\{\frac {G}{r_{21}^{2}}}&{\frac {G}{r_{22}^{2}}}&{\frac {G}{r_{23}^{2}}}&{\frac {G}{r_{24}^{2}}}&{\frac {G}{r_{25}^{2}}}\\{\frac {G}{r_{31}^{2}}}&{\frac {G}{r_{32}^{2}}}&{\frac {G}{r_{33}^{2}}}&{\frac {G}{r_{34}^{2}}}&{\frac {G}{r_{35}^{2}}}\\{\frac {G}{r_{41}^{2}}}&{\frac {G}{r_{42}^{2}}}&{\frac {G}{r_{43}^{2}}}&{\frac {G}{r_{44}^{2}}}&{\frac {G}{r_{45}^{2}}}\\{\frac {G}{r_{51}^{2}}}&{\frac {G}{r_{52}^{2}}}&{\frac {G}{r_{53}^{2}}}&{\frac {G}{r_{54}^{2}}}&{\frac {G}{r_{55}^{2}}}\end{bmatrix}}} To solve for the model parameters that fit our data, we might be able to invert the matrix F to directly convert the measurements into our model parameters. For example: {\displaystyle p=F^{-1}d_{\text{obs}}}
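The five-mass example can be played through numerically. In the sketch below (Python) all positions and masses are invented for illustration; with noise-free data and a square, non-singular F, a direct inversion recovers the masses exactly, which is precisely the fragile situation the following paragraphs qualify.

```python
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

# Hypothetical geometry: 5 surface observation points, 5 buried point masses.
obs    = np.array([[0.0, 0.0], [25.0, 0.0], [50.0, 0.0], [75.0, 0.0], [100.0, 0.0]])
masses = np.array([[10.0, -30.0], [35.0, -50.0], [55.0, -20.0], [70.0, -60.0], [90.0, -40.0]])

# Forward map: F[i, j] = G / r_ij^2.
r2 = np.sum((obs[:, None, :] - masses[None, :, :]) ** 2, axis=-1)
F = G / r2

p_true = np.array([1e9, 5e8, 2e9, 8e8, 1.5e9])   # the unknown masses (kg)
d_obs = F @ p_true                                # noise-free synthetic data

# With 5 equations and 5 unknowns we can invert directly (fragile with noise).
print(np.linalg.solve(F, d_obs))                  # recovers p_true
```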
A system with five equations and five unknowns is a very specific situation: our example was designed to end up with this specificity. In general, the numbers of data and unknowns are different so that matrix F is not square. However, even a square matrix can have no inverse: matrix F can be rank deficient (i.e. has zero eigenvalues) and the solution of the system {\displaystyle p=F^{-1}d_{\text{obs}}} is not unique. Then the solution of the inverse problem will be undetermined. This is a first difficulty. Over-determined systems (more equations than unknowns) have other issues. Also, noise may corrupt our observations, making d possibly outside the space F(P) of possible responses to model parameters, so that a solution of the system {\displaystyle p=F^{-1}d_{\text{obs}}} may not exist. This is another difficulty. ==== Tools to overcome the first difficulty ==== The first difficulty reflects a crucial problem: our observations do not contain enough information and additional data are required. Additional data can come from physical prior information on the parameter values, on their spatial distribution or, more generally, on their mutual dependence. It can also come from other experiments: for instance, we may think of integrating data recorded by gravimeters and seismographs for a better estimation of densities. The integration of this additional information is basically a problem of statistics. This discipline is the one that can answer the question: how to mix quantities of different nature? We will be more precise in the section "Bayesian approach" below. Concerning distributed parameters, prior information about their spatial distribution often consists of information about some derivatives of these distributed parameters. Also, it is common practice, although somewhat artificial, to look for the "simplest" model that reasonably matches the data. This is usually achieved by penalizing the L1 norm of the gradient (or the total variation) of the parameters (this approach is also referred to as the maximization of the entropy). One can also make the model simple through a parametrization that introduces degrees of freedom only when necessary. Additional information may also be integrated through inequality constraints on the model parameters or some functions of them. Such constraints are important to avoid unrealistic values for the parameters (negative values for instance). In this case, the space spanned by model parameters will no longer be a vector space but a subset of admissible models denoted by Padm in the sequel. ==== Tools to overcome the second difficulty ==== As mentioned above, noise may be such that our measurements are not the image of any model, so that we cannot look for a model that produces the data but rather look for the best (or optimal) model: that is, the one that best matches the data. This leads us to minimize an objective function, namely a functional that quantifies how big the residuals are or how far the predicted data are from the observed data. Of course, when we have perfect data (i.e. no noise) then the recovered model should fit the observed data perfectly. A standard objective function, φ, is of the form: {\displaystyle \varphi (p)=\|Fp-d_{\text{obs}}\|^{2}} where ‖·‖ is the Euclidean norm (it will be the L2 norm when the measurements are functions instead of samples) of the residuals. This approach amounts to making use of ordinary least squares, an approach widely used in statistics. However, the Euclidean norm is known to be very sensitive to outliers: to avoid this difficulty we may think of using other distances, for instance the L1 norm, in replacement of the L2 norm.
==== Bayesian approach ==== Very similar to the least-squares approach is the probabilistic approach: if we know the statistics of the noise that contaminates the data, we can think of seeking the most likely model m, which is the model that matches the maximum likelihood criterion. If the noise is Gaussian, the maximum likelihood criterion appears as a least-squares criterion, the Euclidean scalar product in data space being replaced by a scalar product involving the covariance of the noise. Also, should prior information on model parameters be available, we could think of using Bayesian inference to formulate the solution of the inverse problem. This approach is described in detail in Tarantola's book. ==== Numerical solution of our elementary example ==== Here we make use of the Euclidean norm to quantify the data misfits. As we deal with a linear inverse problem, the objective function is quadratic. For its minimization, it is classical to compute its gradient using the same rationale as we would to minimize a function of only one variable. At the optimal model {\displaystyle p_{\text{opt}}}, this gradient vanishes, which can be written as: {\displaystyle \nabla _{p}\varphi =2(F^{\mathrm {T} }Fp_{\text{opt}}-F^{\mathrm {T} }d_{\text{obs}})=0} where FT denotes the matrix transpose of F. This equation simplifies to: {\displaystyle F^{\mathrm {T} }Fp_{\text{opt}}=F^{\mathrm {T} }d_{\text{obs}}} This expression is known as the normal equation and gives us a possible solution to the inverse problem. In our example the matrix {\displaystyle F^{\mathrm {T} }F} turns out to be generally full rank so that the equation above makes sense and determines the model parameters uniquely: we do not need to integrate additional information to end up with a unique solution.
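Numerically, the normal equation is one line of linear algebra. The sketch below (Python) uses a random stand-in forward matrix rather than the physical one, now over-determined with noisy data; solving the normal equations and calling a least-squares routine give the same estimate, the latter being numerically preferable when FTF is ill-conditioned.

```python
import numpy as np

# A toy over-determined problem: 8 noisy measurements of 5 unknown masses.
rng = np.random.default_rng(0)
F = rng.uniform(1e-14, 1e-13, size=(8, 5))        # stand-in forward matrix
p_true = np.array([1e9, 5e8, 2e9, 8e8, 1.5e9])
d_obs = F @ p_true + rng.normal(scale=1e-6, size=8)

# Normal equations: F^T F p_opt = F^T d_obs.
p_opt = np.linalg.solve(F.T @ F, F.T @ d_obs)

# Equivalent least-squares solution, better conditioned numerically.
p_lstsq = np.linalg.lstsq(F, d_obs, rcond=None)[0]
print(p_opt)
print(p_lstsq)
```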
=== Mathematical and computational aspects === Inverse problems are typically ill-posed, as opposed to the well-posed problems usually met in mathematical modeling. Of the three conditions for a well-posed problem suggested by Jacques Hadamard (existence, uniqueness, and stability of the solution or solutions) the condition of stability is most often violated. In the sense of functional analysis, the inverse problem is represented by a mapping between metric spaces. While inverse problems are often formulated in infinite dimensional spaces, limitations to a finite number of measurements, and the practical consideration of recovering only a finite number of unknown parameters, may lead to the problems being recast in discrete form. In this case the inverse problem will typically be ill-conditioned. In these cases, regularization may be used to introduce mild assumptions on the solution and prevent overfitting. Many instances of regularized inverse problems can be interpreted as special cases of Bayesian inference. ==== Numerical solution of the optimization problem ==== Some inverse problems have a very simple solution, for instance, when one has a set of unisolvent functions, meaning a set of n functions such that evaluating them at n distinct points yields a set of linearly independent vectors. This means that given a linear combination of these functions, the coefficients can be computed by arranging the vectors as the columns of a matrix and then inverting this matrix. The simplest example of unisolvent functions is polynomials constructed, using the unisolvence theorem, so as to be unisolvent. Concretely, this is done by inverting the Vandermonde matrix. But this is a very specific situation. In general, the solution of an inverse problem requires sophisticated optimization algorithms. When the model is described by a large number of parameters (the number of unknowns involved in some diffraction tomography applications can reach one billion), solving the linear system associated with the normal equations can be cumbersome. The numerical method to be used for solving the optimization problem depends in particular on the cost required for computing the solution Fp of the forward problem. Once the appropriate algorithm for solving the forward problem has been chosen (a straightforward matrix-vector multiplication may not be adequate when matrix F is huge), the appropriate algorithm for carrying out the minimization can be found in textbooks dealing with numerical methods for the solution of linear systems and for the minimization of quadratic functions (see for instance Ciarlet or Nocedal). Also, the user may wish to add physical constraints to the models: in this case, they have to be familiar with constrained optimization methods, a subject in itself. In all cases, computing the gradient of the objective function often is a key element for the solution of the optimization problem. As mentioned above, information about the spatial distribution of a distributed parameter can be introduced through the parametrization. One can also think of adapting this parametrization during the optimization. Should the objective function be based on a norm other than the Euclidean norm, we have to leave the area of quadratic optimization. As a result, the optimization problem becomes more difficult. In particular, when the L1 norm is used for quantifying the data misfit the objective function is no longer differentiable: its gradient does not make sense any longer. Dedicated methods (see for instance Lemaréchal) from non-differentiable optimization come in. Once the optimal model is computed we have to address the question: "Can we trust this model?" The question can be formulated as follows: how large is the set of models that match the data "nearly as well" as this model? In the case of quadratic objective functions, this set is contained in a hyper-ellipsoid, a subset of {\displaystyle R^{M}} (M is the number of unknowns), whose size depends on what we mean by "nearly as well", that is, on the noise level. The direction of the largest axis of this ellipsoid (eigenvector associated with the smallest eigenvalue of matrix {\displaystyle F^{T}F}) is the direction of poorly determined components: if we follow this direction, we can bring a strong perturbation to the model without changing significantly the value of the objective function and thus end up with a significantly different quasi-optimal model. We clearly see that the answer to the question "can we trust this model" is governed by the noise level and by the eigenvalues of the Hessian of the objective function or equivalently, in the case where no regularization has been integrated, by the singular values of matrix F. Of course, the use of regularization (or other kinds of prior information) reduces the size of the set of almost optimal solutions and, in turn, increases the confidence we can put in the computed solution.
Recent contributions from the field of algorithmic information theory have proposed a more general approach to such problems, including a noteworthy conceptual framework for extracting generative rules from complex dynamical systems. In particular, Zenil et al. (2019) introduced a framework called Algorithmic Information Dynamics (AID), which quantifies the algorithmic complexity of system components through controlled perturbation analysis. This method enables the reconstruction of generative rules in discrete dynamical systems, such as cellular automata, without relying on explicit governing equations. By analyzing the algorithmic responses of system states to localized changes, AID provides a novel lens for identifying causal relationships and estimating the reprogrammability of a system toward desired behaviors. This approach offers a useful alternative when classical inverse methods struggle with instability or intractability in highly discrete or nonlinear domains. ==== Stability, regularization and model discretization in infinite dimension ==== We focus here on the recovery of a distributed parameter. When looking for distributed parameters we have to discretize these unknown functions. Doing so, we reduce the dimension of the problem to something finite. But now the question is: is there any link between the solution we compute and that of the initial problem? Then another question: what do we mean by the solution of the initial problem? Since a finite number of data does not allow the determination of an infinity of unknowns, the original data misfit functional has to be regularized to ensure the uniqueness of the solution. Many times, reducing the unknowns to a finite-dimensional space will provide an adequate regularization: the computed solution will look like a discrete version of the solution we were looking for. For example, a naïve discretization will often work for solving the deconvolution problem: it will work as long as we do not allow missing frequencies to show up in the numerical solution. But many times, regularization has to be integrated explicitly in the objective function. In order to understand what may happen, we have to keep in mind that solving such a linear inverse problem amounts to solving a Fredholm integral equation of the first kind: d ( x ) = ∫ Ω K ( x , y ) p ( y ) d y {\displaystyle d(x)=\int _{\Omega }K(x,y)p(y)dy} where K {\displaystyle K} is the kernel, x {\displaystyle x} and y {\displaystyle y} are vectors of R 2 {\displaystyle R^{2}} , and Ω {\displaystyle \Omega } is a domain in R 2 {\displaystyle R^{2}} . This holds for a 2D application. For a 3D application, we consider x , y ∈ R 3 {\displaystyle x,y\in R^{3}} . Note that here the model parameter p {\displaystyle p} is a function and that the response of a model also consists of a function, denoted by d ( x ) {\displaystyle d(x)} . This equation is an extension to infinite dimension of the matrix equation d = F p {\displaystyle d=Fp} given in the case of discrete problems. For sufficiently smooth K {\displaystyle K} the operator defined above is compact on reasonable Banach spaces such as L 2 {\displaystyle L^{2}} . The theory of F. Riesz states that the set of singular values of such an operator contains zero (hence the existence of a null-space), is finite or at most countable, and, in the latter case, the singular values constitute a sequence that goes to zero.
In the case of a symmetric kernel, we have an infinity of eigenvalues and the associated eigenvectors constitute a Hilbert basis of L 2 {\displaystyle L^{2}} . Thus any solution of this equation is determined up to an additive function in the null-space and, in the case of infinitely many singular values, the solution (which involves the reciprocals of arbitrarily small eigenvalues) is unstable: two ingredients that make the solution of this integral equation a typical ill-posed problem! However, we can define a solution through the pseudo-inverse of the forward map (again up to an arbitrary additive function). When the forward map is compact, the classical Tikhonov regularization will work if we use it for integrating prior information stating that the L 2 {\displaystyle L^{2}} norm of the solution should be as small as possible: this will make the inverse problem well-posed. Yet, as in the finite dimension case, we have to question the confidence we can put in the computed solution. Again, basically, the information lies in the eigenvalues of the Hessian operator. Should subspaces containing eigenvectors associated with small eigenvalues be explored for computing the solution, then the solution can hardly be trusted: some of its components will be poorly determined. The smallest eigenvalue is equal to the weight introduced in Tikhonov regularization. Irregular kernels may yield a forward map which is not compact and even unbounded if we naively equip the space of models with the L 2 {\displaystyle L^{2}} norm. In such cases, the Hessian is not a bounded operator and the notion of eigenvalue no longer makes sense. A mathematical analysis is required to make it a bounded operator and design a well-posed problem: an illustration can be found in the literature. Again, we have to question the confidence we can put in the computed solution and we have to generalize the notion of eigenvalue to get the answer. Analysis of the spectrum of the Hessian operator is thus a key element in determining how reliable the computed solution is. However, such an analysis is usually a very heavy task. This has led several authors to investigate alternative approaches in the case where we are not interested in all the components of the unknown function but only in sub-unknowns that are the images of the unknown function by a linear operator. These approaches are referred to as the "Backus and Gilbert method", Lions's sentinels approach, and the SOLA method; these approaches turned out to be strongly related to one another, as explained in Chavent. Finally, the concept of limited resolution, often invoked by physicists, is nothing but a specific view of the fact that some poorly determined components may corrupt the solution. But, generally speaking, these poorly determined components of the model are not necessarily associated with high frequencies. === Some classical linear inverse problems for the recovery of distributed parameters === The problems mentioned below correspond to different versions of the Fredholm integral: each of these is associated with a specific kernel K {\displaystyle K} . ==== Deconvolution ==== The goal of deconvolution is to reconstruct the original image or signal p ( x ) {\displaystyle p(x)} , which appears noisy and blurred in the data d ( x ) {\displaystyle d(x)} . From a mathematical point of view, the kernel K ( x , y ) {\displaystyle K(x,y)} here only depends on the difference between x {\displaystyle x} and y {\displaystyle y} .
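The following is a minimal numerical sketch of this pathology and its cure, assuming a 1D deconvolution problem with a Gaussian blur kernel (the grid size, kernel width, noise level, and regularization weight are all illustrative choices, not taken from the text):

```python
import numpy as np

n = 100
x = np.linspace(0.0, 1.0, n)

# Discretized Fredholm kernel of the first kind: a Gaussian blur.
# K(x, y) depends only on x - y, the defining property of deconvolution.
width = 0.05
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * width ** 2)) / n

p_true = ((x > 0.3) & (x < 0.6)).astype(float)    # a boxcar "signal"
rng = np.random.default_rng(1)
d = K @ p_true + 1e-4 * rng.normal(size=n)        # blurred, slightly noisy data

# Singular values decay towards zero, as expected for a compact operator.
sv = np.linalg.svd(K, compute_uv=False)
print(sv[0], sv[-1])

# Naive inversion amplifies the noise carried by the tiny singular values.
p_naive = np.linalg.solve(K, d)

# Tikhonov regularization: minimize |Kp - d|^2 + alpha |p|^2,
# i.e. solve (K^T K + alpha I) p = K^T d.
alpha = 1e-6
p_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ d)

print(np.max(np.abs(p_naive)), np.max(np.abs(p_tik)))  # huge vs O(1)
```

The naive solve typically produces wildly oscillating values orders of magnitude larger than the true signal, while the Tikhonov solution stays bounded, illustrating how a small penalty on the L 2 norm restores stability.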
==== Tomographic methods ==== In these methods we attempt to recover a distributed parameter, the observations consisting of measurements of the integrals of this parameter carried out along a family of lines. We denote by Γ x {\displaystyle \Gamma _{x}} the line in this family associated with measurement point x {\displaystyle x} . The observation at x {\displaystyle x} can thus be written as: d ( x ) = ∫ Γ x w ( x , y ) p ( y ) d s {\displaystyle d(x)=\int _{\Gamma _{x}}w(x,y)p(y)\,ds} where s {\displaystyle s} is the arc-length along Γ x {\displaystyle {\Gamma _{x}}} and w ( x , y ) {\displaystyle w(x,y)} a known weighting function. Comparing this equation with the Fredholm integral above, we notice that the kernel K ( x , y ) {\displaystyle K(x,y)} is a kind of delta function concentrated on the line Γ x {\displaystyle {\Gamma _{x}}} . With such a kernel, the forward map is not compact. ===== Computed tomography ===== In X-ray computed tomography the lines on which the parameter is integrated are straight lines: the tomographic reconstruction of the parameter distribution is based on the inversion of the Radon transform. Although from a theoretical point of view many linear inverse problems are well understood, problems involving the Radon transform and its generalisations still present many theoretical challenges, with questions of data sufficiency still unresolved. Such problems include incomplete data for the x-ray transform in three dimensions and problems involving the generalisation of the x-ray transform to tensor fields. Solutions explored include the Algebraic Reconstruction Technique, filtered backprojection, and, as computing power has increased, iterative reconstruction methods such as iterative Sparse Asymptotic Minimum Variance. ===== Diffraction tomography ===== Diffraction tomography is a classical linear inverse problem in exploration seismology: the amplitude recorded at one time for a given source-receiver pair is the sum of contributions arising from points such that the sum of the distances, measured in traveltimes, from the source and the receiver, respectively, is equal to the corresponding recording time. In 3D the parameter is not integrated along lines but over surfaces. Should the propagation velocity be constant, such points are distributed on an ellipsoid. The inverse problem consists in retrieving the distribution of diffracting points from the seismograms recorded along the survey, the velocity distribution being known. A direct solution was originally proposed by Beylkin and Lambaré et al.; these works were the starting points of approaches known as amplitude-preserved migration (see Beylkin and Bleistein). Should geometrical optics techniques (i.e. rays) be used for solving the wave equation, these methods turn out to be closely related to the so-called least-squares migration methods derived from the least-squares approach (see Lailly, Tarantola). ===== Doppler tomography (astrophysics) ===== If we consider a rotating stellar object, the spectral lines we can observe on a spectral profile will be shifted due to the Doppler effect. Doppler tomography aims at converting the information contained in spectral monitoring of the object into a 2D image of the emission (as a function of the radial velocity and of the phase in the periodic rotation movement) of the stellar atmosphere.
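As a concrete, toy-scale illustration of such line-integral data, here is a sketch of a Kaczmarz-type sweep, the iteration underlying the algebraic reconstruction technique mentioned above; the grid, the choice of rays (just the rows and columns of the image), and all variable names are simplifying assumptions for illustration only:

```python
import numpy as np

# Toy tomography: a 10x10 pixel image p, "rays" are the rows and columns
# of the grid, and the data are the ray sums (discrete line integrals).
n = 10
rng = np.random.default_rng(2)
p_true = rng.random((n, n))

# Each row of W holds the weights w(x, y): 1 on the pixels the ray crosses.
rows = np.kron(np.eye(n), np.ones((1, n)))     # horizontal rays
cols = np.kron(np.ones((1, n)), np.eye(n))     # vertical rays
W = np.vstack([rows, cols])                    # (2n, n*n) forward map
d = W @ p_true.ravel()

# Kaczmarz / ART sweep: project the current estimate onto the hyperplane
# defined by each measurement in turn.
p = np.zeros(n * n)
for sweep in range(50):
    for i in range(W.shape[0]):
        w = W[i]
        p += (d[i] - w @ p) / (w @ w) * w

print(np.abs(W @ p - d).max())   # the data are fit almost exactly
```

Real scanners use many more ray directions; with only row and column sums the system is heavily underdetermined, so the iteration converges to one model among the many that fit the data, which is exactly the non-uniqueness discussed throughout this article.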
As explained by Tom Marsh, this linear inverse problem is tomography-like: we have to recover a distributed parameter which has been integrated along lines to produce its effects in the recordings. ==== Inverse heat conduction ==== Early publications on inverse heat conduction arose from the determination of surface heat flux during atmospheric re-entry using buried temperature sensors. Other applications where surface heat flux is needed but surface sensors are not practical include the insides of reciprocating engines and rocket engines, and the testing of nuclear reactor components. A variety of numerical techniques have been developed to address the ill-posedness and sensitivity to measurement error caused by damping and lagging in the temperature signal. == Non-linear inverse problems == Non-linear inverse problems constitute an inherently more difficult family of inverse problems. Here the forward map F {\displaystyle F} is a non-linear operator. Modeling of physical phenomena often relies on the solution of a partial differential equation (see table above except for gravity law): although these partial differential equations are often linear, the physical parameters that appear in these equations depend in a non-linear way on the state of the system and therefore on the observations we make of it. === Some classical non-linear inverse problems === ==== Inverse scattering problems ==== Whereas linear inverse problems were completely solved from the theoretical point of view at the end of the nineteenth century, only one class of nonlinear inverse problems had been solved before 1970, that of inverse spectral and (one space dimension) inverse scattering problems, after the seminal work of the Russian mathematical school (Krein, Gelfand, Levitan, Marchenko). A large review of the results has been given by Chadan and Sabatier in their book "Inverse Problems of Quantum Scattering Theory" (two editions in English, one in Russian). In this kind of problem, data are properties of the spectrum of a linear operator which describes the scattering. The spectrum is made of eigenvalues and eigenfunctions, forming together the "discrete spectrum", and generalizations, called the continuous spectrum. The very remarkable physical point is that scattering experiments give information only on the continuous spectrum, and that knowing its full spectrum is both necessary and sufficient for recovering the scattering operator. Hence we have invisible parameters, much more interesting than the null space, which has a similar property in linear inverse problems. In addition, there are physical motions in which the spectrum of such an operator is conserved as a consequence of the motion. This phenomenon is governed by special nonlinear partial differential evolution equations, for example the Korteweg–de Vries equation. If the spectrum of the operator is reduced to one single eigenvalue, its corresponding motion is that of a single bump that propagates at constant velocity and without deformation, a solitary wave called a "soliton". A perfect signal and its generalizations for the Korteweg–de Vries equation or other integrable nonlinear partial differential equations are of great interest, with many possible applications. This area has been studied as a branch of mathematical physics since the 1970s.
Nonlinear inverse problems are also currently studied in many fields of applied science (acoustics, mechanics, quantum mechanics, electromagnetic scattering - in particular radar soundings, seismic soundings, and nearly all imaging modalities). A final example related to the Riemann hypothesis was given by Wu and Sprung; the idea is that, in the semiclassical old quantum theory, the inverse of the potential inside the Hamiltonian is proportional to the half-derivative of the eigenvalue (energy) counting function n(x). ==== Permeability matching in oil and gas reservoirs ==== The goal is to recover the diffusion coefficient in the parabolic partial differential equation that models single phase fluid flows in porous media. This problem has been the object of many studies since pioneering work carried out in the early 1970s. Concerning two-phase flows, an important problem is to estimate the relative permeabilities and the capillary pressures. ==== Inverse problems in the wave equations ==== The goal is to recover the wave-speeds (P and S waves) and the density distributions from seismograms. Such inverse problems are of prime interest in seismology and exploration geophysics. We can basically consider two mathematical models: The acoustic wave equation (in which S waves are ignored when the space dimensions are 2 or 3) The elastodynamics equation in which the P and S wave velocities can be derived from the Lamé parameters and the density. These basic hyperbolic equations can be upgraded by incorporating attenuation, anisotropy, and so on. The solution of the inverse problem in the 1D wave equation has been the object of many studies. It is one of the very few non-linear inverse problems for which we can prove the uniqueness of the solution. The analysis of the stability of the solution was another challenge. Practical applications, using the least-squares approach, were developed. Extension to 2D or 3D problems and to the elastodynamics equations has been attempted since the 1980s but turned out to be very difficult! This problem, often referred to as full waveform inversion (FWI), is not yet completely solved: among the main difficulties are the presence of non-Gaussian noise in the seismograms, cycle-skipping issues (also known as phase ambiguity), and the chaotic behavior of the data misfit function. Some authors have investigated the possibility of reformulating the inverse problem so as to make the objective function less chaotic than the data misfit function. ==== Travel-time tomography ==== Realizing how difficult the inverse problem in the wave equation is, seismologists investigated a simplified approach making use of geometrical optics. In particular they aimed at inverting for the propagation velocity distribution, knowing the arrival times of wave-fronts observed on seismograms. These wave-fronts can be associated with direct arrivals or with reflections associated with reflectors whose geometry is to be determined, jointly with the velocity distribution. The arrival time distribution τ ( x ) {\displaystyle {\tau }(x)} ( x {\displaystyle x} is a point in physical space) of a wave-front issued from a point source satisfies the eikonal equation: ‖ ∇ τ ( x ) ‖ = s ( x ) , {\displaystyle \|\nabla \tau (x)\|=s(x),} where s ( x ) {\displaystyle s(x)} denotes the slowness (reciprocal of the velocity) distribution. The presence of ‖ ⋅ ‖ {\displaystyle \|\cdot \|} makes this equation nonlinear.
It is classically solved by shooting rays (trajectories about which the arrival time is stationary) from the point source. This problem is tomography-like: the measured arrival times are the integrals of the slowness along the ray-paths. But this tomography-like problem is nonlinear, mainly because the unknown ray-path geometry depends upon the velocity (or slowness) distribution. In spite of its nonlinear character, travel-time tomography turned out to be very effective for determining the propagation velocity in the Earth or in the subsurface, the latter aspect being a key element for seismic imaging, in particular using methods mentioned in Section "Diffraction tomography". === Mathematical aspects: Hadamard's questions === The questions concern well-posedness: Does the least-squares problem have a unique solution which depends continuously on the data (stability problem)? This is the first question, but it is also a difficult one because of the non-linearity of F {\displaystyle F} . In order to see where the difficulties arise from, Chavent proposed to conceptually split the minimization of the data misfit function into two consecutive steps ( P adm {\displaystyle P_{\text{adm}}} is the subset of admissible models): a projection step: given d obs {\displaystyle d_{\text{obs}}} , find the projection on F ( P adm ) {\displaystyle F(P_{\text{adm}})} (the nearest point on F ( P adm ) {\displaystyle F(P_{\text{adm}})} according to the distance involved in the definition of the objective function); a pre-image step: given this projection, find one pre-image, that is, a model whose image by operator F {\displaystyle F} is this projection. Difficulties can - and usually will - arise in both steps: operator F {\displaystyle F} is not likely to be one-to-one, so there can be more than one pre-image; even when F {\displaystyle F} is one-to-one, its inverse may not be continuous over F ( P ) {\displaystyle F(P)} ; the projection on F ( P adm ) {\displaystyle F(P_{\text{adm}})} may not exist, should this set not be closed; the projection on F ( P adm ) {\displaystyle F(P_{\text{adm}})} can be non-unique and discontinuous, as this set can be non-convex due to the non-linearity of F {\displaystyle F} . We refer to Chavent for a mathematical analysis of these points. === Computational aspects === ==== A non-convex data misfit function ==== The forward map being nonlinear, the data misfit function is likely to be non-convex, making local minimization techniques inefficient. Several approaches have been investigated to overcome this difficulty: use of global optimization techniques such as sampling of the posterior density function and the Metropolis algorithm in the probabilistic framework for inverse problems, genetic algorithms (alone or in combination with the Metropolis algorithm; see the references for an application to the determination of permeabilities that match the existing permeability data), neural networks, and regularization techniques including multi-scale analysis; reformulation of the least-squares objective function so as to make it smoother (see the references for the inverse problem in the wave equations). ==== Computation of the gradient of the objective function ==== Inverse problems, especially in infinite dimension, may be of large size, thus requiring substantial computing time. When the forward map is nonlinear, the computational difficulties increase and minimizing the objective function can be difficult.
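For the quadratic misfit of the linear case, the gradient has the closed form given earlier, 2 F T ( F p − d obs ) , and it is common practice to verify any gradient code against finite differences before handing it to an optimizer. A minimal sketch under purely illustrative assumptions (a random F and d obs, names invented here):

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.normal(size=(30, 6))
d_obs = rng.normal(size=30)

def phi(p):
    r = F @ p - d_obs
    return r @ r                          # data misfit |Fp - d_obs|^2

def grad_phi(p):
    return 2.0 * F.T @ (F @ p - d_obs)    # closed-form gradient

# Finite-difference check of the first gradient component.
p = rng.normal(size=6)
h = 1e-6
e0 = np.zeros(6)
e0[0] = 1.0
fd = (phi(p + h * e0) - phi(p - h * e0)) / (2 * h)
print(fd, grad_phi(p)[0])                 # should agree to high accuracy
```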
Contrary to the linear situation, an explicit use of the Hessian matrix for solving the normal equations does not make sense here: the Hessian matrix varies with models. Much more effective is the evaluation of the gradient of the objective function for some models. Substantial computational effort can be saved when we can avoid the very heavy computation of the Jacobian (often called the "Fréchet derivative"): the adjoint state method, proposed by Chavent and Lions, aims to avoid precisely this computation. It is now very widely used. == Applications == Inverse problem theory is used extensively in weather prediction, oceanography, hydrology, neuroscience, and petroleum engineering. Another application is inversion of elastic waves for non-destructive characterization of engineering structures. Inverse problems are also found in the field of heat transfer, where a surface heat flux is estimated from temperature data measured inside a rigid body; and in understanding the controls on plant-matter decay. The linear inverse problem is also fundamental to spectral estimation and direction-of-arrival (DOA) estimation in signal processing. Inverse lithography is used in photomask design for semiconductor device fabrication. == Academic journals == Four main academic journals cover inverse problems in general: Inverse Problems Journal of Inverse and Ill-posed Problems Inverse Problems in Science and Engineering Inverse Problems and Imaging Many journals on medical imaging, geophysics, non-destructive testing, etc. are dominated by inverse problems in those areas. == See also == Atmospheric inverse problem – measurement of vertical distribution of physical properties of the atmospheric column Backus–Gilbert method Computed tomography – Medical imaging procedure using X-rays to produce cross-sectional images Algebraic reconstruction technique – Technique in computed tomography Filtered backprojection – Integral transform Iterative reconstruction Data assimilation – Technique for combining information from a computer model with information from observations Engineering optimization – Techniques for optimization Grey box model – Mathematical data production model with limited structure Mathematical geophysics – Branch of applied mathematics Optimal estimation – in applied statistics, a regularized matrix inverse method based on Bayes' theorem Seismic inversion – Geophysical process Tikhonov regularization – Regularization technique for ill-posed problems Compressed sensing – Signal processing technique Problem of induction – Question of whether inductive reasoning leads to definitive knowledge == Notes == == References == Chadan, Khosrow & Sabatier, Pierre Célestin (1977). Inverse Problems in Quantum Scattering Theory. Springer-Verlag. ISBN 0-387-08092-9 Aster, Richard; Borchers, Brian; and Thurber, Clifford (2018). Parameter Estimation and Inverse Problems, Third Edition, Elsevier. ISBN 9780128134238 Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 19.4. Inverse Problems and the Use of A Priori Information". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press.
ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17. == Further reading == C. W. Groetsch (1999). Inverse Problems: Activities for Undergraduates. Cambridge University Press. ISBN 978-0-88385-716-8. Kirkeby, Adrian (2024). "Feynman's Inverse Problem". SIAM Review. 66 (4): 694–718. arXiv:2310.15589. doi:10.1137/23M1611488. ISSN 0036-1445. == External links == Inverse Problems International Association Archived 2017-06-15 at the Wayback Machine Eurasian Association on Inverse Problems Finnish Inverse Problems Society Inverse Problems Network Albert Tarantola's website, includes a free PDF version of his Inverse Problem Theory book, and some online articles on Inverse Problems Inverse Problems page at the University of Alabama Archived 2014-04-05 at the Wayback Machine Inverse Problems and Geostatistics Project Archived 2017-11-02 at the Wayback Machine, Niels Bohr Institute, University of Copenhagen Andy Ganse's Geophysical Inverse Theory Resources Page Finnish Centre of Excellence in Inverse Problems Research Archived 2009-02-20 at the Wayback Machine
Wikipedia/Model_inversion
A model is an informative representation of an object, person, or system. The term originally denoted the plans of a building in late 16th-century English, and derived via French and Italian ultimately from Latin modulus, 'a measure'. Models can be divided into physical models (e.g. a ship model or a fashion model) and abstract models (e.g. a set of mathematical equations describing the workings of the atmosphere for the purpose of weather forecasting). Abstract or conceptual models are central to philosophy of science. In scholarly research and applied science, a model should not be confused with a theory: while a model seeks only to represent reality with the purpose of better understanding or predicting the world, a theory is more ambitious in that it claims to be an explanation of reality. == Types of model == === Model in specific contexts === As a noun, model has specific meanings in certain fields, derived from its original meaning of "structural design or layout": Model (art), a person posing for an artist, e.g. a 15th-century criminal representing the biblical Judas in Leonardo da Vinci's painting The Last Supper Model (person), a person who serves as a template for others to copy, as in a role model, often in the context of advertising commercial products; e.g. the first fashion model, Marie Vernet Worth in 1853, wife of designer Charles Frederick Worth. Model (product), a particular design of a product as displayed in a catalogue or show room (e.g. Ford Model T, an early car model) Model (organism), a non-human species that is studied to understand biological phenomena in other organisms, e.g. a guinea pig starved of vitamin C to study scurvy, an experiment that would be immoral to conduct on a person Model (mimicry), a species that is mimicked by another species Model (logic), a structure (a set of items, such as natural numbers 1, 2, 3,..., along with mathematical operations such as addition and multiplication, and relations, such as < {\displaystyle <} ) that satisfies a given system of axioms (basic truisms), i.e. that satisfies the statements of a given theory Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software Model (MVC), the information-representing internal component of a piece of software, as distinct from its user interface === Physical model === A physical model (most commonly referred to simply as a model but in this context distinguished from a conceptual model) is a smaller or larger physical representation of an object, person or system. The object being modelled may be small (e.g., an atom) or large (e.g., the Solar System) or life-size (e.g., a fashion model displaying clothes for similarly-built potential customers). The geometry of the model and the object it represents are often similar in the sense that one is a rescaling of the other. However, in many cases the similarity is only approximate or even intentionally distorted. Sometimes the distortion is systematic, e.g., a fixed scale horizontally and a larger fixed scale vertically when modelling topography to enhance a region's mountains. An architectural model permits visualization of internal relationships within the structure or external relationships of the structure to the environment. Another use is as a toy. Instrumented physical models are an effective way of investigating fluid flows for engineering design. Physical models are often coupled with computational fluid dynamics models to optimize the design of equipment and processes.
This includes external flow such as around buildings, vehicles, people, or hydraulic structures. Wind tunnel and water tunnel testing is often used for these design efforts. Instrumented physical models can also examine internal flows, for the design of ductwork systems, pollution control equipment, food processing machines, and mixing vessels. Transparent flow models are used in this case to observe the detailed flow phenomenon. These models are scaled in terms of both geometry and important forces, for example, using Froude number or Reynolds number scaling (see Similitude). In the pre-computer era, the UK economy was modelled with the hydraulic model MONIAC, to predict for example the effect of tax rises on employment. === Conceptual model === A conceptual model is a theoretical representation of a system, e.g. a set of mathematical equations attempting to describe the workings of the atmosphere for the purpose of weather forecasting. It consists of concepts used to help understand or simulate a subject the model represents. Abstract or conceptual models are central to philosophy of science, as almost every scientific theory effectively embeds some kind of model of the physical or human sphere. In some sense, a physical model "is always the reification of some conceptual model; the conceptual model is conceived ahead as the blueprint of the physical one", which is then constructed as conceived. Thus, the term refers to models that are formed after a conceptualization or generalization process. === Examples === Conceptual model (computer science), an agreed representation of entities and their relationships, to assist in developing software Economic model, a theoretical construct representing economic processes Language model, a probabilistic model of a natural language, used for speech recognition, language generation, and information retrieval Large language models are artificial neural networks used for generative artificial intelligence (AI), e.g. ChatGPT Mathematical model, a description of a system using mathematical concepts and language Statistical model, a mathematical model that usually specifies the relationship between one or more random variables and other non-random variables Model (CGI), a mathematical representation of any surface of an object in three dimensions via specialized software Medical model, a proposed "set of procedures in which all doctors are trained" Mental model, in psychology, an internal representation of external reality Model (logic), a set along with a collection of finitary operations, and relations that are defined on it, satisfying a given collection of axioms Model (MVC), the information-representing component of a piece of software, distinct from the user interface (the "view"), both linked by the "controller" component, in the context of the model–view–controller software design Model act, a law drafted centrally to be disseminated and proposed for enactment in multiple independent legislatures Standard model (disambiguation) == Properties of models, according to general model theory == According to Herbert Stachowiak, a model is characterized by at least three properties: 1. Mapping: A model is always a model of something; it is an image or representation of some natural or artificial, existing or imagined original, where this original itself could be a model. 2. Reduction: In general, a model will not include all attributes that describe the original but only those that appear relevant to the model's creator or user.
3. Pragmatism: A model does not relate unambiguously to its original. It is intended to work as a replacement for the original a) for certain subjects (for whom?), b) within a certain time range (when?), and c) restricted to certain conceptual or physical actions (what for?). For example, a street map is a model of the actual streets in a city (mapping), showing the course of the streets while leaving out, say, traffic signs and road markings (reduction), made for pedestrians and vehicle drivers for the purpose of finding one's way in the city (pragmatism). Additional properties have been proposed, like extension and distortion as well as validity. The American philosopher Michael Weisberg differentiates between concrete and mathematical models and proposes computer simulations (computational models) as their own class of models. == Uses of models == According to Bruce Edmonds, there are at least five general uses for models: Prediction: reliably anticipating unknown data, including data within the domain of the training data (interpolation), and outside the domain (extrapolation) Explanation: establishing plausible chains of causality by proposing mechanisms that can explain patterns seen in data Theoretical exposition: discovering or proposing new hypotheses, or refuting existing hypotheses about the behaviour of the system being modelled Description: representing important aspects of the system being modelled Illustration: communicating an idea or explanation == See also == == References == == External links == Media related to Physical models at Wikimedia Commons
Wikipedia/Model
In economics, an input–output model is a quantitative economic model that represents the interdependencies between different sectors of a national economy or different regional economies. Wassily Leontief (1906–1999) is credited with developing this type of analysis and earned the Nobel Prize in Economics for his development of this model. == Origins == François Quesnay had developed a cruder version of this technique, called the Tableau économique, and Léon Walras's work Elements of Pure Economics on general equilibrium theory was also a forerunner of Leontief's seminal concept. Alexander Bogdanov has been credited with originating the concept in a report delivered to the All Russia Conference on the Scientific Organisation of Labour and Production Processes, in January 1921. This approach was also developed by Lev Kritzman. Thomas Remington has argued that their work provided a link between Quesnay's tableau économique and the subsequent contributions by Vladimir Groman and Vladimir Bazarov to Gosplan's method of material balance planning. Wassily Leontief's work on the input–output model was influenced by the works of the classical economists Karl Marx and Jean Charles Léonard de Sismondi. Karl Marx's economics provided an early outline involving a set of tables where the economy consisted of two interlinked departments. Leontief was the first to use a matrix representation of a national (or regional) economy. == Basic derivation == The model depicts inter-industry relationships within an economy, showing how output from one industrial sector may become an input to another industrial sector. In the inter-industry matrix, column entries typically represent inputs to an industrial sector, while row entries represent outputs from a given sector. This format, therefore, shows how dependent each sector is on every other sector, both as a customer of outputs from other sectors and as a supplier of inputs. Sectors may also depend internally on a portion of their own production as delineated by the entries of the matrix diagonal. Each column of the input–output matrix shows the monetary value of inputs to each sector and each row represents the value of each sector's outputs. Say that we have an economy with n {\displaystyle n} sectors. Each sector produces x i {\displaystyle x_{i}} units of a single homogeneous good. Assume that the j {\displaystyle j} th sector, in order to produce 1 unit, must use a i j {\displaystyle a_{ij}} units from sector i {\displaystyle i} . Furthermore, assume that each sector sells some of its output to other sectors (intermediate output) and some of its output to consumers (final output, or final demand). Call final demand in the i {\displaystyle i} th sector y i {\displaystyle y_{i}} . Then we might write x i = a i 1 x 1 + a i 2 x 2 + ⋯ + a i n x n + y i , {\displaystyle x_{i}=a_{i1}x_{1}+a_{i2}x_{2}+\cdots +a_{in}x_{n}+y_{i},} or total output equals intermediate output plus final output. If we let A {\displaystyle A} be the matrix of coefficients a i j {\displaystyle a_{ij}} , x {\displaystyle \mathbf {x} } be the vector of total output, and y {\displaystyle \mathbf {y} } be the vector of final demand, then our expression for the economy becomes x = A x + y {\displaystyle \mathbf {x} =A\mathbf {x} +\mathbf {y} } which after re-writing becomes ( I − A ) x = y {\displaystyle \left(I-A\right)\mathbf {x} =\mathbf {y} } . If the matrix I − A {\displaystyle I-A} is invertible then this is a linear system of equations with a unique solution, and so given some final demand vector the required output can be found.
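A minimal sketch of this computation in Python/NumPy, using the coefficient matrix and final demand vector of the worked example in the next section (the numbers come from that example; the code itself is only illustrative):

```python
import numpy as np

# Coefficient matrix A and final demand y from the worked example below.
A = np.array([[0.5, 0.2],
              [0.4, 0.1]])
y = np.array([7.0, 4.0])

# Total output needed to satisfy final demand: solve (I - A) x = y.
I = np.eye(2)
x = np.linalg.solve(I - A, y)
print(x)   # approximately [19.19, 12.97]
```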
Furthermore, if the principal minors of the matrix I − A {\displaystyle I-A} are all positive (known as the Hawkins–Simon condition), the required output vector x {\displaystyle \mathbf {x} } is non-negative. === Example === Consider an economy with two goods, A and B. The matrix of coefficients and the final demand is given by A = [ 0.5 0.2 0.4 0.1 ] and y = [ 7 4 ] . {\displaystyle A={\begin{bmatrix}0.5&0.2\\0.4&0.1\end{bmatrix}}{\text{ and }}\mathbf {y} ={\begin{bmatrix}7\\4\end{bmatrix}}.} Intuitively, this corresponds to finding the amount of output each sector should produce given that we want 7 units of good A and 4 units of good B. Then solving the system of linear equations derived above gives us x = ( I − A ) − 1 y = [ 19.19 12.97 ] . {\displaystyle \mathbf {x} =\left(I-A\right)^{-1}\mathbf {y} ={\begin{bmatrix}19.19\\12.97\end{bmatrix}}.} === Further research === There is extensive literature on these models. The model has been extended to work with non-linear relationships between sectors. There is the Hawkins–Simon condition on producibility. There has been research on disaggregation to clustered inter-industry flows, and on the study of constellations of industries. A great deal of empirical work has been done to identify coefficients, and data have been published for the national economy as well as for regions. The Leontief system can be extended to a model of general equilibrium; it offers a method of decomposing work done at a macro level. === Regional multipliers === While national input–output tables are commonly created by countries' statistics agencies, officially published regional input–output tables are rare. Therefore, economists often use location quotients to create regional multipliers starting from national data. This technique has been criticized because there are several location quotient regionalization techniques, and none are universally superior across all use-cases. === Introducing transportation === Transportation is implicit in the notion of inter-industry flows. It is explicitly recognized when transportation is identified as an industry – how much is purchased from transportation in order to produce. But this is not very satisfactory because transportation requirements differ, depending on industry locations and capacity constraints on regional production. Also, the receiver of goods generally pays freight cost, and often transportation data are lost because transportation costs are treated as part of the cost of the goods. Walter Isard and his student, Leon Moses, were quick to see the spatial economy and transportation implications of input–output, and began work in this area in the 1950s, developing a concept of interregional input–output. Take a one-region-versus-the-world case. We wish to know something about inter-regional commodity flows, so we introduce into the table a column headed "exports" and a row headed "imports". A more satisfactory way to proceed would be to tie regions together at the industry level. That is, we could identify both intra-region inter-industry transactions and inter-region inter-industry transactions. The problem here is that the table grows quickly. Input–output is conceptually simple. Its extension to a model of equilibrium in the national economy has been done successfully using high-quality data. One who wishes to work with input–output systems must deal with industry classification, data estimation, and inverting very large, often ill-conditioned matrices.
The quality of the data and matrices of the input–output model can be improved by modelling activities with digital twins and by solving the problem of optimizing management decisions. Moreover, changes in relative prices are not readily handled by this modelling approach alone. Input–output accounts are part and parcel of a more flexible form of modelling, computable general equilibrium models. Two additional difficulties are of interest in transportation work. There is the question of substituting one input for another, and there is the question about the stability of coefficients as production increases or decreases. These are intertwined questions. They have to do with the nature of regional production functions. === Technology Assumptions === To construct input–output tables from supply and use tables, four principal assumptions can be applied. The choice depends on whether product-by-product or industry-by-industry input–output tables are to be established. == Usefulness == Because the input–output model is fundamentally linear in nature, it lends itself to rapid computation as well as flexibility in computing the effects of changes in demand. Input–output models for different regions can also be linked together to investigate the effects of inter-regional trade, and additional columns can be added to the table to perform environmentally extended input–output analysis (EEIOA). For example, information on fossil fuel inputs to each sector can be used to investigate flows of embodied carbon within and between different economies. The structure of the input–output model has been incorporated into national accounting in many developed countries, and as such can be used to calculate important measures such as national GDP. Input–output economics has been used to study regional economies within a nation, and as a tool for national and regional economic planning. A main use of input–output analysis is to measure the economic impacts of events as well as public investments or programs as shown by IMPLAN and Regional Input–Output Modeling System. It is also used to identify economically related industry clusters and also so-called "key" or "target" industries (industries that are most likely to enhance the internal coherence of a specified economy). By linking industrial output to satellite accounts articulating energy use, effluent production, space needs, and so on, input–output analysts have extended the approach's application to a wide variety of uses. === Input–output and socialist planning === The input–output model is one of the major conceptual models for a socialist planned economy. This model involves the direct determination of physical quantities to be produced in each industry, which are used to formulate a consistent economic plan of resource allocation. This method of planning is contrasted with price-directed Lange-model socialism and Soviet-style material balance planning. In the economy of the Soviet Union, planning was conducted using the method of material balances up until the country's dissolution. The method of material balances was first developed in the 1930s during the Soviet Union's rapid industrialization drive. Input–output planning was never adopted because the material balance system had become entrenched in the Soviet economy, and input–output planning was shunned for ideological reasons. As a result, the benefits of consistent and detailed planning through input–output analysis were never realized in the Soviet-type economies.
== Criticism of input–output models == The Australia Institute critiques input–output (IO) models for their biases and limitations in assessing the economic impacts of projects and policies. It points to limitations and biases inherent in IO models, citing concerns raised by organizations such as the Australian Bureau of Statistics and the Productivity Commission that the models are "biased" and "abused". For instance, the institute's research points out that IO models often assume fixed prices and do not account for resource constraints, which can lead to unrealistic and inflated economic impact estimates. IO models can be misinterpreted and used to justify projects or policies that are not economically sound. The Australia Institute suggests that more robust and comprehensive economic analysis methods should be used to assess economic impacts, rather than relying solely on IO models. == Measuring input–output tables == The mathematics of input–output economics is straightforward, but the data requirements are enormous because the expenditures and revenues of each branch of economic activity have to be represented. As a result, not all countries collect the required data and data quality varies, even though a set of standards for the data's collection has been set out by the United Nations through its System of National Accounts (SNA): the most recent standard is the 2008 SNA. Because the data collection and preparation process for the input–output accounts is necessarily labor and computer intensive, input–output tables are often published long after the year in which the data were collected, typically as much as 5–7 years after. Moreover, the economic "snapshot" that the benchmark version of the tables provides of the economy's cross-section is typically taken only once every few years, at best. However, many developed countries estimate input–output accounts annually and with much greater recency. This is because while most uses of input–output analysis focus on the matrix set of inter-industry exchanges, the actual focus of the analysis from the perspective of most national statistical agencies is the benchmarking of gross domestic product. Input–output tables therefore are an instrumental part of national accounts. As suggested above, the core input–output table reports only intermediate goods and services that are exchanged among industries. But an array of row vectors, typically aligned at the bottom of this matrix, records non-industrial inputs by industry like payments for labor; indirect business taxes; dividends, interest, and rents; capital consumption allowances (depreciation); other property-type income (like profits); and purchases from foreign suppliers (imports). At the national level, excluding imports, this is called "gross product originating" or "gross domestic product by industry" when summed. Another array of column vectors is called "final demand" or "gross product consumed." This displays columns of spending by households, governments, changes in industry stocks, and industries on investment, as well as net exports. (See also Gross domestic product.) In any case, by employing the results of an economic census which asks for the sales, payrolls, and material/equipment/service input of each establishment, statistical agencies back into estimates of industry-level profits and investments using the input–output matrix as a sort of double-accounting framework.
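This double-accounting structure can be sketched numerically: in a consistent table, summing the final-demand column margin and summing the value-added row margin must give the same GDP figure. A minimal sketch with a purely hypothetical three-sector economy (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical 3-sector economy: technical coefficients A and total output x.
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.2, 0.1]])
x = np.array([100.0, 80.0, 120.0])

Z = A * x[None, :]                # inter-industry flows: Z[i, j] = a_ij * x_j
final_demand = x - Z.sum(axis=1)  # row identity: output = intermediate sales + final demand
value_added = x - Z.sum(axis=0)   # column identity: output = intermediate purchases + value added

# Both margins of the table yield the same GDP figure.
print(final_demand.sum(), value_added.sum())   # 142.0 and 142.0
```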
== Dynamic Extensions == === The Leontief IO model with capital formation endogenized === The IO model discussed above is static because it does not describe the evolution of the economy over time: it does not include different time periods. Dynamic Leontief models are obtained by endogenizing the formation of capital stock over time. Denote by y I {\displaystyle y^{I}} the vector of capital formation, with y i I {\displaystyle y_{i}^{I}} its i {\displaystyle i} th element, and by I i j ( t ) {\displaystyle I_{ij}(t)} the amount of capital good i {\displaystyle i} (for example, a blade) used in sector j {\displaystyle j} (for example, wind power generation) for investment at time t {\displaystyle t} . We then have y i I ( t ) = ∑ j I i j ( t ) {\displaystyle y_{i}^{I}(t)=\sum _{j}I_{ij}(t)} We assume that it takes one year for investment in plant and equipment to become productive capacity. Denoting by K i j ( t ) {\displaystyle K_{ij}(t)} the stock of i {\displaystyle i} at the beginning of time t {\displaystyle t} , and by δ ∈ ( 0 , 1 ] {\displaystyle \delta \in (0,1]} the rate of depreciation, we then have: K i j ( t + 1 ) = K i j ( t ) + I i j ( t ) − δ i j K i j ( t ) {\displaystyle K_{ij}(t+1)=K_{ij}(t)+I_{ij}(t)-\delta _{ij}K_{ij}(t)} (2) Here, δ i j K i j ( t ) {\displaystyle \delta _{ij}K_{ij}(t)} refers to the amount of capital stock that is used up in year t {\displaystyle t} . Denote by x ¯ j ( t ) {\displaystyle {\bar {x}}_{j}(t)} the productive capacity in t {\displaystyle t} , and assume the following proportionality between K i j ( t ) {\displaystyle K_{ij}(t)} and x ¯ j ( t ) {\displaystyle {\bar {x}}_{j}(t)} : K i j ( t ) = b i j x ¯ j ( t ) {\displaystyle K_{ij}(t)=b_{ij}{\bar {x}}_{j}(t)} (3) The matrix B = [ b i j ] {\displaystyle B=[b_{ij}]} is called the capital coefficient matrix. From (2) and (3), we obtain the following expression for y I {\displaystyle y^{I}} : y I ( t ) = B x ¯ ( t + 1 ) + ( δ − I ) B x ¯ ( t ) {\displaystyle y^{I}(t)=B{\bar {x}}(t+1)+(\delta -I)B{\bar {x}}(t)} Assuming that the productive capacity is always fully utilized, we obtain the following expression for (1) with endogenized capital formation: x ( t ) = A x ( t ) + B x ( t + 1 ) + ( δ − I ) B x ( t ) + y o ( t ) , {\displaystyle x(t)=Ax(t)+Bx(t+1)+(\delta -I)Bx(t)+y^{o}(t),} where y o {\displaystyle y^{o}} stands for the items of final demand other than y I {\displaystyle y^{I}} . Rearranged, we have B x ( t + 1 ) = ( I − A + ( I − δ ) B ) x ( t ) − y o ( t ) = ( I − A ¯ + B ) x ( t ) − y o ( t ) {\displaystyle {\begin{aligned}Bx(t+1)&=(I-A+(I-\delta )B)x(t)-y^{o}(t)\\&=(I-{\bar {A}}+B)x(t)-y^{o}(t)\end{aligned}}} where A ¯ = A + δ B {\displaystyle {\bar {A}}=A+\delta B} . If B {\displaystyle B} is non-singular, this model can be solved for x ( t + 1 ) {\displaystyle x(t+1)} for given x ( t ) {\displaystyle x(t)} and y o ( t ) {\displaystyle y^{o}(t)} : x ( t + 1 ) = [ I + B − 1 ( I − A ¯ ) ] x ( t ) − B − 1 y o ( t ) {\displaystyle x(t+1)=[I+B^{-1}(I-{\bar {A}})]x(t)-B^{-1}y^{o}(t)} This is the Leontief dynamic forward-looking model. A caveat to this model is that B {\displaystyle B} will, in general, be singular, and the above formulation cannot be obtained. This is because some products, such as energy items, are not used as capital goods, and the corresponding rows of the matrix B {\displaystyle B} will be zeros. This fact has prompted some researchers to consolidate the sectors until the non-singularity of B {\displaystyle B} is achieved, at the cost of sector resolution. Apart from this feature, many studies have found that the outcomes obtained for this forward-looking model invariably lead to unrealistic and widely fluctuating results that lack economic interpretation.
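A minimal simulation of this forward-looking recursion, with small invented matrices chosen only so that B is non-singular (none of the numbers are from the text), typically reproduces the fluctuating, divergent trajectories just mentioned:

```python
import numpy as np

# Hypothetical two-sector coefficients: flow matrix A, capital matrix B
# (chosen non-singular so that B can be inverted), depreciation rate delta.
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
B = np.array([[0.8, 0.1],
              [0.2, 0.5]])
delta = 0.05
A_bar = A + delta * B

# Iteration matrix of the recursion x(t+1) = [I + B^{-1}(I - A_bar)] x(t) - B^{-1} y_o(t).
M = np.eye(2) + np.linalg.solve(B, np.eye(2) - A_bar)
B_inv = np.linalg.inv(B)

x = np.array([10.0, 10.0])
y_other = np.array([2.0, 2.0])   # final demand other than investment

for t in range(10):
    x = M @ x - B_inv @ y_other
    print(t + 1, x)              # the trajectory quickly diverges
```

The spectral radius of the iteration matrix here exceeds one, so output explodes within a few periods, which is the kind of economically uninterpretable behavior reported in the literature.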
These unrealistic outcomes resulted in a gradual decline of interest in the model after the 1970s, although there has been a recent increase in interest within the context of disaster analysis. == Input–output analysis versus consistency analysis == Despite the clear ability of the input–output model to depict and analyze the dependence of one industry or sector on another, Leontief and others never managed to introduce the full spectrum of dependency relations in a market economy. In 2003, Mohammad Gani, a pupil of Leontief, introduced consistency analysis in his book Foundations of Economic Science, which formally looks exactly like the input–output table but explores the dependency relations in terms of payments and intermediation relations. Consistency analysis explores the consistency of plans of buyers and sellers by decomposing the input–output table into four matrices, each for a different kind of means of payment. It integrates micro and macroeconomics into one model and deals with money in a value-free manner. It deals with the flow of funds via the movement of goods. == Notes == == See also == == References == Abounoori, Esmaiel & Farhadi, Azizollah (2017). "Testing the technology assumptions in compiling Iran's symmetric input–output table: an econometric approach". Iranian Journal of Economic Research, 21(69), 117–145. == Bibliography == Dietzenbacher, Erik and Michael L. Lahr, eds. Wassily Leontief and Input–Output Economics. Cambridge University Press, 2004. Isard, Walter et al. Methods of Regional Analysis: An Introduction to Regional Science. MIT Press 1960. Isard, Walter and Thomas W. Langford. Regional Input–Output Study: Recollections, Reflections, and Diverse Notes on the Philadelphia Experience. The MIT Press. 1971. Lahr, Michael L. and Erik Dietzenbacher, eds. Input–Output Analysis: Frontiers and Extensions. Palgrave, 2001. Leontief, Wassily W. Input–Output Economics. 2nd ed., New York: Oxford University Press, 1986. Miller, Ronald E. and Peter D. Blair. Input–Output Analysis: Foundations and Extensions. Prentice Hall, 1985. Miller, Ronald E. and Peter D. Blair. Input–Output Analysis: Foundations and Extensions, 2nd edition. Cambridge University Press, 2009. Miller, Ronald E., Karen R. Polenske, and Adam Z. Rose, eds. Frontiers of Input–Output Analysis. N.Y.: Oxford UP, 1989.[HB142 F76 1989/ Suzz] Miernyk, William H. The Elements of Input–Output Analysis, 1965. Web Book – William H. Miernyk Archived 26 November 2019 at the Wayback Machine. Polenske, Karen. Advances in Input–Output Analysis. 1976. Pokrovskii, Vladimir N. Econodynamics. The Theory of Social Production, Springer, Dordrecht, Heidelberg et cetera, 2011. ten Raa, Thijs. The Economics of Input–Output Analysis. Cambridge University Press, 2005. US Department of Commerce, Bureau of Economic Analysis. Regional multipliers: A user handbook for regional input–output modeling system (RIMS II). Third edition. Washington, D.C.: U.S. Government Printing Office. 1997. Eurostat. Eurostat manual of supply, use and input-output tables. Office for Official Publications of the European Communities, 2008. == External links == International Input–Output Association Input–Output Accounts Data, Bureau of Economic Analysis Input–Output Analysis and Related Methods Archived 5 May 2021 at the Wayback Machine, San José State University Doing Business project input/output tables for reforms Energy Economics. Input–Output Analysis: Lecture – 6 and Lecture 7 – two introductory videos on Input–Output methodology with a focus on energy economics from IIT Kharagpur.
=== Models === REMI (Regional Economic Models, Inc.) IMPLAN (Impact Analysis for Planning) REDYN (Regional Dynamics Model)
Wikipedia/Input–output_model
In statistics, the term linear model refers to any model which assumes linearity in the system. The most common occurrence is in connection with regression models and the term is often taken as synonymous with linear regression model. However, the term is also used in time series analysis with a different meaning. In each case, the designation "linear" is used to identify a subclass of models for which substantial reduction in the complexity of the related statistical theory is possible. == Linear regression models == For the regression case, the statistical model is as follows. Given a (random) sample ( Y i , X i 1 , … , X i p ) , i = 1 , … , n {\displaystyle (Y_{i},X_{i1},\ldots ,X_{ip}),\,i=1,\ldots ,n} the relation between the observations Y i {\displaystyle Y_{i}} and the independent variables X i j {\displaystyle X_{ij}} is formulated as Y i = β 0 + β 1 ϕ 1 ( X i 1 ) + ⋯ + β p ϕ p ( X i p ) + ε i i = 1 , … , n {\displaystyle Y_{i}=\beta _{0}+\beta _{1}\phi _{1}(X_{i1})+\cdots +\beta _{p}\phi _{p}(X_{ip})+\varepsilon _{i}\qquad i=1,\ldots ,n} where ϕ 1 , … , ϕ p {\displaystyle \phi _{1},\ldots ,\phi _{p}} may be nonlinear functions. In the above, the quantities ε i {\displaystyle \varepsilon _{i}} are random variables representing errors in the relationship. The "linear" part of the designation relates to the appearance of the regression coefficients, β j {\displaystyle \beta _{j}} in a linear way in the above relationship. Alternatively, one may say that the predicted values corresponding to the above model, namely Y ^ i = β 0 + β 1 ϕ 1 ( X i 1 ) + ⋯ + β p ϕ p ( X i p ) ( i = 1 , … , n ) , {\displaystyle {\hat {Y}}_{i}=\beta _{0}+\beta _{1}\phi _{1}(X_{i1})+\cdots +\beta _{p}\phi _{p}(X_{ip})\qquad (i=1,\ldots ,n),} are linear functions of the β j {\displaystyle \beta _{j}} . Given that estimation is undertaken on the basis of a least squares analysis, estimates of the unknown parameters β j {\displaystyle \beta _{j}} are determined by minimising a sum of squares function S = ∑ i = 1 n ε i 2 = ∑ i = 1 n ( Y i − β 0 − β 1 ϕ 1 ( X i 1 ) − ⋯ − β p ϕ p ( X i p ) ) 2 . {\displaystyle S=\sum _{i=1}^{n}\varepsilon _{i}^{2}=\sum _{i=1}^{n}\left(Y_{i}-\beta _{0}-\beta _{1}\phi _{1}(X_{i1})-\cdots -\beta _{p}\phi _{p}(X_{ip})\right)^{2}.} From this, it can readily be seen that the "linear" aspect of the model means the following: the function to be minimised is a quadratic function of the β j {\displaystyle \beta _{j}} for which minimisation is a relatively simple problem; the derivatives of the function are linear functions of the β j {\displaystyle \beta _{j}} making it easy to find the minimising values; the minimising values β j {\displaystyle \beta _{j}} are linear functions of the observations Y i {\displaystyle Y_{i}} ; the minimising values β j {\displaystyle \beta _{j}} are linear functions of the random errors ε i {\displaystyle \varepsilon _{i}} which makes it relatively easy to determine the statistical properties of the estimated values of β j {\displaystyle \beta _{j}} . == Time series models == An example of a linear time series model is an autoregressive moving average model. Here the model for values { X t {\displaystyle X_{t}} } in a time series can be written in the form X t = c + ε t + ∑ i = 1 p ϕ i X t − i + ∑ i = 1 q θ i ε t − i . 
{\displaystyle X_{t}=c+\varepsilon _{t}+\sum _{i=1}^{p}\phi _{i}X_{t-i}+\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}.\,} where again the quantities ε i {\displaystyle \varepsilon _{i}} are random variables representing innovations which are new random effects that appear at a certain time but also affect values of X {\displaystyle X} at later times. In this instance the use of the term "linear model" refers to the structure of the above relationship in representing X t {\displaystyle X_{t}} as a linear function of past values of the same time series and of current and past values of the innovations. This particular aspect of the structure means that it is relatively simple to derive relations for the mean and covariance properties of the time series. Note that here the "linear" part of the term "linear model" is not referring to the coefficients ϕ i {\displaystyle \phi _{i}} and θ i {\displaystyle \theta _{i}} , as it would be in the case of a regression model, which looks structurally similar. == Other uses in statistics == There are some other instances where "nonlinear model" is used to contrast with a linearly structured model, although the term "linear model" is not usually applied. One example of this is nonlinear dimensionality reduction. == See also == General linear model Generalized linear model Linear predictor function Linear system Linear regression Statistical model == References ==
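As a closing numerical sketch of the regression case described earlier in this article (Python with NumPy; the sample data and the basis functions φ1(x) = sin x and φ2(x) = x² are invented for illustration), note that the model stays linear in the coefficients βj even though the φj are nonlinear, so the quadratic sum of squares S is minimised directly by ordinary least squares:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=50)                                     # covariate
Y = 1.0 + 2.0 * np.sin(X) - 0.5 * X**2 + rng.normal(0, 0.1, 50)    # response

# Design matrix for Y = b0 + b1*sin(X) + b2*X^2 + error: nonlinear in X,
# but linear in the unknown coefficients (b0, b1, b2).
Phi = np.column_stack([np.ones_like(X), np.sin(X), X**2])

# Least squares minimises S, a quadratic function of the coefficients.
beta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(beta)   # close to the generating values (1.0, 2.0, -0.5)

Because S is quadratic in the coefficients, the minimisation is the relatively simple problem described in the list of "linear" properties above.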
Wikipedia/Linear_model
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. == Examples == === Regret === Leonard J. Savage argued that using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. === Quadratic loss function === The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ ( x ) = C ( t − x ) 2 {\displaystyle \lambda (x)=C(t-x)^{2}\;} for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. Because of its squared form, the quadratic loss weights outliers heavily relative to the bulk of the data, so alternatives such as the Huber, Log-Cosh and SMAE losses are used when the data have many large outliers.
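To make this concrete, the short numerical check below (Python with NumPy; the five observations are invented, and a grid search is used only for transparency) confirms that the value minimising the total quadratic loss is the sample mean, shows how an outlier pulls it, and contrasts it with the absolute loss discussed later in this article, which is minimised by the median:

import numpy as np

data = np.array([1.0, 2.0, 2.5, 3.0, 10.0])   # invented sample; 10.0 is an outlier
ts = np.linspace(0, 12, 1201)                 # candidate location estimates

sq = [np.sum((data - t) ** 2) for t in ts]    # total quadratic (squared error) loss
ab = [np.sum(np.abs(data - t)) for t in ts]   # total absolute loss, for contrast

print(ts[np.argmin(sq)], data.mean())         # both 3.7: squared loss -> mean
print(ts[np.argmin(ab)], np.median(data))     # both 2.5: absolute loss -> median

The constant C is dropped here, since rescaling the loss does not move its minimiser.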
=== 0-1 loss function === In statistics and decision theory, a frequently used loss function is the 0-1 loss function L ( y ^ , y ) = [ y ^ ≠ y ] {\displaystyle L({\hat {y}},y)=\left[{\hat {y}}\neq y\right]} using Iverson bracket notation, i.e. it evaluates to 1 when y ^ ≠ y {\displaystyle {\hat {y}}\neq y} , and 0 otherwise. == Constructing loss and objective functions == In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — a problem that Ragnar Frisch highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. == Expected loss == In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. === Statistics === Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. ==== Frequentist expected loss ==== We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by: R ( θ , δ ) = E θ ⁡ L ( θ , δ ( X ) ) = ∫ X L ( θ , δ ( x ) ) d P θ ( x ) . {\displaystyle R(\theta ,\delta )=\operatorname {E} _{\theta }L{\big (}\theta ,\delta (X){\big )}=\int _{X}L{\big (}\theta ,\delta (x){\big )}\,\mathrm {d} P_{\theta }(x).} Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E θ {\displaystyle \operatorname {E} _{\theta }} is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ) and the integral is evaluated over the entire support of X. ==== Bayes Risk ==== In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ: ρ ( π ∗ , a ) = ∫ Θ ∫ X L ( θ , a ( x ) ) d P ( x | θ ) d π ∗ ( θ ) = ∫ X ∫ Θ L ( θ , a ( x ) ) d π ∗ ( θ | x ) d M ( x ) {\displaystyle \rho (\pi ^{*},a)=\int _{\Theta }\int _{\mathbf {X}}L(\theta ,a({\mathbf {x}}))\,\mathrm {d} P({\mathbf {x}}\vert \theta )\,\mathrm {d} \pi ^{*}(\theta )=\int _{\mathbf {X}}\int _{\Theta }L(\theta ,a({\mathbf {x}}))\,\mathrm {d} \pi ^{*}(\theta \vert {\mathbf {x}})\,\mathrm {d} M({\mathbf {x}})} where m(x) is known as the predictive likelihood wherein θ has been "integrated out," π* (θ | x) is the posterior distribution, and the order of integration has been changed.
One then should choose the action a* which minimises this expected loss, which is referred to as Bayes Risk. In the latter equation, the integrand inside dx is known as the Posterior Risk, and minimising it with respect to decision a also minimizes the overall Bayes Risk. This optimal decision, a*, is known as the Bayes (decision) Rule – it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule, as a function of all possible observations, is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. ==== Examples in statistics ==== For a scalar parameter θ, a decision function whose output θ ^ {\displaystyle {\hat {\theta }}} is an estimate of θ, and a quadratic loss function (squared error loss) L ( θ , θ ^ ) = ( θ − θ ^ ) 2 , {\displaystyle L(\theta ,{\hat {\theta }})=(\theta -{\hat {\theta }})^{2},} the risk function becomes the mean squared error of the estimate, R ( θ , θ ^ ) = E θ ⁡ [ ( θ − θ ^ ) 2 ] . {\displaystyle R(\theta ,{\hat {\theta }})=\operatorname {E} _{\theta }\left[(\theta -{\hat {\theta }})^{2}\right].} An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, L ( f , f ^ ) = ‖ f − f ^ ‖ 2 2 , {\displaystyle L(f,{\hat {f}})=\|f-{\hat {f}}\|_{2}^{2}\,,} the risk function becomes the mean integrated squared error R ( f , f ^ ) = E ⁡ ( ‖ f − f ^ ‖ 2 ) . {\displaystyle R(f,{\hat {f}})=\operatorname {E} \left(\|f-{\hat {f}}\|^{2}\right).\,} === Economic choice under uncertainty === In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. == Decision rules == A decision rule makes a choice using an optimality criterion. Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: a r g m i n δ max θ ∈ Θ R ( θ , δ ) . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\ \max _{\theta \in \Theta }\ R(\theta ,\delta ).} Invariance: Choose the decision rule which satisfies an invariance requirement. Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): a r g m i n δ E θ ∈ Θ ⁡ [ R ( θ , δ ) ] = a r g m i n δ ∫ θ ∈ Θ R ( θ , δ ) p ( θ ) d θ . {\displaystyle {\underset {\delta }{\operatorname {arg\,min} }}\operatorname {E} _{\theta \in \Theta }[R(\theta ,\delta )]={\underset {\delta }{\operatorname {arg\,min} }}\ \int _{\theta \in \Theta }R(\theta ,\delta )\,p(\theta )\,d\theta .} == Selecting a loss function == Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem.
Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, L ( a ) = a 2 {\displaystyle L(a)=a^{2}} , and the absolute loss, L ( a ) = | a | {\displaystyle L(a)=|a|} . However the absolute loss has the disadvantage that it is not differentiable at a = 0 {\displaystyle a=0} . The squared loss has the disadvantage that it has the tendency to be dominated by outliers—when summing over a set of a {\displaystyle a} 's (as in ∑ i = 1 n L ( a i ) {\textstyle \sum _{i=1}^{n}L(a_{i})} ), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases. == See also == Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk == References == == Further reading == Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011).
"Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
Wikipedia/Objective_function
In classical mechanics, a constraint on a system is a parameter that the system must obey. For example, a box sliding down a slope must remain on the slope. There are two different types of constraints: holonomic and non-holonomic. == Types of constraint == First class constraints and second class constraints Primary constraints, secondary constraints, tertiary constraints, quaternary constraints Holonomic constraints, also called integrable constraints (depending on time and the coordinates but not on the momenta), and nonholonomic constraints, as illustrated below Pfaffian constraints Scleronomic constraints (not depending on time) and rheonomic constraints (depending on time) Ideal constraints: those for which the work done by the constraint forces under a virtual displacement vanishes.
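As a brief illustration (standard textbook examples, supplied for concreteness rather than drawn from a particular source): a pendulum bob on a rigid rod of length L obeys the holonomic, scleronomic constraint f ( x , y ) = x 2 + y 2 − L 2 = 0 {\displaystyle f(x,y)=x^{2}+y^{2}-L^{2}=0} , a relation among the coordinates alone. By contrast, a disk of radius r rolling without slipping on a plane, with contact point ( x , y ) {\displaystyle (x,y)} , heading angle φ and spin angle θ, obeys the Pfaffian velocity constraints x ˙ = r θ ˙ cos ⁡ φ {\displaystyle {\dot {x}}=r{\dot {\theta }}\cos \varphi } and y ˙ = r θ ˙ sin ⁡ φ {\displaystyle {\dot {y}}=r{\dot {\theta }}\sin \varphi } , which restrict the velocities but cannot be integrated into a relation among the coordinates alone, and are therefore nonholonomic. == References ==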
Wikipedia/Constraint_(classical_mechanics)
A surrogate model is an engineering method used when an outcome of interest cannot be easily measured or computed, so an approximate mathematical model of the outcome is used instead. Most engineering design problems require experiments and/or simulations to evaluate design objective and constraint functions as a function of design variables. For example, in order to find the optimal airfoil shape for an aircraft wing, an engineer simulates the airflow around the wing for different shape variables (e.g., length, curvature, material, etc.). For many real-world problems, however, a single simulation can take many minutes, hours, or even days to complete. As a result, routine tasks such as design optimization, design space exploration, sensitivity analysis and "what-if" analysis become impossible since they require thousands or even millions of simulation evaluations. One way of alleviating this burden is by constructing approximation models, known as surrogate models, metamodels or emulators, that mimic the behavior of the simulation model as closely as possible while being computationally cheaper to evaluate. Surrogate models are constructed using a data-driven, bottom-up approach. The exact, inner working of the simulation code is not assumed to be known (or even understood), relying solely on the input-output behavior. A model is constructed based on modeling the response of the simulator to a limited number of intelligently chosen data points. This approach is also known as behavioral modeling or black-box modeling, though the terminology is not always consistent. When only a single design variable is involved, the process is known as curve fitting. Though using surrogate models in lieu of experiments and simulations in engineering design is more common, surrogate modeling may be used in many other areas of science where there are expensive experiments and/or function evaluations. == Goals == The scientific challenge of surrogate modeling is the generation of a surrogate that is as accurate as possible, using as few simulation evaluations as possible. The process comprises three major steps which may be interleaved iteratively: Sample selection (also known as sequential design, optimal experimental design (OED) or active learning) Construction of the surrogate model and optimizing the model parameters (i.e., bias-variance tradeoff) Appraisal of the accuracy of the surrogate. The accuracy of the surrogate depends on the number and location of samples (expensive experiments or simulations) in the design space. Various design of experiments (DOE) techniques cater to different sources of errors, in particular, errors due to noise in the data or errors due to an improper surrogate model. == Types of surrogate models == Popular surrogate modeling approaches are: polynomial response surfaces; kriging; more generalized Bayesian approaches; gradient-enhanced kriging (GEK); radial basis function; support vector machines; space mapping; artificial neural networks and Bayesian networks. Other methods recently explored include Fourier surrogate modeling and random forests. For some problems, the nature of the true function is not known a priori, and therefore it is not clear which surrogate model will be the most accurate one. In addition, there is no consensus on how to obtain the most reliable estimates of the accuracy of a given surrogate. Many other problems have known physics properties. In these cases, physics-based surrogates such as space-mapping based models are commonly used. 
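As a minimal sketch of surrogate construction (Python with NumPy; the Gaussian radial basis function form, the kernel width and the stand-in "expensive" function are illustrative choices, not prescriptions from the methods listed above):

import numpy as np

def expensive_simulation(x):
    # Stand-in for a costly solver that might take hours per evaluation.
    return np.sin(3 * x) + 0.5 * x

# Step 1: sample the design space at a few chosen points.
X = np.linspace(0.0, 2.0, 8)
y = expensive_simulation(X)

# Step 2: fit a Gaussian radial-basis-function interpolant through the samples.
eps = 2.0                                    # kernel width, a modelling choice
K = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
w = np.linalg.solve(K, y)                    # weights so the surrogate matches y at X

def surrogate(x):
    # Cheap to evaluate anywhere in the design space.
    x = np.atleast_1d(x)
    return np.exp(-(eps * (x[:, None] - X[None, :])) ** 2) @ w

# Step 3: appraise accuracy at held-out points before trusting the surrogate.
x_test = np.linspace(0.0, 2.0, 50)
print(np.max(np.abs(surrogate(x_test) - expensive_simulation(x_test))))

The three major steps named earlier (sample selection, construction, appraisal) appear here in miniature.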
== Invariance properties == Recently proposed comparison-based surrogate models (e.g., ranking support vector machines) for evolutionary algorithms, such as CMA-ES, allow preservation of some invariance properties of surrogate-assisted optimizers: Invariance with respect to monotonic transformations of the function (scaling) Invariance with respect to orthogonal transformations of the search space (rotation) == Applications == An important distinction can be made between two different applications of surrogate models: design optimization and design space approximation (also known as emulation). In surrogate model-based optimization, an initial surrogate is constructed using some of the available budgets of expensive experiments and/or simulations. The remaining experiments/simulations are run for designs which the surrogate model predicts may have promising performance. The process usually takes the form of the following search/update procedure. Initial sample selection (the experiments and/or simulations to be run) Construct surrogate model Search surrogate model (the model can be searched extensively, e.g., using a genetic algorithm, as it is cheap to evaluate) Run and update experiment/simulation at new location(s) found by search and add to sample Iterate steps 2 to 4 until out of time or design is "good enough" Depending on the type of surrogate used and the complexity of the problem, the process may converge on a local or global optimum, or perhaps none at all. A schematic code sketch of this loop is given below. In design space approximation, one is not interested in finding the optimal parameter vector, but rather in the global behavior of the system. Here the surrogate is tuned to mimic the underlying model as closely as needed over the complete design space. Such surrogates are a useful, cheap way to gain insight into the global behavior of the system. Optimization can still occur as a post-processing step, although with no update procedure (see above), the optimum found cannot be validated. == Surrogate modeling software == Surrogate Modeling Toolbox (SMT: https://github.com/SMTorg/smt) is a Python package that contains a collection of surrogate modeling methods, sampling techniques, and benchmarking functions. This package provides a library of surrogate models that is simple to use and facilitates the implementation of additional methods. SMT is different from existing surrogate modeling libraries because of its emphasis on derivatives, including training derivatives used for gradient-enhanced modeling, prediction derivatives, and derivatives with respect to the training data. It also includes new surrogate models that are not available elsewhere: kriging by partial-least squares reduction and energy-minimizing spline interpolation. Python library SAMBO Optimization supports sequential optimization with arbitrary models, with tree-based models and Gaussian process models built in. Surrogates.jl is a Julia package which offers tools like random forests, radial basis methods and kriging. == Surrogate-Assisted Evolutionary Algorithms (SAEAs) == SAEAs are an advanced class of optimization techniques that integrate evolutionary algorithms (EAs) with surrogate models. In traditional EAs, evaluating the fitness of candidate solutions often requires computationally expensive simulations or experiments. SAEAs address this challenge by building a surrogate model, which is a computationally inexpensive approximation of the objective function or constraint functions.
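The search/update procedure listed above can be sketched schematically as follows (Python with NumPy; fit_surrogate is the same illustrative radial-basis interpolant as in the earlier sketch, and a dense grid search stands in for whatever optimizer, e.g. a genetic algorithm, explores the cheap model):

import numpy as np

def expensive_simulation(x):                  # stand-in for the costly evaluation
    return np.sin(3 * x) + 0.5 * x

def fit_surrogate(X, y, eps=2.0):
    K = np.exp(-(eps * (X[:, None] - X[None, :])) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)   # jitter guards conditioning
    return lambda x: np.exp(-(eps * (np.atleast_1d(x)[:, None] - X[None, :])) ** 2) @ w

X = np.linspace(0.0, 2.0, 5)                  # step 1: initial sample
y = expensive_simulation(X)
for _ in range(10):                           # steps 2-5, within a fixed budget
    model = fit_surrogate(X, y)               # step 2: construct surrogate
    grid = np.linspace(0.0, 2.0, 1001)
    x_new = grid[np.argmin(model(grid))]      # step 3: search the cheap model
    X = np.append(X, x_new)                   # step 4: run the real simulation there
    y = np.append(y, expensive_simulation(x_new))
print(X[np.argmin(y)], y.min())               # best design found within the budget

Each pass spends one expensive evaluation where the current surrogate looks most promising, mirroring steps 2 to 4 above.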
The surrogate model serves as a substitute for the actual evaluation process during the evolutionary search. It allows the algorithm to quickly estimate the fitness of new candidate solutions, thereby reducing the number of expensive evaluations needed. This significantly speeds up the optimization process, especially in cases where the objective function evaluations are time-consuming or resource-intensive. SAEAs typically involve three main steps: (1) building the surrogate model using a set of initial sampled data points, (2) performing the evolutionary search using the surrogate model to guide the selection, crossover, and mutation operations, and (3) periodically updating the surrogate model with new data points generated during the evolutionary process to improve its accuracy. By balancing exploration (searching new areas in the solution space) and exploitation (refining known promising areas), SAEAs can efficiently find high-quality solutions to complex optimization problems. They have been successfully applied in various fields, including engineering design, machine learning, and computational finance, where traditional optimization methods may struggle due to the high computational cost of fitness evaluations. == See also == Linear approximation Response surface methodology Kriging Radial basis functions Gradient-enhanced kriging (GEK) OptiY Space mapping Surrogate endpoint Surrogate data Fitness approximation Computer experiment Conceptual model Bayesian regression Bayesian model selection == References == == Further reading == Queipo, N.V., Haftka, R.T., Shyy, W., Goel, T., Vaidyanathan, R., Tucker, P.K. (2005), “Surrogate-based analysis and optimization,” Progress in Aerospace Sciences, 41, 1–28. D. Gorissen, I. Couckuyt, P. Demeester, T. Dhaene, K. Crombecq, (2010), “A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design," Journal of Machine Learning Research, Vol. 11, pp. 2051−2055, July 2010. T-Q. Pham, A. Kamusella, H. Neubert, “Auto-Extraction of Modelica Code from Finite Element Analysis or Measurement Data," 8th International Modelica Conference, 20–22 March 2011 in Dresden. Forrester, Alexander, Andras Sobester, and Andy Keane, Engineering design via surrogate modelling: a practical guide, John Wiley & Sons, 2008. Bouhlel, M. A. and Bartoli, N. and Otsmane, A. and Morlier, J. (2016) "Improving kriging surrogates of high-dimensional design models by Partial Least Squares dimension reduction", Structural and Multidisciplinary Optimization 53 (5), 935–952 Bouhlel, M. A. and Bartoli, N. and Otsmane, A. and Morlier, J. (2016) "An improved approach for estimating the hyperparameters of the kriging model for high-dimensional problems through the partial least squares method", Mathematical Problems in Engineering == External links == Matlab code for surrogate modelling Matlab SUrrogate MOdeling Toolbox – Matlab SUMO Toolbox Surrogate Modeling Toolbox -- Python
Wikipedia/Surrogate_model
Microscale models form a broad class of computational models that simulate fine-scale details, in contrast with macroscale models, which amalgamate details into select categories. Microscale and macroscale models can be used together to understand different aspects of the same problem. == Applications == Macroscale models can include ordinary, partial, and integro-differential equations, where categories and flows between the categories determine the dynamics, or may involve only algebraic equations. An abstract macroscale model may be combined with more detailed microscale models. Connections between the two scales are related to multiscale modeling. One mathematical technique for multiscale modeling of nanomaterials is based upon the use of multiscale Green's function. In contrast, microscale models can simulate a variety of details, such as individual bacteria in biofilms, individual pedestrians in simulated neighborhoods, individual light beams in ray-tracing imagery, individual houses in cities, fine-scale pores and fluid flow in batteries, fine-scale compartments in meteorology, fine-scale structures in particulate systems, and other models where interactions among individuals and background conditions determine the dynamics. Discrete-event models, individual-based models, and agent-based models are special cases of microscale models. However, microscale models do not require discrete individuals or discrete events. Fine details on topography, buildings, and trees can add microscale detail to meteorological simulations and can connect to what is called mesoscale models in that discipline. Square-meter-sized landscape resolution available from lidar images allows water flow across land surfaces to be modeled, for example, rivulets and water pockets, using gigabyte-sized arrays of detail. Models of neural networks may include individual neurons but may run in continuous time and thereby lack precise discrete events. == History == Ideas for computational microscale models arose in the earliest days of computing and were applied to complex systems that could not accurately be described by standard mathematical forms. Two themes emerged in the work of two founders of modern computation around the middle of the 20th century. First, pioneer Alan Turing used simplified macroscale models to understand the chemical basis of morphogenesis, but then proposed and used computational microscale models to understand the nonlinearities and other conditions that would arise in actual biological systems. Second, pioneer John von Neumann created a cellular automaton to understand the possibilities for self-replication of arbitrarily complex entities, which had a microscale representation in the cellular automaton but no simplified macroscale form. This second theme is taken to be part of agent-based models, where the entities ultimately can be artificially intelligent agents operating autonomously. By the last quarter of the 20th century, computational capacity had grown so far that up to tens of thousands of individuals or more could be included in microscale models, and that sparse arrays could be applied to also achieve high performance. Continued increases in computing capacity allowed hundreds of millions of individuals to be simulated on ordinary computers with microscale models by the early 21st century. The term "microscale model" arose later in the 20th century and now appears in the literature of many branches of physical and biological science. 
== Example == Figure 1 represents a fundamental macroscale model: population growth in an unlimited environment. Its equation is relevant elsewhere, such as compounding growth of capital in economics or exponential decay in physics. It has one amalgamated variable, N ( t ) {\displaystyle N(t)} , the number of individuals in the population at some time t {\displaystyle t} . It has an amalgamated parameter r = β − δ {\displaystyle r=\beta -\delta } , the annual growth rate of the population, calculated as the difference between the annual birth rate β {\displaystyle \beta } and the annual death rate δ {\displaystyle \delta } . Time t {\displaystyle t} can be measured in years, as shown here for illustration, or in any other suitable unit. The macroscale model of Figure 1 amalgamates parameters and incorporates a number of simplifying approximations: the birth and death rates are constant; all individuals are identical, with no genetics or age structure; fractions of individuals are meaningful; parameters are constant and do not evolve; habitat is perfectly uniform; no immigration or emigration occurs; and randomness does not enter. These approximations of the macroscale model can all be refined in analogous microscale models. On the first approximation listed above—that birth and death rates are constant—the macroscale model of Figure 1 is exactly the mean of a large number of stochastic trials with the growth rate fluctuating randomly in each instance of time. Microscale stochastic details are subsumed into a partial differential diffusion equation and that equation is used to establish the equivalence. To relax other assumptions, researchers have applied computational methods. Figure 2 is a sample computational microscale algorithm that corresponds to the macroscale model of Figure 1. When all individuals are identical and mutations in birth and death rates are disabled, the microscale dynamics closely parallel the macroscale dynamics (Figures 3A and 3B). The slight differences between the two models arise from stochastic variations in the microscale version not present in the deterministic macroscale model. These variations will be different each time the algorithm is carried out, arising from intentional variations in random number sequences. When not all individuals are identical, the microscale dynamics can differ significantly from the macroscale dynamics, simulating more realistic situations than can be modeled at the macroscale (Figures 3C and 3D). The microscale model does not explicitly incorporate the differential equation, though for large populations it simulates it closely. When individuals differ from one another, the system has a well-defined behavior but the differential equations governing that behavior are difficult to codify. The algorithm of Figure 2 is a basic example of what is called an equation-free model. When mutations are enabled in the microscale model ( σ > 0 {\displaystyle \sigma >0} ), the population grows more rapidly than in the macroscale model (Figures 3C and 3D). Mutations in parameters allow some individuals to have higher birth rates and others to have lower death rates, and those individuals contribute proportionally more to the population. All else being equal, the average birth rate drifts to higher values and the average death rate drifts to lower values as the simulation progresses. This drift is tracked in the data structures named beta and delta of the microscale algorithm of Figure 2. 
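A compact sketch of such an algorithm (Python with NumPy; it paraphrases the individual-based scheme described here, using the parameter values of Figure 3, and is not the literal code of Figure 2):

import numpy as np

rng = np.random.default_rng(1)
dt, years = 1.0, 40                  # Euler time step and duration
beta = np.full(1000, 0.2)            # per-individual annual birth rates
delta = np.full(1000, 0.1)           # per-individual annual death rates
sigma = 0.006                        # mutation standard deviation; 0 disables mutation

def mutate(v):
    # Offspring inherit a rate perturbed by a uniform deviate of std. dev. sigma.
    return v + sigma * np.sqrt(12) * (rng.random(v.shape) - 0.5)

for _ in range(int(years / dt)):
    born = rng.random(beta.shape) < beta * dt          # stochastic births
    survived = rng.random(beta.shape) >= delta * dt    # stochastic deaths
    beta = np.concatenate([beta[survived], mutate(beta[born])])
    delta = np.concatenate([delta[survived], mutate(delta[born])])
print(len(beta), beta.mean(), delta.mean())

With sigma = 0 every individual keeps beta = 0.2 and delta = 0.1 and the trajectory tracks the macroscale model up to stochastic fluctuation; with sigma > 0 the printed mean birth rate drifts upward and the mean death rate drifts downward, as described above.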
The algorithm of Figure 2 is a simplified microscale model using the Euler method. Other algorithms such as the Gillespie method and the discrete event method are also used in practice. Versions of the algorithm in practical use include efficiencies such as removing individuals from consideration once they die (to reduce memory requirements and increase speed) and scheduling stochastic events into the future (to provide a continuous time scale and to further improve speed). Such approaches can be orders of magnitude faster. == Complexity == The complexity of systems addressed by microscale models leads to complexity in the models themselves, and the specification of a microscale model can be tens or hundreds of times larger than its corresponding macroscale model. (The simplified example of Figure 2 has 25 times as many lines in its specification as does Figure 1.) Since bugs occur in computer software and cannot completely be removed by standard methods such as testing, and since complex models often are neither published in detail nor peer-reviewed, their validity has been called into question. Guidelines on best practices for microscale models exist but no papers on the topic claim a full resolution of the problem of validating complex models. == Future == Computing capacity is reaching levels where populations of entire countries or even the entire world are within the reach of microscale models, and improvements in the census and travel data allow further improvements in parameterizing such models. Remote sensors from Earth-observing satellites and ground-based observatories such as the National Ecological Observatory Network (NEON) provide large amounts of data for calibration. Potential applications range from predicting and reducing the spread of disease to helping understand the dynamics of the earth. == Figures == Figure 1. One of the simplest of macroscale models: an ordinary differential equation describing continuous exponential growth. N ( t ) {\displaystyle N(t)} is the size of the population at the time t {\displaystyle t} and d N ( t ) / d t {\displaystyle dN(t)/dt} is the rate of change through time in a single dimension N {\displaystyle N} . N ( 0 ) {\displaystyle N(0)} is the initial population t = 0 {\displaystyle t=0} , β {\displaystyle \beta } is the birth rate per time unit, and δ {\displaystyle \delta } is a death rate per time unit. At the left is the differential form; at the right is the explicit solution in terms of standard mathematical functions, which follows in this case from the differential form. Almost all macroscale models are more complex than this example, in that they have multiple dimensions, lack explicit solutions in terms of standard mathematical functions, and must be understood from their differential forms. Figure 2. A basic algorithm applying the Euler method to an individual-based model. See text for discussion. The algorithm, represented in pseudocode, begins with invocation of procedure Microscale ⁡ ( ) {\displaystyle \operatorname {Microscale} ()} , which uses the data structures to carry out the simulation according to the numbered steps described at the right. It repeatedly invokes function Mutation ⁡ ( v ) {\displaystyle \operatorname {Mutation} (v)} , which returns its parameter perturbed by a random number drawn from a uniform distribution with standard deviation defined by the variable s i g m a {\displaystyle sigma} . (The square root of 12 appears because the standard deviation of a uniform distribution includes that factor.) 
Function Rand ⁡ ( ) {\displaystyle \operatorname {Rand} ()} in the algorithm is assumed to return a uniformly distributed random number 0 ≤ R a n d ( ) < 1 {\displaystyle 0\leq Rand()<1} . The data are assumed to be reset to their initial values on each invocation of Microscale ⁡ ( ) {\displaystyle \operatorname {Microscale} ()} . Figure 3. Graphical comparison of the dynamics of macroscale and microscale simulations of Figures 1 and 2, respectively. (A) The black curve plots the exact solution to the macroscale model of Figure 1 with β = 1 / 5 {\displaystyle \beta =1/5} per year, δ = 1 / 10 {\displaystyle \delta =1/10} per year, and N 0 = 1000 {\displaystyle N_{0}=1000} individuals. (B) Red dots show the dynamics of the microscale model of Figure 2, shown at intervals of one year, using the same values of β {\displaystyle \beta } , δ {\displaystyle \delta } , and N 0 {\displaystyle N_{0}} , and with no mutations ( σ = 0 ) {\displaystyle (\sigma =0)} . (C) Blue dots show the dynamics of the microscale model with mutations having a standard deviation of σ = 0.006 {\displaystyle \sigma =0.006} . (D) Green dots show results with larger mutations, σ = 0.010 {\displaystyle \sigma =0.010} . == References ==
Wikipedia/Microscale_and_macroscale_models
A molecular model is a physical model of an atomistic system that represents molecules and their processes. They play an important role in understanding chemistry and generating and testing hypotheses. The creation of mathematical models of molecular properties and behavior is referred to as molecular modeling, and their graphical depiction is referred to as molecular graphics. The term "molecular model" refers to systems that contain one or more explicit atoms (although solvent atoms may be represented implicitly) and where nuclear structure is neglected. The electronic structure is often also omitted unless it is necessary in illustrating the function of the molecule being modeled. Molecular models may be created for several reasons – as pedagogic tools for students or those unfamiliar with atomistic structures; as objects to generate or test theories (e.g., the structure of DNA); as analogue computers (e.g., for measuring distances and angles in flexible systems); or as aesthetically pleasing objects on the boundary of art and science. The construction of physical models is often a creative act, and many bespoke examples have been carefully created in the workshops of science departments. There is a very wide range of approaches to physical modeling, from ball-and-stick models available for purchase commercially to molecular models created using 3D printers. The main strategy has been graphical depiction, initially in textbooks and research articles and more recently on computers. Molecular graphics has made the visualization of molecular models on computer hardware easier, more accessible, and less expensive, although physical models are widely used to enhance the tactile and visual message being portrayed. == History == In the 1600s, Johannes Kepler speculated on the symmetry of snowflakes and the close packing of spherical objects such as fruit. The symmetrical arrangement of closely packed spheres informed theories of molecular structure in the late 1800s, and many theories of crystallography and solid state inorganic structure used collections of equal and unequal spheres to simulate packing and predict structure. John Dalton represented compounds as aggregations of circular atoms, and although Johann Josef Loschmidt did not create physical models, his diagrams based on circles are two-dimensional analogues of later models. August Wilhelm von Hofmann is credited with the first physical molecular model around 1860. In this model the carbon appears smaller than the hydrogen; the importance of stereochemistry was not then recognised, and the model is essentially topological (it should be a 3-dimensional tetrahedron). Jacobus Henricus van 't Hoff and Joseph Le Bel introduced the concept of chemistry in three dimensions of space, that is, stereochemistry. Van 't Hoff built tetrahedral molecules representing the three-dimensional properties of carbon. == Models based on spheres == Repeating units help to show how easily and clearly molecules can be represented by balls that stand for atoms. The binary compounds sodium chloride (NaCl) and caesium chloride (CsCl) have cubic structures but have different space groups. This can be rationalised in terms of close packing of spheres of different sizes. For example, NaCl can be described as close-packed chloride ions (in a face-centered cubic lattice) with sodium ions in the octahedral holes. After the development of X-ray crystallography as a tool for determining crystal structures, many laboratories built models based on spheres.
With the development of plastic or polystyrene balls it is now easy to create such models. == Models based on ball-and-stick == The concept of the chemical bond as a direct link between atoms can be modelled by linking balls (atoms) with sticks/rods (bonds). This has been extremely popular and is still widely used today. Initially atoms were made of spherical wooden balls with specially drilled holes for rods. Thus carbon can be represented as a sphere with four holes at the tetrahedral angles cos−1(−1⁄3) ≈ 109.47°. A problem with rigid bonds and holes is that systems with arbitrary angles could not be built. This can be overcome with flexible bonds, originally helical springs but now usually plastic. This also allows double and triple bonds to be approximated by multiple single bonds. A typical ball-and-stick model of proline uses coloured balls: black represents carbon (C); red, oxygen (O); blue, nitrogen (N); and white, hydrogen (H). Each ball is drilled with as many holes as its conventional valence (C: 4; N: 3; O: 2; H: 1) directed towards the vertices of a tetrahedron. Single bonds are represented by (fairly) rigid grey rods. Double and triple bonds use two longer flexible bonds which restrict rotation and support conventional cis/trans stereochemistry. However, most molecules require holes at other angles and specialist companies manufacture kits and bespoke models. Besides tetrahedral, trigonal and octahedral holes, there were all-purpose balls with 24 holes. These models allowed rotation about the single rod bonds, which could be both an advantage (showing molecular flexibility) and a disadvantage (models are floppy). The approximate scale was 5 cm per ångström (0.5 m/nm or 500,000,000:1), but was not consistent over all elements. Arnold Beevers in Edinburgh created small models using PMMA balls and stainless steel rods. Using individually drilled balls with precise bond angles and bond lengths allowed large crystal structures to be created accurately, yet in a light and rigid form. Figure 4 shows a unit cell of ruby in this style. == Skeletal models == Crick and Watson's DNA model and the protein-building kits of Kendrew were among the first skeletal models. These were based on atomic components where the valences were represented by rods; the atoms were points at the intersections. Bonds were created by linking components with tubular connectors with locking screws. André Dreiding introduced a molecular modelling kit in the late 1950s which dispensed with the connectors. A given atom would have solid and hollow valence spikes. The solid rods clicked into the tubes forming a bond, usually with free rotation. These were and are very widely used in organic chemistry departments and were made so accurately that interatomic measurements could be made by ruler. More recently, inexpensive plastic models (such as Orbit) use a similar principle. A small plastic sphere has protuberances onto which plastic tubes can be fitted. The flexibility of the plastic means that distorted geometries can be made. == Polyhedral models == Many inorganic solids consist of atoms surrounded by a coordination sphere of electronegative atoms (e.g. PO4 tetrahedra, TiO6 octahedra). Structures can be modelled by gluing together polyhedra made of paper or plastic. == Composite models == A good example of composite models is the Nicholson approach, widely used from the late 1970s for building models of biological macromolecules.
The components are primarily amino acids and nucleic acids with preformed residues representing groups of atoms. Many of these atoms are directly moulded into the template, and fit together by pushing plastic stubs into small holes. The plastic grips well and makes bonds difficult to rotate, so that arbitrary torsion angles can be set and retain their value. The conformations of the backbone and side chains are determined by pre-computing the torsion angles and then adjusting the model with a protractor. The plastic is white and can be painted to distinguish between O and N atoms. Hydrogen atoms are normally implicit and modelled by snipping off the spokes. A model of a typical protein with approximately 300 residues could take a month to build. It was common for laboratories to build a model for each protein solved. By 2005, so many protein structures were being determined that relatively few models were made. == Computer-based models == With the development of computer-based physical modelling, it is now possible to create complete single-piece models by feeding the coordinates of a surface into the computer. Figure 6 shows models of anthrax toxin, left (at a scale of approximately 20 Å/cm or 1:5,000,000) and green fluorescent protein, right (5 cm high, at a scale of about 4 Å/cm or 1:25,000,000) from 3D Molecular Design. Models are made of plaster or starch, using a rapid prototyping process. It has also recently become possible to create accurate molecular models inside glass blocks using a technique known as subsurface laser engraving. One example is the 3D structure of an E. coli protein (DNA polymerase beta-subunit, PDB code 1MMI) etched inside a block of glass by the British company Luminorum Ltd. == Computational Models == Computers can also model molecules mathematically. Programs such as Avogadro can run on typical desktops and can predict bond lengths and angles, molecular polarity and charge distribution, and even quantum mechanical properties such as absorption and emission spectra. However, such programs scale poorly as more atoms are added, because the number of calculations is quadratic in the number of atoms involved; if four times as many atoms are used in a molecule, the calculations will take 16 times as long. For most practical purposes, such as drug design or protein folding, the calculations of a model require supercomputing or cannot be done on classical computers at all in a reasonable amount of time. Quantum computers can model molecules with fewer calculations because the type of calculations performed in each cycle by a quantum computer are well-suited to molecular modelling. == Common colors == Some of the most common colors used in molecular models are as follows: white for hydrogen, black or grey for carbon, blue for nitrogen, red for oxygen, yellow for sulfur, and orange for phosphorus. == Chronology == This table is an incomplete chronology of events where physical molecular models provided major scientific insights. == See also == Molecular design software Molecular graphics Molecular modelling Ribbon diagram Software for molecular mechanics modeling Space-filling (Calotte) model == References == == Further reading == Barlow, W. (1883). "Probable Nature of the Internal Symmetry of Crystals". Nature. 29 (738): 186–8. Bibcode:1883Natur..29..186B. doi:10.1038/029186a0. Barlow, W.; Pope, W.J. (1906). "A development of the atomic theory which correlates chemical and crystalline structure and leads to a demonstration of the nature of valency". J. Chem. Soc. 89: 1675–1744. doi:10.1039/ct9068901675. Whittaker, A.G. (2009).
"Molecular Models - Tangible Representations of the Abstract". PDB Newsletter. 41: 4–5. [1] history of molecular models Paper presented at the EuroScience Open Forum (ESOF), Stockholm on August 25, 2004, W. Gerhard Pohl, Austrian Chemical Society. Photo of van't Hoff's tetrahedral models, and Loschmidt's organic formulae (only 2-dimensional). Wooster, W.A.; et al. (1945). "A Spherical Template for Drilling Balls for Crystal Structure Models". J. Sci. Instrum. 22 (7): 130. Bibcode:1945JScI...22..130W. doi:10.1088/0950-7671/22/7/405. Wooster's biographical notes including setting up of Crystal Structure Ltd. == External links == History of Visualization of Biological Macromolecules by Eric Martz and Eric Francoeur. Contains a mixture of physical models and molecular graphics.
Wikipedia/Molecular_model
The neighbour-sensing mathematical model of hyphal growth is a set of interactive computer models that simulate the way fungi hyphae grow in three-dimensional space. The three-dimensional simulation is an experimental tool which can be used to study the morphogenesis of fungal hyphal networks. The modelling process starts with the proposition that each hypha in the fungal mycelium generates a certain abstract field that, like known physical fields, decreases with increasing distance. Both scalar and vector fields are included in the models. The field(s) and its (their) gradient(s) are used to inform the algorithm that calculates the likelihood of branching, the angle of branching and the growth direction of each hyphal tip in the simulated mycelium. The growth vector is being informed of its surroundings. The virtual hyphal tip is 'sensing' the neighbouring mycelium; thus, it is called the neighbour-sensing model. Cross-walls in living hyphae are formed only at right angles to the long axis of the hypha. A daughter hyphal apex can only arise if a branch is initiated. So, for fungi, hyphal branch formation is the equivalent of cell division in animals, plants, and protists. The position of origin of a branch and its direction and rate of growth are the main formative events in the development of fungal tissues and organs. Consequently, by simulating the mathematics of the control of hyphal growth and branching, the neighbour-sensing model provides the user with a way of experimenting with features that may regulate hyphal growth patterns during morphogenesis to arrive at suggestions that could be tested with live fungi. The model was proposed by Audrius Meškauskas and David Moore in 2004, and developed using the supercomputing facilities of the University of Manchester. The key idea of this model is that all parts of the fungal mycelium have identical field generation systems, field sensing mechanisms and growth direction-altering algorithms. Under properly chosen model parameters, it is possible to observe the transformation of the initial unordered mycelium structure into various forms, some of which are natural-like fungal fruit bodies and other complex structures. In one of the simplest examples, it is assumed that the hyphal tips try to keep a 45-degree orientation with relation to the Earth’s gravity vector field, and also generate some kind of scalar field that the growing tips try to avoid. This combination of parameters leads to the development of hollow conical structures, similar to the fruit bodies of some primitive fungi. In another example, the hypha generates a vector field parallel to the hyphal axis, and the tips tend to turn parallel to that field. After more tips turn in the same direction, their hyphae form a stronger directional field. In this way, it is possible to observe the spontaneous orientation of growing hypha in a single direction, which simulates the strands, cords and rhizomorphs produced by many species of fungi in nature. The parameters under which the model operates can be changed during its execution. This allows a greater variety of structures to be formed (including mushroom-like shapes) and may be supposed to simulate cases where the growth strategy depends on an internal biological clock. The neighbour-sensing model explains how various fungal structures may arise because of the ‘crowd behaviour’ (convergence) of the community of hyphal tips that make up the mycelium. == Literature == Meškauskas A, Fricker M.D, Moore D (2004). 
Simulating colonial growth of fungi with the neighbour-sensing model of hyphal growth. Mycological research, 108, 1241-1256. pdf Meškauskas, A., McNulty, Moore, D. (2004). Concerted regulation of tropisms in all hyphal tips is sufficient to generate most fungal structures. Mycological research, 108, 341-353. pdf Money NP. (2004) Theoretical biology: mushrooms in cyberspace. Nature, 431(7004):32. link Davidson A.F, Boswell G.P., Fischer M.W.F, Heaton L., Hofstadler D, Roper M. (2011). IMA Fungus. 2(1): 33–37. NCBI link == Additional links == Further details are available from these websites: [1] (primary) and [2] (mirror). The programs, with extensive documentation, are distributed as freeware by both these sites. == Notes ==
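To convey the flavour of the model in code, the toy sketch below (Python with NumPy; a two-dimensional simplification invented for this illustration, whereas the published model is three-dimensional and supports both scalar and vector fields) grows tips that each sense a scalar field generated by all existing hyphal segments, decaying with distance, and steer down its gradient, away from denser mycelium, while branching at random:

import numpy as np

rng = np.random.default_rng(2)
segments = [np.zeros(2)]                        # existing hyphal material
tips = [(np.zeros(2), np.array([0.0, 1.0]))]    # (position, unit direction)
step, turn, branch_p = 0.1, 0.2, 0.05

def gradient(p):
    # Gradient at p of the field: sum over segments s of 1 / (1 + |p - s|^2).
    g = np.zeros(2)
    for s in segments:
        d = p - s
        g += -2.0 * d / (1.0 + d @ d) ** 2
    return g

for _ in range(60):
    grown = []
    for pos, dirn in tips:
        dirn = dirn - turn * gradient(pos)      # steer away from denser mycelium
        dirn = dirn / np.linalg.norm(dirn)
        pos = pos + step * dirn
        segments.append(pos)                    # the advancing tip leaves hypha behind
        grown.append((pos, dirn))
        if rng.random() < branch_p:             # occasionally initiate a branch
            a = rng.uniform(-0.8, 0.8)
            rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
            grown.append((pos.copy(), rot @ dirn))
    tips = grown
print(len(tips), "tips,", len(segments), "segments")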
Wikipedia/Neighbour-sensing_model
A mathematical model may mean a description of a physical system using mathematical concepts and language. "Mathematical model" may also refer to: Model theory, a branch of mathematical logic, in which a model is an abstract structure that satisfies a set of logical sentences Physical models of mathematical objects, created for instructional or artistic purposes, including: Polyhedron model, a physical model of a polyhedron Mathematical Models (Cundy and Rollett), a book about constructing models of mathematical objects for secondary-school education Mathematical Models (Fischer), a book of photographs and commentary on university collections of physical models of mathematical objects == See also == Model (disambiguation)
Wikipedia/Mathematical_model_(disambiguation)
Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject. Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann. ... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area. There is also increasing attention to scientific modelling in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling. == Overview == A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true. For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture). == Basics == === Modelling as a substitute for direct measurement and experimentation === Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modeled estimates of outcomes.
Within modeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind. Simplification leaves out all known and observed entities and their relations that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations in an informal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation. === Simulation === A simulation is a way to implement the model, often employed when the model is too complex for an analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models. === Structure === Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art. === Systems === A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete, in which the variables change instantaneously at separate points in time, and 2) continuous, where the state variables change continuously with respect to time. === Generating a model === Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components.
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modellers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeller's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use. Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well). === Evaluating a model === A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include: Ability to explain past observations Ability to predict future observations Cost of use, especially in combination with other models Refutability, enabling estimation of the degree of confidence in the model Simplicity, or even aesthetic appeal People may attempt to quantify the evaluation of a model using a utility function; a minimal sketch of this idea appears at the end of this article. === Visualization === Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. === Space mapping === Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) models with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model). == Types == == Applications == === Modelling and simulation === One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
The figure shows how modelling and simulation is used as a central part of an integrated program in a defence capability development process. == See also == Abductive reasoning – Inference seeking the simplest and most likely explanation All models are wrong – Aphorism in statistics Data and information visualization – Visual representation of data Heuristic – Problem-solving method Inverse problem – Process of calculating the causal factors that produced a set of observations Scientific visualization – Interdisciplinary branch of science concerned with presenting scientific data visually Statistical model – Type of mathematical model == References == == Further reading == Nowadays there are some 40 magazines about scientific modelling which offer all kinds of international forums. Since the 1960s there has been a strongly growing number of books and magazines about specific forms of scientific modelling. There is also a lot of discussion about scientific modelling in the philosophy-of-science literature. A selection: Rainer Hegselmann, Ulrich Müller and Klaus Troitzsch (eds.) (1996). Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Theory and Decision Library. Dordrecht: Kluwer. Paul Humphreys (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press. Johannes Lenhard, Günter Küppers and Terry Shinn (eds.) (2006). Simulation: Pragmatic Constructions of Reality. Berlin: Springer. Tom Ritchey (2012). "Outline for a Morphology of Modelling Methods: Contribution to a General Theory of Modelling". In: Acta Morphologica Generalis, Vol. 1, No. 1, pp. 1–20. William Silvert (2001). "Modelling as a Discipline". In: Int. J. General Systems, Vol. 30(3), p. 261. Sergio Sismondo and Snait Gissis (eds.) (1999). Modeling and Simulation. Special Issue of Science in Context 12. Eric Winsberg (2018). Philosophy and Climate Science. Cambridge: Cambridge University Press. Eric Winsberg (2010). Science in the Age of Computer Simulation. Chicago: University of Chicago Press. Eric Winsberg (2003). "Simulated Experiments: Methodology for a Virtual World". In: Philosophy of Science 70: 105–125. Tomáš Helikar, Jim A. Rogers (2009). "ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists". BioMed Central. == External links == Models. Entry in the Internet Encyclopedia of Philosophy Models in Science. Entry in the Stanford Encyclopedia of Philosophy The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77–100. Research in simulation and modelling of various physical systems Modelling Water Quality Information Center, U.S. Department of Agriculture Ecotoxicology & Models A Morphology of Modelling Methods. Acta Morphologica Generalis, Vol. 1, No. 1, pp. 1–20.
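To make the utility-function idea from the evaluation section concrete, here is a minimal Python sketch that scores two candidate models by a weighted sum over the factors listed above. The factor names, weights, and scores are invented for illustration; this is one possible quantification, not a standard method.

# Hypothetical weighted-sum utility for comparing candidate models.
# All names and numbers below are illustrative assumptions.

def utility(scores, weights):
    """Weighted-sum utility of a model, given per-factor scores in [0, 1]."""
    return sum(weights[factor] * scores[factor] for factor in weights)

weights = {"explains_past": 0.3, "predicts_future": 0.4, "low_cost": 0.1,
           "refutability": 0.1, "simplicity": 0.1}
model_a = {"explains_past": 0.9, "predicts_future": 0.6, "low_cost": 0.8,
           "refutability": 0.7, "simplicity": 0.9}
model_b = {"explains_past": 0.8, "predicts_future": 0.9, "low_cost": 0.3,
           "refutability": 0.8, "simplicity": 0.5}
print("utility of model A:", utility(model_a, weights))  # 0.75
print("utility of model B:", utility(model_b, weights))  # 0.76

Under these assumed weights, the second model narrowly wins because predictive ability is weighted most heavily; a different weighting could reverse the ranking, which is precisely why such utility functions encode value judgments as much as evidence.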
Wikipedia/Scientific_model
The World in the Model: How Economists Work and Think is a work by Mary S. Morgan published by Cambridge University Press in 2012. == Content == Mary S. Morgan, described by Robert Sugden as "a major philosopher and historian of economics", analyzes with examples how economists work and think using models. Her book reconstructs the path taken by models to become economists' "natural way of doing economics.": 17  For Morgan, both the "method of mathematical postulation and proof" and that of modelling emerged in the late 19th century,: 18  and models have become economists' main tools only from the 1930s, replacing classical economics relying on "universal laws". A concept stressed throughout this work is that: Economic modelling is not primarily a method of proof, but rather a method of inquiry.: 239  The pragmatic orientation of her work is stated right at the beginning, noting that "Science is messy",: xv  thus asking: What qualities do models need to make them useful in a science? and What functions do models play in a science? are more fruitful than asking What are models?: xvi  According to Morgan,: 225  who draws a parallel with physics, economists follow four general steps in their work with models:: 17  Step 1: Create a model that is relevant to the problem at hand Step 2: Interrogate the model Step 3: Demonstrate the answer to a question using the internal resources and dynamics of the model Step 4: Build a narrative that links the answers offered by the model to the external world For the author, economists: 37  ... reason about the small world in the model and reason about the big economic world with the model. The existence of a world inside the model itself is a central feature of the work. Modelling is not just an activity of abstraction, simplification, idealization, and mathematization, but entails the creation of new artifacts to be explored and reasoned with, in a play where the economist and the model are "jointly active participants".: 256  As noted by Sugden, Morgan repeats at several points this double function of models: Models are objects to enquire into and to enquire with: economists enquire into the world of the economic model, and use them to enquire with into the economic world that the model represents: 217  The case studies span a period going from the early nineteenth century to the second half of the twentieth century, though Quesnay's earlier Tableau économique is mentioned among the antecedents of modelling (Chapter 1). Models thus start with Ricardo's "model farm" (Chapter 2), to study for example how an increase in grain prices would affect rents, to continue with the Edgeworth box for the trading of goods, later the subject of developments by Vilfredo Pareto (Chapter 3), with the rational agent (Chapter 4), and with the Newlyn–Phillips hydraulic machine of the economy (Chapter 5). The book continues with the business cycle work of Ragnar Frisch and Jan Tinbergen (the latter the father of econometrics), and the macroeconomic models of Meade, Samuelson and Hicks (Chapter 6), with supply and demand models (Chapter 7), all the way to modern simulation modelling (Chapter 8).
Monte Carlo methods are included, and a full chapter (Chapter 9) is devoted to the Prisoner's Dilemma, where the author discusses Nash equilibrium using both the classical example of the two prisoners confronted with the choice to either confess or stay silent while not knowing their companion's choice, and a less familiar example drawn from Puccini's Tosca, with illustrations from a paper by Anatol Rapoport. As noted by economist Gene Callahan, Morgan is attentive to the specialized talents that are needed in economic modelling, including a tacit, craft-based knowledge that can only be acquired via apprenticeship.: 15  In discussing what makes a model "fruitful", Morgan notes that while models must have enough internal resources to operate, including some salient aspects of "the economic world," they need to be able to generate variety in their outcome, so as to potentially surprise the analyst, even when the surprise is that too many solutions are possible, as noted by Paul Samuelson when trying to translate Keynes's General Theory of Employment, Interest and Money into a model.: 229  Also, size in relation to content matters,: 237  so that models must also be small enough to be manipulable. Morgan aims to provide an account of "modelling (in economics) as an autonomous epistemic genre".: xvi  In this, the use of narratives in relation to models is not just rhetorical; it is foremost epistemological. Ragnar Frisch's image of the business cycle as a rocking horse randomly hit by a boy with a club successfully conveyed to the economists of the 1930s the concept of a harmonic process driven by shocks.: 239  The centrality of narration for models in the work of Morgan is noted by François Claveau and Gene Callahan. The latter notes that this is well illustrated by the Prisoner's Dilemma, which would be incomprehensible if offered only in terms of payoffs, without the accompanying story (a minimal payoff sketch appears at the end of this article).: 372  Chapter 7 offers a discussion of what makes models different from physical experiments. Morgan sees a difference in that experiments are "made of the same stuff" of the world, while in the case of models "there is no shared stuff".: 287  Coming to how economic models influence policy thanks to their scientific authority, Morgan, citing Nancy Cartwright, offers a word of caution: the looseness of the criteria of plausibility always make it doubtful, difficult, and potentially dangerous, to use these little mathematical models to intervene directly in the economic world.: 248  == Reception == The book is praised by Verena Halsmayer and by Maxime Desmarais-Tremblay for showing the variety of tools mobilized by economists, such as the hydraulic system of the Newlyn–Phillips machine made of pipes, valves and tanks, graphs such as those of the Edgeworth box, pen and paper tabulated records such as Ricardo's ideal farm, conceptual games such as the Prisoner's Dilemma, and modern-day equations. For François Claveau it is disappointing that Morgan, though the author of The History of Econometric Ideas (1990), does not discuss econometric models, nor the distinction between these models, embedded in real-world data and statistical testing, and non-econometric ones. Claveau also laments the absence of a discussion of the difference between theories and models.
A limit of Morgan's work flagged by Verena Halsmayer is that, in examining a selected set of successful models, the book discounts alternative traditions, such as "verbal economics" and other historical traditions, evading the strategic development that led to the dominance of (neoclassical) economic modeling, thus ignoring "what was lost by adopting modeling as the dominant mode of economic knowledge production". Robert Sugden praises Morgan's vivid portrait of Ricardo. Morgan details how the numerical examples of his model farm were inspired by agricultural experiments run by other gentleman farmers. The same reviewer notes that Morgan's book, as the title implies, has more to say about how economists work inside the model than about how they look at the world outside it. As a modeller, Sugden suggests that Morgan's emphasis on how economists manipulate their models to obtain insights may discount models' autonomy, and the ways in which models may speak with their own voice. For this work Morgan won the 2013 best book award of the European Society for the History of Economic Thought. == See also == Sociology of quantification Models as Mediators == References ==
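As a companion to the Prisoner's Dilemma discussion above, here is a minimal Python sketch of the classical game. The payoff numbers are canonical textbook values, assumed for illustration and not taken from Morgan's book; the brute-force check confirms that mutual confession is the unique Nash equilibrium even though mutual silence would leave both players better off.

# Payoffs are (row player, column player); higher is better.
# The specific numbers are assumed, illustrative values.
payoff = {
    ("silent", "silent"): (3, 3),
    ("silent", "confess"): (0, 5),
    ("confess", "silent"): (5, 0),
    ("confess", "confess"): (1, 1),
}
strategies = ["silent", "confess"]

def is_nash(row, col):
    """True if neither player gains by unilaterally switching strategy."""
    row_ok = all(payoff[(r, col)][0] <= payoff[(row, col)][0] for r in strategies)
    col_ok = all(payoff[(row, c)][1] <= payoff[(row, col)][1] for c in strategies)
    return row_ok and col_ok

print([(r, c) for r in strategies for c in strategies if is_nash(r, c)])
# [('confess', 'confess')], despite (3, 3) being jointly preferable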
Wikipedia/The_World_in_the_Model:_How_Economists_Work_and_Think
Physical science is a branch of natural science that studies non-living systems, in contrast to life science. It in turn has many branches, each referred to as a "physical science", together called the "physical sciences". == Definition == Physical science can be described as all of the following: A branch of science (a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe). A branch of natural science – natural science is a major branch of science that tries to explain and predict nature's phenomena, based on empirical evidence. In natural science, hypotheses must be verified scientifically to be regarded as scientific theory. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are amongst the criteria and methods used for this purpose. Natural science can be broken into two main branches: life science (for example biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences. A branch of applied science – the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine, although medicine would not normally be considered a physical science. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena. == Branches == Physics – natural and physical science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. Branches of physics Astronomy – study of celestial objects (such as stars, galaxies, planets, moons, asteroids, comets and nebulae), the physics, chemistry, and evolution of such objects, and phenomena that originate outside the atmosphere of Earth, including supernovae explosions, gamma-ray bursts, and cosmic microwave background radiation. Branches of astronomy Chemistry – studies the composition, structure, properties and change of matter. In this realm, chemistry deals with such topics as the properties of individual atoms, the manner in which atoms form chemical bonds in the formation of compounds, the interactions of substances through intermolecular forces to give matter its general properties, and the interactions between substances through chemical reactions to form different substances. Branches of chemistry Earth science – all-embracing term referring to the fields of science dealing with planet Earth. Earth science is the study of how the natural environment (ecosphere or Earth system) works and how it evolved to its current state. It includes the study of the atmosphere, hydrosphere, lithosphere, and biosphere. Materials science – an interdisciplinary field of researching and discovering materials. Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and also the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used for advanced understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy. Computer science – the study of computation, information, and automation.
Computer science spans from theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). == History == History of physical science – history of the branch of natural science that studies non-living systems, in contrast to the life sciences. It in turn has many branches, each referred to as a "physical science", together called the "physical sciences". However, the term "physical" creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena (organic chemistry, for example). The four main branches of physical science are astronomy, physics, chemistry, and the Earth sciences, which include meteorology and geology. There are also a number of others which have become important in the twenty-first century, such as materials science and computer science. History of physics – history of the physical science that studies matter and its motion through space-time, and related concepts such as energy and force History of acoustics – history of the study of mechanical waves in solids, liquids, and gases (such as vibration and sound) History of agrophysics – history of the study of physics applied to agroecosystems History of soil physics – history of the study of soil physical properties and processes. History of astrophysics – history of the study of the physical aspects of celestial objects History of astronomy – history of the study of the universe beyond Earth, including its formation and development, and the evolution, physics, chemistry, meteorology, and motion of celestial objects (such as galaxies, planets, etc.) and phenomena that originate outside the atmosphere of Earth (such as the cosmic background radiation). History of astrodynamics – history of the application of ballistics and celestial mechanics to the practical problems concerning the motion of rockets and other spacecraft. History of astrometry – history of the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. History of cosmology – history of the discipline that deals with the nature of the Universe as a whole. History of extragalactic astronomy – history of the branch of astronomy concerned with objects outside our own Milky Way Galaxy History of galactic astronomy – history of the study of our own Milky Way galaxy and all its contents. History of physical cosmology – history of the study of the largest-scale structures and dynamics of the universe and is concerned with fundamental questions about its formation and evolution. History of planetary science – history of the scientific study of planets (including Earth), moons, and planetary systems, in particular those of the Solar System and the processes that form them.
History of stellar astronomy – history of the natural science that deals with the study of celestial objects (such as stars, planets, comets, nebulae, star clusters, and galaxies) and phenomena that originate outside the atmosphere of Earth (such as cosmic background radiation) History of atmospheric physics – history of the study of the application of physics to the atmosphere History of atomic, molecular, and optical physics – history of the study of how matter and light interact History of biophysics – history of the study of physical processes relating to biology History of medical physics – history of the application of physics concepts, theories and methods to medicine. History of neurophysics – history of the branch of biophysics dealing with the nervous system. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of computational physics – history of the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. History of condensed matter physics – history of the study of the physical properties of condensed phases of matter. History of cryogenics – history of the study of the production of very low temperatures (below −150 °C, −238 °F, or 123 K) and the behavior of materials at those temperatures. History of dynamics – history of the study of the causes of motion and changes in motion History of econophysics – history of the interdisciplinary research field, applying theories and methods originally developed by physicists in order to solve problems in economics History of electromagnetism – history of the branch of science concerned with the forces that occur between electrically charged particles. History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods History of materials physics – history of the use of physics to describe materials in many different ways such as force, heat, light and mechanics. History of mathematical physics – history of the application of mathematics to problems in physics and the development of mathematical methods for such applications and for the formulation of physical theories. History of mechanics – history of the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. History of biomechanics – history of the study of the structure and function of biological systems such as humans, animals, plants, organs, and cells by means of the methods of mechanics. History of classical mechanics – history of one of the two major sub-fields of mechanics, which is concerned with the set of physical laws describing the motion of bodies under the action of a system of forces. History of continuum mechanics – history of the branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. History of fluid mechanics – history of the study of fluids and the forces on them. History of quantum mechanics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant. History of thermodynamics – history of the branch of physical science concerned with heat and its relation to other forms of energy and work.
History of nuclear physics – history of the field of physics that studies the building blocks and interactions of atomic nuclei. History of optics – history of the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. History of particle physics – history of the branch of physics that studies the existence and interactions of particles that are the constituents of what is usually referred to as matter or radiation. History of psychophysics – history of the discipline that quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they affect. History of plasma physics – history of the state of matter similar to gas in which a certain portion of the particles are ionized. History of polymer physics – history of the field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation and polymerization of polymers and monomers respectively. History of quantum physics – history of the branch of physics dealing with physical phenomena where the action is on the order of the Planck constant. History of theory of relativity – History of statics – history of the branch of mechanics concerned with the analysis of loads (force, torque/moment) on physical systems in static equilibrium, that is, in a state where the relative positions of subsystems do not vary over time, or where components and structures are at a constant velocity. History of solid state physics – history of the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. History of vehicle dynamics – history of the dynamics of vehicles, here assumed to be ground vehicles. History of chemistry – history of the physical science of atomic matter (matter that is composed of chemical elements), especially its chemical reactions, but also including its properties, structure, composition, behavior, and changes as they relate to chemical reactions History of analytical chemistry – history of the study of the separation, identification, and quantification of the chemical components of natural and artificial materials. History of astrochemistry – history of the study of the abundance and reactions of chemical elements and molecules in the universe, and their interaction with radiation. History of cosmochemistry – history of the study of the chemical composition of matter in the universe and the processes that led to those compositions History of atmospheric chemistry – history of the branch of atmospheric science in which the chemistry of the Earth's atmosphere and that of other planets is studied. It is a multidisciplinary field of research and draws on environmental chemistry, physics, meteorology, computer modeling, oceanography, geology and volcanology, and other disciplines History of biochemistry – history of the study of chemical processes in living organisms, including, but not limited to, living matter. Biochemistry governs all living organisms and living processes. History of agrochemistry – history of the study of both chemistry and biochemistry which are important in agricultural production, the processing of raw products into foods and beverages, and in environmental monitoring and remediation. History of bioinorganic chemistry – history of the field that examines the role of metals in biology.
History of bioorganic chemistry – history of the rapidly growing scientific discipline that combines organic chemistry and biochemistry. History of biophysical chemistry – history of the new branch of chemistry that covers a broad spectrum of research activities involving biological systems. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of immunochemistry – history of the branch of chemistry that involves the study of the reactions and components of the immune system. History of medicinal chemistry – history of the discipline at the intersection of chemistry, especially synthetic organic chemistry, and pharmacology and various other biological specialties, where they are involved with design, chemical synthesis, and development for market of pharmaceutical agents (drugs). History of pharmacology – history of the branch of medicine and biology concerned with the study of drug action. History of natural product chemistry – history of the study of chemical compounds or substances produced by living organisms and found in nature, which usually have a pharmacological or biological activity for use in pharmaceutical drug discovery and drug design. History of neurochemistry – history of the specific study of neurochemicals, which include neurotransmitters and other molecules such as neuro-active drugs that influence neuron function. History of computational chemistry – history of the branch of chemistry that uses principles of computer science to assist in solving chemical problems. History of chemo-informatics – history of the use of computer and informational techniques, applied to a range of problems in the field of chemistry. History of molecular mechanics – history of the approach that uses Newtonian mechanics to model molecular systems. History of flavor chemistry – history of the use of chemistry to engineer artificial and natural flavors. History of flow chemistry – history of the technique in which a chemical reaction is run in a continuously flowing stream rather than in batch production. History of geochemistry – history of the study of the mechanisms behind major geological systems using chemistry History of aqueous geochemistry – history of the study of the role of various elements in watersheds, including copper, sulfur, mercury, and how elemental fluxes are exchanged through atmospheric-terrestrial-aquatic interactions History of isotope geochemistry – history of the study of the relative and absolute concentrations of the elements and their isotopes using chemistry and geology History of ocean chemistry – history of the study of the chemistry of marine environments including the influences of different variables. History of organic geochemistry – history of the study of the impacts and processes that organisms have had on Earth History of regional, environmental and exploration geochemistry – history of the study of the spatial variation in the chemical composition of materials at the surface of the Earth History of inorganic chemistry – history of the branch of chemistry concerned with the properties and behavior of inorganic compounds. History of nuclear chemistry – history of the subfield of chemistry dealing with radioactivity, nuclear processes, and nuclear properties.
History of radiochemistry – history of the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). History of organic chemistry – history of the study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives. History of petrochemistry – history of the branch of chemistry that studies the transformation of crude oil (petroleum) and natural gas into useful products or raw materials. History of organometallic chemistry – history of the study of chemical compounds containing bonds between carbon and a metal. History of photochemistry – history of the study of chemical reactions that proceed with the absorption of light by atoms or molecules. History of physical chemistry – history of the study of macroscopic, atomic, subatomic, and particulate phenomena in chemical systems in terms of physical laws and concepts. History of chemical kinetics – history of the study of rates of chemical processes. History of chemical thermodynamics – history of the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. History of electrochemistry – history of the branch of chemistry that studies chemical reactions which take place in a solution at the interface of an electron conductor (a metal or a semiconductor) and an ionic conductor (the electrolyte), and which involve electron transfer between the electrode and the electrolyte or species in solution. History of femtochemistry – history of the science that studies chemical reactions on extremely short timescales, approximately 10⁻¹⁵ seconds (one femtosecond, hence the name). History of mathematical chemistry – history of the area of research engaged in novel applications of mathematics to chemistry; it concerns itself principally with the mathematical modeling of chemical phenomena. History of mechanochemistry – history of the coupling of the mechanical and the chemical phenomena on a molecular scale and includes mechanical breakage, chemical behavior of mechanically stressed solids (e.g., stress-corrosion cracking), tribology, polymer degradation under shear, cavitation-related phenomena (e.g., sonochemistry and sonoluminescence), shock wave chemistry and physics, and even the burgeoning field of molecular machines. History of physical organic chemistry – history of the study of the interrelationships between structure and reactivity in organic molecules. History of quantum chemistry – history of the branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems. History of sonochemistry – history of the study of the effect of sonic waves and wave properties on chemical systems. History of stereochemistry – history of the study of the relative spatial arrangement of atoms within molecules. History of supramolecular chemistry – history of the area of chemistry that goes beyond the molecules and focuses on the chemical systems made up of a discrete number of assembled molecular subunits or components.
History of thermochemistry – history of the study of the energy and heat associated with chemical reactions and/or physical transformations. History of phytochemistry – history of the study of phytochemicals in the strict sense of the word. History of polymer chemistry – history of the multidisciplinary science that deals with the chemical synthesis and chemical properties of polymers or macromolecules. History of solid-state chemistry – history of the study of the synthesis, structure, and properties of solid phase materials, particularly, but not necessarily exclusively of, non-molecular solids Multidisciplinary fields involving chemistry History of chemical biology – history of the scientific discipline spanning the fields of chemistry and biology that involves the application of chemical techniques and tools, often compounds produced through synthetic chemistry, to the study and manipulation of biological systems. History of chemical engineering – history of the branch of engineering that applies physical science (e.g., chemistry and physics), life sciences (e.g., biology, microbiology and biochemistry), mathematics, and economics to the process of converting raw materials or chemicals into more useful or valuable forms. History of chemical oceanography – history of the study of the behavior of the chemical elements within the Earth's oceans. History of chemical physics – history of the branch of physics that studies chemical processes from the point of view of physics. History of materials science – history of the interdisciplinary field applying the properties of matter to various areas of science and engineering. History of nanotechnology – history of the study of manipulating matter on an atomic and molecular scale History of oenology – history of the science and study of all aspects of wine and winemaking except vine-growing and grape-harvesting, which is a subfield called viticulture. History of spectroscopy – history of the study of the interaction between matter and radiated energy History of surface science – history of the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. History of Earth science – history of the all-embracing term for the sciences related to the planet Earth. Earth science, and all of its branches, are branches of physical science. History of atmospheric sciences – history of the umbrella term for the study of the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems. History of climatology History of meteorology History of atmospheric chemistry History of biogeography – history of the study of the distribution of species (biology), organisms, and ecosystems in geographic space and through geological time. History of cartography – history of the study and practice of making maps or globes. History of climatology – history of the study of climate, scientifically defined as weather conditions averaged over a period of time History of coastal geography – history of the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology and oceanography) and the human geography (sociology and history) of the coast.
History of environmental science – history of an integrated, quantitative, and interdisciplinary approach to the study of environmental systems. History of ecology – history of the scientific study of the distribution and abundance of living organisms and how the distribution and abundance are affected by interactions between the organisms and their environment. History of freshwater biology – history of the scientific biological study of freshwater ecosystems, a branch of limnology History of marine biology – history of the scientific study of organisms in the ocean or other marine or brackish bodies of water History of parasitology – history of the study of parasites, their hosts, and the relationship between them. History of population dynamics – history of the branch of life sciences that studies short-term and long-term changes in the size and age composition of populations, and the biological and environmental processes influencing those changes. History of environmental chemistry – history of the scientific study of the chemical and biochemical phenomena that occur in natural places. History of environmental soil science – history of the study of the interaction of humans with the pedosphere as well as critical aspects of the biosphere, the lithosphere, the hydrosphere, and the atmosphere. History of environmental geology – history of the applied science, like hydrogeology, concerned with the practical application of the principles of geology in the solving of environmental problems. History of toxicology – history of the branch of biology, chemistry, and medicine concerned with the study of the adverse effects of chemicals on living organisms. History of geodesy – history of the scientific discipline that deals with the measurement and representation of the Earth, including its gravitational field, in a three-dimensional time-varying space History of geography – history of the science that studies the lands, features, inhabitants, and phenomena of Earth History of geoinformatics – history of the science and the technology which develops and uses information science infrastructure to address the problems of geography, geosciences and related branches of engineering. History of geology – history of the study of the Earth, with the general exclusion of present-day life, flow within the ocean, and the atmosphere. History of planetary geology – history of the planetary science discipline concerned with the geology of the celestial bodies such as the planets and their moons, asteroids, comets, and meteorites. History of geomorphology – history of the scientific study of landforms and the processes that shape them History of geostatistics – history of the branch of statistics focusing on spatial or spatiotemporal datasets History of geophysics – history of the physics of the Earth and its environment in space; also the study of the Earth using quantitative physical methods. History of glaciology – history of the study of glaciers, or more generally ice and natural phenomena that involve ice. History of hydrology – history of the study of the movement, distribution, and quality of water on Earth and other planets, including the hydrologic cycle, water resources and environmental watershed sustainability.
History of hydrogeology – history of the area of geology that deals with the distribution and movement of groundwater in the soil and rocks of the Earth's crust (commonly in aquifers). History of mineralogy – history of the study of chemistry, crystal structure, and physical (including optical) properties of minerals. History of meteorology – history of the interdisciplinary scientific study of the atmosphere which explains and forecasts weather events. History of oceanography – history of the branch of Earth science that studies the ocean History of paleoclimatology – history of the study of changes in climate taken on the scale of the entire history of Earth History of paleontology – history of the study of prehistoric life History of petrology – history of the branch of geology that studies the origin, composition, distribution and structure of rocks. History of limnology – history of the study of inland waters History of seismology – history of the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies History of soil science – history of the study of soil as a natural resource on the surface of the earth including soil formation, classification and mapping; physical, chemical, biological, and fertility properties of soils; and these properties in relation to the use and management of soils. History of topography – history of the study of surface shape and features of the Earth and other observable astronomical objects including planets, moons, and asteroids. History of volcanology – history of the study of volcanoes, lava, magma, and related geological, geophysical and geochemical phenomena. == General principles == Principle – law or rule that has to be, or usually is to be followed, or can be desirably followed, or is an inevitable consequence of something, such as the laws observed in nature or the way that a system is constructed. The principles of such a system are understood by its users as the essential characteristics of the system, or as reflecting the system's designed purpose, and the effective operation or use of which would be impossible if any one of the principles were to be ignored. === Basic principles of physics === Physics – branch of science that studies matter and its motion through space and time, along with related concepts such as energy and force. Physics is one of the "fundamental sciences" because the other natural sciences (like biology, geology, etc.) deal with systems that seem to obey the laws of physics. According to physics, the physical laws of matter, energy, and the fundamental forces of nature govern the interactions between particles and physical entities (such as planets, molecules, atoms, or subatomic particles). Some of the basic pursuits of physics, which encompass some of the most prominent developments in modern science in the last millennium, include: Describing the nature, measurement, and quantification of bodies and their motion, dynamics, etc. Newton's laws of motion Mass, force and weight Momentum and conservation of energy Gravity, theories of gravity Energy, work, and their relationship Motion, position, and energy Different forms of energy, their interconversion and the inevitable loss of energy in the form of heat (Thermodynamics) Energy conservation, conversion, and transfer. Energy sources and the transfer of energy from one source to work in another.
Kinetic molecular theory Phases of matter and phase transitions Temperature and thermometers Energy and heat Heat flow: conduction, convection, and radiation The four laws of thermodynamics The principles of waves and sound The principles of electricity, magnetism, and electromagnetism The principles, sources, and properties of light === Basic principles of astronomy === Astronomy – science of celestial bodies and their interactions in space. Its studies include the following: The life and characteristics of stars and galaxies Origins of the universe. Physical science uses the Big Bang theory as the commonly accepted scientific theory of the origin of the universe. A heliocentric Solar System. Ancient cultures saw the Earth as the centre of the Solar System or universe (geocentrism). In the 16th century, Nicolaus Copernicus advanced the ideas of heliocentrism, recognizing the Sun as the centre of the Solar System. The structure of solar systems, planets, comets, asteroids, and meteors The shape and structure of Earth (roughly spherical, see also Spherical Earth) Earth in the Solar System Time measurement The composition and features of the Moon Interactions of the Earth and Moon (Note: Astronomy should not be confused with astrology, which assumes that people's destiny and human affairs in general correlate to the apparent positions of astronomical objects in the sky – although the two fields share a common origin, they are quite different; astronomers embrace the scientific method, while astrologers do not.) === Basic principles of chemistry === Chemistry – branch of science that studies the composition, structure, properties and change of matter. Chemistry is chiefly concerned with atoms and molecules and their interactions and transformations, for example, the properties of the chemical bonds formed between atoms to create chemical compounds. As such, chemistry studies the involvement of electrons and various forms of energy in photochemical reactions, oxidation-reduction reactions, changes in phases of matter, and separation of mixtures. Preparation and properties of complex substances, such as alloys, polymers, biological molecules, and pharmaceutical agents are considered in specialized fields of chemistry. Physical chemistry Chemical thermodynamics Reaction kinetics Molecular structure Quantum chemistry Spectroscopy Theoretical chemistry Electron configuration Molecular modelling Molecular dynamics Statistical mechanics Computational chemistry Mathematical chemistry Cheminformatics Nuclear chemistry The nature of the atomic nucleus Characterization of radioactive decay Nuclear reactions Organic chemistry Organic compounds Organic reaction Functional groups Organic synthesis Inorganic chemistry Inorganic compounds Crystal structure Coordination chemistry Solid-state chemistry Biochemistry Analytical chemistry Instrumental analysis Electroanalytical method Wet chemistry Electrochemistry Redox reaction Materials chemistry === Basic principles of Earth science === Earth science – the science of the planet Earth, as of 2018 the only identified life-bearing planet. 
Its studies include the following: The water cycle and the process of transpiration Freshwater Oceanography Weathering and erosion Rocks Agrophysics Soil science Pedogenesis Soil fertility Earth's tectonic structure Geomorphology and geophysics Physical geography Seismology: stress, strain, and earthquakes Characteristics of mountains and volcanoes Characteristics and formation of fossils Atmospheric sciences – the branches of science that study the atmosphere, its processes, the effects other systems have on the atmosphere, and the effects of the atmosphere on these other systems. Atmosphere of Earth Atmospheric pressure and winds Evaporation, condensation, and humidity Fog and clouds Meteorology, weather, climatology, and climate Hydrology, clouds and precipitation Air masses and weather fronts Major storms: thunderstorms, tornadoes, and hurricanes Major climate groups Speleology Cave == Notable physical scientists == List of physicists List of astronomers List of chemists === Earth scientists === List of Russian Earth scientists == See also == Outline of science Outline of natural science Outline of physical science Outline of earth science Outline of formal science Outline of social science Outline of applied science == Notes == == References == === Works cited === Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. Vol. 1. ISBN 0-201-02116-1. Holzner, S. (2006). Physics for Dummies. John Wiley & Sons. ISBN 0-470-61841-8. Physics is the study of your world and universe around you. Maxwell, J.C. (1878). Matter and Motion. D. Van Nostrand. ISBN 0-486-66895-9. Young, H.D.; Freedman, R.A. (2014). Sears and Zemansky's University Physics with Modern Physics Technology Update (13th ed.). Pearson Education. ISBN 978-1-292-02063-1. == External links == Physical science topics and articles for school curricula (grades K-12)
Wikipedia/Physical_sciences
Continuous modelling is the mathematical practice of applying a model to continuous data (data which has a potentially infinite number, and divisibility, of attributes). Continuous models often use differential equations, and continuous modelling is the counterpart of discrete modelling. Modelling is generally broken down into several steps: Making assumptions about the data: The modeller decides what is influencing the data and what can be safely ignored. Making equations to fit the assumptions. Solving the equations. Verifying the results: Various statistical tests are applied to the data and the model and compared. If the model passes the verification process, putting it into practice. If the model fails the verification process, altering it and subjecting it again to verification; if it persists in fitting the data more poorly than a competing model, it is abandoned. A minimal worked example of these steps appears below. == References == == External links == Definition by the UK National Physical Laboratory
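The following minimal Python sketch walks through the steps above for a hypothetical continuous model: assume a population grows logistically, write the differential equation dP/dt = rP(1 − P/K), solve it numerically, and verify the result, here by comparison against the known closed-form solution rather than by a statistical test. All parameter values are assumed for illustration.

import math

def simulate_logistic(p0, r, k_cap, dt, steps):
    """Solve dP/dt = r * P * (1 - P / k_cap) with explicit Euler steps."""
    values = [p0]
    for _ in range(steps):
        p = values[-1]
        values.append(p + dt * r * p * (1 - p / k_cap))
    return values

def logistic_exact(p0, r, k_cap, t):
    """Closed-form solution of the logistic equation, used for verification."""
    return k_cap / (1 + (k_cap / p0 - 1) * math.exp(-r * t))

dt, steps = 0.01, 1000
numeric = simulate_logistic(p0=10.0, r=0.5, k_cap=100.0, dt=dt, steps=steps)
exact = logistic_exact(10.0, 0.5, 100.0, t=dt * steps)
print(f"numeric: {numeric[-1]:.3f}, exact: {exact:.3f}")  # both close to 94.3

If the numerical and analytical values disagreed badly, the verification step would fail and the discretization (or the model itself) would have to be revised, mirroring the alter-and-reverify loop described above.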
Wikipedia/Continuous_model
Power is the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the watt, equal to one joule per second. Power is a scalar quantity. Specifying power in particular systems may require attention to other quantities; for example, the power involved in moving a ground vehicle is the product of the aerodynamic drag plus traction force on the wheels, and the velocity of the vehicle. The output power of a motor is the product of the torque that the motor generates and the angular velocity of its output shaft. Likewise, the power dissipated in an electrical element of a circuit is the product of the current flowing through the element and of the voltage across the element. == Definition == Power is the rate with respect to time at which work is done or, more generally, the rate of change of total mechanical energy. It is given by $P = \frac{dE}{dt}$, where $P$ is power, $E$ is the total mechanical energy (sum of kinetic and potential energy), and $t$ is time. For cases where only work is considered, power is also expressed as $P = \frac{dW}{dt}$, where $W$ is the work done on the system. However, in systems where potential energy changes without explicit work being done (e.g., changing fields or conservative forces), the total energy definition is more general. We will now show that the mechanical power generated by a force $\mathbf{F}$ on a body moving at the velocity $\mathbf{v}$ can be expressed as the product $P = \frac{dW}{dt} = \mathbf{F} \cdot \mathbf{v}$. If a constant force $\mathbf{F}$ is applied throughout a distance $\mathbf{x}$, the work done is defined as $W = \mathbf{F} \cdot \mathbf{x}$. In this case, power can be written as $P = \frac{dW}{dt} = \frac{d}{dt}(\mathbf{F} \cdot \mathbf{x}) = \mathbf{F} \cdot \frac{d\mathbf{x}}{dt} = \mathbf{F} \cdot \mathbf{v}$. If instead the force is variable over a three-dimensional curve $C$, then the work is expressed in terms of the line integral $W = \int_{C} \mathbf{F} \cdot d\mathbf{r} = \int_{\Delta t} \mathbf{F} \cdot \frac{d\mathbf{r}}{dt}\,dt = \int_{\Delta t} \mathbf{F} \cdot \mathbf{v}\,dt$. From the fundamental theorem of calculus, we know that $P = \frac{dW}{dt} = \frac{d}{dt} \int_{\Delta t} \mathbf{F} \cdot \mathbf{v}\,dt = \mathbf{F} \cdot \mathbf{v}$. Hence the formula is valid for any general situation; a short numerical check of this relation appears below. In older works, power is sometimes called activity. == Units == The dimension of power is energy divided by time. In the International System of Units (SI), the unit of power is the watt (W), which is equal to one joule per second. Other common and traditional measures are horsepower (hp), comparing to the power of a horse; one mechanical horsepower equals about 745.7 watts. Other units of power include ergs per second (erg/s), foot-pounds per minute, dBm (a logarithmic measure relative to a reference of 1 milliwatt), calories per hour, BTU per hour (BTU/h), and tons of refrigeration. == Average power and instantaneous power == As a simple example, burning one kilogram of coal releases more energy than detonating a kilogram of TNT, but because the TNT reaction releases energy more quickly, it delivers more power than the coal.
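As a numerical check of the relation P = dW/dt = F · v derived above, the following minimal Python sketch (the mass and force values are assumed for illustration) differentiates the kinetic energy of a uniformly accelerated body and compares the result with F · v.

m, F = 2.0, 10.0   # kg, N: assumed illustrative values
t, dt = 3.0, 1e-6  # evaluate at t = 3 s, with a small step for the derivative

def work(time):
    """Kinetic energy W(t) = 0.5 m v(t)^2 for a body accelerated from rest."""
    v = (F / m) * time  # v(t) = (F/m) t under a constant force
    return 0.5 * m * v * v

numerical_power = (work(t + dt) - work(t - dt)) / (2 * dt)  # central difference
analytic_power = F * (F / m) * t                            # P = F * v(t)
print(f"dW/dt ~ {numerical_power:.3f} W, F*v = {analytic_power:.3f} W")  # both ~150 W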
If ΔW is the amount of work performed during a period of time of duration Δt, the average power Pavg over that period is given by the formula P a v g = Δ W Δ t . {\displaystyle P_{\mathrm {avg} }={\frac {\Delta W}{\Delta t}}.} It is the average amount of work done or energy converted per unit of time. Average power is often called "power" when the context makes it clear. Instantaneous power is the limiting value of the average power as the time interval Δt approaches zero. P = lim Δ t → 0 P a v g = lim Δ t → 0 Δ W Δ t = d W d t . {\displaystyle P=\lim _{\Delta t\to 0}P_{\mathrm {avg} }=\lim _{\Delta t\to 0}{\frac {\Delta W}{\Delta t}}={\frac {dW}{dt}}.} When power P is constant, the amount of work performed in time period t can be calculated as W = P t . {\displaystyle W=Pt.} In the context of energy conversion, it is more customary to use the symbol E rather than W. == Mechanical power == Power in mechanical systems is the combination of forces and movement. In particular, power is the product of a force on an object and the object's velocity, or the product of a torque on a shaft and the shaft's angular velocity. Mechanical power is also described as the time derivative of work. In mechanics, the work done by a force F on an object that travels along a curve C is given by the line integral: W C = ∫ C F ⋅ v d t = ∫ C F ⋅ d x , {\displaystyle W_{C}=\int _{C}\mathbf {F} \cdot \mathbf {v} \,dt=\int _{C}\mathbf {F} \cdot d\mathbf {x} ,} where x defines the path C and v is the velocity along this path. If the force F is derivable from a potential (conservative), then applying the gradient theorem (and remembering that force is the negative of the gradient of the potential energy) yields: W C = U ( A ) − U ( B ) , {\displaystyle W_{C}=U(A)-U(B),} where A and B are the beginning and end of the path along which the work was done. The power at any point along the curve C is the time derivative: P ( t ) = d W d t = F ⋅ v = − d U d t . {\displaystyle P(t)={\frac {dW}{dt}}=\mathbf {F} \cdot \mathbf {v} =-{\frac {dU}{dt}}.} In one dimension, this can be simplified to: P ( t ) = F ⋅ v . {\displaystyle P(t)=F\cdot v.} In rotational systems, power is the product of the torque τ and angular velocity ω, P ( t ) = τ ⋅ ω , {\displaystyle P(t)={\boldsymbol {\tau }}\cdot {\boldsymbol {\omega }},} where ω is angular frequency, measured in radians per second. The ⋅ {\displaystyle \cdot } represents scalar product. In fluid power systems such as hydraulic actuators, power is given by P ( t ) = p Q , {\displaystyle P(t)=pQ,} where p is pressure in pascals or N/m2, and Q is volumetric flow rate in m3/s in SI units. === Mechanical advantage === If a mechanical system has no losses, then the input power must equal the output power. This provides a simple formula for the mechanical advantage of the system. Let the input power to a device be a force FA acting on a point that moves with velocity vA and the output power be a force FB acting on a point that moves with velocity vB. If there are no losses in the system, then P = F B v B = F A v A , {\displaystyle P=F_{\text{B}}v_{\text{B}}=F_{\text{A}}v_{\text{A}},} and the mechanical advantage of the system (output force per input force) is given by M A = F B F A = v A v B . {\displaystyle \mathrm {MA} ={\frac {F_{\text{B}}}{F_{\text{A}}}}={\frac {v_{\text{A}}}{v_{\text{B}}}}.} A similar relationship is obtained for rotating systems, where TA and ωA are the torque and angular velocity of the input and TB and ωB are the torque and angular velocity of the output.
If there are no losses in the system, then P = T A ω A = T B ω B , {\displaystyle P=T_{\text{A}}\omega _{\text{A}}=T_{\text{B}}\omega _{\text{B}},} which yields the mechanical advantage M A = T B T A = ω A ω B . {\displaystyle \mathrm {MA} ={\frac {T_{\text{B}}}{T_{\text{A}}}}={\frac {\omega _{\text{A}}}{\omega _{\text{B}}}}.} These relations are important because they define the maximum performance of a device in terms of velocity ratios determined by its physical dimensions. See for example gear ratios. == Electrical power == The instantaneous electrical power P delivered to a component is given by P ( t ) = I ( t ) ⋅ V ( t ) , {\displaystyle P(t)=I(t)\cdot V(t),} where P ( t ) {\displaystyle P(t)} is the instantaneous power, measured in watts (joules per second), V ( t ) {\displaystyle V(t)} is the potential difference (or voltage drop) across the component, measured in volts, and I ( t ) {\displaystyle I(t)} is the current through it, measured in amperes. If the component is a resistor with time-invariant voltage to current ratio, then: P = I ⋅ V = I 2 ⋅ R = V 2 R , {\displaystyle P=I\cdot V=I^{2}\cdot R={\frac {V^{2}}{R}},} where R = V I {\displaystyle R={\frac {V}{I}}} is the electrical resistance, measured in ohms. == Peak power and duty cycle == In the case of a periodic signal s ( t ) {\displaystyle s(t)} of period T {\displaystyle T} , like a train of identical pulses, the instantaneous power p ( t ) = | s ( t ) | 2 {\textstyle p(t)=|s(t)|^{2}} is also a periodic function of period T {\displaystyle T} . The peak power is simply defined by: P 0 = max [ p ( t ) ] . {\displaystyle P_{0}=\max[p(t)].} The peak power is not always readily measurable, however, and the measurement of the average power P a v g {\displaystyle P_{\mathrm {avg} }} is more commonly performed by an instrument. If one defines the energy per pulse as ε p u l s e = ∫ 0 T p ( t ) d t {\displaystyle \varepsilon _{\mathrm {pulse} }=\int _{0}^{T}p(t)\,dt} then the average power is P a v g = 1 T ∫ 0 T p ( t ) d t = ε p u l s e T . {\displaystyle P_{\mathrm {avg} }={\frac {1}{T}}\int _{0}^{T}p(t)\,dt={\frac {\varepsilon _{\mathrm {pulse} }}{T}}.} One may define the pulse length τ {\displaystyle \tau } such that P 0 τ = ε p u l s e {\displaystyle P_{0}\tau =\varepsilon _{\mathrm {pulse} }} so that the ratios P a v g P 0 = τ T {\displaystyle {\frac {P_{\mathrm {avg} }}{P_{0}}}={\frac {\tau }{T}}} are equal. These ratios are called the duty cycle of the pulse train. == Radiant power == Power is related to intensity at a radius r {\displaystyle r} ; the power emitted by a source can be written as: P ( r ) = I ( 4 π r 2 ) . {\displaystyle P(r)=I(4\pi r^{2}).} == See also == Simple machines Orders of magnitude (power) Pulsed power Intensity – in the radiative sense, power per area Power gain – for linear, two-port networks Power density Signal strength Sound power == References ==
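The duty-cycle relation P_avg/P_0 = τ/T can be checked numerically. The following Python sketch integrates a rectangular pulse train over one period; the values of P_0, τ, and T are illustrative.

    import numpy as np

    P0, tau, T = 10.0, 0.2, 1.0        # peak power, pulse length, period (illustrative)

    dt = T / 1_000_000
    t = np.arange(0.0, T, dt)
    p = np.where(t < tau, P0, 0.0)     # instantaneous power over one period

    energy_per_pulse = p.sum() * dt    # numerical integral of p(t) over the period
    P_avg = energy_per_pulse / T

    print(P_avg / P0, tau / T)         # both ratios equal the duty cycle, 0.2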
Wikipedia/Mechanical_power_(physics)
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts move beyond theory into practical application, general mechanics becomes applied mechanics. It is this practical character that makes applied mechanics essential to everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering. Pure mechanics describes the response of bodies (solids and fluids), or systems of bodies, initially at rest or in motion, to the action of external forces. Applied mechanics bridges the gap between physical theory and its application to technology. Applied mechanics comprises two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch of applied mechanics contains its own subcategories as well. Classical mechanics, divided into statics and dynamics, is further subdivided: statics into the study of rigid bodies and rigid structures, and dynamics into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics. Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools. In the application of the natural sciences, mechanics is complemented by thermodynamics, the study of heat and, more generally, energy, and by electromechanics, the study of electricity and magnetism. == Overview == Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics. Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics. Science and engineering are interconnected with respect to applied mechanics, as research in science is linked to research processes in the civil, mechanical, aerospace, materials, and biomedical engineering disciplines. In civil engineering, applied mechanics’ concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering. In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering. In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design and flight mechanics.
In materials engineering, applied mechanics’ concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics. Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control. == Brief history == The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica. One of the earliest works to define applied mechanics as its own discipline was the three volume Handbuch der Mechanik written by German physicist and engineer Franz Josef Gerstner. The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by English mechanical engineer William Rankine. August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898 in which he introduced calculus to the study of applied mechanics. Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics. In 1921 Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik) and in 1922, with German scientist Ludwig Prandtl, founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik). During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics. In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world. Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to International Congress of Theoretical and Applied Mechanics in 1960. Due to the unpredictable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States. Ukrainian engineer Stephan Timoshenko fled the Bolshevik Red Army in 1918 and eventually emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University. Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered “America’s Father of Engineering Mechanics.” In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944.
With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950. == Branches == === Dynamics === Dynamics, the study of the motion of objects, can be further divided into two branches, kinematics and kinetics. In classical mechanics, kinematics is the analysis of moving bodies in terms of time, velocity, displacement, and acceleration. Kinetics is the study of moving bodies in terms of the effects of forces and masses. In the context of fluid mechanics, fluid dynamics describes the flow and motion of fluids. === Statics === Statics is the study and description of bodies at rest. Static analysis in classical mechanics can be broken down into two categories: non-deformable bodies and deformable bodies. When studying non-deformable bodies, the forces acting on the rigid structure are analyzed. When studying deformable bodies, the structure and the strength of the material are examined. In the context of fluid mechanics, fluid statics considers fluids at rest and the pressures within them. == Relationship to classical mechanics == Applied mechanics results from the practical application of various engineering and mechanical disciplines. == Examples == === Newtonian foundation === Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687). It is the "divide and rule" strategy developed by Newton that made motion tractable by splitting its study into dynamics and statics. The type of matter and the forces acting on it dictate how this "divide and rule" strategy is applied within dynamic and static studies. === Archimedes' principle === Archimedes' principle contains many defining propositions pertaining to fluid mechanics. As stated by proposition 7, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid. If the solid is weighed within the fluid, it will measure lighter than its true weight by the weight of the fluid displaced. Proposition 5 develops the case of a solid lighter than the fluid it is placed in: such a solid, floating freely, sinks only until the weight of the fluid it displaces equals its own weight, and it would have to be forcibly immersed to be fully covered by the liquid. == Major topics == This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews.
=== Foundations and basic methods === Continuum mechanics Finite element method Finite difference method Other computational methods Experimental system analysis === Dynamics and vibration === Dynamics (mechanics) Kinematics Vibrations of solids (basic) Vibrations (structural elements) Vibrations (structures) Wave motion in solids Impact on solids Waves in incompressible fluids Waves in compressible fluids Solid fluid interactions Astronautics (celestial and orbital mechanics) Explosions and ballistics Acoustics === Automatic control === System theory and design Optimal control system System and control applications Robotics Manufacturing === Mechanics of solids === Elasticity Viscoelasticity Plasticity and viscoplasticity Composite material mechanics Cables, rope, beams, etc Plates, shells, membranes, etc Structural stability (buckling, postbuckling) Electromagneto solid mechanics Soil mechanics (basic) Soil mechanics (applied) Rock mechanics Material processing Fracture and damage processes Fracture and damage mechanics Experimental stress analysis Material Testing Structures (basic) Structures (ground) Structures (ocean and coastal) Structures (mobile) Structures (containment) Friction and wear Machine elements Machine design Fastening and joining === Mechanics of fluids === Rheology Hydraulics Incompressible flow Compressible flow Rarefied flow Multiphase flow Wall Layers (incl boundary layers) Internal flow (pipe, channel, and Couette) Internal flow (inlets, nozzles, diffusers, and cascades) Free shear layers (mixing layers, jets, wakes, cavities, and plumes) Flow stability Turbulence Electromagneto fluid and plasma dynamics Hydromechanics Aerodynamics Machinery fluid dynamics Lubrication Flow measurements and visualization === Thermal sciences === Thermodynamics Heat transfer (one phase convection) Heat transfer (two phase convection) Heat transfer (conduction) Heat transfer (radiation and combined modes) Heat transfer (devices and systems) Thermodynamics of solids Mass transfer (with and without heat transfer) Combustion Prime movers and propulsion systems === Earth sciences === Micromeritics Porous media Geomechanics Earthquake mechanics Hydrology, oceanology, and meteorology === Energy systems and environment === Fossil fuel systems Nuclear systems Geothermal systems Solar energy systems Wind energy systems Ocean energy system Energy distribution and storage Environmental fluid mechanics Hazardous waste containment and disposal === Biosciences === Biomechanics Human factor engineering Rehabilitation engineering Sports mechanics == Applications == Electrical Engineering Civil engineering Mechanical Engineering Nuclear engineering Architectural engineering Chemical engineering Petroleum engineering == Publications == Journal of Applied Mathematics and Mechanics Newsletters of the Applied Mechanics Division Journal of Applied Mechanics Applied Mechanics Reviews Applied Mechanics Quarterly Journal of Mechanics and Applied Mathematics Journal of Applied Mathematics and Mechanics (PMM) Gesellschaft für Angewandte Mathematik und Mechanik Acta Mechanica Sinica == See also == == References == == Further reading == J.P. Den Hartog, Strength of Materials, Dover, New York, 1949. F.P. Beer, E.R. Johnston, J.T. DeWolf, Mechanics of Materials, McGraw-Hill, New York, 1981. S.P. Timoshenko, History of Strength of Materials, Dover, New York, 1953. J.E. Gordon, The New Science of Strong Materials, Princeton, 1984. H. Petroski, To Engineer Is Human, St. Martins, 1985. T.A. McMahon and J.T.
Bonner, On Size and Life, Scientific American Library, W.H. Freeman, 1983. M. F. Ashby, Materials Selection in Design, Pergamon, 1992. A.H. Cottrell, Mechanical Properties of Matter, Wiley, New York, 1964. S.A. Wainwright, W.D. Biggs, J.D. Currey, J.M. Gosline, Mechanical Design in Organisms, Edward Arnold, 1976. S. Vogel, Comparative Biomechanics, Princeton, 2003. J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer Associates, 2001. J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 2: Dynamics, John Wiley & Sons., New York, 1986. J.L. Meriam, L.G. Kraige. Engineering Mechanics Volume 1: Statics, John Wiley & Sons., New York, 1986. == External links == Video and web lectures Engineering Mechanics Video Lectures and Web Notes Applied Mechanics Video Lectures By Prof. S.K. Gupta, Department of Applied Mechanics, IIT Delhi
Wikipedia/Applied_mechanics
In classical mechanics, Routh's procedure or Routhian mechanics is a hybrid formulation of Lagrangian mechanics and Hamiltonian mechanics developed by Edward John Routh. Correspondingly, the Routhian is the function which replaces both the Lagrangian and Hamiltonian functions. Although Routhian mechanics is equivalent to Lagrangian mechanics and Hamiltonian mechanics, and introduces no new physics, it offers an alternative way to solve mechanical problems. == Definitions == The Routhian, like the Hamiltonian, can be obtained from a Legendre transform of the Lagrangian, and has a similar mathematical form to the Hamiltonian, but is not exactly the same. The difference between the Lagrangian, Hamiltonian, and Routhian functions is their variables. For a given set of generalized coordinates representing the degrees of freedom in the system, the Lagrangian is a function of the coordinates and velocities, while the Hamiltonian is a function of the coordinates and momenta. The Routhian differs from these functions in that some coordinates are chosen to have corresponding generalized velocities, the rest to have corresponding generalized momenta. This choice is arbitrary, and can be done to simplify the problem. It also has the consequence that the Routhian equations are exactly the Hamiltonian equations for some coordinates and corresponding momenta, and the Lagrangian equations for the rest of the coordinates and their velocities. In each case the Lagrangian and Hamiltonian functions are replaced by a single function, the Routhian. The full set thus has the advantages of both sets of equations, with the convenience of splitting one set of coordinates to the Hamilton equations, and the rest to the Lagrangian equations. In the case of Lagrangian mechanics, the generalized coordinates q1, q2, ... and the corresponding velocities dq1/dt, dq2/dt, ..., and possibly time t, enter the Lagrangian, L ( q 1 , q 2 , … , q ˙ 1 , q ˙ 2 , … , t ) , q ˙ i = d q i d t , {\displaystyle L(q_{1},q_{2},\ldots ,{\dot {q}}_{1},{\dot {q}}_{2},\ldots ,t)\,,\quad {\dot {q}}_{i}={\frac {dq_{i}}{dt}}\,,} where the overdots denote time derivatives. In Hamiltonian mechanics, the generalized coordinates q1, q2, ... and the corresponding generalized momenta p1, p2, ..., and possibly time, enter the Hamiltonian, H ( q 1 , q 2 , … , p 1 , p 2 , … , t ) = ∑ i q ˙ i p i − L ( q 1 , q 2 , … , q ˙ 1 ( p 1 ) , q ˙ 2 ( p 2 ) , … , t ) , p i = ∂ L ∂ q ˙ i , {\displaystyle H(q_{1},q_{2},\ldots ,p_{1},p_{2},\ldots ,t)=\sum _{i}{\dot {q}}_{i}p_{i}-L(q_{1},q_{2},\ldots ,{\dot {q}}_{1}(p_{1}),{\dot {q}}_{2}(p_{2}),\ldots ,t)\,,\quad p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}\,,} where the second equation is the definition of the generalized momentum pi corresponding to the coordinate qi (partial derivatives are denoted using ∂). The velocities dqi/dt are expressed as functions of their corresponding momenta by inverting their defining relation. In this context, pi is said to be the momentum "canonically conjugate" to qi. The Routhian is intermediate between L and H; some coordinates q1, q2, ..., qn are chosen to have corresponding generalized momenta p1, p2, ..., pn, the rest of the coordinates ζ1, ζ2, ..., ζs to have generalized velocities dζ1/dt, dζ2/dt, ..., dζs/dt, and time may appear explicitly. The Routhian takes the form {\displaystyle R(q_{1},\ldots ,q_{n},\zeta _{1},\ldots ,\zeta _{s},p_{1},\ldots ,p_{n},{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}(p_{i})-L\,,} where again the generalized velocity dqi/dt is to be expressed as a function of generalized momentum pi via its defining relation.
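The definition can be made concrete with a small symbolic sketch. The following Python/SymPy fragment builds the Routhian of a planar particle in a central potential V(r), with θ chosen as the coordinate traded for its momentum; the example system and the variable names are chosen for illustration and do not come from the references.

    import sympy as sp

    m, r, rdot, thdot, ptheta = sp.symbols('m r rdot thetadot p_theta', positive=True)
    V = sp.Function('V')(r)

    # Lagrangian of a planar particle in a central potential
    L = sp.Rational(1, 2) * m * (rdot**2 + r**2 * thdot**2) - V

    # Momentum conjugate to theta, and the inverted relation thetadot(p_theta)
    p_expr = sp.diff(L, thdot)                        # p_theta = m r^2 thetadot
    thdot_of_p = sp.solve(sp.Eq(ptheta, p_expr), thdot)[0]

    # Routhian R = p*qdot - L, with thetadot eliminated in favour of p_theta
    R = sp.simplify((ptheta * thdot - L).subs(thdot, thdot_of_p))
    print(R)   # p_theta**2/(2*m*r**2) - m*rdot**2/2 + V(r)

The angular dependence disappears into the constant p_theta, leaving a one-coordinate problem in r, which is exactly the kind of reduction discussed in the sections that follow.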
The choice of which n coordinates are to have corresponding momenta, out of the n + s coordinates, is arbitrary. The above is used by Landau and Lifshitz, and Goldstein. Some authors may define the Routhian to be the negative of the above definition. Given the length of the general definition, a more compact notation is to use boldface for tuples (or vectors) of the variables, thus q = (q1, q2, ..., qn), ζ = (ζ1, ζ2, ..., ζs), p = (p1, p2, ..., pn), and d ζ/dt = (dζ1/dt, dζ2/dt, ..., dζs/dt), so that R ( q , ζ , p , ζ ˙ , t ) = p ⋅ q ˙ − L ( q , ζ , q ˙ , ζ ˙ , t ) , {\displaystyle R(\mathbf {q} ,{\boldsymbol {\zeta }},\mathbf {p} ,{\dot {\boldsymbol {\zeta }}},t)=\mathbf {p} \cdot {\dot {\mathbf {q} }}-L(\mathbf {q} ,{\boldsymbol {\zeta }},{\dot {\mathbf {q} }},{\dot {\boldsymbol {\zeta }}},t)\,,} where · is the dot product defined on the tuples, for the specific example appearing here: p ⋅ q ˙ = ∑ i = 1 n p i q ˙ i . {\displaystyle \mathbf {p} \cdot {\dot {\mathbf {q} }}=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}\,.} == Equations of motion == For reference, the Euler-Lagrange equations for s degrees of freedom are a set of s coupled second order ordinary differential equations in the coordinates d d t ∂ L ∂ q ˙ j = ∂ L ∂ q j , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}={\frac {\partial L}{\partial q_{j}}}\,,} where j = 1, 2, ..., s, and the Hamiltonian equations for n degrees of freedom are a set of 2n coupled first order ordinary differential equations in the coordinates and momenta q ˙ i = ∂ H ∂ p i , p ˙ i = − ∂ H ∂ q i . {\displaystyle {\dot {q}}_{i}={\frac {\partial H}{\partial p_{i}}}\,,\quad {\dot {p}}_{i}=-{\frac {\partial H}{\partial q_{i}}}\,.} Below, the Routhian equations of motion are obtained in two ways, in the process other useful derivatives are found that can be used elsewhere. === Two degrees of freedom === Consider the case of a system with two degrees of freedom, q and ζ, with generalized velocities dq/dt and dζ/dt, and the Lagrangian is time-dependent. (The generalization to any number of degrees of freedom follows exactly the same procedure as with two). The Lagrangian of the system will have the form L ( q , ζ , q ˙ , ζ ˙ , t ) {\displaystyle L(q,\zeta ,{\dot {q}},{\dot {\zeta }},t)} The differential of L is d L = ∂ L ∂ q d q + ∂ L ∂ ζ d ζ + ∂ L ∂ q ˙ d q ˙ + ∂ L ∂ ζ ˙ d ζ ˙ + ∂ L ∂ t d t . {\displaystyle dL={\frac {\partial L}{\partial q}}dq+{\frac {\partial L}{\partial \zeta }}d\zeta +{\frac {\partial L}{\partial {\dot {q}}}}d{\dot {q}}+{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial L}{\partial t}}dt\,.} Now change variables, from the set (q, ζ, dq/dt, dζ/dt) to (q, ζ, p, dζ/dt), simply switching the velocity dq/dt to the momentum p. This change of variables in the differentials is the Legendre transformation. The differential of the new function to replace L will be a sum of differentials in dq, dζ, dp, d(dζ/dt), and dt. 
Using the definition of generalized momentum and Lagrange's equation for the coordinate q: p = ∂ L ∂ q ˙ , p ˙ = d d t ∂ L ∂ q ˙ = ∂ L ∂ q {\displaystyle p={\frac {\partial L}{\partial {\dot {q}}}}\,,\quad {\dot {p}}={\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}}}={\frac {\partial L}{\partial q}}} we have d L = p ˙ d q + ∂ L ∂ ζ d ζ + p d q ˙ + ∂ L ∂ ζ ˙ d ζ ˙ + ∂ L ∂ t d t {\displaystyle dL={\dot {p}}dq+{\frac {\partial L}{\partial \zeta }}d\zeta +pd{\dot {q}}+{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial L}{\partial t}}dt} and to replace pd(dq/dt) by (dq/dt)dp, recall the product rule for differentials, and substitute p d q ˙ = d ( q ˙ p ) − q ˙ d p {\displaystyle pd{\dot {q}}=d({\dot {q}}p)-{\dot {q}}dp} to obtain the differential of a new function in terms of the new set of variables: d ( L − p q ˙ ) = p ˙ d q + ∂ L ∂ ζ d ζ − q ˙ d p + ∂ L ∂ ζ ˙ d ζ ˙ + ∂ L ∂ t d t . {\displaystyle d(L-p{\dot {q}})={\dot {p}}dq+{\frac {\partial L}{\partial \zeta }}d\zeta -{\dot {q}}dp+{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial L}{\partial t}}dt\,.} Introducing the Routhian R ( q , ζ , p , ζ ˙ , t ) = p q ˙ ( p ) − L {\displaystyle R(q,\zeta ,p,{\dot {\zeta }},t)=p{\dot {q}}(p)-L} where again the velocity dq/dt is a function of the momentum p, we have d R = − p ˙ d q − ∂ L ∂ ζ d ζ + q ˙ d p − ∂ L ∂ ζ ˙ d ζ ˙ − ∂ L ∂ t d t , {\displaystyle dR=-{\dot {p}}dq-{\frac {\partial L}{\partial \zeta }}d\zeta +{\dot {q}}dp-{\frac {\partial L}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}-{\frac {\partial L}{\partial t}}dt\,,} but from the above definition, the differential of the Routhian is d R = ∂ R ∂ q d q + ∂ R ∂ ζ d ζ + ∂ R ∂ p d p + ∂ R ∂ ζ ˙ d ζ ˙ + ∂ R ∂ t d t . {\displaystyle dR={\frac {\partial R}{\partial q}}dq+{\frac {\partial R}{\partial \zeta }}d\zeta +{\frac {\partial R}{\partial p}}dp+{\frac {\partial R}{\partial {\dot {\zeta }}}}d{\dot {\zeta }}+{\frac {\partial R}{\partial t}}dt\,.} Comparing the coefficients of the differentials dq, dζ, dp, d(dζ/dt), and dt, the results are Hamilton's equations for the coordinate q, q ˙ = ∂ R ∂ p , p ˙ = − ∂ R ∂ q , {\displaystyle {\dot {q}}={\frac {\partial R}{\partial p}}\,,\quad {\dot {p}}=-{\frac {\partial R}{\partial q}}\,,} and Lagrange's equation for the coordinate ζ d d t ∂ R ∂ ζ ˙ = ∂ R ∂ ζ {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\zeta }}}}={\frac {\partial R}{\partial \zeta }}} which follow from ∂ L ∂ ζ = − ∂ R ∂ ζ , ∂ L ∂ ζ ˙ = − ∂ R ∂ ζ ˙ , {\displaystyle {\frac {\partial L}{\partial \zeta }}=-{\frac {\partial R}{\partial \zeta }}\,,\quad {\frac {\partial L}{\partial {\dot {\zeta }}}}=-{\frac {\partial R}{\partial {\dot {\zeta }}}}\,,} and taking the total time derivative of the second equation and equating to the first. Notice the Routhian replaces the Hamiltonian and Lagrangian functions in all the equations of motion. The remaining equation states the partial time derivatives of L and R are negatives ∂ L ∂ t = − ∂ R ∂ t . 
{\displaystyle {\frac {\partial L}{\partial t}}=-{\frac {\partial R}{\partial t}}\,.} === Any number of degrees of freedom === For n + s coordinates as defined above, with Routhian R ( q 1 , … , q n , ζ 1 , … , ζ s , p 1 , … , p n , ζ ˙ 1 , … , ζ ˙ s , t ) = ∑ i = 1 n p i q ˙ i ( p i ) − L {\displaystyle R(q_{1},\ldots ,q_{n},\zeta _{1},\ldots ,\zeta _{s},p_{1},\ldots ,p_{n},{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}(p_{i})-L} the equations of motion can be derived by a Legendre transformation of this Routhian as in the previous section, but another way is to simply take the partial derivatives of R with respect to the coordinates qi and ζj, momenta pi, and velocities dζj/dt, where i = 1, 2, ..., n, and j = 1, 2, ..., s. The derivatives are ∂ R ∂ q i = − ∂ L ∂ q i = − d d t ∂ L ∂ q ˙ i = − p ˙ i {\displaystyle {\frac {\partial R}{\partial q_{i}}}=-{\frac {\partial L}{\partial q_{i}}}=-{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}=-{\dot {p}}_{i}} ∂ R ∂ p i = q ˙ i {\displaystyle {\frac {\partial R}{\partial p_{i}}}={\dot {q}}_{i}} ∂ R ∂ ζ j = − ∂ L ∂ ζ j , {\displaystyle {\frac {\partial R}{\partial \zeta _{j}}}=-{\frac {\partial L}{\partial \zeta _{j}}}\,,} ∂ R ∂ ζ ˙ j = − ∂ L ∂ ζ ˙ j , {\displaystyle {\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}=-{\frac {\partial L}{\partial {\dot {\zeta }}_{j}}}\,,} ∂ R ∂ t = − ∂ L ∂ t . {\displaystyle {\frac {\partial R}{\partial t}}=-{\frac {\partial L}{\partial t}}\,.} The first two are identically the Hamiltonian equations. Equating the total time derivative of the fourth set of equations with the third (for each value of j) gives the Lagrangian equations. The fifth is just the same relation between time partial derivatives as before. To summarize, the total number of equations is 2n + s: there are 2n Hamiltonian equations plus s Lagrange equations. == Energy == Since the Lagrangian has the same units as energy, the units of the Routhian are also energy. In SI units this is the joule. Taking the total time derivative of the Lagrangian leads to the general result ∂ L ∂ t = d d t ( ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i + ∑ j = 1 s ζ ˙ j ∂ L ∂ ζ ˙ j − L ) . {\displaystyle {\frac {\partial L}{\partial t}}={\frac {d}{dt}}\left(\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}+\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial L}{\partial {\dot {\zeta }}_{j}}}-L\right)\,.} If the Lagrangian is independent of time, the partial time derivative of the Lagrangian is zero, ∂L/∂t = 0, so the quantity under the total time derivative in brackets must be a constant; it is the total energy of the system E = ∑ i = 1 n q ˙ i ∂ L ∂ q ˙ i + ∑ j = 1 s ζ ˙ j ∂ L ∂ ζ ˙ j − L . {\displaystyle E=\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}+\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial L}{\partial {\dot {\zeta }}_{j}}}-L\,.} (If there are external fields interacting with the constituents of the system, they can vary throughout space but not time). This expression requires the partial derivatives of L with respect to all the velocities dqi/dt and dζj/dt. Under the same condition of R being time independent, the energy in terms of the Routhian is a little simpler, substituting the definition of R and the partial derivatives of R with respect to the velocities dζj/dt, E = R − ∑ j = 1 s ζ ˙ j ∂ R ∂ ζ ˙ j . 
{\displaystyle E=R-\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}\,.} Notice only the partial derivatives of R with respect to the velocities dζj/dt are needed. In the case that s = 0 and the Routhian is explicitly time-independent, then E = R, that is, the Routhian equals the energy of the system. The same expression for R when s = 0 is also the Hamiltonian, so in all E = R = H. If the Routhian has explicit time dependence, the total energy of the system is not constant. The general result is ∂ R ∂ t = d d t ( R − ∑ j = 1 s ζ ˙ j ∂ R ∂ ζ ˙ j ) , {\displaystyle {\frac {\partial R}{\partial t}}={\dfrac {d}{dt}}\left(R-\sum _{j=1}^{s}{\dot {\zeta }}_{j}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}\right)\,,} which can be derived from the total time derivative of R in the same way as for L. == Cyclic coordinates == Often the Routhian approach may offer no advantage, but one notable case where this is useful is when a system has cyclic coordinates (also called "ignorable coordinates"), by definition those coordinates which do not appear in the original Lagrangian. The Lagrangian equations are powerful results, used frequently in theory and practice, since the equations of motion in the coordinates are easy to set up. However, if cyclic coordinates occur there will still be equations to solve for all the coordinates, including the cyclic coordinates despite their absence in the Lagrangian. The Hamiltonian equations are useful theoretical results, but less useful in practice because coordinates and momenta are related together in the solutions - after solving the equations the coordinates and momenta must be eliminated from each other. Nevertheless, the Hamiltonian equations are perfectly suited to cyclic coordinates because the equations in the cyclic coordinates trivially vanish, leaving only the equations in the non cyclic coordinates. The Routhian approach has the best of both approaches, because cyclic coordinates can be split off to the Hamiltonian equations and eliminated, leaving behind the non cyclic coordinates to be solved from the Lagrangian equations. Overall fewer equations need to be solved compared to the Lagrangian approach. The Routhian formulation is useful for systems with cyclic coordinates, because by definition those coordinates do not enter L, and hence R. The corresponding partial derivatives of L and R with respect to those coordinates are zero, which equates to the corresponding generalized momenta reducing to constants. To make this concrete, if the qi are all cyclic coordinates, and the ζj are all non cyclic, then ∂ L ∂ q i = p ˙ i = − ∂ R ∂ q i = 0 ⇒ p i = α i , {\displaystyle {\frac {\partial L}{\partial q_{i}}}={\dot {p}}_{i}=-{\frac {\partial R}{\partial q_{i}}}=0\quad \Rightarrow \quad p_{i}=\alpha _{i}\,,} where the αi are constants. 
With these constants substituted into the Routhian, R is a function of only the non cyclic coordinates and velocities (and in general time also) R ( ζ 1 , … , ζ s , α 1 , … , α n , ζ ˙ 1 , … , ζ ˙ s , t ) = ∑ i = 1 n α i q ˙ i ( α i ) − L ( ζ 1 , … , ζ s , q ˙ 1 ( α 1 ) , … , q ˙ n ( α n ) , ζ ˙ 1 , … , ζ ˙ s , t ) , {\displaystyle R(\zeta _{1},\ldots ,\zeta _{s},\alpha _{1},\ldots ,\alpha _{n},{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)=\sum _{i=1}^{n}\alpha _{i}{\dot {q}}_{i}(\alpha _{i})-L(\zeta _{1},\ldots ,\zeta _{s},{\dot {q}}_{1}(\alpha _{1}),\ldots ,{\dot {q}}_{n}(\alpha _{n}),{\dot {\zeta }}_{1},\ldots ,{\dot {\zeta }}_{s},t)\,,} The 2n Hamiltonian equations in the cyclic coordinates automatically reduce to q ˙ i = ∂ R ∂ α i = f i ( ζ 1 ( t ) , … , ζ s ( t ) , ζ ˙ 1 ( t ) , … , ζ ˙ s ( t ) , α 1 , … , α n , t ) , p ˙ i = − ∂ R ∂ q i = 0 , {\displaystyle {\dot {q}}_{i}={\frac {\partial R}{\partial \alpha _{i}}}=f_{i}(\zeta _{1}(t),\ldots ,\zeta _{s}(t),{\dot {\zeta }}_{1}(t),\ldots ,{\dot {\zeta }}_{s}(t),\alpha _{1},\ldots ,\alpha _{n},t)\,,\quad {\dot {p}}_{i}=-{\frac {\partial R}{\partial q_{i}}}=0\,,} and the s Lagrangian equations are in the non cyclic coordinates d d t ∂ R ∂ ζ ˙ j = ∂ R ∂ ζ j . {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\zeta }}_{j}}}={\frac {\partial R}{\partial \zeta _{j}}}\,.} Thus the problem has been reduced to solving the Lagrangian equations in the non cyclic coordinates, with the advantage of the Hamiltonian equations cleanly removing the cyclic coordinates. If the time dependence of the cyclic coordinates is of interest, the equations for q ˙ i {\displaystyle {\dot {q}}_{i}} can then be integrated, using those solutions, to compute q i ( t ) {\displaystyle q_{i}(t)} . == Examples == Routh's procedure does not guarantee the equations of motion will be simple; however, it will lead to fewer equations. === Central potential in spherical coordinates === One general class of mechanical systems with cyclic coordinates is those with central potentials, because potentials of this form only have dependence on radial separations and no dependence on angles. Consider a particle of mass m under the influence of a central potential V(r) in spherical polar coordinates (r, θ, φ) L ( r , r ˙ , θ , θ ˙ , ϕ ˙ ) = m 2 ( r ˙ 2 + r 2 θ ˙ 2 + r 2 sin 2 ⁡ θ φ ˙ 2 ) − V ( r ) . {\displaystyle L(r,{\dot {r}},\theta ,{\dot {\theta }},{\dot {\phi }})={\frac {m}{2}}({\dot {r}}^{2}+{r}^{2}{\dot {\theta }}^{2}+r^{2}\sin ^{2}\theta {\dot {\varphi }}^{2})-V(r)\,.} Notice φ is cyclic, because it does not appear in the Lagrangian. The momentum conjugate to φ is the constant p ϕ = ∂ L ∂ ϕ ˙ = m r 2 sin 2 ⁡ θ ϕ ˙ , {\displaystyle p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}}=mr^{2}\sin ^{2}\theta {\dot {\phi }}\,,} in which r and dφ/dt can vary with time, but the angular momentum pφ is constant. The Routhian can be taken to be R ( r , r ˙ , θ , θ ˙ ) = p ϕ ϕ ˙ − L = p ϕ ϕ ˙ − m 2 r ˙ 2 − m 2 r 2 θ ˙ 2 − p ϕ ϕ ˙ 2 + V ( r ) = p ϕ ϕ ˙ 2 − m 2 r ˙ 2 − m 2 r 2 θ ˙ 2 + V ( r ) = p ϕ 2 2 m r 2 sin 2 ⁡ θ − m 2 r ˙ 2 − m 2 r 2 θ ˙ 2 + V ( r ) . 
{\displaystyle {\begin{aligned}R(r,{\dot {r}},\theta ,{\dot {\theta }})&=p_{\phi }{\dot {\phi }}-L\\&=p_{\phi }{\dot {\phi }}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {m}{2}}r^{2}{\dot {\theta }}^{2}-{\frac {p_{\phi }{\dot {\phi }}}{2}}+V(r)\\&={\frac {p_{\phi }{\dot {\phi }}}{2}}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {m}{2}}r^{2}{\dot {\theta }}^{2}+V(r)\\&={\frac {p_{\phi }^{2}}{2mr^{2}\sin ^{2}\theta }}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {m}{2}}r^{2}{\dot {\theta }}^{2}+V(r)\,.\end{aligned}}} We can solve for r and θ using Lagrange's equations, and do not need to solve for φ since it is eliminated by Hamiltonian's equations. The r equation is d d t ∂ R ∂ r ˙ = ∂ R ∂ r ⇒ − m r ¨ = − p ϕ 2 m r 3 sin 2 ⁡ θ − m r θ ˙ 2 + ∂ V ∂ r , {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {r}}}}={\frac {\partial R}{\partial r}}\quad \Rightarrow \quad -m{\ddot {r}}=-{\frac {p_{\phi }^{2}}{mr^{3}\sin ^{2}\theta }}-mr{\dot {\theta }}^{2}+{\frac {\partial V}{\partial r}}\,,} and the θ equation is d d t ∂ R ∂ θ ˙ = ∂ R ∂ θ ⇒ − m ( 2 r r ˙ θ ˙ + r 2 θ ¨ ) = − p ϕ 2 cos ⁡ θ m r 2 sin 3 ⁡ θ . {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\theta }}}}={\frac {\partial R}{\partial \theta }}\quad \Rightarrow \quad -m(2r{\dot {r}}{\dot {\theta }}+r^{2}{\ddot {\theta }})=-{\frac {p_{\phi }^{2}\cos \theta }{mr^{2}\sin ^{3}\theta }}\,.} The Routhian approach has obtained two coupled nonlinear equations. By contrast the Lagrangian approach leads to three nonlinear coupled equations, mixing in the first and second time derivatives of φ in all of them, despite its absence from the Lagrangian. The r equation is d d t ∂ L ∂ r ˙ = ∂ L ∂ r ⇒ m r ¨ = m r θ ˙ 2 + m r sin 2 ⁡ θ ϕ ˙ 2 − ∂ V ∂ r , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {r}}}}={\frac {\partial L}{\partial r}}\quad \Rightarrow \quad m{\ddot {r}}=mr{\dot {\theta }}^{2}+mr\sin ^{2}\theta {\dot {\phi }}^{2}-{\frac {\partial V}{\partial r}}\,,} the θ equation is d d t ∂ L ∂ θ ˙ = ∂ L ∂ θ ⇒ 2 r r ˙ θ ˙ + r 2 θ ¨ = r 2 sin ⁡ θ cos ⁡ θ ϕ ˙ 2 , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad 2r{\dot {r}}{\dot {\theta }}+r^{2}{\ddot {\theta }}=r^{2}\sin \theta \cos \theta {\dot {\phi }}^{2}\,,} the φ equation is d d t ∂ L ∂ ϕ ˙ = ∂ L ∂ ϕ ⇒ 2 r r ˙ sin 2 ⁡ θ ϕ ˙ + 2 r 2 sin ⁡ θ cos ⁡ θ θ ˙ ϕ ˙ + r 2 sin 2 ⁡ θ ϕ ¨ = 0 . {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\phi }}}}={\frac {\partial L}{\partial \phi }}\quad \Rightarrow \quad 2r{\dot {r}}\sin ^{2}\theta {\dot {\phi }}+2r^{2}\sin \theta \cos \theta {\dot {\theta }}{\dot {\phi }}+r^{2}\sin ^{2}\theta {\ddot {\phi }}=0\,.} === Symmetric mechanical systems === ==== Spherical pendulum ==== Consider the spherical pendulum, a mass m (known as a "pendulum bob") attached to a rigid rod of length l of negligible mass, subject to a local gravitational field g. The system rotates with angular velocity dφ/dt which is not constant. The angle between the rod and vertical is θ and is not constant. The Lagrangian is L ( θ , θ ˙ , ϕ ˙ ) = m ℓ 2 2 ( θ ˙ 2 + sin 2 ⁡ θ ϕ ˙ 2 ) + m g ℓ cos ⁡ θ , {\displaystyle L(\theta ,{\dot {\theta }},{\dot {\phi }})={\frac {m\ell ^{2}}{2}}({\dot {\theta }}^{2}+\sin ^{2}\theta {\dot {\phi }}^{2})+mg\ell \cos \theta \,,} and φ is the cyclic coordinate for the system with constant momentum p ϕ = ∂ L ∂ ϕ ˙ = m ℓ 2 sin 2 ⁡ θ ϕ ˙ . 
{\displaystyle p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}}=m\ell ^{2}\sin ^{2}\theta {\dot {\phi }}\,.} which again is physically the angular momentum of the system about the vertical. The angle θ and angular velocity dφ/dt vary with time, but the angular momentum is constant. The Routhian is R ( θ , θ ˙ ) = p ϕ ϕ ˙ − L = p ϕ ϕ ˙ − m ℓ 2 2 θ ˙ 2 − p ϕ ϕ ˙ 2 − m g ℓ cos ⁡ θ = p ϕ ϕ ˙ 2 − m ℓ 2 2 θ ˙ 2 − m g ℓ cos ⁡ θ = p ϕ 2 2 m ℓ 2 sin 2 ⁡ θ − m ℓ 2 2 θ ˙ 2 − m g ℓ cos ⁡ θ {\displaystyle {\begin{aligned}R(\theta ,{\dot {\theta }})&=p_{\phi }{\dot {\phi }}-L\\&=p_{\phi }{\dot {\phi }}-{\frac {m\ell ^{2}}{2}}{\dot {\theta }}^{2}-{\frac {p_{\phi }{\dot {\phi }}}{2}}-mg\ell \cos \theta \\&={\frac {p_{\phi }{\dot {\phi }}}{2}}-{\frac {m\ell ^{2}}{2}}{\dot {\theta }}^{2}-mg\ell \cos \theta \\&={\frac {p_{\phi }^{2}}{2m\ell ^{2}\sin ^{2}\theta }}-{\frac {m\ell ^{2}}{2}}{\dot {\theta }}^{2}-mg\ell \cos \theta \end{aligned}}} The θ equation is found from the Lagrangian equations d d t ∂ R ∂ θ ˙ = ∂ R ∂ θ ⇒ − m ℓ 2 θ ¨ = − p ϕ 2 cos ⁡ θ m ℓ 2 sin 3 ⁡ θ + m g ℓ sin ⁡ θ , {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\theta }}}}={\frac {\partial R}{\partial \theta }}\quad \Rightarrow \quad -m\ell ^{2}{\ddot {\theta }}=-{\frac {p_{\phi }^{2}\cos \theta }{m\ell ^{2}\sin ^{3}\theta }}+mg\ell \sin \theta \,,} or simplifying by introducing the constants a = p ϕ 2 m 2 ℓ 4 , b = g ℓ , {\displaystyle a={\frac {p_{\phi }^{2}}{m^{2}\ell ^{4}}}\,,\quad b={\frac {g}{\ell }}\,,} gives θ ¨ = a cos ⁡ θ sin 3 ⁡ θ − b sin ⁡ θ . {\displaystyle {\ddot {\theta }}=a{\frac {\cos \theta }{\sin ^{3}\theta }}-b\sin \theta \,.} This equation resembles the simple nonlinear pendulum equation, because it can swing through the vertical axis, with an additional term to account for the rotation about the vertical axis (the constant a is related to the angular momentum pφ). Applying the Lagrangian approach there are two nonlinear coupled equations to solve. The θ equation is d d t ∂ L ∂ θ ˙ = ∂ L ∂ θ ⇒ m ℓ 2 θ ¨ = m ℓ 2 sin ⁡ θ cos ⁡ θ ϕ ˙ 2 − m g ℓ sin ⁡ θ , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad m\ell ^{2}{\ddot {\theta }}=m\ell ^{2}\sin \theta \cos \theta {\dot {\phi }}^{2}-mg\ell \sin \theta \,,} and the φ equation is d d t ∂ L ∂ ϕ ˙ = ∂ L ∂ ϕ ⇒ 2 sin ⁡ θ cos ⁡ θ θ ˙ ϕ ˙ + sin 2 ⁡ θ ϕ ¨ = 0 . {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\phi }}}}={\frac {\partial L}{\partial \phi }}\quad \Rightarrow \quad 2\sin \theta \cos \theta {\dot {\theta }}{\dot {\phi }}+\sin ^{2}\theta {\ddot {\phi }}=0\,.} ==== Heavy symmetrical top ==== The heavy symmetrical top of mass M has Lagrangian L ( θ , θ ˙ , ψ ˙ , ϕ ˙ ) = I 1 2 ( θ ˙ 2 + ϕ ˙ 2 sin 2 ⁡ θ ) + I 3 2 ( ψ ˙ 2 + ϕ ˙ 2 cos 2 ⁡ θ ) + I 3 ψ ˙ ϕ ˙ cos ⁡ θ − M g ℓ cos ⁡ θ {\displaystyle L(\theta ,{\dot {\theta }},{\dot {\psi }},{\dot {\phi }})={\frac {I_{1}}{2}}({\dot {\theta }}^{2}+{\dot {\phi }}^{2}\sin ^{2}\theta )+{\frac {I_{3}}{2}}({\dot {\psi }}^{2}+{\dot {\phi }}^{2}\cos ^{2}\theta )+I_{3}{\dot {\psi }}{\dot {\phi }}\cos \theta -Mg\ell \cos \theta } where ψ, φ, θ are the Euler angles, θ is the angle between the vertical z-axis and the top's z′-axis, ψ is the rotation of the top about its own z′-axis, and φ the azimuthal of the top's z′-axis around the vertical z-axis. The principal moments of inertia are I1 about the top's own x′ axis, I2 about the top's own y′ axes, and I3 about the top's own z′-axis. 
Since the top is symmetric about its z′-axis, I1 = I2. Here the simple relation for local gravitational potential energy V = Mglcosθ is used where g is the acceleration due to gravity, and the centre of mass of the top is a distance l from its tip along its z′-axis. The angles ψ, φ are cyclic. The constant momenta are the angular momenta of the top about its axis and its precession about the vertical, respectively: p ψ = ∂ L ∂ ψ ˙ = I 3 ψ ˙ + I 3 ϕ ˙ cos ⁡ θ {\displaystyle p_{\psi }={\frac {\partial L}{\partial {\dot {\psi }}}}=I_{3}{\dot {\psi }}+I_{3}{\dot {\phi }}\cos \theta } p ϕ = ∂ L ∂ ϕ ˙ = ϕ ˙ ( I 1 sin 2 ⁡ θ + I 3 cos 2 ⁡ θ ) + I 3 ψ ˙ cos ⁡ θ {\displaystyle p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}}={\dot {\phi }}(I_{1}\sin ^{2}\theta +I_{3}\cos ^{2}\theta )+I_{3}{\dot {\psi }}\cos \theta } From these, eliminating dψ/dt: p ϕ − p ψ cos ⁡ θ = I 1 ϕ ˙ sin 2 ⁡ θ {\displaystyle p_{\phi }-p_{\psi }\cos \theta =I_{1}{\dot {\phi }}\sin ^{2}\theta } we have ϕ ˙ = p ϕ − p ψ cos ⁡ θ I 1 sin 2 ⁡ θ , {\displaystyle {\dot {\phi }}={\frac {p_{\phi }-p_{\psi }\cos \theta }{I_{1}\sin ^{2}\theta }}\,,} and to eliminate dφ/dt, substitute this result into pψ and solve for dψ/dt to find ψ ˙ = p ψ I 3 − cos ⁡ θ ( p ϕ − p ψ cos ⁡ θ I 1 sin 2 ⁡ θ ) . {\displaystyle {\dot {\psi }}={\frac {p_{\psi }}{I_{3}}}-\cos \theta \left({\frac {p_{\phi }-p_{\psi }\cos \theta }{I_{1}\sin ^{2}\theta }}\right)\,.} The Routhian can be taken to be R ( θ , θ ˙ ) = p ψ ψ ˙ + p ϕ ϕ ˙ − L = 1 2 ( p ψ ψ ˙ + p ϕ ϕ ˙ ) − I 1 θ ˙ 2 2 + M g ℓ cos ⁡ θ {\displaystyle R(\theta ,{\dot {\theta }})=p_{\psi }{\dot {\psi }}+p_{\phi }{\dot {\phi }}-L={\frac {1}{2}}(p_{\psi }{\dot {\psi }}+p_{\phi }{\dot {\phi }})-{\frac {I_{1}{\dot {\theta }}^{2}}{2}}+Mg\ell \cos \theta } and since p ϕ ϕ ˙ 2 = p ϕ 2 2 I 1 sin 2 ⁡ θ − p ψ p ϕ cos ⁡ θ 2 I 1 sin 2 ⁡ θ , {\displaystyle {\frac {p_{\phi }{\dot {\phi }}}{2}}={\frac {p_{\phi }^{2}}{2I_{1}\sin ^{2}\theta }}-{\frac {p_{\psi }p_{\phi }\cos \theta }{2I_{1}\sin ^{2}\theta }}\,,} p ψ ψ ˙ 2 = p ψ 2 2 I 3 − p ψ p ϕ cos ⁡ θ 2 I 1 sin 2 ⁡ θ + p ψ 2 cos 2 ⁡ θ 2 I 1 sin 2 ⁡ θ {\displaystyle {\frac {p_{\psi }{\dot {\psi }}}{2}}={\frac {p_{\psi }^{2}}{2I_{3}}}-{\frac {p_{\psi }p_{\phi }\cos \theta }{2I_{1}\sin ^{2}\theta }}+{\frac {p_{\psi }^{2}\cos ^{2}\theta }{2I_{1}\sin ^{2}\theta }}} we have R = p ψ 2 2 I 3 + p ψ 2 cos 2 ⁡ θ 2 I 1 sin 2 ⁡ θ + p ϕ 2 2 I 1 sin 2 ⁡ θ − p ψ p ϕ cos ⁡ θ I 1 sin 2 ⁡ θ − I 1 θ ˙ 2 2 + M g ℓ cos ⁡ θ . {\displaystyle R={\frac {p_{\psi }^{2}}{2I_{3}}}+{\frac {p_{\psi }^{2}\cos ^{2}\theta }{2I_{1}\sin ^{2}\theta }}+{\frac {p_{\phi }^{2}}{2I_{1}\sin ^{2}\theta }}-{\frac {p_{\psi }p_{\phi }\cos \theta }{I_{1}\sin ^{2}\theta }}-{\frac {I_{1}{\dot {\theta }}^{2}}{2}}+Mg\ell \cos \theta \,.} The first term is constant, and can be ignored since only the derivatives of R will enter the equations of motion. 
The simplified Routhian, without loss of information, is thus R = 1 2 I 1 sin 2 ⁡ θ [ p ψ 2 cos 2 ⁡ θ + p ϕ 2 − 2 p ψ p ϕ cos ⁡ θ ] − I 1 θ ˙ 2 2 + M g ℓ cos ⁡ θ {\displaystyle R={\frac {1}{2I_{1}\sin ^{2}\theta }}\left[p_{\psi }^{2}\cos ^{2}\theta +p_{\phi }^{2}-2p_{\psi }p_{\phi }\cos \theta \right]-{\frac {I_{1}{\dot {\theta }}^{2}}{2}}+Mg\ell \cos \theta } The equation of motion for θ is, by direct calculation, d d t ∂ R ∂ θ ˙ = ∂ R ∂ θ ⇒ {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\theta }}}}={\frac {\partial R}{\partial \theta }}\quad \Rightarrow \quad } − I 1 θ ¨ = − cos ⁡ θ I 1 sin 3 ⁡ θ [ p ψ 2 cos 2 ⁡ θ + p ϕ 2 − p ψ p ϕ 2 cos ⁡ θ ] + 1 2 I 1 sin 2 ⁡ θ [ − 2 p ψ 2 cos ⁡ θ sin ⁡ θ + p ψ p ϕ 2 sin ⁡ θ ] − M g ℓ sin ⁡ θ , {\displaystyle -I_{1}{\ddot {\theta }}=-{\frac {\cos \theta }{I_{1}\sin ^{3}\theta }}\left[p_{\psi }^{2}\cos ^{2}\theta +p_{\phi }^{2}-{\frac {p_{\psi }p_{\phi }}{2}}\cos \theta \right]+{\frac {1}{2I_{1}\sin ^{2}\theta }}\left[-2p_{\psi }^{2}\cos \theta \sin \theta +{\frac {p_{\psi }p_{\phi }}{2}}\sin \theta \right]-Mg\ell \sin \theta \,,} or by introducing the constants a = p ψ 2 I 1 2 , b = p ϕ 2 I 1 2 , c = p ψ p ϕ 2 I 1 2 , k = M g ℓ I 1 , {\displaystyle a={\frac {p_{\psi }^{2}}{I_{1}^{2}}}\,,\quad b={\frac {p_{\phi }^{2}}{I_{1}^{2}}}\,,\quad c={\frac {p_{\psi }p_{\phi }}{2I_{1}^{2}}}\,,\quad k={\frac {Mg\ell }{I_{1}}}\,,} a simpler form of the equation is obtained θ ¨ = cos ⁡ θ sin 3 ⁡ θ ( a cos 2 ⁡ θ + b − c cos ⁡ θ ) + 1 2 sin ⁡ θ ( 2 a cos ⁡ θ − c ) + k sin ⁡ θ . {\displaystyle {\ddot {\theta }}={\frac {\cos \theta }{\sin ^{3}\theta }}(a\cos ^{2}\theta +b-c\cos \theta )+{\frac {1}{2\sin \theta }}(2a\cos \theta -c)+k\sin \theta \,.} Although the equation is highly nonlinear, there is only one equation to solve for, it was obtained directly, and the cyclic coordinates are not involved. By contrast, the Lagrangian approach leads to three nonlinear coupled equations to solve, despite the absence of the coordinates ψ and φ in the Lagrangian. The θ equation is d d t ∂ L ∂ θ ˙ = ∂ L ∂ θ ⇒ I 1 θ ¨ = ( I 1 − I 3 ) ϕ ˙ 2 sin ⁡ θ cos ⁡ θ − I 3 ψ ˙ ϕ ˙ sin ⁡ θ + M g ℓ sin ⁡ θ , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad I_{1}{\ddot {\theta }}=(I_{1}-I_{3}){\dot {\phi }}^{2}\sin \theta \cos \theta -I_{3}{\dot {\psi }}{\dot {\phi }}\sin \theta +Mg\ell \sin \theta \,,} the ψ equation is d d t ∂ L ∂ ψ ˙ = ∂ L ∂ ψ ⇒ ψ ¨ + ϕ ¨ cos ⁡ θ − ϕ ˙ θ ˙ sin ⁡ θ = 0 , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\psi }}}}={\frac {\partial L}{\partial \psi }}\quad \Rightarrow \quad {\ddot {\psi }}+{\ddot {\phi }}\cos \theta -{\dot {\phi }}{\dot {\theta }}\sin \theta =0\,,} and the φ equation is d d t ∂ L ∂ ϕ ˙ = ∂ L ∂ ϕ ⇒ ϕ ¨ ( I 1 sin 2 ⁡ θ + I 3 cos 2 ⁡ θ ) + ϕ ˙ ( I 1 − I 3 ) 2 sin ⁡ θ cos ⁡ θ θ ˙ + I 3 ψ ¨ cos ⁡ θ − I 3 ψ ˙ sin ⁡ θ θ ˙ = 0 , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\phi }}}}={\frac {\partial L}{\partial \phi }}\quad \Rightarrow \quad {\ddot {\phi }}(I_{1}\sin ^{2}\theta +I_{3}\cos ^{2}\theta )+{\dot {\phi }}(I_{1}-I_{3})2\sin \theta \cos \theta {\dot {\theta }}+I_{3}{\ddot {\psi }}\cos \theta -I_{3}{\dot {\psi }}\sin \theta {\dot {\theta }}=0\,,} === Velocity-dependent potentials === ==== Classical charged particle in a uniform magnetic field ==== Consider a classical charged particle of mass m and electric charge q in a static (time-independent) uniform (constant throughout space) magnetic field B. 
The Lagrangian for a charged particle in a general electromagnetic field given by the magnetic potential A and electric potential ϕ {\displaystyle \phi } is L = m 2 r ˙ 2 − q ϕ + q r ˙ ⋅ A , {\displaystyle L={\frac {m}{2}}{\dot {\mathbf {r} }}^{2}-q\phi +q{\dot {\mathbf {r} }}\cdot \mathbf {A} \,,} It is convenient to use cylindrical coordinates (r, θ, z), so that r ˙ = v = ( v r , v θ , v z ) = ( r ˙ , r θ ˙ , z ˙ ) , {\displaystyle {\dot {\mathbf {r} }}=\mathbf {v} =(v_{r},v_{\theta },v_{z})=({\dot {r}},r{\dot {\theta }},{\dot {z}})\,,} B = ( B r , B θ , B z ) = ( 0 , 0 , B ) . {\displaystyle \mathbf {B} =(B_{r},B_{\theta },B_{z})=(0,0,B)\,.} In this case of no electric field, the electric potential is zero, ϕ = 0 {\displaystyle \phi =0} , and we can choose the axial gauge for the magnetic potential A = 1 2 B × r ⇒ A = ( A r , A θ , A z ) = ( 0 , B r / 2 , 0 ) , {\displaystyle \mathbf {A} ={\frac {1}{2}}\mathbf {B} \times \mathbf {r} \quad \Rightarrow \quad \mathbf {A} =(A_{r},A_{\theta },A_{z})=(0,Br/2,0)\,,} and the Lagrangian is L ( r , r ˙ , θ ˙ , z ˙ ) = m 2 ( r ˙ 2 + r 2 θ ˙ 2 + z ˙ 2 ) + q B r 2 θ ˙ 2 . {\displaystyle L(r,{\dot {r}},{\dot {\theta }},{\dot {z}})={\frac {m}{2}}({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}+{\dot {z}}^{2})+{\frac {qBr^{2}{\dot {\theta }}}{2}}\,.} Notice this potential has an effectively cylindrical symmetry (although it also has angular velocity dependence), since the only spatial dependence is on the radial length from an imaginary cylinder axis. There are two cyclic coordinates, θ and z. The canonical momenta conjugate to θ and z are the constants p θ = ∂ L ∂ θ ˙ = m r 2 θ ˙ + q B r 2 2 , p z = ∂ L ∂ z ˙ = m z ˙ , {\displaystyle p_{\theta }={\frac {\partial L}{\partial {\dot {\theta }}}}=mr^{2}{\dot {\theta }}+{\frac {qBr^{2}}{2}}\,,\quad p_{z}={\frac {\partial L}{\partial {\dot {z}}}}=m{\dot {z}}\,,} so the velocities are θ ˙ = 1 m r 2 ( p θ − q B r 2 2 ) , z ˙ = p z m . {\displaystyle {\dot {\theta }}={\frac {1}{mr^{2}}}\left(p_{\theta }-{\frac {qBr^{2}}{2}}\right)\,,\quad {\dot {z}}={\frac {p_{z}}{m}}\,.} The angular momentum about the z axis is not pθ, but the quantity mr2dθ/dt, which is not conserved due to the contribution from the magnetic field. The canonical momentum pθ is the conserved quantity. It is still the case that pz is the linear or translational momentum along the z axis, which is also conserved. The radial component r and angular velocity dθ/dt can vary with time, but pθ is constant, and since pz is constant it follows dz/dt is constant. 
The Routhian can take the form R ( r , r ˙ ) = p θ θ ˙ + p z z ˙ − L = p θ θ ˙ + p z z ˙ − m 2 r ˙ 2 − p θ θ ˙ 2 − p z z ˙ 2 − 1 2 q B r 2 θ ˙ = ( p θ − q B r 2 ) θ ˙ 2 − m 2 r ˙ 2 + p z z ˙ 2 = 1 2 m r 2 ( p θ − q B r 2 ) ( p θ − q B r 2 2 ) − m 2 r ˙ 2 + p z 2 2 m = 1 2 m r 2 ( p θ 2 − 3 2 q B r 2 + ( q B ) 2 r 4 2 ) − m 2 r ˙ 2 {\displaystyle {\begin{aligned}R(r,{\dot {r}})&=p_{\theta }{\dot {\theta }}+p_{z}{\dot {z}}-L\\&=p_{\theta }{\dot {\theta }}+p_{z}{\dot {z}}-{\frac {m}{2}}{\dot {r}}^{2}-{\frac {p_{\theta }{\dot {\theta }}}{2}}-{\frac {p_{z}{\dot {z}}}{2}}-{\frac {1}{2}}qBr^{2}{\dot {\theta }}\\[6pt]&=(p_{\theta }-qBr^{2}){\frac {\dot {\theta }}{2}}-{\frac {m}{2}}{\dot {r}}^{2}+{\frac {p_{z}{\dot {z}}}{2}}\\[6pt]&={\frac {1}{2mr^{2}}}\left(p_{\theta }-qBr^{2}\right)\left(p_{\theta }-{\frac {qBr^{2}}{2}}\right)-{\frac {m}{2}}{\dot {r}}^{2}+{\frac {p_{z}^{2}}{2m}}\\[6pt]&={\frac {1}{2mr^{2}}}\left(p_{\theta }^{2}-{\frac {3}{2}}qBr^{2}+{\frac {(qB)^{2}r^{4}}{2}}\right)-{\frac {m}{2}}{\dot {r}}^{2}\end{aligned}}} where in the last line, the pz2/2m term is a constant and can be ignored without loss of continuity. The Hamiltonian equations for θ and z automatically vanish and do not need to be solved for. The Lagrangian equation in r d d t ∂ R ∂ r ˙ = ∂ R ∂ r {\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {r}}}}={\frac {\partial R}{\partial r}}} is by direct calculation − m r ¨ = 1 2 m [ − 2 r 3 ( p θ 2 − 3 2 q B r 2 + ( q B ) 2 r 4 2 ) + 1 r 2 ( − 3 q B r + 2 ( q B ) 2 r 3 ) ] , {\displaystyle -m{\ddot {r}}={\frac {1}{2m}}\left[{\frac {-2}{r^{3}}}\left(p_{\theta }^{2}-{\frac {3}{2}}qBr^{2}+{\frac {(qB)^{2}r^{4}}{2}}\right)+{\frac {1}{r^{2}}}(-3qBr+2(qB)^{2}r^{3})\right]\,,} which after collecting terms is m r ¨ = 1 2 m [ 2 p θ 2 r 3 − ( q B ) 2 r ] , {\displaystyle m{\ddot {r}}={\frac {1}{2m}}\left[{\frac {2p_{\theta }^{2}}{r^{3}}}-(qB)^{2}r\right]\,,} and simplifying further by introducing the constants a = p θ 2 m 2 , b = − ( q B ) 2 2 m 2 , {\displaystyle a={\frac {p_{\theta }^{2}}{m^{2}}}\,,\quad b=-{\frac {(qB)^{2}}{2m^{2}}}\,,} the differential equation is r ¨ = a r 3 + b r {\displaystyle {\ddot {r}}={\frac {a}{r^{3}}}+br} To see how z changes with time, integrate the momenta expression for pz above z = p z m t + c z , {\displaystyle z={\frac {p_{z}}{m}}t+c_{z}\,,} where cz is an arbitrary constant, the initial value of z to be specified in the initial conditions. The motion of the particle in this system is helicoidal, with the axial motion uniform (constant) but the radial and angular components varying in a spiral according to the equation of motion derived above. The initial conditions on r, dr/dt, θ, dθ/dt, will determine if the trajectory of the particle has a constant r or varying r. If initially r is nonzero but dr/dt = 0, while θ and dθ/dt are arbitrary, then the initial velocity of the particle has no radial component, r is constant, so the motion will be in a perfect helix. If r is constant, the angular velocity is also constant according to the conserved pθ. With the Lagrangian approach, the equation for r would include dθ/dt which has to be eliminated, and there would be equations for θ and z to solve for. 
With the Lagrangian approach, the equation for r would include dθ/dt, which has to be eliminated, and there would be equations for θ and z to solve for. The r equation is d d t ∂ L ∂ r ˙ = ∂ L ∂ r ⇒ m r ¨ = m r θ ˙ 2 + q B r θ ˙ , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {r}}}}={\frac {\partial L}{\partial r}}\quad \Rightarrow \quad m{\ddot {r}}=mr{\dot {\theta }}^{2}+qBr{\dot {\theta }}\,,} the θ equation is d d t ∂ L ∂ θ ˙ = ∂ L ∂ θ ⇒ m ( 2 r r ˙ θ ˙ + r 2 θ ¨ ) + q B r r ˙ = 0 , {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {\theta }}}}={\frac {\partial L}{\partial \theta }}\quad \Rightarrow \quad m(2r{\dot {r}}{\dot {\theta }}+r^{2}{\ddot {\theta }})+qBr{\dot {r}}=0\,,} and the z equation is d d t ∂ L ∂ z ˙ = ∂ L ∂ z ⇒ m z ¨ = 0 . {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {z}}}}={\frac {\partial L}{\partial z}}\quad \Rightarrow \quad m{\ddot {z}}=0\,.} The z equation is trivial to integrate, but the r and θ equations are not; in any case, the time derivatives are mixed into all of the equations and must be eliminated before solving. == See also == Calculus of variations Phase space Configuration space Many-body problem Rigid body mechanics == Footnotes == == Notes == == References == Landau, L. D.; Lifshitz, E. M. (15 January 1976). Mechanics (3rd ed.). Butterworth Heinemann. p. 134. ISBN 9780750628969. Hand, L. N.; Finch, J. D. (13 November 1998). Analytical Mechanics (2nd ed.). Cambridge University Press. p. 23. ISBN 9780521575720. Kibble, T. W. B.; Berkshire, F. H. (2004). Classical Mechanics (5th ed.). Imperial College Press. p. 236. ISBN 9781860944352. Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). San Francisco, CA: Addison Wesley. pp. 352–353. ISBN 0201029189. Goldstein, Herbert; Poole, Charles P. Jr.; Safko, John L. (2002). Classical Mechanics (3rd ed.). San Francisco, CA: Addison Wesley. pp. 347–349. ISBN 0-201-65702-3.
Wikipedia/Routhian_mechanics
A moment is a mathematical expression involving the product of a distance and a physical quantity such as a force or electric charge. Moments are usually defined with respect to a fixed reference point and refer to physical quantities located some distance from the reference point. For example, the moment of force, often called torque, is the product of a force on an object and the distance from the reference point to the object. In principle, any physical quantity can be multiplied by a distance to produce a moment. Commonly used quantities include forces, masses, and electric charge distributions; a list of examples is provided later. == Elaboration == In its most basic form, a moment is the product of the distance to a point, raised to a power, and a physical quantity (such as force or electrical charge) at that point: μ n = r n Q , {\displaystyle \mu _{n}=r^{n}\,Q,} where Q {\displaystyle Q} is the physical quantity such as a force applied at a point, or a point charge, or a point mass, etc. If the quantity is not concentrated solely at a single point, the moment is the integral of that quantity's density over space: μ n = ∫ r n ρ ( r ) d r {\displaystyle \mu _{n}=\int r^{n}\rho (r)\,dr} where ρ {\displaystyle \rho } is the distribution of the density of charge, mass, or whatever quantity is being considered. More complex forms take into account the angular relationships between the distance and the physical quantity, but the above equations capture the essential feature of a moment, namely the existence of an underlying r n ρ ( r ) {\displaystyle r^{n}\rho (r)} or equivalent term. This implies that there are multiple moments (one for each value of n) and that the moment generally depends on the reference point from which the distance r {\displaystyle r} is measured, although for certain moments (technically, the lowest non-zero moment) this dependence vanishes and the moment becomes independent of the reference point. Each value of n corresponds to a different moment: the 1st moment corresponds to n = 1; the 2nd moment to n = 2, etc. The 0th moment (n = 0) is sometimes called the monopole moment; the 1st moment (n = 1) is sometimes called the dipole moment, and the 2nd moment (n = 2) is sometimes called the quadrupole moment, especially in the context of electric charge distributions. === Examples === The moment of force, or torque, is a first moment: τ = r F {\displaystyle \mathbf {\tau } =rF} , or, more generally, r × F {\displaystyle \mathbf {r} \times \mathbf {F} } . Similarly, angular momentum is the 1st moment of momentum: L = r × p {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} } . Momentum itself is not a moment. The electric dipole moment is also a 1st moment: p = q d {\displaystyle \mathbf {p} =q\,\mathbf {d} } for two opposite point charges or ∫ r ρ ( r ) d 3 r {\textstyle \int \mathbf {r} \,\rho (\mathbf {r} )\,d^{3}r} for a distributed charge with charge density ρ ( r ) {\displaystyle \rho (\mathbf {r} )} . Moments of mass: The total mass is the zeroth moment of mass. The center of mass is the 1st moment of mass normalized by total mass: R = 1 M ∑ i r i m i {\textstyle \mathbf {R} ={\frac {1}{M}}\sum _{i}\mathbf {r} _{i}m_{i}} for a collection of point masses, or 1 M ∫ r ρ ( r ) d 3 r {\textstyle {\frac {1}{M}}\int \mathbf {r} \rho (\mathbf {r} )\,d^{3}r} for an object with mass distribution ρ ( r ) {\displaystyle \rho (\mathbf {r} )} . The moment of inertia is the 2nd moment of mass: I = r 2 m {\displaystyle I=r^{2}m} for a point mass, ∑ i r i 2 m i {\textstyle \sum _{i}r_{i}^{2}m_{i}} for a collection of point masses, or ∫ r 2 ρ ( r ) d 3 r {\textstyle \int r^{2}\rho (\mathbf {r} )\,d^{3}r} for an object with mass distribution ρ ( r ) {\displaystyle \rho (\mathbf {r} )} . The center of mass is often (but not always) taken as the reference point.
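For a discrete distribution, the mass moments just listed reduce to simple sums. A minimal numerical sketch (Python with numpy; the masses and positions are made-up illustrative data):

import numpy as np

masses = np.array([2.0, 1.0, 3.0])       # point masses, kg
positions = np.array([0.0, 2.0, 4.0])    # positions along a line, m

M = masses.sum()                          # 0th moment: total mass -> 6.0
R = (positions * masses).sum() / M        # 1st moment / M: centre of mass -> 2.33...
I_origin = (positions**2 * masses).sum()  # 2nd moment about the origin -> 52.0
I_com = ((positions - R)**2 * masses).sum()  # 2nd moment about the centre of mass -> 19.33...
print(M, R, I_origin, I_com)

# The total mass (here the lowest non-zero moment) is independent of the
# reference point; the second moment clearly is not.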
== Multipole moments == Assuming a density function that is finite and localized to a particular region, outside that region a 1/r potential may be expressed as a series of spherical harmonics: Φ ( r ) = ∫ ρ ( r ′ ) | r − r ′ | d 3 r ′ = ∑ ℓ = 0 ∞ ∑ m = − ℓ ℓ ( 4 π 2 ℓ + 1 ) q ℓ m Y ℓ m ( θ , φ ) r ℓ + 1 {\displaystyle \Phi (\mathbf {r} )=\int {\frac {\rho (\mathbf {r'} )}{|\mathbf {r} -\mathbf {r'} |}}\,d^{3}r'=\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }\left({\frac {4\pi }{2\ell +1}}\right)q_{\ell m}\,{\frac {Y_{\ell m}(\theta ,\varphi )}{r^{\ell +1}}}} The coefficients q ℓ m {\displaystyle q_{\ell m}} are known as multipole moments, and take the form: q ℓ m = ∫ ( r ′ ) ℓ ρ ( r ′ ) Y ℓ m ∗ ( θ ′ , φ ′ ) d 3 r ′ {\displaystyle q_{\ell m}=\int (r')^{\ell }\,\rho (\mathbf {r'} )\,Y_{\ell m}^{*}(\theta ',\varphi ')\,d^{3}r'} where r ′ {\displaystyle \mathbf {r} '} expressed in spherical coordinates ( r ′ , φ ′ , θ ′ ) {\displaystyle \left(r',\varphi ',\theta '\right)} is a variable of integration. A more complete treatment may be found in pages describing multipole expansion or spherical multipole moments. (The convention in the above equations was taken from Jackson – the conventions used in the referenced pages may be slightly different.) When ρ {\displaystyle \rho } represents an electric charge density, the q l m {\displaystyle q_{lm}} are, in a sense, projections of the moments of electric charge: q 00 {\displaystyle q_{00}} is the monopole moment; the q 1 m {\displaystyle q_{1m}} are projections of the dipole moment, the q 2 m {\displaystyle q_{2m}} are projections of the quadrupole moment, etc. == Applications of multipole moments == The multipole expansion applies to 1/r scalar potentials, examples of which include the electric potential and the gravitational potential. For these potentials, the expression can be used to approximate the strength of a field produced by a localized distribution of charges (or mass) by calculating the first few moments. For sufficiently large r, a reasonable approximation can be obtained from just the monopole and dipole moments. Higher fidelity can be achieved by calculating higher order moments. Extensions of the technique can be used to calculate interaction energies and intermolecular forces. The technique can also be used to determine the properties of an unknown distribution ρ {\displaystyle \rho } . Measurements pertaining to multipole moments may be taken and used to infer properties of the underlying distribution. This technique applies to small objects such as molecules, but has also been applied to the universe itself, being for example the technique employed by the WMAP and Planck experiments to analyze the cosmic microwave background radiation.
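These projections can be evaluated numerically for a discrete charge distribution. The sketch below (Python; the point-charge dipole is made-up illustrative data) uses scipy's classic sph_harm routine, whose argument order (m, l, azimuthal angle, polar angle) differs from the notation above, and reproduces the expected relations q00 = 0 and q10 = √(3/4π) pz for a dipole of moment pz = qd aligned with the z axis:

import numpy as np
from scipy.special import sph_harm

# Point-charge dipole: +1 at z = +0.5, -1 at z = -0.5 (so p_z = q d = 1)
charges = np.array([+1.0, -1.0])
radii = np.array([0.5, 0.5])
polar = np.array([0.0, np.pi])      # theta' in the formula above
azimuth = np.array([0.0, 0.0])      # phi' in the formula above

def q_lm(l, mm):
    # q_lm = sum_i q_i r_i^l Y*_lm(theta_i, phi_i) for point charges
    return np.sum(charges * radii**l * np.conj(sph_harm(mm, l, azimuth, polar)))

print(q_lm(0, 0))                   # ~0: zero net charge, the monopole vanishes
print(q_lm(1, 0))                   # ~0.4886 = sqrt(3/(4*pi)) * p_z
print(np.sqrt(3 / (4 * np.pi)))     # matches the line above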
== History == In works believed to stem from Ancient Greece, the concept of a moment is alluded to by the word ῥοπή (rhopḗ, lit. "inclination") and composites like ἰσόρροπα (isorropa, lit. "of equal inclinations"). The context of these works is mechanics and geometry involving the lever. In particular, in extant works attributed to Archimedes, the moment is pointed out in phrasings like: "Commensurable magnitudes (σύμμετρα μεγέθεα) [A and B] are equally balanced (ἰσορροπέοντι) if their distances [to the center Γ, i.e., ΑΓ and ΓΒ] are inversely proportional (ἀντιπεπονθότως) to their weights (βάρεσιν)." Moreover, in extant texts such as The Method of Mechanical Theorems, moments are used to infer the center of gravity, area, and volume of geometric figures. In 1269, William of Moerbeke translates various works of Archimedes and Eutocius into Latin. The term ῥοπή is transliterated into ropen. Around 1450, Jacobus Cremonensis translates ῥοπή in similar texts into the Latin term momentum (lit. "movement"). The same term is kept in a 1501 translation by Giorgio Valla, and subsequently by Francesco Maurolico, Federico Commandino, Guidobaldo del Monte, Adriaan van Roomen, Florence Rivault, Francesco Buonamici, Marin Mersenne, and Galileo Galilei. That said, why was the word momentum chosen for the translation? One clue, according to Treccani, is that momento in medieval Italy, the place the early translators lived, in a transferred sense meant both a "moment of time" and a "moment of weight" (a small amount of weight that turns the scale). In 1554, Francesco Maurolico clarifies the Latin term momentum in the work Prologi sive sermones. Here is a Latin to English translation as given by Marshall Clagett: "[...] equal weights at unequal distances do not weigh equally, but unequal weights [at these unequal distances may] weigh equally. For a weight suspended at a greater distance is heavier, as is obvious in a balance. Therefore, there exists a certain third kind of power or third difference of magnitude—one that differs from both body and weight—and this they call moment. Therefore, a body acquires weight from both quantity [i.e., size] and quality [i.e., material], but a weight receives its moment from the distance at which it is suspended. Therefore, when distances are reciprocally proportional to weights, the moments [of the weights] are equal, as Archimedes demonstrated in The Book on Equal Moments. Therefore, weights or [rather] moments like other continuous quantities, are joined at some common terminus, that is, at something common to both of them like the center of weight, or at a point of equilibrium. Now the center of gravity in any weight is that point which, no matter how often or whenever the body is suspended, always inclines perpendicularly toward the universal center. In addition to body, weight, and moment, there is a certain fourth power, which can be called impetus or force. Aristotle investigates it in On Mechanical Questions, and it is completely different from [the] three aforesaid [powers or magnitudes]. [...]" In 1586, Simon Stevin uses the Dutch term staltwicht ("parked weight") for momentum in De Beghinselen Der Weeghconst. In 1632, Galileo Galilei publishes Dialogue Concerning the Two Chief World Systems and uses the Italian momento with many meanings, including the one of his predecessors. In 1643, Thomas Salusbury translates some of Galilei's works into English. Salusbury translates Latin momentum and Italian momento into the English term moment. In 1765, the Latin term momentum inertiae (English: moment of inertia) is used by Leonhard Euler to refer to one of Christiaan Huygens's quantities in Horologium Oscillatorium.
Huygens' 1673 work involving finding the center of oscillation had been stimulated by Marin Mersenne, who suggested it to him in 1646. In 1811, the French term moment d'une force (English: moment of a force) with respect to a point and plane is used by Siméon Denis Poisson in Traité de mécanique. An English translation appears in 1842. In 1884, the term torque is suggested by James Thomson in the context of measuring rotational forces of machines (with propellers and rotors). Today, a dynamometer is used to measure the torque of machines. In 1893, Karl Pearson uses the term n-th moment and μ n {\displaystyle \mu _{n}} in the context of curve-fitting scientific measurements. Pearson wrote in response to John Venn, who, some years earlier, observed a peculiar pattern involving meteorological data and asked for an explanation of its cause. In Pearson's response, this analogy is used: the mechanical "center of gravity" is the mean and the "distance" is the deviation from the mean. This later evolved into moments in mathematics. The analogy between the mechanical concept of a moment and the statistical function involving the sum of the nth powers of deviations was noticed by several earlier writers, including Laplace, Kramp, Gauss, Encke, Czuber, Quetelet, and De Forest. == See also == Torque (or moment of force), see also the article couple (mechanics) Moment (mathematics) Mechanical equilibrium, applies when an object is balanced so that the sum of the clockwise moments about a pivot is equal to the sum of the anticlockwise moments about the same pivot Moment of inertia ( I = Σ m r 2 ) {\displaystyle \left(I=\Sigma mr^{2}\right)} , analogous to mass in discussions of rotational motion. It is a measure of an object's resistance to changes in its rotation rate Moment of momentum ( L = r × m v ) {\displaystyle (\mathbf {L} =\mathbf {r} \times m\mathbf {v} )} , the rotational analog of linear momentum. Magnetic moment ( μ = I A ) {\displaystyle \left(\mathbf {\mu } =I\mathbf {A} \right)} , a dipole moment measuring the strength and direction of a magnetic source. Electric dipole moment, a dipole moment measuring the charge difference and direction between two or more charges. For example, the electric dipole moment between a charge of –q and q separated by a distance of d is ( p = q d ) {\displaystyle (\mathbf {p} =q\mathbf {d} )} Bending moment, a moment that results in the bending of a structural element First moment of area, a property of an object related to its resistance to shear stress Second moment of area, a property of an object related to its resistance to bending and deflection Polar moment of inertia, a property of an object related to its resistance to torsion Image moments, statistical properties of an image Seismic moment, quantity used to measure the size of an earthquake Plasma moments, fluid description of plasma in terms of density, velocity and pressure List of area moments of inertia List of moments of inertia Multipole expansion Spherical multipole moments == Notes == == References == == External links == Media related to Moment (physics) at Wikimedia Commons A dictionary definition of moment.
Wikipedia/Moment_(physics)
A fictitious force, also known as an inertial force or pseudo-force, is a force that appears to act on an object when its motion is described or experienced from a non-inertial frame of reference. Unlike real forces, which result from physical interactions between objects, fictitious forces occur due to the acceleration of the observer’s frame of reference rather than any actual force acting on a body. These forces are necessary for describing motion correctly within an accelerating frame, ensuring that Newton's second law of motion remains applicable. Common examples of fictitious forces include the centrifugal force, which appears to push objects outward in a rotating system; the Coriolis force, which affects moving objects in a rotating frame such as the Earth; and the Euler force, which arises when a rotating system changes its angular velocity. While these forces are not real in the sense of being caused by physical interactions, they are essential for accurately analyzing motion within accelerating reference frames, particularly in disciplines such as classical mechanics, meteorology, and astrophysics. Fictitious forces play a crucial role in understanding everyday phenomena, such as weather patterns influenced by the Coriolis effect and the perceived weightlessness experienced by astronauts in free-fall orbits. They are also fundamental in engineering applications, including navigation systems and rotating machinery. According to the general theory of relativity, gravitational force is perceived where spacetime curves near massive objects, so even gravity might be called a fictitious force. == Measurable examples of fictitious forces == Passengers in a vehicle accelerating in the forward direction, for instance, may perceive that a force is pushing them back into their seats. In a rotating reference frame, an example is the impression of a force that seems to move objects outward toward the rim of a centrifuge or carousel. The fictitious force called a pseudo force might also be referred to as a body force. It is due to an object's inertia when the reference frame does not move inertially any more but begins to accelerate relative to the free object. In terms of the example of the passenger vehicle, a pseudo force seems to be active just before the body touches the backrest of the seat in the car. A person in the car leaning forward first moves a bit backward in relation to the already accelerating car before touching the backrest. The motion in this short period seems to be the result of a force on the person; i.e., it is a pseudo force. A pseudo force does not arise from any physical interaction between two objects, such as electromagnetism or contact forces. It is only a consequence of the acceleration of the physical object the non-inertial reference frame is connected to, i.e. the vehicle in this case. From the viewpoint of the respective accelerating frame, an acceleration of the inert object appears to be present, apparently requiring a "force" for this to have happened. As stated by Iro: Such an additional force due to nonuniform relative motion of two reference frames is called a pseudo-force. The pseudo force on an object arises as an imaginary influence when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. The pseudo force "explains", using Newton's second law mechanics, why an object does not follow Newton's second law and "floats freely" as if weightless.
As a frame may accelerate in any arbitrary way, so may pseudo forces also be as arbitrary (but only in direct response to the acceleration of the frame). An example of a pseudo force as defined by Iro is the Coriolis force, perhaps better called the Coriolis effect. The gravitational force would also be a fictitious force (pseudo force) in a field model in which particles distort spacetime due to their mass, such as in the theory of general relativity. Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m. The fictitious force that has been called an inertial force is also referred to as a d'Alembert force, or sometimes as a pseudo force. D'Alembert's principle is just another way of formulating Newton's second law of motion. It defines an inertial force as the negative of the product of mass times acceleration, just for the sake of easier calculations. (A d'Alembert force is not to be confused with a contact force arising from the physical interaction between two objects, which is the subject of Newton's third law – 'action is reaction'. In terms of the example of the passenger vehicle above, a contact force emerges when the body of the passenger touches the backrest of the seat in the car. It is present for as long as the car is accelerated.) Four fictitious forces have been defined for frames accelerated in commonly occurring ways: one caused by any acceleration relative to the origin in a straight line (rectilinear acceleration); two involving rotation, namely the centrifugal force and the Coriolis force; and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur. == Background == The role of fictitious forces in Newtonian mechanics is described by Tonnelat: For Newton, the appearance of acceleration always indicates the existence of absolute motion – absolute motion of matter where real forces are concerned; absolute motion of the reference system, where so-called fictitious forces, such as inertial forces or those of Coriolis, are concerned. Fictitious forces arise in classical mechanics and special relativity in all non-inertial frames. Inertial frames are privileged over non-inertial frames because they do not have physics whose causes are outside of the system, while non-inertial frames do. Fictitious forces, or physics whose cause is outside of the system, are no longer necessary in general relativity, since these physics are explained with the geodesics of spacetime: "The field of all possible space-time null geodesics or photon paths unifies the absolute local non-rotation standard throughout space-time." == On Earth == The surface of the Earth is a rotating reference frame. To solve classical mechanics problems exactly in an Earthbound reference frame, three fictitious forces must be introduced: the Coriolis force, the centrifugal force (described below) and the Euler force. The Euler force is typically ignored because the variations in the angular velocity of the rotating surface of the Earth are usually insignificant. Both of the other fictitious forces are weak compared to most typical forces in everyday life, but they can be detected under careful conditions. For example, Léon Foucault used his Foucault pendulum to show that the Coriolis force results from the Earth's rotation. If the Earth were to rotate twenty times faster (making each day only ~72 minutes long), people could easily get the impression that such fictitious forces were pulling on them, as on a spinning carousel. People in temperate and tropical latitudes would, in fact, need to hold on, in order to avoid being launched into orbit by the centrifugal force.
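The twenty-times-faster figure is easy to check. A back-of-the-envelope sketch (Python; the equatorial radius and day length are rounded assumptions, not from this article) compares the centrifugal acceleration ω²R at the equator with g for the actual rotation rate and for one twenty times faster:

import math

g = 9.81                       # m/s^2
R = 6.378e6                    # equatorial radius, m
omega = 2 * math.pi / 86164    # sidereal rotation rate, rad/s

for k in (1, 20):
    a_cf = (k * omega)**2 * R  # centrifugal acceleration at the equator
    print(k, round(a_cf, 3), round(a_cf / g, 3))
# k = 1 : 0.034 m/s^2, about 0.3% of g
# k = 20: 13.566 m/s^2, about 1.4 g -- loose objects at the equator really would be flung off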
When moving along the equator in a ship heading in an easterly direction, objects appear to be slightly lighter than on the way back. This phenomenon has been observed and is called the Eötvös effect. == Detection of non-inertial reference frame == Observers inside a closed box that is moving with a constant velocity cannot detect their own motion; however, observers within an accelerating reference frame can detect that they are in a non-inertial reference frame from the fictitious forces that arise. For example, for straight-line acceleration Vladimir Arnold presents the following theorem: In a coordinate system K which moves by translation relative to an inertial system k, the motion of a mechanical system takes place as if the coordinate system were inertial, but on every point of mass m an additional "inertial force" acted: F = −ma, where a is the acceleration of the system K. Other accelerations also give rise to fictitious forces, as described mathematically below. The physical explanation of motions in an inertial frame is the simplest possible, requiring no fictitious forces: fictitious forces are zero, providing a means to distinguish inertial frames from others. An example of the detection of a non-inertial, rotating reference frame is the precession of a Foucault pendulum. In the non-inertial frame of the Earth, the fictitious Coriolis force is necessary to explain observations. In an inertial frame outside the Earth, no such fictitious force is necessary. == Example concerning circular motion == The effect of a fictitious force also occurs when a car takes a bend. Observed from a non-inertial frame of reference attached to the car, the fictitious force called the centrifugal force appears. As the car enters a left turn, a suitcase first on the left rear seat slides to the right rear seat and then continues until it comes into contact with the closed door on the right. This phase of the motion is attributed to the fictitious centrifugal force, as it is the inertia of the suitcase that produces this piece of movement. It may seem that there must be a force responsible for this movement, but actually, this movement arises because of the inertia of the suitcase, which is (still) a 'free object' within an already accelerating frame of reference. After the suitcase has come into contact with the closed door of the car, contact forces arise. The centripetal force on the car is now also transferred to the suitcase and the situation of Newton's third law comes into play, with the centripetal force as the action part and with the so-called reactive centrifugal force as the reaction part. The reactive centrifugal force is also due to the inertia of the suitcase. Now however the inertia appears in the form of a manifesting resistance to a change in its state of motion. Suppose that a few miles further on the car travels a roundabout at constant speed, circling again and again; the occupants will then feel as if they are being pushed to the outside of the vehicle by the (reactive) centrifugal force, away from the centre of the turn. The situation can be viewed from inertial as well as from non-inertial frames. From the viewpoint of an inertial reference frame stationary with respect to the road, the car is accelerating toward the centre of the circle.
It is accelerating, because the direction of the velocity is changing, despite the car having constant speed. This inward acceleration is called centripetal acceleration; it requires a centripetal force to maintain the circular motion. This force is exerted by the ground upon the wheels, in this case, from the friction between the wheels and the road. The car is accelerating, due to the unbalanced force, which causes it to move in a circle. (See also banked turn.) From the viewpoint of a rotating frame, moving with the car, a fictitious centrifugal force appears to be present pushing the car toward the outside of the road (and pushing the occupants toward the outside of the car). The centrifugal force balances the friction between wheels and the road, making the car stationary in this non-inertial frame. A classic example of a fictitious force in circular motion is the experiment of rotating spheres tied by a cord and spinning around their centre of mass. In this case, the identification of a rotating, non-inertial frame of reference can be based upon the need for fictitious forces. In an inertial frame, fictitious forces are not necessary to explain the tension in the string joining the spheres. In a rotating frame, Coriolis and centrifugal forces must be introduced to predict the observed tension. In the rotating reference frame perceived on the surface of the Earth, a centrifugal force reduces the apparent force of gravity by about three parts in a thousand, depending on latitude. This reduction is zero at the poles, maximum at the equator. The fictitious Coriolis force, which is observed in rotational frames, is ordinarily visible only in very large-scale motion like the projectile motion of long-range guns or the circulation of the Earth's atmosphere (see Rossby number). Neglecting air resistance, an object dropped from a 50-meter-high tower at the equator will fall 7.7 millimetres eastward of the spot below where it is dropped because of the Coriolis force.
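That figure can be checked with the standard first-order result for the eastward deflection of a body dropped from rest, d = ωgt³cos(latitude)/3, a textbook formula assumed here rather than taken from this article (Python):

import math

g, h = 9.81, 50.0               # gravity (m/s^2) and tower height (m)
omega = 2 * math.pi / 86164     # Earth's sidereal rotation rate, rad/s

t = math.sqrt(2 * h / g)        # fall time, ~3.19 s
d = omega * g * t**3 / 3        # eastward deflection at the equator (cos 0 = 1)
print(round(d * 1000, 1))       # -> 7.8 mm, matching the quoted ~7.7 mm to rounding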
== Fictitious forces and work == Fictitious forces can be considered to do work, provided that they move an object on a trajectory that changes its energy from potential to kinetic. For example, consider some persons in rotating chairs holding a weight in their outstretched hands. If they pull their hand inward toward their body, from the perspective of the rotating reference frame, they have done work against the centrifugal force. When the weight is let go, it spontaneously flies outward relative to the rotating reference frame, because the centrifugal force does work on the object, converting its potential energy into kinetic. From an inertial viewpoint, of course, the object flies away from them because it is suddenly allowed to move in a straight line. This illustrates that the work done, like the total potential and kinetic energy of an object, can be different in a non-inertial frame than in an inertial one. == Gravity as a fictitious force == The notion of "fictitious force" also arises in Einstein's general theory of relativity. All fictitious forces are proportional to the mass of the object upon which they act, which is also true for gravity. This led Albert Einstein to wonder whether gravity could be modeled as a fictitious force. He noted that a freefalling observer in a closed box would not be able to detect the force of gravity; hence, freefalling reference frames are equivalent to inertial reference frames (the equivalence principle). Developing this insight, Einstein formulated a theory with gravity as a fictitious force, and attributed the apparent acceleration due to gravity to the curvature of spacetime. This idea underlies Einstein's theory of general relativity. See the Eötvös experiment. == Mathematical derivation of fictitious forces == === General derivation === Many problems require use of noninertial reference frames, for example, those involving satellites and particle accelerators. Figure 2 shows a particle with mass m and position vector xA(t) in a particular inertial frame A. Consider a non-inertial frame B whose origin relative to the inertial one is given by XAB(t). Let the position of the particle in frame B be xB(t). What is the force on the particle as expressed in the coordinate system of frame B? To answer this question, let the coordinate axes in B be represented by unit vectors uj with j any of { 1, 2, 3 } for the three coordinate axes. Then x B = ∑ j = 1 3 x j u j . {\displaystyle \mathbf {x} _{\mathrm {B} }=\sum _{j=1}^{3}x_{j}\mathbf {u} _{j}\,.} The interpretation of this equation is that xB is the vector displacement of the particle as expressed in terms of the coordinates in frame B at the time t. From frame A the particle is located at: x A = X A B + ∑ j = 1 3 x j u j . {\displaystyle \mathbf {x} _{\mathrm {A} }=\mathbf {X} _{\mathrm {AB} }+\sum _{j=1}^{3}x_{j}\mathbf {u} _{j}\,.} As an aside, the unit vectors { uj } cannot change magnitude, so derivatives of these vectors express only rotation of the coordinate system B. On the other hand, vector XAB simply locates the origin of frame B relative to frame A, and so cannot include rotation of frame B. Taking a time derivative, the velocity of the particle is: d x A d t = d X A B d t + ∑ j = 1 3 d x j d t u j + ∑ j = 1 3 x j d u j d t . {\displaystyle {\frac {d\mathbf {x} _{\mathrm {A} }}{dt}}={\frac {d\mathbf {X} _{\mathrm {AB} }}{dt}}+\sum _{j=1}^{3}{\frac {dx_{j}}{dt}}\mathbf {u} _{j}+\sum _{j=1}^{3}x_{j}{\frac {d\mathbf {u} _{j}}{dt}}\,.} The second term summation is the velocity of the particle, say vB as measured in frame B. That is: d x A d t = v A B + v B + ∑ j = 1 3 x j d u j d t . {\displaystyle {\frac {d\mathbf {x} _{\mathrm {A} }}{dt}}=\mathbf {v} _{\mathrm {AB} }+\mathbf {v} _{\mathrm {B} }+\sum _{j=1}^{3}x_{j}{\frac {d\mathbf {u} _{j}}{dt}}.} The interpretation of this equation is that the velocity of the particle seen by observers in frame A consists of what observers in frame B call the velocity, namely vB, plus two extra terms related to the rate of change of the frame-B coordinate axes. One of these is simply the velocity of the moving origin vAB. The other is a contribution to velocity due to the fact that different locations in the non-inertial frame have different apparent velocities due to the rotation of the frame; a point seen from a rotating frame has a rotational component of velocity that is greater the further the point is from the origin. To find the acceleration, another time differentiation provides: d 2 x A d t 2 = a A B + d v B d t + ∑ j = 1 3 d x j d t d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 .
{\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {AB} }+{\frac {d\mathbf {v} _{\mathrm {B} }}{dt}}+\sum _{j=1}^{3}{\frac {dx_{j}}{dt}}{\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}.} Using the same formula already used for the time derivative of xB, the velocity derivative on the right is: d v B d t = ∑ j = 1 3 d v j d t u j + ∑ j = 1 3 v j d u j d t = a B + ∑ j = 1 3 v j d u j d t . {\displaystyle {\frac {d\mathbf {v} _{\mathrm {B} }}{dt}}=\sum _{j=1}^{3}{\frac {dv_{j}}{dt}}\mathbf {u} _{j}+\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}=\mathbf {a} _{\mathrm {B} }+\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}.} Consequently, d 2 x A d t 2 = a A B + a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 . {\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {AB} }+\mathbf {a} _{\mathrm {B} }+2\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\,.} (Eq. 1) The interpretation of this equation is as follows: the acceleration of the particle in frame A consists of what observers in frame B call the particle acceleration aB, but in addition, there are three acceleration terms related to the movement of the frame-B coordinate axes: one term related to the acceleration of the origin of frame B, namely aAB, and two terms related to the rotation of frame B. Consequently, observers in B will see the particle motion as possessing "extra" acceleration, which they will attribute to "forces" acting on the particle, but which observers in A say are "fictitious" forces arising simply because observers in B do not recognize the non-inertial nature of frame B. The factor of two in the Coriolis force arises from two equal contributions: (i) the apparent change of an inertially constant velocity with time because rotation makes the direction of the velocity seem to change (a dvB/dt term) and (ii) an apparent change in the velocity of an object when its position changes, putting it nearer to or further from the axis of rotation (the change in ∑ x j d u j / d t {\textstyle \sum x_{j}\,d\mathbf {u} _{j}/dt} due to change in x j ). To put matters in terms of forces, the accelerations are multiplied by the particle mass: F A = F B + m a A B + 2 m ∑ j = 1 3 v j d u j d t + m ∑ j = 1 3 x j d 2 u j d t 2 . {\displaystyle \mathbf {F} _{\mathrm {A} }=\mathbf {F} _{\mathrm {B} }+m\mathbf {a} _{\mathrm {AB} }+2m\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}+m\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\ .} The force observed in frame B, FB = maB, is related to the actual force on the particle, FA, by F B = F A + F f i c t i t i o u s , {\displaystyle \mathbf {F} _{\mathrm {B} }=\mathbf {F} _{\mathrm {A} }+\mathbf {F} _{\mathrm {fictitious} },} where: F f i c t i t i o u s = − m a A B − 2 m ∑ j = 1 3 v j d u j d t − m ∑ j = 1 3 x j d 2 u j d t 2 . {\displaystyle \mathbf {F} _{\mathrm {fictitious} }=-m\mathbf {a} _{\mathrm {AB} }-2m\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}-m\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\,.} Thus, problems may be solved in frame B by assuming that Newton's second law holds (with respect to quantities in that frame) and treating Ffictitious as an additional force. Below are a number of examples applying this result for fictitious forces. More examples can be found in the article on centrifugal force. === Rotating coordinate systems === A common situation in which noninertial reference frames are useful is when the reference frame is rotating. Because such rotational motion is non-inertial, due to the acceleration present in any rotational motion, a fictitious force can always be invoked by using a rotational frame of reference.
Despite this complication, the use of fictitious forces often simplifies the calculations involved. To derive expressions for the fictitious forces, derivatives are needed for the apparent time rate of change of vectors that take into account time-variation of the coordinate axes. If the rotation of frame 'B' is represented by a vector Ω pointed along the axis of rotation with the orientation given by the right-hand rule, and with magnitude given by | Ω | = d θ d t = ω ( t ) , {\displaystyle |{\boldsymbol {\Omega }}|={\frac {d\theta }{dt}}=\omega (t),} then the time derivative of any of the three unit vectors describing frame B is d u j ( t ) d t = Ω × u j ( t ) , {\displaystyle {\frac {d\mathbf {u} _{j}(t)}{dt}}={\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t),} and d 2 u j ( t ) d t 2 = d Ω d t × u j + Ω × d u j ( t ) d t = d Ω d t × u j + Ω × [ Ω × u j ( t ) ] , {\displaystyle {\frac {d^{2}\mathbf {u} _{j}(t)}{dt^{2}}}={\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}+{\boldsymbol {\Omega }}\times {\frac {d\mathbf {u} _{j}(t)}{dt}}={\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}+{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)\right],} as is verified using the properties of the vector cross product. These derivative formulas now are applied to the relationship between acceleration in an inertial frame, and that in a coordinate frame rotating with time-varying angular velocity ω(t). From the previous section, where subscript A refers to the inertial frame and B to the rotating frame, setting aAB = 0 to remove any translational acceleration, and focusing on only rotational properties (see Eq. 1): d 2 x A d t 2 = a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 , {\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {B} }+2\sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}},} a A = a B + 2 ∑ j = 1 3 v j Ω × u j ( t ) + ∑ j = 1 3 x j d Ω d t × u j + ∑ j = 1 3 x j Ω × [ Ω × u j ( t ) ] = a B + 2 Ω × ∑ j = 1 3 v j u j ( t ) + d Ω d t × ∑ j = 1 3 x j u j + Ω × [ Ω × ∑ j = 1 3 x j u j ( t ) ] . {\displaystyle {\begin{aligned}\mathbf {a} _{\mathrm {A} }&=\mathbf {a} _{\mathrm {B} }+\ 2\sum _{j=1}^{3}v_{j}{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)+\sum _{j=1}^{3}x_{j}{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}\ +\sum _{j=1}^{3}x_{j}{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)\right]\\&=\mathbf {a} _{\mathrm {B} }+2{\boldsymbol {\Omega }}\times \sum _{j=1}^{3}v_{j}\mathbf {u} _{j}(t)+{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \sum _{j=1}^{3}x_{j}\mathbf {u} _{j}+{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \sum _{j=1}^{3}x_{j}\mathbf {u} _{j}(t)\right].\end{aligned}}} Collecting terms, the result is the so-called acceleration transformation formula: a A = a B + 2 Ω × v B + d Ω d t × x B + Ω × ( Ω × x B ) . 
{\displaystyle \mathbf {a} _{\mathrm {A} }=\mathbf {a} _{\mathrm {B} }+2{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }+{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }+{\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} }\right)\,.} The physical acceleration aA due to what observers in the inertial frame A call real external forces on the object is, therefore, not simply the acceleration aB seen by observers in the rotational frame B, but has several additional geometric acceleration terms associated with the rotation of B. As seen in the rotational frame, the acceleration aB of the particle is given by rearrangement of the above equation as: a B = a A − 2 Ω × v B − Ω × ( Ω × x B ) − d Ω d t × x B . {\displaystyle \mathbf {a} _{\mathrm {B} }=\mathbf {a} _{\mathrm {A} }-2{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.} The net force upon the object according to observers in the rotating frame is FB = maB. If their observations are to result in the correct force on the object when using Newton's laws, they must consider that the additional force Ffict is present, so the end result is FB = FA + Ffict. Thus, the fictitious force used by observers in B to get the correct behaviour of the object from Newton's laws equals: F f i c t = − 2 m Ω × v B − m Ω × ( Ω × x B ) − m d Ω d t × x B . {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-m{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-m{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.} Here, the first term is the Coriolis force, the second term is the centrifugal force, and the third term is the Euler force. === Orbiting coordinate systems === As a related example, suppose the moving coordinate system B rotates with a constant angular speed ω in a circle of radius R about the fixed origin of inertial frame A, but maintains its coordinate axes fixed in orientation, as in Figure 3. The acceleration of an observed body is now (see Eq. 1): d 2 x A d t 2 = a A B + a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 = a A B + a B , {\displaystyle {\begin{aligned}{\frac {d^{2}\mathbf {x} _{A}}{dt^{2}}}&=\mathbf {a} _{AB}+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}\ {\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\\&=\mathbf {a} _{AB}\ +\mathbf {a} _{B}\ ,\end{aligned}}} where the summations are zero inasmuch as the unit vectors have no time dependence. The origin of the system B is located according to frame A at: X A B = R ( cos ⁡ ( ω t ) , sin ⁡ ( ω t ) ) , {\displaystyle \mathbf {X} _{AB}=R\left(\cos(\omega t),\ \sin(\omega t)\right)\ ,} leading to a velocity of the origin of frame B as: v A B = d d t X A B = Ω × X A B , {\displaystyle \mathbf {v} _{AB}={\frac {d}{dt}}\mathbf {X} _{AB}=\mathbf {\Omega \times X} _{AB}\ ,} leading to an acceleration of the origin of B given by: a A B = d 2 d t 2 X A B = Ω × ( Ω × X A B ) = − ω 2 X A B . 
{\displaystyle \mathbf {a} _{AB}={\frac {d^{2}}{dt^{2}}}\mathbf {X} _{AB}=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)=-\omega ^{2}\mathbf {X} _{AB}\,.} Because the first term, which is Ω × ( Ω × X A B ) , {\displaystyle \mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)\,,} is of the same form as the normal centrifugal force expression: Ω × ( Ω × x B ) , {\displaystyle {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\,,} it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term a "centrifugal force". Whatever terminology is adopted, the observers in frame B must introduce a fictitious force, this time due to the acceleration from the orbital motion of their entire coordinate frame, that is radially outward away from the centre of rotation of the origin of their coordinate system: F f i c t = m ω 2 X A B , {\displaystyle \mathbf {F} _{\mathrm {fict} }=m\omega ^{2}\mathbf {X} _{AB}\,,} and of magnitude: | F f i c t | = m ω 2 R . {\displaystyle |\mathbf {F} _{\mathrm {fict} }|=m\omega ^{2}R\,.} This "centrifugal force" has differences from the case of a rotating frame. In the rotating frame the centrifugal force is related to the distance of the object from the origin of frame B, while in the case of an orbiting frame, the centrifugal force is independent of the distance of the object from the origin of frame B, but instead depends upon the distance of the origin of frame B from its centre of rotation, resulting in the same centrifugal fictitious force for all objects observed in frame B. === Orbiting and rotating === As a combination example, Figure 4 shows a coordinate system B that orbits inertial frame A as in Figure 3, but the coordinate axes in frame B turn so unit vector u1 always points toward the centre of rotation. This example might apply to a test tube in a centrifuge, where vector u1 points along the axis of the tube toward its opening at its top. It also resembles the Earth–Moon system, where the Moon always presents the same face to the Earth. In this example, unit vector u3 retains a fixed orientation, while vectors u1, u2 rotate at the same rate as the origin of coordinates. That is, u 1 = ( − cos ⁡ ω t , − sin ⁡ ω t ) ; u 2 = ( sin ⁡ ω t , − cos ⁡ ω t ) . {\displaystyle \mathbf {u} _{1}=(-\cos \omega t,\ -\sin \omega t)\ ;\ \mathbf {u} _{2}=(\sin \omega t,\ -\cos \omega t)\,.} d d t u 1 = Ω × u 1 = ω u 2 ; d d t u 2 = Ω × u 2 = − ω u 1 . {\displaystyle {\frac {d}{dt}}\mathbf {u} _{1}=\mathbf {\Omega \times u_{1}} =\omega \mathbf {u} _{2}\ ;\ {\frac {d}{dt}}\mathbf {u} _{2}=\mathbf {\Omega \times u_{2}} =-\omega \mathbf {u} _{1}\ \ .} Hence, the acceleration of a moving object is expressed as (see Eq. 
1): d 2 x A d t 2 = a A B + a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 = Ω × ( Ω × X A B ) + a B + 2 ∑ j = 1 3 v j Ω × u j + ∑ j = 1 3 x j Ω × ( Ω × u j ) = Ω × ( Ω × X A B ) + a B + 2 Ω × v B + Ω × ( Ω × x B ) = Ω × ( Ω × ( X A B + x B ) ) + a B + 2 Ω × v B , {\displaystyle {\begin{aligned}{\frac {d^{2}\mathbf {x} _{A}}{dt^{2}}}&=\mathbf {a} _{AB}+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\ \sum _{j=1}^{3}x_{j}\ {\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ \mathbf {\Omega \times u_{j}} \ +\ \sum _{j=1}^{3}x_{j}\ {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {u} _{j}\right)\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)+\mathbf {a} _{B}+2\ {\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ \ +\ {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times } (\mathbf {X} _{AB}+\mathbf {x} _{B})\right)+\mathbf {a} _{B}+2\ {\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ \,,\end{aligned}}} where the angular acceleration term is zero for the constant rate of rotation. Because the first term, which is Ω × ( Ω × ( X A B + x B ) ) , {\displaystyle \mathbf {\Omega \ \times } \left(\mathbf {\Omega \times } (\mathbf {X} _{AB}+\mathbf {x} _{B})\right)\,,} is of the same form as the normal centrifugal force expression: Ω × ( Ω × x B ) , {\displaystyle {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\,,} it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term the "centrifugal force". Applying this terminology to the example of a tube in a centrifuge, if the tube is far enough from the center of rotation, |XAB| = R ≫ |xB|, all the matter in the test tube sees the same acceleration (the same centrifugal force). Thus, in this case, the fictitious force is primarily a uniform centrifugal force along the axis of the tube, away from the centre of rotation, with a value |Ffict| = mω2 R, where R is the distance of the matter in the tube from the centre of the centrifuge. It is the standard specification of a centrifuge to use the "effective" radius of the centrifuge to estimate its ability to provide centrifugal force. Thus, the first estimate of centrifugal force in a centrifuge can be based upon the distance of the tubes from the centre of rotation, and corrections applied if needed. Also, the test tube confines motion to the direction down the length of the tube, so vB is opposite to u1 and the Coriolis force is along u2, that is, against the wall of the tube. If the tube is spun for a long enough time, the velocity vB drops to zero as the matter comes to an equilibrium distribution. For more details, see the articles on sedimentation and the Lamm equation. A related problem is that of centrifugal forces for the Earth–Moon–Sun system, where three rotations appear: the daily rotation of the Earth about its axis, the lunar-month rotation of the Earth–Moon system about its centre of mass, and the annual revolution of the Earth–Moon system about the Sun. These three motions influence the tides. === Crossing a carousel === Figure 5 shows another example comparing the observations of an inertial observer with those of an observer on a rotating carousel.
The carousel rotates at a constant angular velocity represented by the vector Ω with magnitude ω, pointing upward according to the right-hand rule. A rider on the carousel walks radially across it at a constant speed, in what appears to the walker to be the straight line path inclined at 45° in Figure 5. To the stationary observer, however, the walker travels a spiral path. The points identified on both paths in Figure 5 correspond to the same times spaced at equal time intervals. We ask how two observers, one on the carousel and one in an inertial frame, formulate what they see using Newton's laws. ==== Inertial observer ==== The observer at rest describes the path followed by the walker as a spiral. Adopting the coordinate system shown in Figure 5, the trajectory is described by r(t): r ( t ) = R ( t ) u R = [ x ( t ) y ( t ) ] = [ R ( t ) cos ⁡ ( ω t + π / 4 ) R ( t ) sin ⁡ ( ω t + π / 4 ) ] , {\displaystyle \mathbf {r} (t)=R(t)\mathbf {u} _{R}={\begin{bmatrix}x(t)\\y(t)\end{bmatrix}}={\begin{bmatrix}R(t)\cos(\omega t+\pi /4)\\R(t)\sin(\omega t+\pi /4)\end{bmatrix}},} where the added π/4 sets the path angle at 45° to start with (just an arbitrary choice of direction), uR is a unit vector in the radial direction pointing from the centre of the carousel to the walker at the time t. The radial distance R(t) increases steadily with time according to: R ( t ) = s t , {\displaystyle R(t)=st,} with s the speed of walking. According to simple kinematics, the velocity is then the first derivative of the trajectory: v ( t ) = d R d t [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] + ω R ( t ) [ − sin ⁡ ( ω t + π / 4 ) cos ⁡ ( ω t + π / 4 ) ] = d R d t u R + ω R ( t ) u θ , {\displaystyle {\begin{aligned}\mathbf {v} (t)&={\frac {dR}{dt}}{\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}+\omega R(t){\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}\\&={\frac {dR}{dt}}\mathbf {u} _{R}+\omega R(t)\mathbf {u} _{\theta },\end{aligned}}} with uθ a unit vector perpendicular to uR at time t (as can be verified by noticing that the vector dot product with the radial vector is zero) and pointing in the direction of travel. The acceleration is the first derivative of the velocity: a ( t ) = d 2 R d t 2 [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] + 2 d R d t ω [ − sin ⁡ ( ω t + π / 4 ) cos ⁡ ( ω t + π / 4 ) ] − ω 2 R ( t ) [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] = 2 s ω [ − sin ⁡ ( ω t + π / 4 ) cos ⁡ ( ω t + π / 4 ) ] − ω 2 R ( t ) [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] = 2 s ω u θ − ω 2 R ( t ) u R . {\displaystyle {\begin{aligned}\mathbf {a} (t)&={\frac {d^{2}R}{dt^{2}}}{\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}+2{\frac {dR}{dt}}\omega {\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}-\omega ^{2}R(t){\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}\\&=2s\omega {\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}-\omega ^{2}R(t){\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}\\&=2s\ \omega \ \mathbf {u} _{\theta }-\omega ^{2}R(t)\ \mathbf {u} _{R}\,.\end{aligned}}} The last term in the acceleration is radially inward of magnitude ω2 R, which is therefore the instantaneous centripetal acceleration of circular motion. The first term is perpendicular to the radial direction, and pointing in the direction of travel. 
Its magnitude is 2sω, and it represents the acceleration of the walker as the edge of the carousel is neared, and the arc of the circle travelled in a fixed time increases, as can be seen by the increased spacing between points for equal time steps on the spiral in Figure 5 as the outer edge of the carousel is approached. Applying Newton's laws, multiplying the acceleration by the mass of the walker, the inertial observer concludes that the walker is subject to two forces: the inward radially directed centripetal force and another force perpendicular to the radial direction that is proportional to the speed of the walker. ==== Rotating observer ==== The rotating observer sees the walker travel a straight line from the centre of the carousel to the periphery, as shown in Figure 5. Moreover, the rotating observer sees that the walker moves at a constant speed in the same direction, so applying Newton's law of inertia, there is zero force upon the walker. These conclusions do not agree with the inertial observer. To obtain agreement, the rotating observer has to introduce fictitious forces that appear to exist in the rotating world, even though there is no apparent reason for them, no apparent gravitational mass, electric charge or what have you, that could account for these fictitious forces. To agree with the inertial observer, the forces applied to the walker must be exactly those found above. They can be related to the general formulas already derived, namely: F f i c t = − 2 m Ω × v B − m Ω × ( Ω × x B ) − m d Ω d t × x B . {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-m{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-m{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.} In this example, the velocity seen in the rotating frame is: v B = s u R , {\displaystyle \mathbf {v} _{\mathrm {B} }=s\mathbf {u} _{R},} with uR a unit vector in the radial direction. The position of the walker as seen on the carousel is: x B = R ( t ) u R , {\displaystyle \mathbf {x} _{\mathrm {B} }=R(t)\mathbf {u} _{R},} and the time derivative of Ω is zero for uniform angular rotation. Noticing that Ω × u R = ω u θ {\displaystyle {\boldsymbol {\Omega }}\times \mathbf {u} _{R}=\omega \mathbf {u} _{\theta }} and Ω × u θ = − ω u R , {\displaystyle {\boldsymbol {\Omega }}\times \mathbf {u} _{\theta }=-\omega \mathbf {u} _{R}\,,} we find: F f i c t = − 2 m ω s u θ + m ω 2 R ( t ) u R . {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m\omega s\mathbf {u} _{\theta }+m\omega ^{2}R(t)\mathbf {u} _{R}.} To obtain a straight-line motion in the rotating world, a force exactly opposite in sign to the fictitious force must be applied to reduce the net force on the walker to zero, so Newton's law of inertia will predict a straight line motion, in agreement with what the rotating observer sees. The fictitious forces that must be combated are the Coriolis force (first term) and the centrifugal force (second term). By applying forces to counter these two fictitious forces, the rotating observer ends up applying exactly the same forces upon the walker that the inertial observer predicted were needed. Because they differ only by the constant walking velocity, the walker and the rotational observer see the same accelerations.
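That agreement can be verified numerically. The sketch below (Python with numpy; the walking speed, rotation rate and sample time are arbitrary illustrative values) differentiates the spiral path recorded by the inertial observer and compares the result against the acceleration 2sω uθ − ω²R uR derived above, which is exactly what the fictitious-force terms must cancel:

import numpy as np

s, omega = 1.0, 0.5                  # walking speed and carousel rate (illustrative)

def path(t):
    # Spiral seen from the inertial frame: radius s*t at angle omega*t + pi/4
    ang = omega * t + np.pi / 4
    return s * t * np.array([np.cos(ang), np.sin(ang)])

t, h = 2.0, 1e-4
a_numeric = (path(t + h) - 2 * path(t) + path(t - h)) / h**2   # central difference

ang = omega * t + np.pi / 4
u_R = np.array([np.cos(ang), np.sin(ang)])
u_theta = np.array([-np.sin(ang), np.cos(ang)])
a_analytic = 2 * s * omega * u_theta - omega**2 * (s * t) * u_R

print(np.allclose(a_numeric, a_analytic, atol=1e-6))           # True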
From the walker's perspective, the fictitious force is experienced as real, and combating this force is necessary to stay on a straight line radial path holding a constant speed. It is like battling a crosswind while being thrown to the edge of the carousel. === Observation === Notice that this kinematical discussion does not delve into the mechanism by which the required forces are generated. That is the subject of kinetics. In the case of the carousel, the kinetic discussion would involve perhaps a study of the walker's shoes and the friction they need to generate against the floor of the carousel, or perhaps the dynamics of skateboarding if the walker switched to travel by skateboard. Whatever the means of travel across the carousel, the forces calculated above must be realized. A very rough analogy is heating your house: you must have a certain temperature to be comfortable, but whether you heat by burning gas or by burning coal is another problem. Kinematics sets the thermostat, kinetics fires the furnace. == See also == == References == == Further reading == Lev D. Landau and E. M. Lifshitz (1976). Mechanics. Course of Theoretical Physics. Vol. 1 (3rd ed.). Butterworth-Heinemann. pp. 128–130. ISBN 0-7506-2896-0. Keith Symon (1971). Mechanics (3rd ed.). Addison-Wesley. ISBN 0-201-07392-7. Jerry B. Marion (1970). Classical Dynamics of Particles and Systems. Academic Press. ISBN 0-12-472252-0. Marcel J. Sidi (1997). Spacecraft Dynamics and Control: A Practical Engineering Approach. Cambridge University Press. Chapter 4.8. ISBN 0-521-78780-7. == External links == Q and A from Richard C. Brill, Honolulu Community College NASA's David Stern: Lesson Plans for Teachers #23 on Inertial Forces Coriolis Force Motion over a flat surface Java physlet by Brian Fiedler illustrating fictitious forces. The physlet shows both the perspective as seen from a rotating and from a non-rotating point of view. Motion over a parabolic surface Java physlet by Brian Fiedler illustrating fictitious forces. The physlet shows both the perspective as seen from a rotating and as seen from a non-rotating point of view.
Wikipedia/Fictitious_force
In classical mechanics, a reactive centrifugal force forms part of an action–reaction pair with a centripetal force. In accordance with Newton's first law of motion, an object moves in a straight line in the absence of a net force acting on the object. A curved path ensues when a force that is orthogonal to the object's motion acts on it; this force is often called a centripetal force, as it is directed toward the center of curvature of the path. Then in accordance with Newton's third law of motion, there will also be an equal and opposite force exerted by the object on some other object, and this reaction force is sometimes called a reactive centrifugal force, as it is directed in the opposite direction of the centripetal force. In the case of a ball held in circular motion by a string, the centripetal force is the force exerted by the string on the ball. The reactive centrifugal force on the other hand is the force the ball exerts on the string, placing it under tension. Unlike the inertial force known as centrifugal force, which exists only in the rotating frame of reference, the reactive force is a real Newtonian force that is observed in any reference frame. The two forces will only have the same magnitude in the special cases where circular motion arises and where the axis of rotation is the origin of the rotating frame of reference. == Paired forces == The figure at right shows a ball in uniform circular motion held to its path by a string tied to an immovable post. In this system a centripetal force upon the ball provided by the string maintains the circular motion, and the reaction to it, which some refer to as the reactive centrifugal force, acts upon the string and the post. Newton's first law requires that any body moving along any path other than a straight line be subject to a net non-zero force, and the free body diagram shows the force upon the ball (center panel) exerted by the string to maintain the ball in its circular motion. Newton's third law of action and reaction states that if the string exerts an inward centripetal force on the ball, the ball will exert an equal but outward reaction upon the string, shown in the free body diagram of the string (lower panel) as the reactive centrifugal force. The string transmits the reactive centrifugal force from the ball to the fixed post, pulling upon the post. Again according to Newton's third law, the post exerts a reaction upon the string, labeled the post reaction, pulling upon the string. The two forces upon the string are equal and opposite, exerting no net force upon the string (assuming that the string is massless), but placing the string under tension. The reason the post appears to be "immovable" is that it is fixed to the earth. If the rotating ball were tethered to the mast of a boat, for example, the boat mast and ball would both experience rotation about a central point. == Applications == Even though the reactive centrifugal force is rarely used in analyses in the physics literature, the concept is applied in some mechanical engineering problems. An example is the analysis of the stresses within a rapidly rotating turbine blade. The blade can be treated as a stack of layers going from the axis out to the edge of the blade. Each layer exerts an outward (centrifugal) force on the immediately adjacent, radially inward layer and an inward (centripetal) force on the immediately adjacent, radially outward layer.
At the same time, the inner layer exerts an elastic centripetal force on the middle layer, while the outer layer exerts an elastic centrifugal force on it, which results in an internal stress. It is the stresses in the blade and their causes that mainly interest mechanical engineers in this situation. Another example of a rotating device in which a reactive centrifugal force can be identified and used to describe the system behavior is the centrifugal clutch. A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device, but automatically and smoothly engages the drive as the engine speed rises. A spring is used to constrain the spinning clutch shoes. At low speeds, the spring provides the centripetal force to the shoes, which move to larger radius as the speed increases and the spring stretches under tension. At higher speeds, when the outer drum prevents the shoes from moving any further outward and increasing the spring tension, the drum provides some of the centripetal force that keeps the shoes moving in a circular path. The tension applied to the spring and the outward force applied to the drum by the spinning shoes are the corresponding reactive centrifugal forces. The mutual force between the drum and the shoes provides the friction needed to engage the output drive shaft that is connected to the drum. Thus the centrifugal clutch illustrates both the fictitious centrifugal force and the reactive centrifugal force. == Difference from centrifugal pseudoforce == The "reactive centrifugal force" discussed in this article is not the same thing as the centrifugal pseudoforce, which is usually what is meant by the term "centrifugal force". Reactive centrifugal force, being one half of the action–reaction pair of which the centripetal force is the other half, is a concept which applies in any reference frame. This distinguishes it from the inertial or fictitious centrifugal force, which appears only in rotating frames. == Gravitational two-body case == In a two-body rotation, such as a planet and moon rotating about their common center of mass or barycentre, the forces on both bodies are centripetal. In that case, the reaction to the centripetal force of the planet on the moon is the centripetal force of the moon on the planet. == References ==
Wikipedia/Reactive_centrifugal_force
Centrifugal force is a fictitious force in Newtonian mechanics (also called an "inertial" or "pseudo" force) that appears to act on all objects when viewed in a rotating frame of reference. It appears to be directed radially away from the axis of rotation of the frame. The magnitude of the centrifugal force F on an object of mass m at the perpendicular distance ρ from the axis of a rotating frame of reference with angular velocity ω is F = m ω 2 ρ {\textstyle F=m\omega ^{2}\rho } . This fictitious force is often applied to rotating devices, such as centrifuges, centrifugal pumps, centrifugal governors, and centrifugal clutches, and in centrifugal railways, planetary orbits and banked curves, when they are analyzed in a non–inertial reference frame such as a rotating coordinate system. The term has sometimes also been used for the reactive centrifugal force, a real frame-independent Newtonian force that exists as a reaction to a centripetal force in some scenarios. == History == From 1659, the Neo-Latin term vi centrifuga ("centrifugal force") is attested in Christiaan Huygens' notes and letters. Note that in Latin centrum means "center" and ‑fugus (from fugiō) means "fleeing, avoiding". Thus, centrifugus means "fleeing from the center" in a literal translation. In 1673, in Horologium Oscillatorium, Huygens writes (as translated by Richard J. Blackwell): There is another kind of oscillation in addition to the one we have examined up to this point; namely, a motion in which a suspended weight is moved around through the circumference of a circle. From this we were led to the construction of another clock at about the same time we invented the first one. [...] I originally intended to publish here a lengthy description of these clocks, along with matters pertaining to circular motion and centrifugal force, as it might be called, a subject about which I have more to say than I am able to do at present. But, in order that those interested in these things can sooner enjoy these new and not useless speculations, and in order that their publication not be prevented by some accident, I have decided, contrary to my plan, to add this fifth part [...]. The same year, Isaac Newton received Huygens' work via Henry Oldenburg and replied "I pray you return [Mr. Huygens] my humble thanks [...] I am glad we can expect another discourse of the vis centrifuga, which speculation may prove of good use in natural philosophy and astronomy, as well as mechanics". In 1687, in Principia, Newton further develops vis centrifuga ("centrifugal force"). Around this time, the concept is also further developed by Newton, Gottfried Wilhelm Leibniz, and Robert Hooke. In the late 18th century, the modern conception of the centrifugal force evolved as a "fictitious force" arising in a rotating reference frame. Centrifugal force has also played a role in debates in classical mechanics about detection of absolute motion. Newton suggested two arguments to answer the question of whether absolute rotation can be detected: the rotating bucket argument, and the rotating spheres argument. According to Newton, in each scenario the centrifugal force would be observed in the object's local frame (the frame where the object is stationary) only if the frame were rotating with respect to absolute space.
Around 1883, Mach's principle was proposed, according to which, instead of absolute rotation, the motion of the distant stars relative to the local inertial frame gives rise, through some (hypothetical) physical law, to the centrifugal force and other inertia effects. Today's view is based upon the idea of an inertial frame of reference, which privileges observers for whom the laws of physics take on their simplest form, and in particular, frames that do not use centrifugal forces in their equations of motion in order to describe motions correctly. Around 1914, the analogy between centrifugal force (sometimes used to create artificial gravity) and gravitational forces led to the equivalence principle of general relativity. == Introduction == Centrifugal force is an outward force apparent in a rotating reference frame. It does not exist when a system is described relative to an inertial frame of reference. All measurements of position and velocity must be made relative to some frame of reference. For example, an analysis of the motion of an object in an airliner in flight could be made relative to the airliner, to the surface of the Earth, or even to the Sun. A reference frame that is at rest (or one that moves with no rotation and at constant velocity) relative to the "fixed stars" is generally taken to be an inertial frame. Any system can be analyzed in an inertial frame (and so with no centrifugal force). However, it is often more convenient to describe a rotating system by using a rotating frame—the calculations are simpler, and descriptions more intuitive. When this choice is made, fictitious forces, including the centrifugal force, arise. In a reference frame rotating about an axis through its origin, all objects, regardless of their state of motion, appear to be under the influence of a radially (from the axis of rotation) outward force that is proportional to their mass, to the distance from the axis of rotation of the frame, and to the square of the angular velocity of the frame. This is the centrifugal force. As humans usually experience centrifugal force from within the rotating reference frame, e.g. on a merry-go-round or in a vehicle, it is much better known than centripetal force. Motion relative to a rotating frame results in another fictitious force: the Coriolis force. If the rate of rotation of the frame changes, a third fictitious force (the Euler force) is required. These fictitious forces are necessary for the formulation of correct equations of motion in a rotating reference frame and allow Newton's laws to be used in their normal form in such a frame (with one exception: the fictitious forces do not obey Newton's third law; they have no equal and opposite counterparts). Newton's third law requires the counterparts to exist within the same frame of reference, hence centrifugal and centripetal force, which do not, are not action and reaction (as is sometimes erroneously contended). == Examples == === Vehicle driving round a curve === A common experience that gives rise to the idea of a centrifugal force is encountered by passengers riding in a vehicle, such as a car, that is changing direction. If a car is traveling at a constant speed along a straight road, then a passenger inside is not accelerating and, according to Newton's second law of motion, the net force acting on them is therefore zero (all forces acting on them cancel each other out).
If the car enters a curve that bends to the left, the passenger experiences an apparent force that seems to be pulling them towards the right. This is the fictitious centrifugal force. It is needed within the passengers' local frame of reference to explain their sudden tendency to start accelerating to the right relative to the car—a tendency which they must resist by applying a rightward force to the car (for instance, a frictional force against the seat) in order to remain in a fixed position inside. Since they push the seat toward the right, Newton's third law says that the seat pushes them towards the left. The centrifugal force must be included in the passenger's reference frame (in which the passenger remains at rest): it counteracts the leftward force applied to the passenger by the seat, and explains why this otherwise unbalanced force does not cause them to accelerate. However, it would be apparent to a stationary observer watching from an overpass above that the frictional force exerted on the passenger by the seat is not being balanced; it constitutes a net force to the left, causing the passenger to accelerate toward the inside of the curve, as they must in order to keep moving with the car rather than proceeding in a straight line as they otherwise would. Thus the "centrifugal force" they feel is the result of a "centrifugal tendency" caused by inertia. Similar effects are encountered in aeroplanes and roller coasters where the magnitude of the apparent force is often reported in "G's". === Stone on a string === If a stone is whirled round on a string, in a horizontal plane, the only real force acting on the stone in the horizontal plane is applied by the string (gravity acts vertically). There is a net force on the stone in the horizontal plane which acts toward the center. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line, according to Newton's first law of motion. In order to keep the stone moving in a circular path, a centripetal force, in this case provided by the string, must be continuously applied to the stone. As soon as it is removed (for example if the string breaks) the stone moves in a straight line, as viewed from above. In this inertial frame, the concept of centrifugal force is not required as all motion can be properly described using only real forces and Newton's laws of motion. In a frame of reference rotating with the stone around the same axis as the stone, the stone is stationary. However, the force applied by the string is still acting on the stone. If one were to apply Newton's laws in their usual (inertial frame) form, one would conclude that the stone should accelerate in the direction of the net applied force—towards the axis of rotation—which it does not do. The centrifugal force and other fictitious forces must be included along with the real forces in order to apply Newton's laws of motion in the rotating frame. === Earth === The Earth constitutes a rotating reference frame because it rotates once every 23 hours and 56 minutes around its axis. Because the rotation is slow, the fictitious forces it produces are often small, and in everyday situations can generally be neglected. 
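To see just how small these forces are, the short Python sketch below (an added illustration, not part of the article) estimates the centrifugal acceleration at the equator from the sidereal rotation period just quoted; the result, roughly 0.3% of g, anticipates the weight difference discussed next.

```python
# Rough estimate of Earth's centrifugal acceleration at the equator.
import math

T = 86164.0              # sidereal day (23 h 56 min) in seconds
R = 6.378e6              # equatorial radius in metres
g = 9.81                 # standard gravity, m/s^2

omega = 2 * math.pi / T  # Earth's angular velocity, rad/s
a_c = omega**2 * R       # centrifugal acceleration at the equator

print(f"omega = {omega:.3e} rad/s")                  # ~7.292e-05 rad/s
print(f"a_c = {a_c:.4f} m/s^2 ({a_c/g:.2%} of g)")   # ~0.0339 m/s^2, ~0.35% of g
```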
Even in calculations requiring high precision, the centrifugal force is generally not explicitly included, but rather lumped in with the gravitational force: the strength and direction of the local "gravity" at any point on the Earth's surface is actually a combination of gravitational and centrifugal forces. However, the fictitious forces can be of arbitrary size. For example, in an Earth-bound reference system (where the earth is represented as stationary), the fictitious force (the net of Coriolis and centrifugal forces) is enormous and is responsible for the Sun orbiting around the Earth. This is due to the large mass and velocity of the Sun (relative to the Earth). ==== Weight of an object at the poles and on the equator ==== If an object is weighed with a simple spring balance at one of the Earth's poles, there are two forces acting on the object: the Earth's gravity, which acts in a downward direction, and the equal and opposite restoring force in the spring, acting upward. Since the object is stationary and not accelerating, there is no net force acting on the object and the force from the spring is equal in magnitude to the force of gravity on the object. In this case, the balance shows the value of the force of gravity on the object. When the same object is weighed on the equator, the same two real forces act upon the object. However, the object is moving in a circular path as the Earth rotates and is therefore experiencing a centripetal acceleration. When considered in an inertial frame (that is to say, one that is not rotating with the Earth), the non-zero acceleration means that the force of gravity will not balance with the force from the spring. In order to have a net centripetal force, the magnitude of the restoring force of the spring must be less than the magnitude of the force of gravity. This reduced restoring force in the spring is reflected on the scale as less weight — about 0.3% less at the equator than at the poles. In the Earth reference frame (in which the object being weighed is at rest), the object does not appear to be accelerating; however, the two real forces, gravity and the force from the spring, are not the same magnitude and so do not balance. The centrifugal force must be included to make the sum of the forces be zero to match the apparent lack of acceleration. Note: In fact, the observed weight difference is more — about 0.53%. Earth's gravity is a bit stronger at the poles than at the equator, because the Earth is not a perfect sphere, so an object at the poles is slightly closer to the center of the Earth than one at the equator; this effect combines with the centrifugal force to produce the observed weight difference. == Formulation == For the following formalism, the rotating frame of reference is regarded as a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame denoted the stationary frame. === Time derivatives in a rotating frame === In a rotating frame of reference, the time derivatives of any vector function P of time—such as the velocity and acceleration vectors of an object—will differ from its time derivatives in the stationary frame. If P1, P2, P3 are the components of P with respect to unit vectors i, j, k directed along the axes of the rotating frame (i.e. P = P1 i + P2 j + P3 k), then the first time derivative [dP/dt] of P with respect to the rotating frame is, by definition, dP1/dt i + dP2/dt j + dP3/dt k.
If the absolute angular velocity of the rotating frame is ω then the derivative ⁠dP/dt⁠ of P with respect to the stationary frame is related to [⁠dP/dt⁠] by the equation: d P d t = [ d P d t ] + ω × P , {\displaystyle {\frac {\mathrm {d} {\boldsymbol {P}}}{\mathrm {d} t}}=\left[{\frac {\mathrm {d} {\boldsymbol {P}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {P}}\ ,} where × denotes the vector cross product. In other words, the rate of change of P in the stationary frame is the sum of its apparent rate of change in the rotating frame and a rate of rotation ω × P attributable to the motion of the rotating frame. The vector ω has magnitude ω equal to the rate of rotation and is directed along the axis of rotation according to the right-hand rule. === Acceleration === Newton's law of motion for a particle of mass m written in vector form is: F = m a , {\displaystyle {\boldsymbol {F}}=m{\boldsymbol {a}}\ ,} where F is the vector sum of the physical forces applied to the particle and a is the absolute acceleration (that is, acceleration in an inertial frame) of the particle, given by: a = d 2 r d t 2 , {\displaystyle {\boldsymbol {a}}={\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\ ,} where r is the position vector of the particle (not to be confused with radius, as used above.) By applying the transformation above from the stationary to the rotating frame three times (twice to d r d t {\textstyle {\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}} and once to d d t [ d r d t ] {\textstyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]} ), the absolute acceleration of the particle can be written as: a = d 2 r d t 2 = d d t d r d t = d d t ( [ d r d t ] + ω × r ) = [ d 2 r d t 2 ] + ω × [ d r d t ] + d ω d t × r + ω × d r d t = [ d 2 r d t 2 ] + ω × [ d r d t ] + d ω d t × r + ω × ( [ d r d t ] + ω × r ) = [ d 2 r d t 2 ] + d ω d t × r + 2 ω × [ d r d t ] + ω × ( ω × r ) . 
{\displaystyle {\begin{aligned}{\boldsymbol {a}}&={\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}={\frac {\mathrm {d} }{\mathrm {d} t}}\left(\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times {\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+{\boldsymbol {\omega }}\times \left(\left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times {\boldsymbol {r}}\ \right)\\&=\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]+{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}+2{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]+{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\ .\end{aligned}}} === Force === The apparent acceleration in the rotating frame is [ d 2 r d t 2 ] {\displaystyle \left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]} . An observer unaware of the rotation would expect this to be zero in the absence of outside forces. However, Newton's laws of motion apply only in the inertial frame and describe dynamics in terms of the absolute acceleration d 2 r d t 2 {\displaystyle {\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}} . Therefore, the observer perceives the extra terms as contributions due to fictitious forces. These terms in the apparent acceleration are independent of mass; so it appears that each of these fictitious forces, like gravity, pulls on an object in proportion to its mass. When these forces are added, the equation of motion has the form: F + ( − m d ω d t × r ) ⏟ Euler + ( − 2 m ω × [ d r d t ] ) ⏟ Coriolis + ( − m ω × ( ω × r ) ) ⏟ centrifugal = m [ d 2 r d t 2 ] . {\displaystyle {\boldsymbol {F}}+\underbrace {\left(-m{\frac {\mathrm {d} {\boldsymbol {\omega }}}{\mathrm {d} t}}\times {\boldsymbol {r}}\right)} _{\text{Euler}}+\underbrace {\left(-2m{\boldsymbol {\omega }}\times \left[{\frac {\mathrm {d} {\boldsymbol {r}}}{\mathrm {d} t}}\right]\right)} _{\text{Coriolis}}+\underbrace {\left(-m{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})\right)} _{\text{centrifugal}}=m\left[{\frac {\mathrm {d} ^{2}{\boldsymbol {r}}}{\mathrm {d} t^{2}}}\right]\ .} From the perspective of the rotating frame, the additional force terms are experienced just like the real external forces and contribute to the apparent acceleration. 
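As a concrete check of this equation of motion, the Python sketch below (an added illustration with assumed values, not part of the original derivation) follows a force-free particle moving in a straight line in the inertial frame, expresses its trajectory in a uniformly rotating frame, and verifies that the apparent acceleration there equals the sum of the Coriolis and centrifugal terms (the Euler term vanishes because ω is constant).

```python
# Numerical check: for F = 0 and constant omega, the apparent acceleration in
# the rotating frame equals the Coriolis plus centrifugal accelerations.
import numpy as np

omega = 0.7                                   # assumed spin rate, rad/s
Omega = np.array([0.0, 0.0, omega])           # rotation about the z-axis

def rot(t):
    """Matrix mapping inertial coordinates to rotating-frame coordinates."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

r0 = np.array([1.0, 0.0, 0.0])                # assumed initial position
v0 = np.array([0.2, 0.5, 0.0])                # assumed constant velocity

def r_B(t):
    """Position of the force-free particle as seen in the rotating frame."""
    return rot(t) @ (r0 + v0 * t)

t0, h = 2.0, 1e-4                             # sample time, finite-difference step
vB = (r_B(t0 + h) - r_B(t0 - h)) / (2 * h)    # apparent velocity
aB = (r_B(t0 + h) - 2 * r_B(t0) + r_B(t0 - h)) / h**2  # apparent acceleration

# Fictitious accelerations predicted by the equation of motion (mass cancels).
a_fict = -2 * np.cross(Omega, vB) - np.cross(Omega, np.cross(Omega, r_B(t0)))

assert np.allclose(aB, a_fict, atol=1e-6)
print(aB, a_fict)
```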
The additional terms on the force side of the equation can be recognized as, reading from left to right, the Euler force − m d ω / d t × r {\displaystyle -m\mathrm {d} {\boldsymbol {\omega }}/\mathrm {d} t\times {\boldsymbol {r}}} , the Coriolis force − 2 m ω × [ d r / d t ] {\displaystyle -2m{\boldsymbol {\omega }}\times \left[\mathrm {d} {\boldsymbol {r}}/\mathrm {d} t\right]} , and the centrifugal force − m ω × ( ω × r ) {\displaystyle -m{\boldsymbol {\omega }}\times ({\boldsymbol {\omega }}\times {\boldsymbol {r}})} , respectively. Unlike the other two fictitious forces, the centrifugal force always points radially outward from the axis of rotation of the rotating frame, with magnitude m ω 2 r ⊥ {\displaystyle m\omega ^{2}r_{\perp }} , where r ⊥ {\displaystyle r_{\perp }} is the component of the position vector perpendicular to ω {\displaystyle {\boldsymbol {\omega }}} , and unlike the Coriolis force in particular, it is independent of the motion of the particle in the rotating frame. As expected, for a non-rotating inertial frame of reference ( ω = 0 ) {\displaystyle ({\boldsymbol {\omega }}=0)} the centrifugal force and all other fictitious forces disappear. Similarly, as the centrifugal force is proportional to the distance from the object to the axis of rotation of the frame, the centrifugal force vanishes for objects that lie upon the axis. === Potential === The centrifugal force per unit mass can also be derived as the gradient of a centrifugal potential. For example, the centrifugal potential at the perpendicular distance ρ from the axis of a rotating frame of reference with angular velocity ω is 0.5 ω 2 ρ 2 {\textstyle 0.5\omega ^{2}\rho ^{2}} (see also: Geopotential#Centrifugal potential). == Absolute rotation == Two scenarios were suggested by Newton to answer the question of whether the absolute rotation of a local frame can be detected; that is, whether an observer can decide whether an observed object is rotating or the observer is rotating. The shape of the surface of water rotating in a bucket. The shape of the surface becomes concave to balance the centrifugal force against the other forces upon the liquid. The tension in a string joining two spheres rotating about their center of mass. The tension in the string will be proportional to the centrifugal force on each sphere as it rotates around the common center of mass. In these scenarios, the effects attributed to centrifugal force are only observed in the local frame (the frame in which the object is stationary) if the object is undergoing absolute rotation relative to an inertial frame. By contrast, in an inertial frame, the observed effects arise as a consequence of the inertia and the known forces without the need to introduce a centrifugal force. Based on this argument, the privileged frame, wherein the laws of physics take on the simplest form, is a stationary frame in which no fictitious forces need to be invoked. Within this view of physics, any other phenomenon that is usually attributed to centrifugal force can be used to identify absolute rotation. For example, the oblateness of a sphere of freely flowing material is often explained in terms of centrifugal force. The oblate spheroid shape reflects, following Clairaut's theorem, the balance between containment by gravitational attraction and dispersal by centrifugal force.
That the Earth is itself an oblate spheroid, bulging at the equator where the radial distance and hence the centrifugal force is larger, is taken as evidence of its absolute rotation. == Applications == The operations of numerous common rotating mechanical systems are most easily conceptualized in terms of centrifugal force. For example: A centrifugal governor regulates the speed of an engine by using spinning masses that move radially, adjusting the throttle as the engine changes speed. In the reference frame of the spinning masses, centrifugal force causes the radial movement. A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device but automatically and smoothly engages the drive as the engine speed rises. Inertial drum brake ascenders used in rock climbing and the inertia reels used in many automobile seat belts operate on the same principle. Centrifugal forces can be used to generate artificial gravity, as in proposed designs for rotating space stations. The Mars Gravity Biosatellite would have studied the effects of Mars-level gravity on mice with gravity simulated in this way. Spin casting and centrifugal casting are production methods that use centrifugal force to disperse liquid metal or plastic throughout the negative space of a mold. Centrifuges are used in science and industry to separate substances. In the reference frame spinning with the centrifuge, the centrifugal force induces a hydrostatic pressure gradient in fluid-filled tubes oriented perpendicular to the axis of rotation, giving rise to large buoyant forces which push low-density particles inward. Elements or particles denser than the fluid move outward under the influence of the centrifugal force. This is effectively Archimedes' principle as generated by centrifugal force as opposed to being generated by gravity. Some amusement rides make use of centrifugal forces. For instance, a Gravitron's spin forces riders against a wall and allows riders to be elevated above the machine's floor in defiance of Earth's gravity. Nevertheless, all of these systems can also be described without requiring the concept of centrifugal force, in terms of motions and forces in a stationary frame, at the cost of taking somewhat more care in the consideration of forces and motions within the system. == Other uses of the term == While the majority of the scientific literature uses the term centrifugal force to refer to the particular fictitious force that arises in rotating frames, there are a few limited instances in the literature of the term applied to other distinct physical concepts. === In Lagrangian mechanics === One of these instances occurs in Lagrangian mechanics. Lagrangian mechanics formulates mechanics in terms of generalized coordinates {qk}, which can be as simple as the usual polar coordinates ( r , θ ) {\displaystyle (r,\ \theta )} or a much more extensive list of variables. Within this formulation the motion is described in terms of generalized forces, using in place of Newton's laws the Euler–Lagrange equations. Among the generalized forces, those involving the square of the time derivatives {(dqk  ⁄ dt )2} are sometimes called centrifugal forces. In the case of motion in a central potential the Lagrangian centrifugal force has the same form as the fictitious centrifugal force derived in a co-rotating frame.
However, the Lagrangian use of "centrifugal force" in other, more general cases has only a limited connection to the Newtonian definition. === As a reactive force === In another instance the term refers to the reaction force to a centripetal force, or reactive centrifugal force. A body undergoing curved motion, such as circular motion, is accelerating toward a center at any particular point in time. This centripetal acceleration is provided by a centripetal force, which is exerted on the body in curved motion by some other body. In accordance with Newton's third law of motion, the body in curved motion exerts an equal and opposite force on the other body. This reactive force is exerted by the body in curved motion on the other body that provides the centripetal force and its direction is from that other body toward the body in curved motion. This reaction force is sometimes described as a centrifugal inertial reaction, that is, a force that is centrifugally directed, which is a reactive force equal and opposite to the centripetal force that is curving the path of the mass. The concept of the reactive centrifugal force is sometimes used in mechanics and engineering. It is sometimes referred to as just centrifugal force rather than as reactive centrifugal force although this usage is deprecated in elementary mechanics. == See also == Balancing of rotating masses Centrifugal mechanism of acceleration Equivalence principle Folk physics Lagrangian point Lamm equation == Notes == == References == == External links == Media related to Centrifugal force at Wikimedia Commons
Wikipedia/Centrifugal_force
Centripetal force (from Latin centrum, "center" and petere, "to seek") is the force that makes a body follow a curved path. The direction of the centripetal force is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path. Isaac Newton coined the term, describing it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force causing astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path. The centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. == Formula == From the kinematics of curved motion it is known that an object moving at tangential speed v along a path with radius of curvature r accelerates toward the center of curvature at a rate a c = lim Δ t → 0 Δ v Δ t , a c = v 2 r {\displaystyle {\textbf {a}}_{c}=\lim _{\Delta t\to 0}{\frac {\Delta {\textbf {v}}}{\Delta t}},\quad a_{c}={\frac {v^{2}}{r}}} Here, a c {\displaystyle a_{c}} is the centripetal acceleration and Δ v {\displaystyle \Delta {\textbf {v}}} is the difference between the velocity vectors at t + Δ t {\displaystyle t+\Delta {t}} and t {\displaystyle t} . By Newton's second law, the cause of acceleration is a net force acting on the object, which is proportional to its mass m and its acceleration. The force, usually referred to as a centripetal force, has a magnitude F c = m a c = m v 2 r {\displaystyle F_{c}=ma_{c}=m{\frac {v^{2}}{r}}} and is, like centripetal acceleration, directed toward the center of curvature of the object's trajectory. === Derivation === The centripetal acceleration can be inferred from the diagram of the velocity vectors at two instances. In the case of uniform circular motion the velocities have constant magnitude. Because each one is perpendicular to its respective position vector, simple vector subtraction implies two similar isosceles triangles with congruent angles – one comprising a base of Δ v {\displaystyle \Delta {\textbf {v}}} and a leg length of v {\displaystyle v} , and the other a base of Δ r {\displaystyle \Delta {\textbf {r}}} (position vector difference) and a leg length of r {\displaystyle r} : | Δ v | v = | Δ r | r {\displaystyle {\frac {|\Delta {\textbf {v}}|}{v}}={\frac {|\Delta {\textbf {r}}|}{r}}} | Δ v | = v r | Δ r | {\displaystyle |\Delta {\textbf {v}}|={\frac {v}{r}}|\Delta {\textbf {r}}|} Therefore, | Δ v | {\displaystyle |\Delta {\textbf {v}}|} can be substituted with v r | Δ r | {\displaystyle {\frac {v}{r}}|\Delta {\textbf {r}}|} : a c = lim Δ t → 0 | Δ v | Δ t = v r lim Δ t → 0 | Δ r | Δ t = v 2 r {\displaystyle a_{c}=\lim _{\Delta t\to 0}{\frac {|\Delta {\textbf {v}}|}{\Delta t}}={\frac {v}{r}}\lim _{\Delta t\to 0}{\frac {|\Delta {\textbf {r}}|}{\Delta t}}={\frac {v^{2}}{r}}} The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle (the circle that best fits the local path of the object, if the path is not circular). The speed in the formula is squared, so twice the speed needs four times the force, at a given radius. 
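As a worked illustration of F_c = mv²/r (the numbers below are made up, not taken from the article), consider a car rounding a curve; the sketch also confirms the point just made, that doubling the speed quadruples the required force.

```python
# Worked example of F_c = m v^2 / r with illustrative values.
m, r = 1200.0, 50.0             # mass in kg, radius of curvature in m

def centripetal_force(v):
    """Magnitude of the centripetal force for tangential speed v in m/s."""
    return m * v**2 / r

F_15 = centripetal_force(15.0)  # 15 m/s (54 km/h) -> 5400 N
F_30 = centripetal_force(30.0)  # twice the speed  -> 21600 N

print(F_15, F_30, F_30 / F_15)  # ratio is 4: the speed enters squared
```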
This force is also sometimes written in terms of the angular velocity ω of the object about the center of the circle, related to the tangential velocity by the formula v = ω r {\displaystyle v=\omega r} so that F c = m r ω 2 . {\displaystyle F_{c}=mr\omega ^{2}\,.} Expressed using the orbital period T for one revolution of the circle, ω = 2 π T {\displaystyle \omega ={\frac {2\pi }{T}}} the equation becomes F c = m r ( 2 π T ) 2 . {\displaystyle F_{c}=mr\left({\frac {2\pi }{T}}\right)^{2}.} In particle accelerators, velocity can be very high (close to the speed of light in vacuum), so the same rest mass now exhibits greater inertia (relativistic mass), thereby requiring a greater force for the same centripetal acceleration; the equation becomes: F c = γ m v 2 r {\displaystyle F_{c}={\frac {\gamma mv^{2}}{r}}} where γ = 1 1 − v 2 c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}} is the Lorentz factor. Thus the centripetal force is given by: F c = γ m v ω {\displaystyle F_{c}=\gamma mv\omega } which is the rate of change of relativistic momentum γ m v {\displaystyle \gamma mv} . == Sources == In the case of an object that is swinging around on the end of a rope in a horizontal plane, the centripetal force on the object is supplied by the tension of the rope. The rope example is an example involving a 'pull' force. The centripetal force can also be supplied as a 'push' force, such as in the case where the normal reaction of a wall supplies the centripetal force for a wall of death or a Rotor rider. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. When a satellite is in orbit around a planet, gravity is considered to be a centripetal force even though in the case of eccentric orbits, the gravitational force is directed towards the focus, and not towards the instantaneous center of curvature. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. == Analysis of several cases == Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. === Uniform circular motion === Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. ==== Calculus derivation ==== In two dimensions, the position vector r {\displaystyle {\textbf {r}}} , which has magnitude (length) r {\displaystyle r} and is directed at an angle θ {\displaystyle \theta } above the x-axis, can be expressed in Cartesian coordinates using the unit vectors x ^ {\displaystyle {\hat {\mathbf {x} }}} and y ^ {\displaystyle {\hat {\mathbf {y} }}} : r = r cos ⁡ ( θ ) x ^ + r sin ⁡ ( θ ) y ^ . {\displaystyle {\textbf {r}}=r\cos(\theta ){\hat {\mathbf {x} }}+r\sin(\theta ){\hat {\mathbf {y} }}.} The assumption of uniform circular motion requires three things: The object moves only on a circle. The radius of the circle r {\displaystyle r} does not change in time. The object moves with constant angular velocity ω {\displaystyle \omega } around the circle. Therefore, θ = ω t {\displaystyle \theta =\omega t} where t {\displaystyle t} is time.
The velocity v {\displaystyle {\textbf {v}}} and acceleration a {\displaystyle {\textbf {a}}} of the motion are the first and second derivatives of position with respect to time: r = r cos ⁡ ( ω t ) x ^ + r sin ⁡ ( ω t ) y ^ , {\displaystyle {\textbf {r}}=r\cos(\omega t){\hat {\mathbf {x} }}+r\sin(\omega t){\hat {\mathbf {y} }},} v = r ˙ = − r ω sin ⁡ ( ω t ) x ^ + r ω cos ⁡ ( ω t ) y ^ , {\displaystyle {\textbf {v}}={\dot {\textbf {r}}}=-r\omega \sin(\omega t){\hat {\mathbf {x} }}+r\omega \cos(\omega t){\hat {\mathbf {y} }},} a = r ¨ = − ω 2 ( r cos ⁡ ( ω t ) x ^ + r sin ⁡ ( ω t ) y ^ ) . {\displaystyle {\textbf {a}}={\ddot {\textbf {r}}}=-\omega ^{2}(r\cos(\omega t){\hat {\mathbf {x} }}+r\sin(\omega t){\hat {\mathbf {y} }}).} The term in parentheses is the original expression of r {\displaystyle {\textbf {r}}} in Cartesian coordinates. Consequently, a = − ω 2 r . {\displaystyle {\textbf {a}}=-\omega ^{2}{\textbf {r}}.} The negative sign shows that the acceleration is pointed towards the center of the circle (opposite the radius), hence it is called "centripetal" (i.e. "center-seeking"). While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. ==== Derivation using vectors ==== The image at right shows the vector relationships for uniform circular motion. The rotation itself is represented by the angular velocity vector Ω, which is normal to the plane of the orbit (using the right-hand rule) and has magnitude given by: | Ω | = d θ d t = ω , {\displaystyle |\mathbf {\Omega } |={\frac {\mathrm {d} \theta }{\mathrm {d} t}}=\omega \ ,} with θ the angular position at time t. In this subsection, dθ/dt is assumed constant, independent of time. The distance traveled dℓ of the particle in time dt along the circular path is d ℓ = Ω × r ( t ) d t , {\displaystyle \mathrm {d} {\boldsymbol {\ell }}=\mathbf {\Omega } \times \mathbf {r} (t)\mathrm {d} t\ ,} which, by properties of the vector cross product, has magnitude rdθ and is in the direction tangent to the circular path. Consequently, d r d t = lim Δ t → 0 r ( t + Δ t ) − r ( t ) Δ t = d ℓ d t . {\displaystyle {\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}=\lim _{{\Delta }t\to 0}{\frac {\mathbf {r} (t+{\Delta }t)-\mathbf {r} (t)}{{\Delta }t}}={\frac {\mathrm {d} {\boldsymbol {\ell }}}{\mathrm {d} t}}\ .} In other words, v = d e f d r d t = d ℓ d t = Ω × r ( t ) . {\displaystyle \mathbf {v} \ {\stackrel {\mathrm {def} }{=}}\ {\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} t}}={\frac {\mathrm {d} \mathbf {\boldsymbol {\ell }} }{\mathrm {d} t}}=\mathbf {\Omega } \times \mathbf {r} (t)\ .} Differentiating with respect to time, a = d e f d v d t = Ω × d r ( t ) d t = Ω × [ Ω × r ( t ) ] . {\displaystyle \mathbf {a} \ {\stackrel {\mathrm {def} }{=}}\ {\frac {\mathrm {d} \mathbf {v} }{d\mathrm {t} }}=\mathbf {\Omega } \times {\frac {\mathrm {d} \mathbf {r} (t)}{\mathrm {d} t}}=\mathbf {\Omega } \times \left[\mathbf {\Omega } \times \mathbf {r} (t)\right]\ .} Lagrange's formula states: a × ( b × c ) = b ( a ⋅ c ) − c ( a ⋅ b ) . {\displaystyle \mathbf {a} \times \left(\mathbf {b} \times \mathbf {c} \right)=\mathbf {b} \left(\mathbf {a} \cdot \mathbf {c} \right)-\mathbf {c} \left(\mathbf {a} \cdot \mathbf {b} \right)\ .} Applying Lagrange's formula with the observation that Ω • r(t) = 0 at all times, a = − | Ω | 2 r ( t ) . 
{\displaystyle \mathbf {a} =-{|\mathbf {\Omega |} }^{2}\mathbf {r} (t)\ .} In words, the acceleration is pointing directly opposite to the radial displacement r at all times, and has a magnitude: | a | = | r ( t ) | ( d θ d t ) 2 = r ω 2 {\displaystyle |\mathbf {a} |=|\mathbf {r} (t)|\left({\frac {\mathrm {d} \theta }{\mathrm {d} t}}\right)^{2}=r{\omega }^{2}} where vertical bars |...| denote the vector magnitude, which in the case of r(t) is simply the radius r of the path. This result agrees with the previous section, though the notation is slightly different. When the rate of rotation is made constant in the analysis of nonuniform circular motion, that analysis agrees with this one. A merit of the vector approach is that it is manifestly independent of any coordinate system. ==== Example: The banked turn ==== The upper panel in the image at right shows a ball in circular motion on a banked curve. The curve is banked at an angle θ from the horizontal, and the surface of the road is considered to be slippery. The objective is to find what angle the bank must have so the ball does not slide off the road. Intuition tells us that, on a flat curve with no banking at all, the ball will simply slide off the road; while with a very steep banking, the ball will slide to the center unless it travels the curve rapidly. Apart from any acceleration that might occur in the direction of the path, the lower panel of the image above indicates the forces on the ball. There are two forces; one is the force of gravity mg acting vertically downward through the center of mass of the ball, where m is the mass of the ball and g is the gravitational acceleration; the second is the upward normal force m|an| exerted by the road at a right angle to the road surface. The centripetal force demanded by the curved motion is also shown above. This centripetal force is not a third force applied to the ball, but rather must be provided by the net force on the ball resulting from vector addition of the normal force and the force of gravity. The resultant or net force on the ball found by vector addition of the normal force exerted by the road and the vertical force due to gravity must equal the centripetal force dictated by the need to travel a circular path. The curved motion is maintained so long as this net force provides the centripetal force requisite to the motion. The horizontal net force on the ball is the horizontal component of the force from the road, which has magnitude |Fh| = m|an| sin θ. The vertical component of the force from the road must counteract the gravitational force: |Fv| = m|an| cos θ = m|g|, which implies |an| = |g| / cos θ. Substituting into the above formula for |Fh| yields the horizontal force: | F h | = m | g | sin ⁡ θ cos ⁡ θ = m | g | tan ⁡ θ . {\displaystyle |\mathbf {F} _{\mathrm {h} }|=m|\mathbf {g} |{\frac {\sin \theta }{\cos \theta }}=m|\mathbf {g} |\tan \theta \,.} On the other hand, at velocity |v| on a circular path of radius r, kinematics says that the force needed to turn the ball continuously into the turn is the radially inward centripetal force Fc of magnitude: | F c | = m | a c | = m | v | 2 r . {\displaystyle |\mathbf {F} _{\mathrm {c} }|=m|\mathbf {a} _{\mathrm {c} }|={\frac {m|\mathbf {v} |^{2}}{r}}\,.} Consequently, the ball is in a stable path when the angle of the road is set to satisfy the condition: m | g | tan ⁡ θ = m | v | 2 r , {\displaystyle m|\mathbf {g} |\tan \theta ={\frac {m|\mathbf {v} |^{2}}{r}}\,,} or, tan ⁡ θ = | v | 2 | g | r .
{\displaystyle \tan \theta ={\frac {|\mathbf {v} |^{2}}{|\mathbf {g} |r}}\,.} As the angle of bank θ approaches 90°, the tangent function approaches infinity, allowing larger values for |v|2/r. In words, this equation states that for greater speeds (bigger |v|) the road must be banked more steeply (a larger value for θ), and for sharper turns (smaller r) the road also must be banked more steeply, which accords with intuition. When the angle θ does not satisfy the above condition, the horizontal component of force exerted by the road does not provide the correct centripetal force, and an additional frictional force tangential to the road surface is called upon to provide the difference. If friction cannot do this (that is, the coefficient of friction is exceeded), the ball slides to a different radius where the balance can be realized. These ideas apply to air flight as well. See the FAA pilot's manual. === Nonuniform circular motion === As a generalization of the uniform circular motion case, suppose the angular rate of rotation is not constant. The acceleration now has a tangential component, as shown in the image at right. This case is used to demonstrate a derivation strategy based on a polar coordinate system. Let r(t) be a vector that describes the position of a point mass as a function of time. Since we are assuming circular motion, let r(t) = R·ur, where R is a constant (the radius of the circle) and ur is the unit vector pointing from the origin to the point mass. The direction of ur is described by θ, the angle between the x-axis and the unit vector, measured counterclockwise from the x-axis. The other unit vector for polar coordinates, uθ, is perpendicular to ur and points in the direction of increasing θ. These polar unit vectors can be expressed in terms of Cartesian unit vectors in the x and y directions, denoted i ^ {\displaystyle {\hat {\mathbf {i} }}} and j ^ {\displaystyle {\hat {\mathbf {j} }}} respectively: u r = cos ⁡ θ i ^ + sin ⁡ θ j ^ {\displaystyle \mathbf {u} _{r}=\cos \theta \ {\hat {\mathbf {i} }}+\sin \theta \ {\hat {\mathbf {j} }}} and u θ = − sin ⁡ θ i ^ + cos ⁡ θ j ^ . {\displaystyle \mathbf {u} _{\theta }=-\sin \theta \ {\hat {\mathbf {i} }}+\cos \theta \ {\hat {\mathbf {j} }}.} One can differentiate to find velocity: v = r d u r d t = r d d t ( cos ⁡ θ i ^ + sin ⁡ θ j ^ ) = r d θ d t d d θ ( cos ⁡ θ i ^ + sin ⁡ θ j ^ ) = r d θ d t ( − sin ⁡ θ i ^ + cos ⁡ θ j ^ ) = r d θ d t u θ = ω r u θ {\displaystyle {\begin{aligned}\mathbf {v} &=r{\frac {d\mathbf {u} _{r}}{dt}}\\&=r{\frac {d}{dt}}\left(\cos \theta \ {\hat {\mathbf {i} }}+\sin \theta \ {\hat {\mathbf {j} }}\right)\\&=r{\frac {d\theta }{dt}}{\frac {d}{d\theta }}\left(\cos \theta \ {\hat {\mathbf {i} }}+\sin \theta \ {\hat {\mathbf {j} }}\right)\\&=r{\frac {d\theta }{dt}}\left(-\sin \theta \ {\hat {\mathbf {i} }}+\cos \theta \ {\hat {\mathbf {j} }}\right)\\&=r{\frac {d\theta }{dt}}\mathbf {u} _{\theta }\\&=\omega r\mathbf {u} _{\theta }\end{aligned}}} where ω is the angular velocity dθ/dt. This result for the velocity matches expectations that the velocity should be directed tangentially to the circle, and that the magnitude of the velocity should be rω. Differentiating again, and noting that d u θ d t = − d θ d t u r = − ω u r , {\displaystyle {\frac {d\mathbf {u} _{\theta }}{dt}}=-{\frac {d\theta }{dt}}\mathbf {u} _{r}=-\omega \mathbf {u} _{r}\ ,} we find that the acceleration a is: a = r ( d ω d t u θ − ω 2 u r ) .
{\displaystyle \mathbf {a} =r\left({\frac {d\omega }{dt}}\mathbf {u} _{\theta }-\omega ^{2}\mathbf {u} _{r}\right)\ .} Thus, the radial and tangential components of the acceleration are: a r = − ω 2 r u r = − | v | 2 r u r {\displaystyle \mathbf {a} _{r}=-\omega ^{2}r\ \mathbf {u} _{r}=-{\frac {|\mathbf {v} |^{2}}{r}}\ \mathbf {u} _{r}} and a θ = r d ω d t u θ = d | v | d t u θ , {\displaystyle \mathbf {a} _{\theta }=r\ {\frac {d\omega }{dt}}\ \mathbf {u} _{\theta }={\frac {d|\mathbf {v} |}{dt}}\ \mathbf {u} _{\theta }\ ,} where |v| = r ω is the magnitude of the velocity (the speed). These equations express mathematically that, in the case of an object that moves along a circular path with a changing speed, the acceleration of the body may be decomposed into a perpendicular component that changes the direction of motion (the centripetal acceleration), and a parallel, or tangential component, that changes the speed. === General planar motion === ==== Polar coordinates ==== The above results can be derived perhaps more simply in polar coordinates, and at the same time extended to general motion within a plane, as shown next. Polar coordinates in the plane employ a radial unit vector uρ and an angular unit vector uθ, as shown above. A particle at position r is described by: r = ρ u ρ , {\displaystyle \mathbf {r} =\rho \mathbf {u} _{\rho }\ ,} where the notation ρ is used to describe the distance of the path from the origin instead of R to emphasize that this distance is not fixed, but varies with time. The unit vector uρ travels with the particle and always points in the same direction as r(t). Unit vector uθ also travels with the particle and stays orthogonal to uρ. Thus, uρ and uθ form a local Cartesian coordinate system attached to the particle, and tied to the path travelled by the particle. By moving the unit vectors so their tails coincide, as seen in the circle at the left of the image above, it is seen that uρ and uθ form a right-angled pair with tips on the unit circle that trace back and forth on the perimeter of this circle with the same angle θ(t) as r(t). When the particle moves, its velocity is v = d ρ d t u ρ + ρ d u ρ d t . {\displaystyle \mathbf {v} ={\frac {\mathrm {d} \rho }{\mathrm {d} t}}\mathbf {u} _{\rho }+\rho {\frac {\mathrm {d} \mathbf {u} _{\rho }}{\mathrm {d} t}}\,.} To evaluate the velocity, the derivative of the unit vector uρ is needed. Because uρ is a unit vector, its magnitude is fixed, and it can change only in direction, that is, its change duρ has a component only perpendicular to uρ. When the trajectory r(t) rotates an amount dθ, uρ, which points in the same direction as r(t), also rotates by dθ. See image above. Therefore, the change in uρ is d u ρ = u θ d θ , {\displaystyle \mathrm {d} \mathbf {u} _{\rho }=\mathbf {u} _{\theta }\mathrm {d} \theta \,,} or d u ρ d t = u θ d θ d t . {\displaystyle {\frac {\mathrm {d} \mathbf {u} _{\rho }}{\mathrm {d} t}}=\mathbf {u} _{\theta }{\frac {\mathrm {d} \theta }{\mathrm {d} t}}\,.} In a similar fashion, the rate of change of uθ is found. As with uρ, uθ is a unit vector and can only rotate without changing size. To remain orthogonal to uρ while the trajectory r(t) rotates an amount dθ, uθ, which is orthogonal to r(t), also rotates by dθ. See image above. Therefore, the change duθ is orthogonal to uθ and proportional to dθ (see image above): d u θ d t = − d θ d t u ρ . 
{\displaystyle {\frac {\mathrm {d} \mathbf {u} _{\theta }}{\mathrm {d} t}}=-{\frac {\mathrm {d} \theta }{\mathrm {d} t}}\mathbf {u} _{\rho }\,.} The equation above shows the sign to be negative: to maintain orthogonality, if duρ is positive with dθ, then duθ must decrease. Substituting the derivative of uρ into the expression for velocity: v = d ρ d t u ρ + ρ u θ d θ d t = v ρ u ρ + v θ u θ = v ρ + v θ . {\displaystyle \mathbf {v} ={\frac {\mathrm {d} \rho }{\mathrm {d} t}}\mathbf {u} _{\rho }+\rho \mathbf {u} _{\theta }{\frac {\mathrm {d} \theta }{\mathrm {d} t}}=v_{\rho }\mathbf {u} _{\rho }+v_{\theta }\mathbf {u} _{\theta }=\mathbf {v} _{\rho }+\mathbf {v} _{\theta }\,.} To obtain the acceleration, another time differentiation is done: a = d 2 ρ d t 2 u ρ + d ρ d t d u ρ d t + d ρ d t u θ d θ d t + ρ d u θ d t d θ d t + ρ u θ d 2 θ d t 2 . {\displaystyle \mathbf {a} ={\frac {\mathrm {d} ^{2}\rho }{\mathrm {d} t^{2}}}\mathbf {u} _{\rho }+{\frac {\mathrm {d} \rho }{\mathrm {d} t}}{\frac {\mathrm {d} \mathbf {u} _{\rho }}{\mathrm {d} t}}+{\frac {\mathrm {d} \rho }{\mathrm {d} t}}\mathbf {u} _{\theta }{\frac {\mathrm {d} \theta }{\mathrm {d} t}}+\rho {\frac {\mathrm {d} \mathbf {u} _{\theta }}{\mathrm {d} t}}{\frac {\mathrm {d} \theta }{\mathrm {d} t}}+\rho \mathbf {u} _{\theta }{\frac {\mathrm {d} ^{2}\theta }{\mathrm {d} t^{2}}}\,.} Substituting the derivatives of uρ and uθ, the acceleration of the particle is: a = d 2 ρ d t 2 u ρ + 2 d ρ d t u θ d θ d t − ρ u ρ ( d θ d t ) 2 + ρ u θ d 2 θ d t 2 , = u ρ [ d 2 ρ d t 2 − ρ ( d θ d t ) 2 ] + u θ [ 2 d ρ d t d θ d t + ρ d 2 θ d t 2 ] = u ρ [ d v ρ d t − v θ 2 ρ ] + u θ [ 2 ρ v ρ v θ + ρ d d t v θ ρ ] . {\displaystyle {\begin{aligned}\mathbf {a} &={\frac {\mathrm {d} ^{2}\rho }{\mathrm {d} t^{2}}}\mathbf {u} _{\rho }+2{\frac {\mathrm {d} \rho }{\mathrm {d} t}}\mathbf {u} _{\theta }{\frac {\mathrm {d} \theta }{\mathrm {d} t}}-\rho \mathbf {u} _{\rho }\left({\frac {\mathrm {d} \theta }{\mathrm {d} t}}\right)^{2}+\rho \mathbf {u} _{\theta }{\frac {\mathrm {d} ^{2}\theta }{\mathrm {d} t^{2}}}\ ,\\&=\mathbf {u} _{\rho }\left[{\frac {\mathrm {d} ^{2}\rho }{\mathrm {d} t^{2}}}-\rho \left({\frac {\mathrm {d} \theta }{\mathrm {d} t}}\right)^{2}\right]+\mathbf {u} _{\theta }\left[2{\frac {\mathrm {d} \rho }{\mathrm {d} t}}{\frac {\mathrm {d} \theta }{\mathrm {d} t}}+\rho {\frac {\mathrm {d} ^{2}\theta }{\mathrm {d} t^{2}}}\right]\\&=\mathbf {u} _{\rho }\left[{\frac {\mathrm {d} v_{\rho }}{\mathrm {d} t}}-{\frac {v_{\theta }^{2}}{\rho }}\right]+\mathbf {u} _{\theta }\left[{\frac {2}{\rho }}v_{\rho }v_{\theta }+\rho {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {v_{\theta }}{\rho }}\right]\,.\end{aligned}}} As a particular example, if the particle moves in a circle of constant radius R, then dρ/dt = 0, v = vθ, and: a = u ρ [ − ρ ( d θ d t ) 2 ] + u θ [ ρ d 2 θ d t 2 ] = u ρ [ − v 2 r ] + u θ [ d v d t ] {\displaystyle \mathbf {a} =\mathbf {u} _{\rho }\left[-\rho \left({\frac {\mathrm {d} \theta }{\mathrm {d} t}}\right)^{2}\right]+\mathbf {u} _{\theta }\left[\rho {\frac {\mathrm {d} ^{2}\theta }{\mathrm {d} t^{2}}}\right]=\mathbf {u} _{\rho }\left[-{\frac {v^{2}}{r}}\right]+\mathbf {u} _{\theta }\left[{\frac {\mathrm {d} v}{\mathrm {d} t}}\right]\ } where v = v θ . {\displaystyle v=v_{\theta }.} These results agree with those above for nonuniform circular motion. See also the article on non-uniform circular motion. 
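The algebra above is easy to verify symbolically. The following SymPy sketch (an added check, not part of the article) differentiates r = ρ uρ twice and confirms the radial and transverse components just stated.

```python
# Symbolic check of the planar acceleration decomposition in polar coordinates.
import sympy as sp

t = sp.symbols('t')
rho = sp.Function('rho')(t)      # radial distance rho(t)
theta = sp.Function('theta')(t)  # polar angle theta(t)

# Polar unit vectors written out in Cartesian components.
u_rho = sp.Matrix([sp.cos(theta), sp.sin(theta)])
u_theta = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

r = rho * u_rho
a = r.diff(t, 2)                 # acceleration by direct differentiation

# Radial and transverse components claimed in the text.
a_rho = rho.diff(t, 2) - rho * theta.diff(t)**2
a_theta = 2 * rho.diff(t) * theta.diff(t) + rho * theta.diff(t, 2)

residual = (a - (a_rho * u_rho + a_theta * u_theta)).applyfunc(sp.simplify)
assert residual == sp.zeros(2, 1)
print("decomposition verified")
```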
If this acceleration is multiplied by the particle mass, the leading term is the centripetal force and the negative of the second term related to angular acceleration is sometimes called the Euler force. For trajectories other than circular motion, for example, the more general trajectory envisioned in the image above, the instantaneous center of rotation and radius of curvature of the trajectory are related only indirectly to the coordinate system defined by uρ and uθ and to the length |r(t)| = ρ. Consequently, in the general case, it is not straightforward to disentangle the centripetal and Euler terms from the above general acceleration equation. To deal directly with this issue, local coordinates are preferable, as discussed next. ==== Local coordinates ==== Local coordinates mean a set of coordinates that travel with the particle, and have orientation determined by the path of the particle. Unit vectors are formed as shown in the image at right, both tangential and normal to the path. This coordinate system sometimes is referred to as intrinsic or path coordinates or nt-coordinates, for normal-tangential, referring to these unit vectors. These coordinates are a very special example of a more general concept of local coordinates from the theory of differential forms. Distance along the path of the particle is the arc length s, considered to be a known function of time. s = s ( t ) . {\displaystyle s=s(t)\ .} A center of curvature is defined at each position s located a distance ρ (the radius of curvature) from the curve on a line along the normal un (s). The required distance ρ(s) at arc length s is defined in terms of the rate of rotation of the tangent to the curve, which in turn is determined by the path itself. If the orientation of the tangent relative to some starting position is θ(s), then ρ(s) is defined by the derivative dθ/ds: 1 ρ ( s ) = κ ( s ) = d θ d s . {\displaystyle {\frac {1}{\rho (s)}}=\kappa (s)={\frac {\mathrm {d} \theta }{\mathrm {d} s}}\ .} The radius of curvature usually is taken as positive (that is, as an absolute value), while the curvature κ is a signed quantity. A geometric approach to finding the center of curvature and the radius of curvature uses a limiting process leading to the osculating circle. See image above. Using these coordinates, the motion along the path is viewed as a succession of circular paths of ever-changing center, and at each position s constitutes non-uniform circular motion at that position with radius ρ. The local value of the angular rate of rotation then is given by: ω ( s ) = d θ d t = d θ d s d s d t = 1 ρ ( s ) d s d t = v ( s ) ρ ( s ) , {\displaystyle \omega (s)={\frac {\mathrm {d} \theta }{\mathrm {d} t}}={\frac {\mathrm {d} \theta }{\mathrm {d} s}}{\frac {\mathrm {d} s}{\mathrm {d} t}}={\frac {1}{\rho (s)}}\ {\frac {\mathrm {d} s}{\mathrm {d} t}}={\frac {v(s)}{\rho (s)}}\ ,} with the local speed v given by: v ( s ) = d s d t . {\displaystyle v(s)={\frac {\mathrm {d} s}{\mathrm {d} t}}\ .} As for the other examples above, because unit vectors cannot change magnitude, their rate of change is always perpendicular to their direction (see the left-hand insert in the image above): d u n ( s ) d s = u t ( s ) d θ d s = u t ( s ) 1 ρ ; {\displaystyle {\frac {d\mathbf {u} _{\mathrm {n} }(s)}{ds}}=\mathbf {u} _{\mathrm {t} }(s){\frac {d\theta }{ds}}=\mathbf {u} _{\mathrm {t} }(s){\frac {1}{\rho }}\ ;} d u t ( s ) d s = − u n ( s ) d θ d s = − u n ( s ) 1 ρ . 
{\displaystyle {\frac {d\mathbf {u} _{\mathrm {t} }(s)}{\mathrm {d} s}}=-\mathbf {u} _{\mathrm {n} }(s){\frac {\mathrm {d} \theta }{\mathrm {d} s}}=-\mathbf {u} _{\mathrm {n} }(s){\frac {1}{\rho }}\ .} Consequently, the velocity and acceleration are: v ( t ) = v u t ( s ) ; {\displaystyle \mathbf {v} (t)=v\mathbf {u} _{\mathrm {t} }(s)\ ;} and using the chain-rule of differentiation: a ( t ) = d v d t u t ( s ) − v 2 ρ u n ( s ) ; {\displaystyle \mathbf {a} (t)={\frac {\mathrm {d} v}{\mathrm {d} t}}\mathbf {u} _{\mathrm {t} }(s)-{\frac {v^{2}}{\rho }}\mathbf {u} _{\mathrm {n} }(s)\ ;} with the tangential acceleration d v d t = d v d s d s d t = d v d s v . {\displaystyle {\frac {\mathrm {\mathrm {d} } v}{\mathrm {\mathrm {d} } t}}={\frac {\mathrm {d} v}{\mathrm {d} s}}\ {\frac {\mathrm {d} s}{\mathrm {d} t}}={\frac {\mathrm {d} v}{\mathrm {d} s}}\ v\ .} In this local coordinate system, the acceleration resembles the expression for nonuniform circular motion with the local radius ρ(s), and the centripetal acceleration is identified as the second term. Extending this approach to three dimensional space curves leads to the Frenet–Serret formulas. ===== Alternative approach ===== Looking at the image above, one might wonder whether adequate account has been taken of the difference in curvature between ρ(s) and ρ(s + ds) in computing the arc length as ds = ρ(s)dθ. Reassurance on this point can be found using a more formal approach outlined below. This approach also makes connection with the article on curvature. To introduce the unit vectors of the local coordinate system, one approach is to begin in Cartesian coordinates and describe the local coordinates in terms of these Cartesian coordinates. In terms of arc length s, let the path be described as: r ( s ) = [ x ( s ) , y ( s ) ] . {\displaystyle \mathbf {r} (s)=\left[x(s),\ y(s)\right].} Then an incremental displacement along the path ds is described by: d r ( s ) = [ d x ( s ) , d y ( s ) ] = [ x ′ ( s ) , y ′ ( s ) ] d s , {\displaystyle \mathrm {d} \mathbf {r} (s)=\left[\mathrm {d} x(s),\ \mathrm {d} y(s)\right]=\left[x'(s),\ y'(s)\right]\mathrm {d} s\ ,} where primes are introduced to denote derivatives with respect to s. The magnitude of this displacement is ds, showing that: [ x ′ ( s ) 2 + y ′ ( s ) 2 ] = 1 . {\displaystyle \left[x'(s)^{2}+y'(s)^{2}\right]=1\ .} (Eq. 1) This displacement is necessarily a tangent to the curve at s, showing that the unit vector tangent to the curve is: u t ( s ) = [ x ′ ( s ) , y ′ ( s ) ] , {\displaystyle \mathbf {u} _{\mathrm {t} }(s)=\left[x'(s),\ y'(s)\right],} while the outward unit vector normal to the curve is u n ( s ) = [ y ′ ( s ) , − x ′ ( s ) ] , {\displaystyle \mathbf {u} _{\mathrm {n} }(s)=\left[y'(s),\ -x'(s)\right],} Orthogonality can be verified by showing that the vector dot product is zero. The unit magnitude of these vectors is a consequence of Eq. 1. Using the tangent vector, the angle θ of the tangent to the curve is given by: sin ⁡ θ = y ′ ( s ) x ′ ( s ) 2 + y ′ ( s ) 2 = y ′ ( s ) ; {\displaystyle \sin \theta ={\frac {y'(s)}{\sqrt {x'(s)^{2}+y'(s)^{2}}}}=y'(s)\ ;} and cos ⁡ θ = x ′ ( s ) x ′ ( s ) 2 + y ′ ( s ) 2 = x ′ ( s ) . {\displaystyle \cos \theta ={\frac {x'(s)}{\sqrt {x'(s)^{2}+y'(s)^{2}}}}=x'(s)\ .} The radius of curvature is introduced completely formally (without need for geometric interpretation) as: 1 ρ = d θ d s . 
{\displaystyle {\frac {1}{\rho }}={\frac {\mathrm {d} \theta }{\mathrm {d} s}}\ .} The derivative of θ can be found from that for sinθ: d sin ⁡ θ d s = cos ⁡ θ d θ d s = 1 ρ cos ⁡ θ = 1 ρ x ′ ( s ) . {\displaystyle {\frac {\mathrm {d} \sin \theta }{\mathrm {d} s}}=\cos \theta {\frac {\mathrm {d} \theta }{\mathrm {d} s}}={\frac {1}{\rho }}\cos \theta \ ={\frac {1}{\rho }}x'(s)\ .} Now: d sin ⁡ θ d s = d d s y ′ ( s ) x ′ ( s ) 2 + y ′ ( s ) 2 = y ″ ( s ) x ′ ( s ) 2 − y ′ ( s ) x ′ ( s ) x ″ ( s ) ( x ′ ( s ) 2 + y ′ ( s ) 2 ) 3 / 2 , {\displaystyle {\frac {\mathrm {d} \sin \theta }{\mathrm {d} s}}={\frac {\mathrm {d} }{\mathrm {d} s}}{\frac {y'(s)}{\sqrt {x'(s)^{2}+y'(s)^{2}}}}={\frac {y''(s)x'(s)^{2}-y'(s)x'(s)x''(s)}{\left(x'(s)^{2}+y'(s)^{2}\right)^{3/2}}}\ ,} in which the denominator is unity. With this formula for the derivative of the sine, the radius of curvature becomes: d θ d s = 1 ρ = y ″ ( s ) x ′ ( s ) − y ′ ( s ) x ″ ( s ) = y ″ ( s ) x ′ ( s ) = − x ″ ( s ) y ′ ( s ) , {\displaystyle {\frac {\mathrm {d} \theta }{\mathrm {d} s}}={\frac {1}{\rho }}=y''(s)x'(s)-y'(s)x''(s)={\frac {y''(s)}{x'(s)}}=-{\frac {x''(s)}{y'(s)}}\ ,} where the equivalence of the forms stems from differentiation of Eq. 1: x ′ ( s ) x ″ ( s ) + y ′ ( s ) y ″ ( s ) = 0 . {\displaystyle x'(s)x''(s)+y'(s)y''(s)=0\ .} With these results, the acceleration can be found: a ( s ) = d d t v ( s ) = d d t [ d s d t ( x ′ ( s ) , y ′ ( s ) ) ] = ( d 2 s d t 2 ) u t ( s ) + ( d s d t ) 2 ( x ″ ( s ) , y ″ ( s ) ) = ( d 2 s d t 2 ) u t ( s ) − ( d s d t ) 2 1 ρ u n ( s ) {\displaystyle {\begin{aligned}\mathbf {a} (s)&={\frac {\mathrm {d} }{\mathrm {d} t}}\mathbf {v} (s)={\frac {\mathrm {d} }{\mathrm {d} t}}\left[{\frac {\mathrm {d} s}{\mathrm {d} t}}\left(x'(s),\ y'(s)\right)\right]\\&=\left({\frac {\mathrm {d} ^{2}s}{\mathrm {d} t^{2}}}\right)\mathbf {u} _{\mathrm {t} }(s)+\left({\frac {\mathrm {d} s}{\mathrm {d} t}}\right)^{2}\left(x''(s),\ y''(s)\right)\\&=\left({\frac {\mathrm {d} ^{2}s}{\mathrm {d} t^{2}}}\right)\mathbf {u} _{\mathrm {t} }(s)-\left({\frac {\mathrm {d} s}{\mathrm {d} t}}\right)^{2}{\frac {1}{\rho }}\mathbf {u} _{\mathrm {n} }(s)\end{aligned}}} as can be verified by taking the dot product with the unit vectors ut(s) and un(s). This result for acceleration is the same as that for circular motion based on the radius ρ. Using this coordinate system in the inertial frame, it is easy to identify the force normal to the trajectory as the centripetal force and that parallel to the trajectory as the tangential force. From a qualitative standpoint, the path can be approximated by an arc of a circle for a limited time, and for the limited time a particular radius of curvature applies, the centrifugal and Euler forces can be analyzed on the basis of circular motion with that radius. This result for acceleration agrees with that found earlier. However, in this approach, the question of the change in radius of curvature with s is handled completely formally, consistent with a geometric interpretation, but not relying upon it, thereby avoiding any questions the image above might suggest about neglecting the variation in ρ. ===== Example: circular motion ===== To illustrate the above formulas, let x, y be given as: x = α cos ⁡ s α ; y = α sin ⁡ s α . {\displaystyle x=\alpha \cos {\frac {s}{\alpha }}\ ;\ y=\alpha \sin {\frac {s}{\alpha }}\ .} Then: x 2 + y 2 = α 2 , {\displaystyle x^{2}+y^{2}=\alpha ^{2}\ ,} which can be recognized as a circular path around the origin with radius α. 
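Before the derivatives are worked out by hand below, the unit-speed condition of Eq. 1 and the curvature of this path can be checked symbolically; a minimal sketch, again assuming Python with sympy:

```python
import sympy as sp

s, alpha = sp.symbols('s alpha', positive=True)

# Arc-length parametrization of a circle of radius alpha.
x = alpha * sp.cos(s / alpha)
y = alpha * sp.sin(s / alpha)
xp, yp, xpp, ypp = x.diff(s), y.diff(s), x.diff(s, 2), y.diff(s, 2)

# Eq. 1: an arc-length parametrization has unit speed.
assert sp.simplify(xp**2 + yp**2) == 1

# Signed curvature 1/rho = y'' x' - y' x'' reduces to 1/alpha.
assert sp.simplify(ypp * xp - yp * xpp - 1 / alpha) == 0
```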
The position s = 0 corresponds to [α, 0], or 3 o'clock. To use the above formalism, the derivatives are needed: y ′ ( s ) = cos ⁡ s α ; x ′ ( s ) = − sin ⁡ s α , {\displaystyle y^{\prime }(s)=\cos {\frac {s}{\alpha }}\ ;\ x^{\prime }(s)=-\sin {\frac {s}{\alpha }}\ ,} y ′ ′ ( s ) = − 1 α sin ⁡ s α ; x ′ ′ ( s ) = − 1 α cos ⁡ s α . {\displaystyle y^{\prime \prime }(s)=-{\frac {1}{\alpha }}\sin {\frac {s}{\alpha }}\ ;\ x^{\prime \prime }(s)=-{\frac {1}{\alpha }}\cos {\frac {s}{\alpha }}\ .} With these results, one can verify that: x ′ ( s ) 2 + y ′ ( s ) 2 = 1 ; 1 ρ = y ′ ′ ( s ) x ′ ( s ) − y ′ ( s ) x ′ ′ ( s ) = 1 α . {\displaystyle x^{\prime }(s)^{2}+y^{\prime }(s)^{2}=1\ ;\ {\frac {1}{\rho }}=y^{\prime \prime }(s)x^{\prime }(s)-y^{\prime }(s)x^{\prime \prime }(s)={\frac {1}{\alpha }}\ .} The unit vectors can also be found: u t ( s ) = [ − sin ⁡ s α , cos ⁡ s α ] ; u n ( s ) = [ cos ⁡ s α , sin ⁡ s α ] , {\displaystyle \mathbf {u} _{\mathrm {t} }(s)=\left[-\sin {\frac {s}{\alpha }}\ ,\ \cos {\frac {s}{\alpha }}\right]\ ;\ \mathbf {u} _{\mathrm {n} }(s)=\left[\cos {\frac {s}{\alpha }}\ ,\ \sin {\frac {s}{\alpha }}\right]\ ,} which serve to show that s = 0 is located at position [ρ, 0] and s = ρπ/2 at [0, ρ], which agrees with the original expressions for x and y. In other words, s is measured counterclockwise around the circle from 3 o'clock. Also, the derivatives of these vectors can be found: d d s u t ( s ) = − 1 α [ cos ⁡ s α , sin ⁡ s α ] = − 1 α u n ( s ) ; {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} s}}\mathbf {u} _{\mathrm {t} }(s)=-{\frac {1}{\alpha }}\left[\cos {\frac {s}{\alpha }}\ ,\ \sin {\frac {s}{\alpha }}\right]=-{\frac {1}{\alpha }}\mathbf {u} _{\mathrm {n} }(s)\ ;} d d s u n ( s ) = 1 α [ − sin ⁡ s α , cos ⁡ s α ] = 1 α u t ( s ) . {\displaystyle \ {\frac {\mathrm {d} }{\mathrm {d} s}}\mathbf {u} _{\mathrm {n} }(s)={\frac {1}{\alpha }}\left[-\sin {\frac {s}{\alpha }}\ ,\ \cos {\frac {s}{\alpha }}\right]={\frac {1}{\alpha }}\mathbf {u} _{\mathrm {t} }(s)\ .} To obtain velocity and acceleration, a time-dependence for s is necessary. For counterclockwise motion at variable speed v(t): s ( t ) = ∫ 0 t d t ′ v ( t ′ ) , {\displaystyle s(t)=\int _{0}^{t}\ dt^{\prime }\ v(t^{\prime })\ ,} where v(t) is the speed and t is time, and s(t = 0) = 0. Then: v = v ( t ) u t ( s ) , {\displaystyle \mathbf {v} =v(t)\mathbf {u} _{\mathrm {t} }(s)\ ,} a = d v d t u t ( s ) + v d d t u t ( s ) = d v d t u t ( s ) − v 1 α u n ( s ) d s d t {\displaystyle \mathbf {a} ={\frac {\mathrm {d} v}{\mathrm {d} t}}\mathbf {u} _{\mathrm {t} }(s)+v{\frac {\mathrm {d} }{\mathrm {d} t}}\mathbf {u} _{\mathrm {t} }(s)={\frac {\mathrm {d} v}{\mathrm {d} t}}\mathbf {u} _{\mathrm {t} }(s)-v{\frac {1}{\alpha }}\mathbf {u} _{\mathrm {n} }(s){\frac {\mathrm {d} s}{\mathrm {d} t}}} a = d v d t u t ( s ) − v 2 α u n ( s ) , {\displaystyle \mathbf {a} ={\frac {\mathrm {d} v}{\mathrm {d} t}}\mathbf {u} _{\mathrm {t} }(s)-{\frac {v^{2}}{\alpha }}\mathbf {u} _{\mathrm {n} }(s)\ ,} where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion. == See also == == Notes and references == == Further reading == Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 978-0-534-40842-8. Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 978-0-7167-0809-4. Centripetal force vs. 
Centrifugal force, from an online Regents Exam physics tutorial by the Oswego City School District. == External links == Notes from Physics and Astronomy, HyperPhysics at Georgia State University
In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics. The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, the Schrödinger equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics. The qualitative form of this connection is called Hamilton's optico-mechanical analogy. In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming. == Overview == The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for a system of particles at coordinates ⁠ q {\displaystyle \mathbf {q} } ⁠. The function H {\displaystyle H} is the system's Hamiltonian giving the system's energy. The solution of this equation is the action, ⁠ S {\displaystyle S} ⁠, called Hamilton's principal function.: 291  The solution can be related to the system Lagrangian L {\displaystyle \ {\mathcal {L}}\ } by an indefinite integral of the form used in the principle of least action:: 431  S = ∫ L d t + s o m e c o n s t a n t {\displaystyle \ S=\int {\mathcal {L}}\ \mathrm {d} t+~{\mathsf {some\ constant}}~} Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.: 175  == Mathematical formulation == === Notation === Boldface variables such as q {\displaystyle \mathbf {q} } represent a list of N {\displaystyle N} generalized coordinates, q = ( q 1 , q 2 , … , q N − 1 , q N ) {\displaystyle \mathbf {q} =(q_{1},q_{2},\ldots ,q_{N-1},q_{N})} A dot over a variable or list signifies the time derivative (see Newton's notation). For example, q ˙ = d q d t . {\displaystyle {\dot {\mathbf {q} }}={\frac {d\mathbf {q} }{dt}}.} The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as p ⋅ q = ∑ k = 1 N p k q k . {\displaystyle \mathbf {p} \cdot \mathbf {q} =\sum _{k=1}^{N}p_{k}q_{k}.} === The action functional (a.k.a. Hamilton's principal function) === ==== Definition ==== Let the Hessian matrix H L ( q , q ˙ , t ) = { ∂ 2 L / ∂ q ˙ i ∂ q ˙ j } i j {\textstyle H_{\mathcal {L}}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\left\{\partial ^{2}{\mathcal {L}}/\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}\right\}_{ij}} be invertible. 
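For the most common case, a natural Lagrangian whose kinetic term is quadratic in the velocities, this Hessian is simply the mass times the identity matrix, hence invertible. A minimal sketch, assuming Python with sympy; the two-coordinate Lagrangian is chosen purely for illustration:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
q1, q2, q1d, q2d = sp.symbols('q1 q2 q1dot q2dot')

# Natural Lagrangian L = (m/2)|qdot|^2 - V(q), velocities as symbols.
V = sp.Function('V')(q1, q2)
L = sp.Rational(1, 2) * m * (q1d**2 + q2d**2) - V

# Hessian with respect to the velocities.
H_L = sp.hessian(L, (q1d, q2d))
print(H_L)        # Matrix([[m, 0], [0, m]])
print(H_L.det())  # m**2, nonzero for m > 0, so H_L is invertible
```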
The relation d d t ∂ L ∂ q ˙ i = ∑ j = 1 n ( ∂ 2 L ∂ q ˙ i ∂ q ˙ j q ¨ j + ∂ 2 L ∂ q ˙ i ∂ q j q ˙ j ) + ∂ 2 L ∂ q ˙ i ∂ t , i = 1 , … , n , {\displaystyle {\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}=\sum _{j=1}^{n}\left({\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {\dot {q}}^{j}}}{\ddot {q}}^{j}+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial {q}^{j}}}{\dot {q}}^{j}\right)+{\frac {\partial ^{2}{\mathcal {L}}}{\partial {\dot {q}}^{i}\partial t}},\qquad i=1,\ldots ,n,} shows that the Euler–Lagrange equations form a n × n {\displaystyle n\times n} system of second-order ordinary differential equations. Inverting the matrix H L {\displaystyle H_{\mathcal {L}}} transforms this system into q ¨ i = F i ( q , q ˙ , t ) , i = 1 , … , n . {\displaystyle {\ddot {q}}^{i}=F_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t),\ i=1,\ldots ,n.} Let a time instant t 0 {\displaystyle t_{0}} and a point q 0 ∈ M {\displaystyle \mathbf {q} _{0}\in M} in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every v 0 , {\displaystyle \mathbf {v} _{0},} the initial value problem with the conditions γ | τ = t 0 = q 0 {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and γ ˙ | τ = t 0 = v 0 {\displaystyle {\dot {\gamma }}|_{\tau =t_{0}}=\mathbf {v} _{0}} has a locally unique solution γ = γ ( τ ; t 0 , q 0 , v 0 ) . {\displaystyle \gamma =\gamma (\tau ;t_{0},\mathbf {q} _{0},\mathbf {v} _{0}).} Additionally, let there be a sufficiently small time interval ( t 0 , t 1 ) {\displaystyle (t_{0},t_{1})} such that extremals with different initial velocities v 0 {\displaystyle \mathbf {v} _{0}} would not intersect in M × ( t 0 , t 1 ) . {\displaystyle M\times (t_{0},t_{1}).} The latter means that, for any q ∈ M {\displaystyle \mathbf {q} \in M} and any t ∈ ( t 0 , t 1 ) , {\displaystyle t\in (t_{0},t_{1}),} there can be at most one extremal γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} for which γ | τ = t 0 = q 0 {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0}} and γ | τ = t = q . {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .} Substituting γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} into the action functional results in the Hamilton's principal function (HPF) where γ = γ ( τ ; t , t 0 , q , q 0 ) , {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0}),} γ | τ = t 0 = q 0 , {\displaystyle \gamma |_{\tau =t_{0}}=\mathbf {q} _{0},} γ | τ = t = q . {\displaystyle \gamma |_{\tau =t}=\mathbf {q} .} === Formula for the momenta === The momenta are defined as the quantities p i ( q , q ˙ , t ) = ∂ L / ∂ q ˙ i . {\textstyle p_{i}(\mathbf {q} ,\mathbf {\dot {q}} ,t)=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}.} This section shows that the dependency of p i {\displaystyle p_{i}} on q ˙ {\displaystyle \mathbf {\dot {q}} } disappears, once the HPF is known. Indeed, let a time instant t 0 {\displaystyle t_{0}} and a point q 0 {\displaystyle \mathbf {q} _{0}} in the configuration space be fixed. For every time instant t {\displaystyle t} and a point q , {\displaystyle \mathbf {q} ,} let γ = γ ( τ ; t , t 0 , q , q 0 ) {\displaystyle \gamma =\gamma (\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})} be the (unique) extremal from the definition of the Hamilton's principal function ⁠ S {\displaystyle S} ⁠. 
Call v = def γ ˙ ( τ ; t , t 0 , q , q 0 ) | τ = t {\displaystyle \mathbf {v} \,{\stackrel {\text{def}}{=}}\,{\dot {\gamma }}(\tau ;t,t_{0},\mathbf {q} ,\mathbf {q} _{0})|_{\tau =t}} the velocity at ⁠ τ = t {\displaystyle \tau =t} ⁠. Then p i ( q , v , t ) = ∂ S ∂ q i , {\displaystyle p_{i}(\mathbf {q} ,\mathbf {v} ,t)={\frac {\partial S}{\partial q^{i}}},} so the momenta are obtained from the coordinate derivatives of the HPF alone. === Formula === Given the Hamiltonian H ( q , p , t ) {\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)} of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for the Hamilton's principal function S {\displaystyle S} : − ∂ S ∂ t = H ( q , ∂ S ∂ q , t ) . {\displaystyle -{\frac {\partial S}{\partial t}}=H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}.} Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating S {\displaystyle S} as the generating function for a canonical transformation of the classical Hamiltonian H = H ( q 1 , q 2 , … , q N ; p 1 , p 2 , … , p N ; t ) . {\displaystyle H=H(q_{1},q_{2},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{N};t).} The conjugate momenta correspond to the first derivatives of S {\displaystyle S} with respect to the generalized coordinates p k = ∂ S ∂ q k . {\displaystyle p_{k}={\frac {\partial S}{\partial q_{k}}}.} As a solution to the Hamilton–Jacobi equation, the principal function contains N + 1 {\displaystyle N+1} undetermined constants, the first N {\displaystyle N} of them denoted as α 1 , α 2 , … , α N {\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}} , and the last one coming from the integration of ∂ S ∂ t {\displaystyle {\frac {\partial S}{\partial t}}} . The relationship between p {\displaystyle \mathbf {p} } and q {\displaystyle \mathbf {q} } then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities β k = ∂ S ∂ α k , k = 1 , 2 , … , N {\displaystyle \beta _{k}={\frac {\partial S}{\partial \alpha _{k}}},\quad k=1,2,\ldots ,N} are also constants of motion, and these equations can be inverted to find q {\displaystyle \mathbf {q} } as a function of all the α {\displaystyle \alpha } and β {\displaystyle \beta } constants and time. == Comparison with other formulations of mechanics == The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function of the N {\displaystyle N} generalized coordinates q 1 , q 2 , … , q N {\displaystyle q_{1},\,q_{2},\dots ,q_{N}} and the time t {\displaystyle t} . The generalized momenta do not appear, except as derivatives of S {\displaystyle S} , the classical action. For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of N {\displaystyle N} , generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta p 1 , p 2 , … , p N {\displaystyle p_{1},\,p_{2},\dots ,p_{N}} . Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry.
However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in this case the HJE becomes computationally useful.: 444  == Derivation using a canonical transformation == Any canonical transformation involving a type-2 generating function G 2 ( q , P , t ) {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} leads to the relations p = ∂ G 2 ∂ q , Q = ∂ G 2 ∂ P , K ( Q , P , t ) = H ( q , p , t ) + ∂ G 2 ∂ t {\displaystyle {\begin{aligned}&\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }},\quad \mathbf {Q} ={\frac {\partial G_{2}}{\partial \mathbf {P} }},\quad \\&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial G_{2}}{\partial t}}\end{aligned}}} and Hamilton's equations in terms of the new variables P , Q {\displaystyle \mathbf {P} ,\,\mathbf {Q} } and new Hamiltonian K {\displaystyle K} have the same form: P ˙ = − ∂ K ∂ Q , Q ˙ = + ∂ K ∂ P . {\displaystyle {\dot {\mathbf {P} }}=-{\partial K \over \partial \mathbf {Q} },\quad {\dot {\mathbf {Q} }}=+{\partial K \over \partial \mathbf {P} }.} To derive the HJE, a generating function G 2 ( q , P , t ) {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} is chosen in such a way that it makes the new Hamiltonian K = 0 {\displaystyle K=0} . Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial P ˙ = Q ˙ = 0 {\displaystyle {\dot {\mathbf {P} }}={\dot {\mathbf {Q} }}=0} so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta P {\displaystyle \mathbf {P} } are usually denoted α 1 , α 2 , … , α N {\displaystyle \alpha _{1},\,\alpha _{2},\dots ,\alpha _{N}} , i.e. P m = α m {\displaystyle P_{m}=\alpha _{m}} and the new generalized coordinates Q {\displaystyle \mathbf {Q} } are typically denoted as β 1 , β 2 , … , β N {\displaystyle \beta _{1},\,\beta _{2},\dots ,\beta _{N}} , so Q m = β m {\displaystyle Q_{m}=\beta _{m}} . Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant A {\displaystyle A} : G 2 ( q , α , t ) = S ( q , t ) + A , {\displaystyle G_{2}(\mathbf {q} ,{\boldsymbol {\alpha }},t)=S(\mathbf {q} ,t)+A,} the HJE automatically arises p = ∂ G 2 ∂ q = ∂ S ∂ q → H ( q , p , t ) + ∂ G 2 ∂ t = 0 → H ( q , ∂ S ∂ q , t ) + ∂ S ∂ t = 0. {\displaystyle {\begin{aligned}&\mathbf {p} ={\frac {\partial G_{2}}{\partial \mathbf {q} }}={\frac {\partial S}{\partial \mathbf {q} }}\\[1ex]\rightarrow {}&H(\mathbf {q} ,\mathbf {p} ,t)+{\partial G_{2} \over \partial t}=0\\[1ex]\rightarrow {}&H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }},t\right)}+{\partial S \over \partial t}=0.\end{aligned}}} When solved for S ( q , α , t ) {\displaystyle S(\mathbf {q} ,{\boldsymbol {\alpha }},t)} , these also give us the useful equations Q = β = ∂ S ∂ α , {\displaystyle \mathbf {Q} ={\boldsymbol {\beta }}={\partial S \over \partial {\boldsymbol {\alpha }}},} or written in components for clarity Q m = β m = ∂ S ( q , α , t ) ∂ α m . {\displaystyle Q_{m}=\beta _{m}={\frac {\partial S(\mathbf {q} ,{\boldsymbol {\alpha }},t)}{\partial \alpha _{m}}}.} Ideally, these N equations can be inverted to find the original generalized coordinates q {\displaystyle \mathbf {q} } as a function of the constants α , β , {\displaystyle {\boldsymbol {\alpha }},\,{\boldsymbol {\beta }},} and t {\displaystyle t} , thus solving the original problem.
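To see the whole scheme at work in the simplest setting, take a free particle in one dimension with H = p²/(2m). The complete integral S = αq − α²t/(2m) is a standard textbook choice used here only for illustration; the sketch below (Python with sympy assumed) verifies that it solves the HJE and that β = ∂S/∂α inverts to the expected uniform motion.

```python
import sympy as sp

q, t, alpha, beta, m = sp.symbols('q t alpha beta m')

# Hamilton's principal function for a free particle, H = p**2/(2m).
S = alpha * q - alpha**2 * t / (2 * m)

# HJE: H(q, dS/dq, t) + dS/dt = 0.
p = S.diff(q)  # conjugate momentum; here the constant alpha
assert sp.simplify(p**2 / (2 * m) + S.diff(t)) == 0

# The second constant of motion beta = dS/dalpha, inverted for q(t).
q_of_t = sp.solve(sp.Eq(beta, S.diff(alpha)), q)[0]
print("q(t) =", q_of_t)  # beta + alpha*t/m: uniform motion with momentum alpha
```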
== Separation of variables == When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative ∂ S ∂ t {\displaystyle {\frac {\partial S}{\partial t}}} in the HJE must be a constant, usually denoted ( − E {\displaystyle -E} ), giving the separated solution S = W ( q 1 , q 2 , … , q N ) − E t {\displaystyle S=W(q_{1},q_{2},\ldots ,q_{N})-Et} where the time-independent function W ( q ) {\displaystyle W(\mathbf {q} )} is sometimes called the abbreviated action or Hamilton's characteristic function : 434  and sometimes: 607  written S 0 {\displaystyle S_{0}} (see action principle names). The reduced Hamilton–Jacobi equation can then be written H ( q , ∂ S ∂ q ) = E . {\displaystyle H{\left(\mathbf {q} ,{\frac {\partial S}{\partial \mathbf {q} }}\right)}=E.} To illustrate separability for other variables, a certain generalized coordinate q k {\displaystyle q_{k}} and its derivative ∂ S ∂ q k {\displaystyle {\frac {\partial S}{\partial q_{k}}}} are assumed to appear together as a single function ψ ( q k , ∂ S ∂ q k ) {\displaystyle \psi {\left(q_{k},{\frac {\partial S}{\partial q_{k}}}\right)}} in the Hamiltonian H = H ( q 1 , q 2 , … , q k − 1 , q k + 1 , … , q N ; p 1 , p 2 , … , p k − 1 , p k + 1 , … , p N ; ψ ; t ) . {\displaystyle H=H(q_{1},q_{2},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N};p_{1},p_{2},\ldots ,p_{k-1},p_{k+1},\ldots ,p_{N};\psi ;t).} In that case, the function S can be partitioned into two functions, one that depends only on qk and another that depends only on the remaining generalized coordinates S = S k ( q k ) + S rem ( q 1 , … , q k − 1 , q k + 1 , … , q N , t ) . {\displaystyle S=S_{k}(q_{k})+S_{\text{rem}}(q_{1},\ldots ,q_{k-1},q_{k+1},\ldots ,q_{N},t).} Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as Γ k {\displaystyle \Gamma _{k}} ), yielding a first-order ordinary differential equation for S k ( q k ) , {\displaystyle S_{k}(q_{k}),} ψ ( q k , d S k d q k ) = Γ k . {\displaystyle \psi {\left(q_{k},{\frac {dS_{k}}{dq_{k}}}\right)}=\Gamma _{k}.} In fortunate cases, the function S {\displaystyle S} can be separated completely into N {\displaystyle N} functions S m ( q m ) , {\displaystyle S_{m}(q_{m}),} S = S 1 ( q 1 ) + S 2 ( q 2 ) + ⋯ + S N ( q N ) − E t . {\displaystyle S=S_{1}(q_{1})+S_{2}(q_{2})+\cdots +S_{N}(q_{N})-Et.} In such a case, the problem devolves to N {\displaystyle N} ordinary differential equations. The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, S {\displaystyle S} will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections. === Examples in various coordinate systems === ==== Spherical coordinates ==== In spherical coordinates the Hamiltonian of a free particle moving in a conservative potential U can be written H = 1 2 m [ p r 2 + p θ 2 r 2 + p ϕ 2 r 2 sin 2 ⁡ θ ] + U ( r , θ , ϕ ) . 
{\displaystyle H={\frac {1}{2m}}\left[p_{r}^{2}+{\frac {p_{\theta }^{2}}{r^{2}}}+{\frac {p_{\phi }^{2}}{r^{2}\sin ^{2}\theta }}\right]+U(r,\theta ,\phi ).} The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions U r ( r ) , U θ ( θ ) , U ϕ ( ϕ ) {\displaystyle U_{r}(r),U_{\theta }(\theta ),U_{\phi }(\phi )} such that U {\displaystyle U} can be written in the analogous form U ( r , θ , ϕ ) = U r ( r ) + U θ ( θ ) r 2 + U ϕ ( ϕ ) r 2 sin 2 ⁡ θ . {\displaystyle U(r,\theta ,\phi )=U_{r}(r)+{\frac {U_{\theta }(\theta )}{r^{2}}}+{\frac {U_{\phi }(\phi )}{r^{2}\sin ^{2}\theta }}.} Substitution of the completely separated solution S = S r ( r ) + S θ ( θ ) + S ϕ ( ϕ ) − E t {\displaystyle S=S_{r}(r)+S_{\theta }(\theta )+S_{\phi }(\phi )-Et} into the HJE yields 1 2 m ( d S r d r ) 2 + U r ( r ) + 1 2 m r 2 [ ( d S θ d θ ) 2 + 2 m U θ ( θ ) ] + 1 2 m r 2 sin 2 ⁡ θ [ ( d S ϕ d ϕ ) 2 + 2 m U ϕ ( ϕ ) ] = E . {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+2mU_{\theta }(\theta )\right]+{\frac {1}{2mr^{2}\sin ^{2}\theta }}\left[\left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )\right]=E.} This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for ϕ {\displaystyle \phi } ( d S ϕ d ϕ ) 2 + 2 m U ϕ ( ϕ ) = Γ ϕ {\displaystyle \left({\frac {dS_{\phi }}{d\phi }}\right)^{2}+2mU_{\phi }(\phi )=\Gamma _{\phi }} where Γ ϕ {\displaystyle \Gamma _{\phi }} is a constant of the motion that eliminates the ϕ {\displaystyle \phi } dependence from the Hamilton–Jacobi equation 1 2 m ( d S r d r ) 2 + U r ( r ) + 1 2 m r 2 [ 1 sin 2 ⁡ θ ( d S θ d θ ) 2 + 2 m sin 2 ⁡ θ U θ ( θ ) + Γ ϕ ] = E . {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {1}{2mr^{2}}}\left[{\frac {1}{\sin ^{2}\theta }}\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+{\frac {2m}{\sin ^{2}\theta }}U_{\theta }(\theta )+\Gamma _{\phi }\right]=E.} The next ordinary differential equation involves the θ {\displaystyle \theta } generalized coordinate 1 sin 2 ⁡ θ ( d S θ d θ ) 2 + 2 m sin 2 ⁡ θ U θ ( θ ) + Γ ϕ = Γ θ {\displaystyle {\frac {1}{\sin ^{2}\theta }}\left({\frac {dS_{\theta }}{d\theta }}\right)^{2}+{\frac {2m}{\sin ^{2}\theta }}U_{\theta }(\theta )+\Gamma _{\phi }=\Gamma _{\theta }} where Γ θ {\displaystyle \Gamma _{\theta }} is again a constant of the motion that eliminates the θ {\displaystyle \theta } dependence and reduces the HJE to the final ordinary differential equation 1 2 m ( d S r d r ) 2 + U r ( r ) + Γ θ 2 m r 2 = E {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{r}}{dr}}\right)^{2}+U_{r}(r)+{\frac {\Gamma _{\theta }}{2mr^{2}}}=E} whose integration completes the solution for S {\displaystyle S} . ==== Elliptic cylindrical coordinates ==== The Hamiltonian in elliptic cylindrical coordinates can be written H = p μ 2 + p ν 2 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) + p z 2 2 m + U ( μ , ν , z ) {\displaystyle H={\frac {p_{\mu }^{2}+p_{\nu }^{2}}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}+{\frac {p_{z}^{2}}{2m}}+U(\mu ,\nu ,z)} where the foci of the ellipses are located at ± a {\displaystyle \pm a} on the x {\displaystyle x} -axis. 
The Hamilton–Jacobi equation is completely separable in these coordinates provided that U {\displaystyle U} has an analogous form U ( μ , ν , z ) = U μ ( μ ) + U ν ( ν ) sinh 2 ⁡ μ + sin 2 ⁡ ν + U z ( z ) {\displaystyle U(\mu ,\nu ,z)={\frac {U_{\mu }(\mu )+U_{\nu }(\nu )}{\sinh ^{2}\mu +\sin ^{2}\nu }}+U_{z}(z)} where U μ ( μ ) {\displaystyle U_{\mu }(\mu )} , U ν ( ν ) {\displaystyle U_{\nu }(\nu )} and U z ( z ) {\displaystyle U_{z}(z)} are arbitrary functions. Substitution of the completely separated solution S = S μ ( μ ) + S ν ( ν ) + S z ( z ) − E t {\displaystyle S=S_{\mu }(\mu )+S_{\nu }(\nu )+S_{z}(z)-Et} into the HJE yields 1 2 m ( d S z d z ) 2 + 1 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) [ ( d S μ d μ ) 2 + ( d S ν d ν ) 2 ] + U z ( z ) + 1 sinh 2 ⁡ μ + sin 2 ⁡ ν [ U μ ( μ ) + U ν ( ν ) ] = E . {\displaystyle {\begin{aligned}{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)}}\left[\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}\right]&\\{}+U_{z}(z)+{\frac {1}{\sinh ^{2}\mu +\sin ^{2}\nu }}\left[U_{\mu }(\mu )+U_{\nu }(\nu )\right]&=E.\end{aligned}}} Separating the first ordinary differential equation 1 2 m ( d S z d z ) 2 + U z ( z ) = Γ z {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}} yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) ( d S μ d μ ) 2 + ( d S ν d ν ) 2 + 2 m a 2 U μ ( μ ) + 2 m a 2 U ν ( ν ) = 2 m a 2 ( sinh 2 ⁡ μ + sin 2 ⁡ ν ) ( E − Γ z ) {\displaystyle \left({\frac {dS_{\mu }}{d\mu }}\right)^{2}+\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}+2ma^{2}U_{\mu }(\mu )+2ma^{2}U_{\nu }(\nu )=2ma^{2}\left(\sinh ^{2}\mu +\sin ^{2}\nu \right)\left(E-\Gamma _{z}\right)} which itself may be separated into two independent ordinary differential equations ( d S μ d μ ) 2 + 2 m a 2 U μ ( μ ) + 2 m a 2 ( Γ z − E ) sinh 2 ⁡ μ = Γ μ ( d S ν d ν ) 2 + 2 m a 2 U ν ( ν ) + 2 m a 2 ( Γ z − E ) sin 2 ⁡ ν = Γ ν {\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\mu }}{d\mu }}\right)^{2}&\,+\,&2ma^{2}U_{\mu }(\mu )&\,+\,&2ma^{2}\left(\Gamma _{z}-E\right)\sinh ^{2}\mu &=\,&\Gamma _{\mu }\\\left({\frac {dS_{\nu }}{d\nu }}\right)^{2}&\,+\,&2ma^{2}U_{\nu }(\nu )&\,+\,&2ma^{2}\left(\Gamma _{z}-E\right)\sin ^{2}\nu &=\,&\Gamma _{\nu }\end{alignedat}}} that, when solved, provide a complete solution for S {\displaystyle S} . ==== Parabolic cylindrical coordinates ==== The Hamiltonian in parabolic cylindrical coordinates can be written H = p σ 2 + p τ 2 2 m ( σ 2 + τ 2 ) + p z 2 2 m + U ( σ , τ , z ) . {\displaystyle H={\frac {p_{\sigma }^{2}+p_{\tau }^{2}}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}+{\frac {p_{z}^{2}}{2m}}+U(\sigma ,\tau ,z).} The Hamilton–Jacobi equation is completely separable in these coordinates provided that U {\displaystyle U} has an analogous form U ( σ , τ , z ) = U σ ( σ ) + U τ ( τ ) σ 2 + τ 2 + U z ( z ) {\displaystyle U(\sigma ,\tau ,z)={\frac {U_{\sigma }(\sigma )+U_{\tau }(\tau )}{\sigma ^{2}+\tau ^{2}}}+U_{z}(z)} where U σ ( σ ) {\displaystyle U_{\sigma }(\sigma )} , U τ ( τ ) {\displaystyle U_{\tau }(\tau )} , and U z ( z ) {\displaystyle U_{z}(z)} are arbitrary functions. 
Substitution of the completely separated solution S = S σ ( σ ) + S τ ( τ ) + S z ( z ) − E t + constant {\displaystyle S=S_{\sigma }(\sigma )+S_{\tau }(\tau )+S_{z}(z)-Et+{\text{constant}}} into the HJE yields 1 2 m ( d S z d z ) 2 + 1 2 m ( σ 2 + τ 2 ) [ ( d S σ d σ ) 2 + ( d S τ d τ ) 2 ] + U z ( z ) + 1 σ 2 + τ 2 [ U σ ( σ ) + U τ ( τ ) ] = E . {\displaystyle {\begin{aligned}{\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+{\frac {1}{2m\left(\sigma ^{2}+\tau ^{2}\right)}}\left[\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}\right]&\\{}+U_{z}(z)+{\frac {1}{\sigma ^{2}+\tau ^{2}}}\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]&=E.\end{aligned}}} Separating the first ordinary differential equation 1 2 m ( d S z d z ) 2 + U z ( z ) = Γ z {\displaystyle {\frac {1}{2m}}\left({\frac {dS_{z}}{dz}}\right)^{2}+U_{z}(z)=\Gamma _{z}} yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator) ( d S σ d σ ) 2 + ( d S τ d τ ) 2 + 2 m [ U σ ( σ ) + U τ ( τ ) ] = 2 m ( σ 2 + τ 2 ) ( E − Γ z ) {\displaystyle \left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}+\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}+2m\left[U_{\sigma }(\sigma )+U_{\tau }(\tau )\right]=2m\left(\sigma ^{2}+\tau ^{2}\right)\left(E-\Gamma _{z}\right)} which itself may be separated into two independent ordinary differential equations ( d S σ d σ ) 2 + 2 m U σ ( σ ) + 2 m σ 2 ( Γ z − E ) = Γ σ ( d S τ d τ ) 2 + 2 m U τ ( τ ) + 2 m τ 2 ( Γ z − E ) = Γ τ {\displaystyle {\begin{alignedat}{4}\left({\frac {dS_{\sigma }}{d\sigma }}\right)^{2}&+\,&2mU_{\sigma }(\sigma )&+\,&2m\sigma ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\sigma }\\\left({\frac {dS_{\tau }}{d\tau }}\right)^{2}&+\,&2mU_{\tau }(\tau )&+\,&2m\tau ^{2}\left(\Gamma _{z}-E\right)&=\,&\Gamma _{\tau }\end{alignedat}}} that, when solved, provide a complete solution for S {\displaystyle S} . == Waves and particles == === Optical wave fronts and trajectories === The HJE establishes a duality between trajectories and wavefronts. For example, in geometrical optics, light can be considered either as “rays” or waves. The wave front can be defined as the surface C t {\textstyle {\mathcal {C}}_{t}} that the light emitted at time t = 0 {\textstyle t=0} has reached at time t {\textstyle t} . Light rays and wave fronts are dual: if one is known, the other can be deduced. More precisely, geometrical optics is a variational problem where the “action” is the travel time T {\textstyle T} along a path, T = 1 c ∫ A B n d s {\displaystyle T={\frac {1}{c}}\int _{A}^{B}n\,ds} where n {\textstyle n} is the medium's index of refraction and d s {\textstyle ds} is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other. The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using Euler–Lagrange equations or the wave fronts by using Hamilton–Jacobi equation. The wave front at time t {\textstyle t} , for a system initially at q 0 {\textstyle \mathbf {q} _{0}} at time t 0 {\textstyle t_{0}} , is defined as the collection of points q {\textstyle \mathbf {q} } such that S ( q , t ) = const {\textstyle S(\mathbf {q} ,t)={\text{const}}} . If S ( q , t ) {\textstyle S(\mathbf {q} ,t)} is known, the momentum is immediately deduced. 
p = ∂ S ∂ q . {\displaystyle \mathbf {p} ={\frac {\partial S}{\partial \mathbf {q} }}.} Once p {\textstyle \mathbf {p} } is known, tangents to the trajectories q ˙ {\textstyle {\dot {\mathbf {q} }}} are computed by solving the equation ∂ L ∂ q ˙ = p {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {q} }}}}={\boldsymbol {p}}} for q ˙ {\textstyle {\dot {\mathbf {q} }}} , where L {\textstyle {\mathcal {L}}} is the Lagrangian. The trajectories are then recovered from the knowledge of q ˙ {\textstyle {\dot {\mathbf {q} }}} . === Relationship to the Schrödinger equation === The isosurfaces of the function S ( q , t ) {\displaystyle S(\mathbf {q} ,t)} can be determined at any time t. The motion of an S {\displaystyle S} -isosurface as a function of time is defined by the motions of the particles beginning at the points q {\displaystyle \mathbf {q} } on the isosurface. The motion of such an isosurface can be thought of as a wave moving through q {\displaystyle \mathbf {q} } -space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave ψ = ψ 0 e i S / ℏ {\displaystyle \psi =\psi _{0}e^{iS/\hbar }} where ℏ {\displaystyle \hbar } is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having S {\displaystyle S} be a complex number. The Hamilton–Jacobi equation is then rewritten as ℏ 2 2 m ∇ 2 ψ − U ψ = ℏ i ∂ ψ ∂ t {\displaystyle {\frac {\hbar ^{2}}{2m}}\nabla ^{2}\psi -U\psi ={\frac {\hbar }{i}}{\frac {\partial \psi }{\partial t}}} which is the Schrödinger equation. Conversely, starting with the Schrödinger equation and our ansatz for ψ {\displaystyle \psi } , it can be deduced that 1 2 m ( ∇ S ) 2 + U + ∂ S ∂ t = i ℏ 2 m ∇ 2 ψ 0 ψ 0 . {\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}={\frac {i\hbar }{2m}}{\frac {\nabla ^{2}\psi _{0}}{\psi _{0}}}.} The classical limit ( ℏ → 0 {\displaystyle \hbar \rightarrow 0} ) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation, 1 2 m ( ∇ S ) 2 + U + ∂ S ∂ t = 0. {\displaystyle {\frac {1}{2m}}\left(\nabla S\right)^{2}+U+{\frac {\partial S}{\partial t}}=0.} == Applications == === HJE in a gravitational field === Consider the energy–momentum relation in the form g α β P α P β − ( m c ) 2 = 0 {\displaystyle g^{\alpha \beta }P_{\alpha }P_{\beta }-(mc)^{2}=0} for a particle of rest mass m {\displaystyle m} travelling in curved space, where g α β {\displaystyle g^{\alpha \beta }} are the contravariant components of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and c {\displaystyle c} is the speed of light. Setting the four-momentum P α {\displaystyle P_{\alpha }} equal to the four-gradient of the action S {\displaystyle S} , P α = − ∂ S ∂ x α {\displaystyle P_{\alpha }=-{\frac {\partial S}{\partial x^{\alpha }}}} gives the Hamilton–Jacobi equation in the geometry determined by the metric g {\displaystyle g} : g α β ∂ S ∂ x α ∂ S ∂ x β − ( m c ) 2 = 0 , {\displaystyle g^{\alpha \beta }{\frac {\partial S}{\partial x^{\alpha }}}{\frac {\partial S}{\partial x^{\beta }}}-(mc)^{2}=0,} in other words, in a gravitational field.
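A quick consistency check in flat spacetime: with g^{αβ} = diag(1, −1, −1, −1) in coordinates x^α = (ct, x, y, z), the free-particle action S = −Et + p·x satisfies the equation above exactly when E is the relativistic energy. A minimal sketch, assuming Python with sympy; the symbol names are illustrative.

```python
import sympy as sp

m, c, px, py, pz = sp.symbols('m c p_x p_y p_z', positive=True)
t, x, y, z = sp.symbols('t x y z')

# Relativistic energy of a free particle.
E = sp.sqrt((px**2 + py**2 + pz**2) * c**2 + m**2 * c**4)
S = -E * t + px * x + py * y + pz * z

# dS/dx^alpha with x^0 = c*t, so d/dx^0 = (1/c) d/dt.
dS = [S.diff(t) / c, S.diff(x), S.diff(y), S.diff(z)]
g = sp.diag(1, -1, -1, -1)  # inverse Minkowski metric

lhs = sum(g[a, b] * dS[a] * dS[b] for a in range(4) for b in range(4))
assert sp.simplify(lhs - (m * c)**2) == 0  # g^{ab} S,a S,b = (mc)^2
```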
=== HJE in electromagnetic fields === For a particle of rest mass m {\displaystyle m} and electric charge e {\displaystyle e} moving in an electromagnetic field with four-potential A i = ( ϕ , A ) {\displaystyle A_{i}=(\phi ,\mathrm {A} )} in vacuum, the Hamilton–Jacobi equation in geometry determined by the metric tensor g i k = g i k {\displaystyle g^{ik}=g_{ik}} has the form g i k ( ∂ S ∂ x i + e c A i ) ( ∂ S ∂ x k + e c A k ) = m 2 c 2 {\displaystyle g^{ik}\left({\frac {\partial S}{\partial x^{i}}}+{\frac {e}{c}}A_{i}\right)\left({\frac {\partial S}{\partial x^{k}}}+{\frac {e}{c}}A_{k}\right)=m^{2}c^{2}} and can be solved for the Hamilton principal action function S {\displaystyle S} to obtain the particle trajectory and momentum: x = − e c γ ∫ A x d ξ , y = − e c γ ∫ A y d ξ , z = − e 2 2 c 2 γ 2 ∫ ( A 2 − A 2 ¯ ) d ξ , ξ = c t − e 2 2 γ 2 c 2 ∫ ( A 2 − A 2 ¯ ) d ξ , p x = − e c A x , p y = − e c A y , p z = e 2 2 γ c ( A 2 − A 2 ¯ ) , E = c γ + e 2 2 γ c ( A 2 − A 2 ¯ ) , {\displaystyle {\begin{aligned}x&=-{\frac {e}{c\gamma }}\int A_{x}\,d\xi ,&y&=-{\frac {e}{c\gamma }}\int A_{y}\,d\xi ,\\[1ex]z&=-{\frac {e^{2}}{2c^{2}\gamma ^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,&\xi &=ct-{\frac {e^{2}}{2\gamma ^{2}c^{2}}}\int \left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right)\,d\xi ,\\[1ex]p_{x}&=-{\frac {e}{c}}A_{x},&p_{y}&=-{\frac {e}{c}}A_{y},\\[1ex]p_{z}&={\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),&{\mathcal {E}}&=c\gamma +{\frac {e^{2}}{2\gamma c}}\left(\mathrm {A} ^{2}-{\overline {\mathrm {A} ^{2}}}\right),\end{aligned}}} where ξ = c t − z {\displaystyle \xi =ct-z} and γ 2 = m 2 c 2 + e 2 c 2 A ¯ 2 {\displaystyle \gamma ^{2}=m^{2}c^{2}+{\frac {e^{2}}{c^{2}}}{\overline {A}}^{2}} with A ¯ {\displaystyle {\overline {\mathbf {A} }}} the cycle average of the vector potential. ==== A circularly polarized wave ==== In the case of circular polarization, E x = E 0 sin ω ξ 1 , E y = E 0 cos ω ξ 1 , A x = c E 0 ω cos ω ξ 1 , A y = − c E 0 ω sin ω ξ 1 . {\displaystyle {\begin{aligned}E_{x}&=E_{0}\sin \omega \xi _{1},&E_{y}&=E_{0}\cos \omega \xi _{1},\\[1ex]A_{x}&={\frac {cE_{0}}{\omega }}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1}.\end{aligned}}} Hence x = − e c E 0 γ ω 2 sin ω ξ 1 , y = − e c E 0 γ ω 2 cos ω ξ 1 , p x = − e E 0 ω cos ω ξ 1 , p y = e E 0 ω sin ω ξ 1 , {\displaystyle {\begin{aligned}x&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\sin \omega \xi _{1},&y&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}}\cos \omega \xi _{1},\\[1ex]p_{x}&=-{\frac {eE_{0}}{\omega }}\cos \omega \xi _{1},&p_{y}&={\frac {eE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}} where ξ 1 = ξ / c {\displaystyle \xi _{1}=\xi /c} , implying that the particle moves along a circular trajectory with a permanent radius e c E 0 / γ ω 2 {\displaystyle ecE_{0}/\gamma \omega ^{2}} and an invariable magnitude of momentum e E 0 / ω {\displaystyle eE_{0}/\omega } directed along the magnetic field vector.
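The stated radii can be read off directly from these expressions; the sketch below (Python with sympy assumed, symbols mirroring the text) confirms that both the orbit and the momentum have constant magnitude, ecE₀/(γω²) and eE₀/ω respectively.

```python
import sympy as sp

e, c, E0, w, gamma, xi1 = sp.symbols('e c E_0 omega gamma xi_1', positive=True)

# Trajectory and momentum components for the circularly polarized wave.
x = -(e * c * E0 / (gamma * w**2)) * sp.sin(w * xi1)
y = -(e * c * E0 / (gamma * w**2)) * sp.cos(w * xi1)
p_x = -(e * E0 / w) * sp.cos(w * xi1)
p_y = (e * E0 / w) * sp.sin(w * xi1)

print(sp.sqrt(sp.simplify(x**2 + y**2)))      # e*c*E_0/(gamma*omega**2)
print(sp.sqrt(sp.simplify(p_x**2 + p_y**2)))  # e*E_0/omega
```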
==== A monochromatic linearly polarized plane wave ==== For the flat, monochromatic, linearly polarized wave with the field E {\displaystyle E} directed along the y {\displaystyle y} axis, E y = E 0 cos ω ξ 1 , A y = − c E 0 ω sin ω ξ 1 , {\displaystyle {\begin{aligned}E_{y}&=E_{0}\cos \omega \xi _{1},&A_{y}&=-{\frac {cE_{0}}{\omega }}\sin \omega \xi _{1},\end{aligned}}} hence x = const , y = y 0 cos ω ξ 1 , y 0 = − e c E 0 γ ω 2 , z = C z y 0 sin 2 ω ξ 1 , C z = e E 0 8 γ ω , γ 2 = m 2 c 2 + e 2 E 0 2 2 ω 2 , {\displaystyle {\begin{aligned}x&={\text{const}},\\[1ex]y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {ecE_{0}}{\gamma \omega ^{2}}},\\[1ex]z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {eE_{0}}{8\gamma \omega }},\\[1ex]\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}E_{0}^{2}}{2\omega ^{2}}},\end{aligned}}} p x = 0 , p y = p y , 0 sin ω ξ 1 , p y , 0 = e E 0 ω , p z = − 2 C z p y , 0 cos 2 ω ξ 1 {\displaystyle {\begin{aligned}p_{x}&=0,\\[1ex]p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {eE_{0}}{\omega }},\\[1ex]p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1}\end{aligned}}} implying a figure-8 particle trajectory with its long axis oriented along the electric field vector E {\displaystyle E} . ==== An electromagnetic wave with a solenoidal magnetic field ==== For the electromagnetic wave with axial (solenoidal) magnetic field: E = E ϕ = ω ρ 0 c B 0 cos ω ξ 1 , {\displaystyle E=E_{\phi }={\frac {\omega \rho _{0}}{c}}B_{0}\cos \omega \xi _{1},} A ϕ = − ρ 0 B 0 sin ω ξ 1 = − L s π ρ 0 N s I 0 sin ω ξ 1 , {\displaystyle A_{\phi }=-\rho _{0}B_{0}\sin \omega \xi _{1}=-{\frac {L_{s}}{\pi \rho _{0}N_{s}}}I_{0}\sin \omega \xi _{1},} hence x = constant , y = y 0 cos ω ξ 1 , y 0 = − e ρ 0 B 0 γ ω , z = C z y 0 sin 2 ω ξ 1 , C z = e ρ 0 B 0 8 c γ , γ 2 = m 2 c 2 + e 2 ρ 0 2 B 0 2 2 c 2 , {\displaystyle {\begin{aligned}x&={\text{constant}},\\y&=y_{0}\cos \omega \xi _{1},&y_{0}&=-{\frac {e\rho _{0}B_{0}}{\gamma \omega }},\\z&=C_{z}y_{0}\sin 2\omega \xi _{1},&C_{z}&={\frac {e\rho _{0}B_{0}}{8c\gamma }},\\\gamma ^{2}&=m^{2}c^{2}+{\frac {e^{2}\rho _{0}^{2}B_{0}^{2}}{2c^{2}}},\end{aligned}}} p x = 0 , p y = p y , 0 sin ω ξ 1 , p y , 0 = e ρ 0 B 0 c , p z = − 2 C z p y , 0 cos 2 ω ξ 1 , {\displaystyle {\begin{aligned}p_{x}&=0,\\p_{y}&=p_{y,0}\sin \omega \xi _{1},&p_{y,0}&={\frac {e\rho _{0}B_{0}}{c}},\\p_{z}&=-2C_{z}p_{y,0}\cos 2\omega \xi _{1},\end{aligned}}} where B 0 {\displaystyle B_{0}} is the magnetic field magnitude in a solenoid with the effective radius ρ 0 {\displaystyle \rho _{0}} , inductance L s {\displaystyle L_{s}} , number of windings N s {\displaystyle N_{s}} , and an electric current magnitude I 0 {\displaystyle I_{0}} through the solenoid windings. The particle motion occurs along a figure-8 trajectory in the y z {\displaystyle yz} plane, set perpendicular to the solenoid axis at an arbitrary azimuthal angle φ {\displaystyle \varphi } , owing to the axial symmetry of the solenoidal magnetic field. == See also == == References == == Further reading == Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). New York: Springer. ISBN 0-387-96890-3. Hamilton, W. (1833). "On a General Method of Expressing the Paths of Light, and of the Planets, by the Coefficients of a Characteristic Function" (PDF). Dublin University Review: 795–826. Hamilton, W. (1834). "On the Application to Dynamics of a General Mathematical Method previously Applied to Optics" (PDF). British Association Report: 513–518. Fetter, A. & Walecka, J. (2003).
Theoretical Mechanics of Particles and Continua. Dover Books. ISBN 978-0-486-43261-8. Landau, L. D.; Lifshitz, E. M. (1975). Mechanics. Amsterdam: Elsevier. Sakurai, J. J. (1985). Modern Quantum Mechanics. Benjamin/Cummings Publishing. ISBN 978-0-8053-7501-5. Jacobi, C. G. J. (1884), Vorlesungen über Dynamik, C. G. J. Jacobi's Gesammelte Werke (in German), Berlin: G. Reimer, OL 14009561M Nakane, Michiyo; Fraser, Craig G. (2002). "The Early History of Hamilton-Jacobi Dynamics". Centaurus. 44 (3–4): 161–227. doi:10.1111/j.1600-0498.2002.tb00613.x. PMID 17357243.
In physics, the Coriolis force is a pseudo force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system. In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each sidereal day, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean, or where high precision is important, such as artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator ("clockwise") and to the left of this direction south of it ("anticlockwise"). 
This effect is responsible for the rotation and thus formation of cyclones (see: Coriolis effects in meteorology). == History == Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749, and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778. Gaspard-Gustave de Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one. The effect was known in the early 20th century as the "acceleration of Coriolis", and by 1920 as "Coriolis force". In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds. The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood. == Formula == In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is: F = m a {\displaystyle \mathbf {F} =m\mathbf {a} } where F {\displaystyle \mathbf {F} } is the vector sum of the physical forces acting on the object, m {\displaystyle m} is the mass of the object, and a {\displaystyle \mathbf {a} } is the acceleration of the object relative to the inertial reference frame. 
Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity $\boldsymbol{\omega}$ having variable rotation rate, the equation takes the form $$\mathbf{F}' = \mathbf{F} - m\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t}\times\mathbf{r}' - 2m\,\boldsymbol{\omega}\times\mathbf{v}' - m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}') = m\mathbf{a}'$$ where the prime (′) variables denote coordinates of the rotating reference frame (not a derivative), $\mathbf{F}$ is the vector sum of the physical forces acting on the object, $\boldsymbol{\omega}$ is the angular velocity of the rotating reference frame relative to the inertial frame, $\mathbf{r}'$ is the position vector of the object relative to the rotating reference frame, $\mathbf{v}'$ is the velocity of the object relative to the rotating reference frame, and $\mathbf{a}'$ is the acceleration of the object relative to the rotating reference frame. The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces. The fictitious force terms of the equation are, reading from left to right: the Euler force $-m\frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t}\times\mathbf{r}'$, the Coriolis force $-2m(\boldsymbol{\omega}\times\mathbf{v}')$, and the centrifugal force $-m\,\boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}')$. As seen in these formulas the Euler and centrifugal forces depend on the position vector $\mathbf{r}'$ of the object, while the Coriolis force depends on the object's velocity $\mathbf{v}'$ as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference ($\boldsymbol{\omega} = 0$) the Coriolis force and all other fictitious forces disappear.
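To make the three fictitious-force terms concrete, here is a minimal numerical sketch (not part of the original article). It assumes Python with NumPy; the function name and example values are illustrative only.

```python
import numpy as np

def fictitious_forces(m, omega, domega_dt, r_p, v_p):
    """Fictitious force terms seen in a frame rotating with angular velocity omega.

    m         : mass of the object (kg)
    omega     : angular velocity vector of the rotating frame (rad/s)
    domega_dt : time derivative of omega (rad/s^2)
    r_p, v_p  : position (m) and velocity (m/s) relative to the rotating frame
    """
    euler       = -m * np.cross(domega_dt, r_p)
    coriolis    = -2.0 * m * np.cross(omega, v_p)
    centrifugal = -m * np.cross(omega, np.cross(omega, r_p))
    return euler, coriolis, centrifugal

# A frame spinning at a constant 1 rad/s about z; a unit mass 1 m out on the
# x axis, moving radially outward at 1 m/s.
e, c, f = fictitious_forces(1.0, np.array([0.0, 0.0, 1.0]), np.zeros(3),
                            np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(e)  # [0 0 0]   no Euler force, since the rotation rate is constant
print(c)  # [0 -2 0]  Coriolis force opposing the local direction of rotation
print(f)  # [1 0 0]   centrifugal force pointing radially outward
```

The Coriolis result here matches the "outward motion" case described next: a body moving straight out from the axis is pushed against the direction of rotation.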
=== Direction of Coriolis force for simple cases === As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that: if the velocity is parallel to the rotation axis, the Coriolis force is zero. For example, on Earth, this situation occurs for a body at the equator moving north or south relative to the Earth's surface. (At any latitude other than the equator, however, the north–south motion would have a component perpendicular to the rotation axis and a force specified by the inward or outward cases mentioned below). if the velocity is straight inward to the axis, the Coriolis force is in the direction of local rotation. For example, on Earth, this situation occurs for a body at the equator falling downward, as in the Dechales illustration above, where the falling ball travels further to the east than does the tower. Note also that heading north in the northern hemisphere would have a velocity component toward the rotation axis, resulting in a Coriolis force to the east (more pronounced the further north one is). if the velocity is straight outward from the axis, the Coriolis force is against the direction of local rotation. In the tower example, a ball launched upward would move toward the west. if the velocity is in the direction of rotation, the Coriolis force is outward from the axis. For example, on Earth, this situation occurs for a body at the equator moving east relative to Earth's surface. It would move upward as seen by an observer on the surface. This effect (see Eötvös effect below) was discussed by Galileo Galilei in 1632 and by Riccioli in 1651. if the velocity is against the direction of rotation, the Coriolis force is inward to the axis. For example, on Earth, this situation occurs for a body at the equator moving west, which would deflect downward as seen by an observer. == Intuitive explanation == For an intuitive explanation of the origin of the Coriolis force, consider an object, constrained to follow the Earth's surface and moving northward in the Northern Hemisphere. Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the "radius of its parallel (latitude)" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion). Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). However, the theory that the effect determines the rotation of draining water in a household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small compared to the many other influences on the rotation. == Length scales and the Rossby number == The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number (Ro), which is the ratio of the velocity $U$ of a system to the product of the Coriolis parameter $f = 2\omega\sin\varphi$ and the length scale $L$ of the motion: $\mathrm{Ro} = \frac{U}{fL}$. Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system is strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, so in them the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; there, the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable. An atmospheric system moving at U = 10 m/s (22 mph) occupying a spatial distance of L = 1,000 km (621 mi) has a Rossby number of approximately 0.1. A baseball pitcher may throw the ball at U = 45 m/s (100 mph) for a distance of L = 18.3 m (60 ft). The Rossby number in this case would be 32,000 (at latitude 31°47'46.382").
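Both estimates can be checked directly. The following sketch (not from the original article; the function name is illustrative) assumes Python with NumPy:

```python
import numpy as np

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def rossby(U, L, lat_deg):
    """Rossby number Ro = U / (f L), with f = 2*omega*sin(latitude)."""
    f = 2 * OMEGA * np.sin(np.radians(lat_deg))
    return U / (f * L)

# Mid-latitude weather system: U = 10 m/s over L = 1000 km.
print(rossby(10.0, 1.0e6, 45.0))   # ~0.1  -> rotation dominates the dynamics
# Pitched baseball: U = 45 m/s over L = 18.3 m at latitude ~31.8 degrees.
print(rossby(45.0, 18.3, 31.796))  # ~3.2e4 -> Coriolis force utterly negligible
```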
Hence the Coriolis effect has no practical bearing on a pitched baseball, in either hemisphere. However, an unguided missile obeys exactly the same physics as a baseball, but can travel far enough and be in the air long enough to experience the effect of the Coriolis force. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) == Simple cases == === Tossed ball on a rotating carousel === The figures illustrate a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. In the first figure, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line slightly to the right of the center, because it had an initial tangential velocity given by the rotation (blue arrow) and a radial velocity given by the thrower (green arrow). The resulting combined velocity is shown as a solid red line, and the trajectory is shown as a dotted red line. In the second figure, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock, and the ball trajectory has a slight curve. === Bounced ball === The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of the Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight). On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target, and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from its direction of travel on both inward and return trajectories. The curved path requires this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection. The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1.
From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied. == Applied to the Earth == The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term $-2\,\boldsymbol{\omega}\times\mathbf{v}$. This component is orthogonal to the velocity over the Earth's surface and is given by the expression $2\,\omega\,v\,\sin\phi$, where $\omega$ is the spin rate of the Earth and $\phi$ is the latitude, positive in the northern hemisphere and negative in the southern hemisphere. In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere. === Rotating sphere === Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order east (e), north (n) and upward (u)) are $$\boldsymbol{\Omega} = \omega\begin{pmatrix}0\\\cos\varphi\\\sin\varphi\end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix}v_e\\v_n\\v_u\end{pmatrix}, \quad \mathbf{a}_C = -2\boldsymbol{\Omega}\times\mathbf{v} = 2\,\omega\begin{pmatrix}v_n\sin\varphi - v_u\cos\varphi\\ -v_e\sin\varphi\\ v_e\cos\varphi\end{pmatrix}.$$ When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration ($2\,\omega\,v_e\cos\varphi$) is small compared with the acceleration due to gravity (g, approximately 9.81 m/s² (32.2 ft/s²) near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting $v_u = 0$): $$\mathbf{v} = \begin{pmatrix}v_e\\v_n\end{pmatrix}, \quad \mathbf{a}_C = \begin{pmatrix}v_n\\-v_e\end{pmatrix} f,$$ where $f = 2\omega\sin\varphi$ is called the Coriolis parameter. By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation.
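These local components can be evaluated directly from the cross product. The sketch below (not from the original article; the function name is illustrative) assumes Python with NumPy and works in the east/north/up axes just defined:

```python
import numpy as np

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def coriolis_enu(v_enu, lat_deg):
    """Coriolis acceleration a = -2 * Omega x v in local east/north/up axes."""
    lat = np.radians(lat_deg)
    Omega = OMEGA * np.array([0.0, np.cos(lat), np.sin(lat)])
    return -2.0 * np.cross(Omega, v_enu)

# 10 m/s due north at 45 N: the acceleration points east,
# i.e. a deflection to the right of the motion.
print(coriolis_enu(np.array([0.0, 10.0, 0.0]), 45.0))
# 10 m/s due east at the equator: the horizontal component vanishes and the
# acceleration is purely upward (the Eotvos effect discussed next).
print(coriolis_enu(np.array([10.0, 0.0, 0.0]), 0.0))
```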
In the case of equatorial motion, setting φ = 0° yields $$\boldsymbol{\Omega} = \omega\begin{pmatrix}0\\1\\0\end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix}v_e\\v_n\\v_u\end{pmatrix}, \quad \mathbf{a}_C = -2\boldsymbol{\Omega}\times\mathbf{v} = 2\,\omega\begin{pmatrix}-v_u\\0\\v_e\end{pmatrix}.$$ $\boldsymbol{\Omega}$ in this case is parallel to the north–south axis. Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west. === Meteorology and oceanography === Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible. Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force also affects the movement of ocean currents and cyclones. Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. The spiralling wind pattern helps the hurricane form. The stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane. Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around low-pressure systems rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient. ==== Flow around a low-pressure area ==== If a low-pressure area forms in the atmosphere, air tends to flow in towards it, but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure. Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of "inertial motions" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region. ==== Inertial circles ==== An air or water mass moving with speed $v$ subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius $R$ is given by $R = \frac{v}{f}$, where $f = 2\Omega\sin\varphi$ is the Coriolis parameter introduced above ($\varphi$ is the latitude). The time taken for the mass to complete a full circle is therefore $2\pi/f$. The Coriolis parameter typically has a mid-latitude value of about $10^{-4}\,\mathrm{s}^{-1}$; hence for a typical atmospheric speed of 10 m/s (22 mph), the radius is 100 km (62 mi) with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s (0.22 mph), the radius of an inertial circle is 1 km (0.6 mi). These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anticlockwise in the southern hemisphere. If the rotating system is a parabolic turntable, then $f$ is constant and the trajectories are exact circles. On a rotating planet, $f$ varies with latitude and the paths of particles do not form exact circles. Since the parameter $f$ varies as the sine of the latitude, the radius of the oscillations associated with a given speed is smallest at the poles (latitude of ±90°), and increases toward the equator.
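The quoted radii and period follow directly from these formulas. A minimal numerical check (not part of the original article; names illustrative), assuming Python with NumPy:

```python
import numpy as np

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def inertial_circle(v, lat_deg):
    """Radius (m) and period (s) of an inertial circle at the given latitude."""
    f = 2 * OMEGA * np.sin(np.radians(lat_deg))
    return v / f, 2 * np.pi / f

# Atmospheric flow at 10 m/s in mid-latitudes (45 N):
R, T = inertial_circle(10.0, 45.0)
print(R / 1000, T / 3600)   # ~97 km radius, ~17 h period
# Ocean current at 10 cm/s:
R, T = inertial_circle(0.10, 45.0)
print(R / 1000)             # ~1 km radius
```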
==== Other terrestrial effects ==== The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance. === Eötvös effect === The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion. There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable. In addition, objects traveling upwards (i.e. out) or downwards (i.e. in) are deflected to the west or east respectively. This effect is also greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect the tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading. ==== Intuitive example ==== Imagine a train that travels through a frictionless railway line along the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others (1 day is represented by 8 s): The train travels toward the west: In that case, it moves against the direction of rotation. Therefore, on the Earth's rotating frame the Coriolis term is pointed inwards towards the axis of rotation (down). This additional force downwards should cause the train to be heavier while moving in that direction. If one looks at this train from the fixed non-rotating frame on top of the center of the Earth, at that speed it remains stationary as the Earth spins beneath it. Hence, the only force acting on it is gravity and the reaction from the track. This force is greater (by 0.34%) than the force that the passengers and the train experience when at rest (rotating along with Earth). This difference is what the Coriolis effect accounts for in the rotating frame of reference. The train comes to a stop: From the point of view on the Earth's rotating frame, the velocity of the train is zero, thus the Coriolis force is also zero and the train and its passengers recuperate their usual weight. From the fixed inertial frame of reference above Earth, the train now rotates along with the rest of the Earth. 0.34% of the force of gravity provides the centripetal force needed to achieve the circular motion on that frame of reference. The remaining force, as measured by a scale, makes the train and passengers "lighter" than in the previous case. The train travels east: In this case, because it moves in the direction of Earth's rotation, the Coriolis term is directed outward from the axis of rotation (up). This upward force makes the train seem lighter still than when at rest.
From the fixed inertial frame of reference above Earth, the train traveling east now rotates at twice the rate as when it was at rest, so the amount of centripetal force needed to cause that circular path increases, leaving less force from gravity to act on the track. This is what the Coriolis term accounts for in the previous paragraph. As a final check, one can imagine a frame of reference rotating along with the train. Such a frame would be rotating at twice the angular velocity of Earth's rotating frame. The resulting centrifugal force component for that imaginary frame would be greater. Since the train and its passengers are at rest, that would be the only component in that frame, explaining again why the train and the passengers are lighter than in the previous two cases. This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect. The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). If the westward train in the above example increases speed, part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion on the inertial frame. Once the train doubles its westward speed, to 930 m/s (2,100 mph), that centripetal force becomes equal to the force the train experiences when it stops. From the inertial frame, in both cases it rotates at the same speed but in opposite directions. Thus the force is the same, cancelling the Eötvös effect completely. Any object that moves westward at a speed above 930 m/s (2,100 mph) experiences an upward force instead. In the figure, the Eötvös effect is illustrated for a 10-kilogram (22 lb) object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. On the inertial frame, the bottom of the parabola is centered at the origin. The offset is because this argument uses the Earth's rotating frame of reference. The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed.
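The numbers in the train example can be reproduced with a short calculation. In the rotating frame, the vertical Coriolis term $2\omega v$ (upward for eastward motion) combines with the extra centripetal demand $v^2/R$ of the curved equatorial path. The sketch below is not part of the original article and assumes Python with NumPy; the constants and function name are illustrative:

```python
OMEGA = 7.292e-5   # Earth's rotation rate (rad/s)
R_EQ  = 6.378e6    # Earth's equatorial radius (m)
G     = 9.81       # gravitational acceleration (m/s^2)

def apparent_weight_change(v_east):
    """Fractional change in apparent weight for motion along the equator.

    Positive values mean the object becomes lighter than at rest on Earth.
    """
    return (2 * OMEGA * v_east + v_east**2 / R_EQ) / G

for v in (-930.0, -465.0, 0.0, 465.0):
    print(v, apparent_weight_change(v))
# -930 m/s -> ~0.0    the Eotvos effect is fully cancelled
# -465 m/s -> -0.0035 westward train ~0.34-0.35% heavier
#    0 m/s ->  0.0    reference weight at rest on Earth
# +465 m/s -> +0.0104 eastward train noticeably lighter
```

The parabolic shape of the printed values in $v$ mirrors the parabola described for the figure above.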
=== Draining in bathtubs and toilets === Contrary to popular misconception, bathtubs, toilets, and other water receptacles do not drain in opposite directions in the Northern and Southern Hemispheres. This is because the magnitude of the Coriolis force is negligible at this scale. Forces determined by the initial conditions of the water (e.g. the geometry of the drain, the geometry of the receptacle, preexisting momentum of the water, etc.) are likely to be orders of magnitude greater than the Coriolis force and hence will determine the direction of water rotation, if any. For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl. Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, can the Coriolis effect determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction, such as any residual rotation of the water and the geometry of the container. ==== Laboratory testing of draining water under atypical conditions ==== In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, 2 meters (6 ft 7 in) across, with a small wooden cross above the plug hole to display the direction of rotation; he covered the basin and waited at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to the Coriolis effect is only $3\times10^{-7}$ that of gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. The basin took 20 minutes to drain, and the cross began turning only after about 15 minutes; by the end it was turning at roughly one rotation every 3 to 4 seconds. He reported that: "Both schools of thought are in some sense correct. For the everyday observations of the kitchen sink and bath-tub variety, the direction of the vortex seems to vary in an unpredictable manner with the date, the time of day, and the particular household of the experimenter. But under well-controlled conditions of experimentation, the observer looking downward at a drain in the northern hemisphere will always see a counter-clockwise vortex, while one in the southern hemisphere will always see a clockwise vortex. In a properly designed experiment, the vortex is produced by Coriolis forces, which are counter-clockwise in the northern hemisphere." Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 hours or more. === Ballistic trajectories === The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km (75 mi). The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a 1,000 yd (910 m) northward shot would be deflected 2.8 in (71 mm) to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low and eastward shots to hit high. The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. The projection of the three-dimensional curved surface of the Earth onto a two-dimensional surface (the map) necessarily results in distorted features. The apparent curvature of the path is a consequence of the sphericity of the Earth and would occur even in a non-rotating frame.
The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction that the gun is initially pointing), vertical, and cross-range: $$A_X = -2\omega\left(V_Y\cos\theta_{lat}\sin\phi_{az} + V_Z\sin\theta_{lat}\right)$$ $$A_Y = 2\omega\left(V_X\cos\theta_{lat}\sin\phi_{az} + V_Z\cos\theta_{lat}\cos\phi_{az}\right)$$ $$A_Z = 2\omega\left(V_X\sin\theta_{lat} - V_Y\cos\theta_{lat}\cos\phi_{az}\right)$$ where $A_X$ is the down-range acceleration, $A_Y$ the vertical acceleration (positive indicating acceleration upward), $A_Z$ the cross-range acceleration (positive indicating acceleration to the right), $V_X$ the down-range velocity, $V_Y$ the vertical velocity (positive indicating upward), $V_Z$ the cross-range velocity (positive indicating velocity to the right), $\omega$ = 0.00007292 rad/s the angular velocity of the Earth (based on a sidereal day), $\theta_{lat}$ the latitude (positive indicating the Northern Hemisphere), and $\phi_{az}$ the azimuth measured clockwise from due north.
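As a rough numerical illustration (not from the original article), these accelerations can be evaluated for the Sacramento example. The sketch assumes Python with NumPy and treats the bullet velocity as a constant 760 m/s, which is an assumption; real bullets decelerate in flight, lengthening the flight time, which is consistent with the somewhat larger 71 mm figure quoted above:

```python
import numpy as np

OMEGA = 7.292e-5  # Earth's angular velocity (rad/s), sidereal day

def coriolis_accel(vx, vy, vz, lat_deg, az_deg):
    """Coriolis acceleration in down-range/vertical/cross-range axes (m/s^2)."""
    lat, az = np.radians(lat_deg), np.radians(az_deg)
    ax  = -2 * OMEGA * (vy * np.cos(lat) * np.sin(az) + vz * np.sin(lat))
    ay  =  2 * OMEGA * (vx * np.cos(lat) * np.sin(az) + vz * np.cos(lat) * np.cos(az))
    az_ =  2 * OMEGA * (vx * np.sin(lat) - vy * np.cos(lat) * np.cos(az))
    return ax, ay, az_

# Shot due north (azimuth 0) at roughly Sacramento's latitude (~38.6 N):
v, lat = 760.0, 38.6
_, _, a_cross = coriolis_accel(v, 0.0, 0.0, lat, 0.0)
t = 914.4 / v                        # time of flight over 1,000 yd (914.4 m)
print(0.5 * a_cross * t**2 * 1000)   # rightward drift in mm, order of 50 mm
```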
== Visualization == To demonstrate the Coriolis effect, a parabolic turntable can be used. On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation. Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame. Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth. In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.) The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum. == In other areas == === Coriolis flow meter === A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid. === Molecular physics === In polyatomic molecules, the molecular motion can be described by a rigid body rotation and internal vibration of atoms about their equilibrium positions. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present, and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined. === Gyroscopic precession === When an external torque is applied to a spinning gyroscope along an axis that is at right angles to the spin axis, the rim velocity that is associated with the spin becomes radially directed in relation to the external torque axis. This causes a torque-induced force to act on the rim in such a way as to tilt the gyroscope at right angles to the direction that the external torque would have tilted it. This tendency has the effect of keeping spinning bodies in their rotational frame. === Insect flight === Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell-shaped organs located just behind their wings called "halteres". The fly's halteres oscillate in a plane at the same beat frequency as the main wings so that any body rotation results in lateral deviation of the halteres from their plane of motion.
In moths, their antennae are known to be responsible for the sensing of Coriolis forces in a similar manner as with the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane. === Lagrangian point stability === In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits. == See also == === Physics and meteorology === === Historical === == References == == External links ==
Wikipedia/Coriolis_force
In classical mechanics, impulse (symbolized by J or Imp) is the change in momentum of an object. If the initial momentum of an object is $\mathbf{p}_1$, and a subsequent momentum is $\mathbf{p}_2$, the object has received an impulse J: $\mathbf{J} = \mathbf{p}_2 - \mathbf{p}_1$. Momentum is a vector quantity, so impulse is also a vector quantity: $\sum\mathbf{F}\,\Delta t = \Delta\mathbf{p}$. Newton's second law of motion states that the rate of change of momentum of an object is equal to the resultant force F acting on the object, $\mathbf{F} = \frac{\mathbf{p}_2 - \mathbf{p}_1}{\Delta t}$, so the impulse J delivered by a steady force F acting for time Δt is $\mathbf{J} = \mathbf{F}\,\Delta t$. The impulse delivered by a varying force acting from time a to b is the integral of the force F with respect to time: $\mathbf{J} = \int_a^b \mathbf{F}\,\mathrm{d}t$. The SI unit of impulse is the newton second (N⋅s), and the dimensionally equivalent unit of momentum is the kilogram metre per second (kg⋅m/s). The corresponding English engineering unit is the pound-second (lbf⋅s), and in the British Gravitational System, the unit is the slug-foot per second (slug⋅ft/s). == Mathematical derivation in the case of an object of constant mass == Impulse J produced from time t1 to t2 is defined to be $\mathbf{J} = \int_{t_1}^{t_2}\mathbf{F}\,\mathrm{d}t$, where F is the resultant force applied from t1 to t2. From Newton's second law, force is related to momentum p by $\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}$. Therefore, $$\mathbf{J} = \int_{t_1}^{t_2}\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}\,\mathrm{d}t = \int_{\mathbf{p}_1}^{\mathbf{p}_2}\mathrm{d}\mathbf{p} = \mathbf{p}_2 - \mathbf{p}_1 = \Delta\mathbf{p},$$ where Δp is the change in linear momentum from time t1 to t2. This is often called the impulse-momentum theorem (analogous to the work-energy theorem). As a result, an impulse may also be regarded as the change in momentum of an object to which a resultant force is applied. The impulse may be expressed in a simpler form when the mass is constant: $\mathbf{J} = \int_{t_1}^{t_2}\mathbf{F}\,\mathrm{d}t = \Delta\mathbf{p} = m\mathbf{v}_2 - m\mathbf{v}_1$, where F is the resultant force applied, t1 and t2 are times when the impulse begins and ends, respectively, m is the mass of the object, v2 is the final velocity of the object at the end of the time interval, and v1 is the initial velocity of the object when the time interval begins. Impulse has the same units and dimensions (MLT⁻¹) as momentum. In the International System of Units, these are kg⋅m/s = N⋅s. In English engineering units, they are slug⋅ft/s = lbf⋅s. The term "impulse" is also used to refer to a fast-acting force or impact. This type of impulse is often idealized so that the change in momentum produced by the force happens with no change in time. This sort of change is a step change, and is not physically possible. However, this is a useful model for computing the effects of ideal collisions (such as in videogame physics engines).
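The integral definition lends itself to direct numerical evaluation. The sketch below is not from the original article; it assumes Python with NumPy, and the force profile and function name are illustrative:

```python
import numpy as np

def impulse(force, t1, t2, n=100_000):
    """Impulse J = integral of force(t) dt from t1 to t2 (trapezoidal rule)."""
    t = np.linspace(t1, t2, n)
    y = force(t)
    return np.sum((y[1:] + y[:-1]) * 0.5 * (t[1] - t[0]))

# A half-sine force pulse peaking at 100 N over 0.01 s, roughly like a short impact:
F_peak, tau = 100.0, 0.01
J = impulse(lambda t: F_peak * np.sin(np.pi * t / tau), 0.0, tau)
print(J)       # ~0.637 N*s (analytic value: 2*F_peak*tau/pi)
m = 0.145      # mass of a baseball (kg), for scale
print(J / m)   # change in speed Delta v = J/m, ~4.4 m/s
```

The last line uses the constant-mass form J = mΔv given above.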
Additionally, in rocketry, the term "total impulse" is commonly used and is considered synonymous with the term "impulse". == Variable mass == The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for jet- or rocket-propelled vehicles. In the case of rockets, the impulse imparted can be normalized by unit of propellant expended, to create a performance parameter, specific impulse. This fact can be used to derive the Tsiolkovsky rocket equation, which relates the vehicle's propulsive change in velocity to the engine's specific impulse (or nozzle exhaust velocity) and the vehicle's propellant-mass ratio. == See also == Wave–particle duality defines the impulse of a wave collision. The preservation of momentum in the collision is then called phase matching. Applications include: Compton effect Nonlinear optics Acousto-optic modulator Electron phonon scattering Dirac delta function, mathematical abstraction of a pure impulse == Notes == == References == == External links ==
Wikipedia/Impulse_(physics)
A pendulum is a body suspended from a fixed support such that it freely swings back and forth under the influence of gravity. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back towards the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging it back and forth. The mathematics of pendulums are in general quite complicated. Simplifying assumptions can be made, which in the case of a simple pendulum allow the equations of motion to be solved analytically for small-angle oscillations. == Simple gravity pendulum == A simple gravity pendulum is an idealized mathematical model of a real pendulum. It is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Since in the model there is no frictional energy loss, when given an initial displacement it swings back and forth with a constant amplitude. The model is based on the assumptions: The rod or cord is massless, inextensible and always remains under tension. The bob is a point mass. The motion occurs in two dimensions. The motion does not lose energy to external friction or air resistance. The gravitational field is uniform. The support is immobile. The differential equation which governs the motion of a simple pendulum is $$\frac{d^2\theta}{dt^2} + \frac{g}{\ell}\sin\theta = 0 \qquad (\text{Eq. }1)$$ where g is the magnitude of the gravitational field, ℓ is the length of the rod or cord, and θ is the angle from the vertical to the pendulum. == Small-angle approximation == The differential equation given above is not easily solved, and there is no solution that can be written in terms of elementary functions. However, adding a restriction to the size of the oscillation's amplitude gives a form whose solution can be easily obtained. If it is assumed that the angle is much less than 1 radian (often cited as less than 0.1 radians, about 6°), or $\theta \ll 1$, then substituting for sin θ into Eq. 1 using the small-angle approximation $\sin\theta \approx \theta$ yields the equation for a harmonic oscillator, $\frac{d^2\theta}{dt^2} + \frac{g}{\ell}\theta = 0$. The error due to the approximation is of order θ³ (from the Taylor expansion for sin θ). Let the starting angle be θ0. If it is assumed that the pendulum is released with zero angular velocity, the solution becomes $\theta(t) = \theta_0\cos\left(\sqrt{\tfrac{g}{\ell}}\,t\right)$. The motion is simple harmonic motion where θ0 is the amplitude of the oscillation (that is, the maximum angle between the rod of the pendulum and the vertical). The corresponding approximate period of the motion is then $T_0 = 2\pi\sqrt{\tfrac{\ell}{g}}$, which is known as Christiaan Huygens's law for the period. Note that under the small-angle approximation, the period is independent of the amplitude θ0; this is the property of isochronism that Galileo discovered. === Rule of thumb for pendulum length === Solving $T_0 = 2\pi\sqrt{\tfrac{\ell}{g}}$ for ℓ gives $\ell = \frac{g}{\pi^2}\frac{T_0^2}{4}$. If SI units are used (i.e. measure in metres and seconds), and assuming the measurement is taking place on the Earth's surface, then g ≈ 9.81 m/s², and g/π² ≈ 1 m/s² (0.994 is the approximation to 3 decimal places). Therefore, relatively reasonable approximations for the length and period are $\ell \approx \frac{T_0^2}{4}$ and $T_0 \approx 2\sqrt{\ell}$, where T0 is the number of seconds between two beats (one beat for each side of the swing), and ℓ is measured in metres.
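A quick numerical check of Huygens's law and the rule of thumb (not part of the original article; assumes Python with NumPy, and the function name is illustrative):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def period_small_angle(length):
    """Small-angle pendulum period T0 = 2*pi*sqrt(l/g)."""
    return 2 * np.pi * np.sqrt(length / G)

print(period_small_angle(1.0))    # ~2.006 s for a 1 m pendulum
print(period_small_angle(0.994))  # ~2.000 s: length of a "two-second" pendulum
print(2 * np.sqrt(1.0))           # rule of thumb T0 ~ 2*sqrt(l) gives 2.0 s
```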
== Arbitrary-amplitude period == For amplitudes beyond the small angle approximation, one can compute the exact period by first inverting the equation for the angular velocity obtained from the energy method (Eq. 2), $\frac{dt}{d\theta} = \sqrt{\frac{\ell}{2g}}\,\frac{1}{\sqrt{\cos\theta - \cos\theta_0}}$, and then integrating over one complete cycle, $T = t(\theta_0 \rightarrow 0 \rightarrow -\theta_0 \rightarrow 0 \rightarrow \theta_0)$, or twice the half-cycle, $T = 2\,t(\theta_0 \rightarrow 0 \rightarrow -\theta_0)$, or four times the quarter-cycle, $T = 4\,t(\theta_0 \rightarrow 0)$, which leads to $$T = 4\sqrt{\frac{\ell}{2g}}\int_0^{\theta_0}\frac{d\theta}{\sqrt{\cos\theta - \cos\theta_0}}.$$ Note that this integral diverges as θ0 approaches the vertical, $\lim_{\theta_0\to\pi} T = \infty$, so that a pendulum with just the right energy to go vertical will never actually get there. (Conversely, a pendulum close to its maximum can take an arbitrarily long time to fall down.) This integral can be rewritten in terms of elliptic integrals as $T = 4\sqrt{\frac{\ell}{g}}\,F\!\left(\frac{\pi}{2}, \sin\frac{\theta_0}{2}\right)$, where F is the incomplete elliptic integral of the first kind defined by $F(\varphi, k) = \int_0^{\varphi}\frac{du}{\sqrt{1 - k^2\sin^2 u}}$. Or more concisely, by the substitution $\sin u = \frac{\sin\frac{\theta}{2}}{\sin\frac{\theta_0}{2}}$ expressing θ in terms of u, $$T = 4\sqrt{\frac{\ell}{g}}\,K\!\left(\sin\frac{\theta_0}{2}\right) \qquad (\text{Eq. }3)$$ where K is the complete elliptic integral of the first kind defined by $K(k) = F\!\left(\frac{\pi}{2}, k\right) = \int_0^{\pi/2}\frac{du}{\sqrt{1 - k^2\sin^2 u}}$. For comparison of the approximation to the full solution, consider the period of a pendulum of length 1 m on Earth (g = 9.80665 m/s²) at an initial angle of 10 degrees: $4\sqrt{\frac{1\text{ m}}{g}}\,K\!\left(\sin\frac{10^\circ}{2}\right) \approx 2.0102\text{ s}$. The linear approximation gives $2\pi\sqrt{\frac{1\text{ m}}{g}} \approx 2.0064\text{ s}$. The difference between the two values, less than 0.2%, is much less than that caused by the variation of g with geographical location. From here there are many ways to proceed to calculate the elliptic integral.
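One direct route is numerical evaluation of Eq. 3. The sketch below (not from the original article) assumes Python with SciPy; note that SciPy's `ellipk` takes the parameter m = k², not the modulus k used in the document's definition of K:

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral, parameter m = k**2

G = 9.80665  # standard gravity (m/s^2)

def period_exact(length, theta0):
    """Exact pendulum period T = 4*sqrt(l/g)*K(k), with k = sin(theta0/2)."""
    k = np.sin(theta0 / 2)
    return 4 * np.sqrt(length / G) * ellipk(k**2)

theta0 = np.radians(10)
print(period_exact(1.0, theta0))      # ~2.0102 s, matching the value quoted above
print(2 * np.pi * np.sqrt(1.0 / G))   # linear approximation, ~2.0064 s
```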
=== Legendre polynomial solution for the elliptic integral === Given Eq. 3 and the Legendre polynomial solution for the elliptic integral, $K(k) = \frac{\pi}{2}\sum_{n=0}^{\infty}\left(\frac{(2n-1)!!}{(2n)!!}\,k^n\right)^2$, where $n!!$ denotes the double factorial, an exact solution to the period of a simple pendulum is: $$T = 2\pi\sqrt{\frac{\ell}{g}}\left(1 + \left(\frac{1}{2}\right)^2\sin^2\frac{\theta_0}{2} + \left(\frac{1\cdot 3}{2\cdot 4}\right)^2\sin^4\frac{\theta_0}{2} + \left(\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\right)^2\sin^6\frac{\theta_0}{2} + \cdots\right) = 2\pi\sqrt{\frac{\ell}{g}}\,\sum_{n=0}^{\infty}\left(\frac{(2n)!}{(2^n\, n!)^2}\right)^2\sin^{2n}\frac{\theta_0}{2}.$$ Figure 4 shows the relative errors using the power series. T0 is the linear approximation, and T2 to T10 include respectively the terms up to the 2nd to the 10th powers. === Power series solution for the elliptic integral === Another formulation of the above solution can be found if the following Maclaurin series, $\sin\frac{\theta_0}{2} = \frac{1}{2}\theta_0 - \frac{1}{48}\theta_0^3 + \frac{1}{3\,840}\theta_0^5 - \frac{1}{645\,120}\theta_0^7 + \cdots$, is used in the Legendre polynomial solution above. The resulting power series is: $$T = 2\pi\sqrt{\frac{\ell}{g}}\left(1 + \frac{1}{16}\theta_0^2 + \frac{11}{3\,072}\theta_0^4 + \frac{173}{737\,280}\theta_0^6 + \frac{22\,931}{1\,321\,205\,760}\theta_0^8 + \frac{1\,319\,183}{951\,268\,147\,200}\theta_0^{10} + \frac{233\,526\,463}{2\,009\,078\,326\,886\,400}\theta_0^{12} + \cdots\right);$$ more fractions are available in the On-Line Encyclopedia of Integer Sequences, with OEIS: A223067 having the numerators and OEIS: A223068 having the denominators. === Arithmetic-geometric mean solution for elliptic integral === Given Eq. 3 and the arithmetic–geometric mean solution of the elliptic integral, $K(k) = \frac{\pi}{2\,M(1-k,\,1+k)}$, where M(x, y) is the arithmetic-geometric mean of x and y, one obtains an alternative and faster-converging formula for the period: $T = \frac{2\pi}{M\!\left(1, \cos\frac{\theta_0}{2}\right)}\sqrt{\frac{\ell}{g}}$. The first iteration of this algorithm gives $T_1 = \frac{2T_0}{1 + \cos\frac{\theta_0}{2}}$. This approximation has a relative error of less than 1% for angles up to 96.11 degrees. Since $\frac{1}{2}\left(1 + \cos\frac{\theta_0}{2}\right) = \cos^2\frac{\theta_0}{4}$, the expression can be written more concisely as $T_1 = T_0\sec^2\frac{\theta_0}{4}$. The second-order expansion of $\sec^2(\theta_0/4)$ reduces to $T \approx T_0\left(1 + \frac{\theta_0^2}{16}\right)$. A second iteration of this algorithm gives $$T_2 = \frac{4T_0}{1 + \cos\frac{\theta_0}{2} + 2\sqrt{\cos\frac{\theta_0}{2}}} = \frac{4T_0}{\left(1 + \sqrt{\cos\frac{\theta_0}{2}}\right)^2}.$$ This second approximation has a relative error of less than 1% for angles up to 163.10 degrees. == Approximate formulae for the nonlinear pendulum period == Though the exact period $T$ can be determined, for any finite amplitude $\theta_0 < \pi$ rad, by evaluating the corresponding complete elliptic integral $K(k)$, where $k \equiv \sin(\theta_0/2)$, this is often avoided in applications because it is not possible to express this integral in a closed form in terms of elementary functions. This has made way for research on simple approximate formulae for the increase of the pendulum period with amplitude (useful in introductory physics labs, classical mechanics, electromagnetism, acoustics, electronics, superconductivity, etc.). The approximate formulae found by different authors can be classified as follows: 'Not so large-angle' formulae, i.e. those yielding good estimates for amplitudes below $\pi/2$ rad (a natural limit for a bob on the end of a flexible string), though the deviation with respect to the exact period increases monotonically with amplitude, being unsuitable for amplitudes near to $\pi$ rad. One of the simplest formulae found in the literature is the following one by Lima (2006): $T \approx -T_0\,\frac{\ln a}{1-a}$, where $a \equiv \cos(\theta_0/2)$. 'Very large-angle' formulae, i.e. those which approximate the exact period asymptotically for amplitudes near to $\pi$ rad, with an error that increases monotonically for smaller amplitudes (i.e., unsuitable for small amplitudes). One of the better such formulae is that by Cromer, namely $T \approx \frac{2}{\pi}\,T_0\ln(4/a)$. Of course, the increase of $T$ with amplitude is more apparent when $\pi/2 < \theta_0 < \pi$, as has been observed in many experiments using either a rigid rod or a disc. As accurate timers and sensors are currently available even in introductory physics labs, the experimental errors found in 'very large-angle' experiments are already small enough for a comparison with the exact period, and a very good agreement between theory and experiments in which friction is negligible has been found. Since this activity has been encouraged by many instructors, a simple approximate formula for the pendulum period valid for all possible amplitudes, to which experimental data could be compared, was sought. In 2008, Lima derived a weighted-average formula with this characteristic: $T \approx \frac{r\,a^2\,T_{\text{Lima}} + k^2\,T_{\text{Cromer}}}{r\,a^2 + k^2}$, where $r = 7.17$, which presents a maximum error of only 0.6% (at $\theta_0 = 95^\circ$).
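The Lima and Cromer formulae can be compared against the exact elliptic-integral period in a few lines. The sketch below is not from the original article; it assumes Python with SciPy, expresses everything in units of the linear period T0, and uses illustrative function names:

```python
import numpy as np
from scipy.special import ellipk

def T_exact(theta0):
    """Exact period in units of T0: T/T0 = (2/pi) * K(sin(theta0/2))."""
    return (2 / np.pi) * ellipk(np.sin(theta0 / 2) ** 2)  # ellipk takes m = k**2

def T_lima(theta0):
    """Lima (2006): good below ~pi/2."""
    a = np.cos(theta0 / 2)
    return -np.log(a) / (1 - a)

def T_cromer(theta0):
    """Cromer: asymptotically exact as theta0 -> pi."""
    a = np.cos(theta0 / 2)
    return (2 / np.pi) * np.log(4 / a)

for deg in (30, 90, 150):
    t0 = np.radians(deg)
    print(deg, T_exact(t0), T_lima(t0), T_cromer(t0))
# Lima tracks the exact value closely at 30 and 90 degrees;
# Cromer converges to it near 180 degrees.
```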
== Arbitrary-amplitude angular displacement == The Fourier series expansion of θ ( t ) {\displaystyle \theta (t)} is given by θ ( t ) = 8 ∑ n ≥ 1 odd ( − 1 ) ⌊ n / 2 ⌋ n q n / 2 1 + q n cos ⁡ ( n ω t ) {\displaystyle \theta (t)=8\sum _{n\geq 1{\text{ odd}}}{\frac {(-1)^{\left\lfloor {n/2}\right\rfloor }}{n}}{\frac {q^{n/2}}{1+q^{n}}}\cos(n\omega t)} where q {\displaystyle q} is the elliptic nome, q = exp ⁡ ( − π K ( 1 − k 2 ) / K ( k ) ) , {\displaystyle q=\exp \left({-\pi K{\bigl (}{\sqrt {\textstyle 1-k^{2}}}{\bigr )}{\big /}K(k)}\right),} k = sin ⁡ ( θ 0 / 2 ) , {\displaystyle k=\sin(\theta _{0}/2),} and ω = 2 π / T {\displaystyle \omega =2\pi /T} the angular frequency. If one defines ε = 1 2 ⋅ 1 − cos ⁡ ( θ 0 / 2 ) 1 + cos ⁡ ( θ 0 / 2 ) {\displaystyle \varepsilon ={\frac {1}{2}}\cdot {\frac {1-{\sqrt {\cos(\theta _{0}/2)}}}{1+{\sqrt {\cos(\theta _{0}/2)}}}}} q {\displaystyle q} can be approximated using the expansion q = ε + 2 ε 5 + 15 ε 9 + 150 ε 13 + 1707 ε 17 + 20910 ε 21 + ⋯ {\displaystyle q=\varepsilon +2\varepsilon ^{5}+15\varepsilon ^{9}+150\varepsilon ^{13}+1707\varepsilon ^{17}+20910\varepsilon ^{21}+\cdots } (see OEIS: A002103). Note that ε < 1 2 {\displaystyle \varepsilon <{\tfrac {1}{2}}} for θ 0 < π {\displaystyle \theta _{0}<\pi } , thus the approximation is applicable even for large amplitudes. Equivalently, the angle can be given in terms of the Jacobi elliptic function cd {\displaystyle \operatorname {cd} } with modulus k {\displaystyle k} θ ( t ) = 2 arcsin ⁡ ( k cd ⁡ ( g ℓ t ; k ) ) , k = sin ⁡ θ 0 2 . {\displaystyle \theta (t)=2\arcsin \left(k\operatorname {cd} \left({\sqrt {\frac {g}{\ell }}}t;k\right)\right),\quad k=\sin {\frac {\theta _{0}}{2}}.} For small x {\displaystyle x} , sin ⁡ x ≈ x {\displaystyle \sin x\approx x} , arcsin ⁡ x ≈ x {\displaystyle \arcsin x\approx x} and cd ⁡ ( t ; 0 ) = cos ⁡ t {\displaystyle \operatorname {cd} (t;0)=\cos t} , so the solution is well-approximated by the solution given in Pendulum (mechanics)#Small-angle approximation. == Examples == The animations below depict the motion of a simple (frictionless) pendulum with increasing amounts of initial displacement of the bob, or equivalently increasing initial velocity. The small graph above each pendulum is the corresponding phase plane diagram; the horizontal axis is displacement and the vertical axis is velocity. With a large enough initial velocity the pendulum does not oscillate back and forth but rotates completely around the pivot. == Compound pendulum == A compound pendulum (or physical pendulum) is one where the rod is not massless, and may have extended size; that is, an arbitrarily shaped rigid body swinging by a pivot O {\displaystyle O} . In this case the pendulum's period depends on its moment of inertia I O {\displaystyle I_{O}} around the pivot point. The equation of torque gives: τ = I α {\displaystyle \tau =I\alpha } where: α {\displaystyle \alpha } is the angular acceleration. 
τ {\displaystyle \tau } is the torque. The torque is generated by gravity, so: τ = − m g r ⊕ sin ⁡ θ {\displaystyle \tau =-mgr_{\oplus }\sin \theta } where: m {\displaystyle m} is the total mass of the rigid body (rod and bob) r ⊕ {\displaystyle r_{\oplus }} is the distance from the pivot point to the system's centre-of-mass θ {\displaystyle \theta } is the angle from the vertical Hence, under the small-angle approximation, sin ⁡ θ ≈ θ {\displaystyle \sin \theta \approx \theta } (or equivalently when θ m a x ≪ 1 {\displaystyle \theta _{\mathrm {max} }\ll 1} ), α = θ ¨ = − m g r ⊕ I O sin ⁡ θ ≈ − m g r ⊕ I O θ {\displaystyle \alpha ={\ddot {\theta }}=-{\frac {mgr_{\oplus }}{I_{O}}}\sin \theta \approx -{\frac {mgr_{\oplus }}{I_{O}}}\theta } where I O {\displaystyle I_{O}} is the moment of inertia of the body about the pivot point O {\displaystyle O} . The expression for α {\displaystyle \alpha } is of the same form as the conventional simple pendulum and gives a period of T = 2 π I O m g r ⊕ {\displaystyle T=2\pi {\sqrt {\frac {I_{O}}{mgr_{\oplus }}}}} and a frequency of f = 1 T = 1 2 π m g r ⊕ I O {\displaystyle f={\frac {1}{T}}={\frac {1}{2\pi }}{\sqrt {\frac {mgr_{\oplus }}{I_{O}}}}} If the initial angle is taken into consideration (for large amplitudes), then the expression for α {\displaystyle \alpha } becomes: α = θ ¨ = − m g r ⊕ I O sin ⁡ θ {\displaystyle \alpha ={\ddot {\theta }}=-{\frac {mgr_{\oplus }}{I_{O}}}\sin \theta } and gives a period of: T = 4 K ⁡ ( sin ⁡ θ m a x 2 ) I O m g r ⊕ {\displaystyle T=4\operatorname {K} \left(\sin {\frac {\theta _{\mathrm {max} }}{2}}\right){\sqrt {\frac {I_{O}}{mgr_{\oplus }}}}} where θ m a x {\displaystyle \theta _{\mathrm {max} }} is the maximum angle of oscillation (with respect to the vertical) and K ⁡ ( k ) {\displaystyle \operatorname {K} (k)} is the complete elliptic integral of the first kind with modulus k = sin ⁡ ( θ m a x / 2 ) {\displaystyle k=\sin(\theta _{\mathrm {max} }/2)} , as above. An important concept is the equivalent length, ℓ e q {\displaystyle \ell ^{\mathrm {eq} }} , the length of a simple pendulum that has the same angular frequency ω 0 {\displaystyle \omega _{0}} as the compound pendulum: ω 0 2 = g ℓ e q := m g r ⊕ I O ⟹ ℓ e q = I O m r ⊕ {\displaystyle {\omega _{0}}^{2}={\frac {g}{\ell ^{\mathrm {eq} }}}:={\frac {mgr_{\oplus }}{I_{O}}}\implies \ell ^{\mathrm {eq} }={\frac {I_{O}}{mr_{\oplus }}}} Consider the following cases: The simple pendulum is the special case where all the mass is located at the bob swinging at a distance ℓ {\displaystyle \ell } from the pivot. Thus, r ⊕ = ℓ {\displaystyle r_{\oplus }=\ell } and I O = m ℓ 2 {\displaystyle I_{O}=m\ell ^{2}} , so the expression reduces to: ω 0 2 = m g r ⊕ I O = m g ℓ m ℓ 2 = g ℓ {\displaystyle {\omega _{0}}^{2}={\frac {mgr_{\oplus }}{I_{O}}}={\frac {mg\ell }{m\ell ^{2}}}={\frac {g}{\ell }}} . Notice ℓ e q = ℓ {\displaystyle \ell ^{\mathrm {eq} }=\ell } , as expected (the definition of equivalent length). A homogeneous rod of mass m {\displaystyle m} and length ℓ {\displaystyle \ell } swinging from its end has r ⊕ = 1 2 ℓ {\displaystyle r_{\oplus }={\frac {1}{2}}\ell } and I O = 1 3 m ℓ 2 {\displaystyle I_{O}={\frac {1}{3}}m\ell ^{2}} , so the expression reduces to: ω 0 2 = m g r ⊕ I O = m g 1 2 ℓ 1 3 m ℓ 2 = g 2 3 ℓ {\displaystyle {\omega _{0}}^{2}={\frac {mgr_{\oplus }}{I_{O}}}={\frac {mg\,{\frac {1}{2}}\ell }{{\frac {1}{3}}m\ell ^{2}}}={\frac {g}{{\frac {2}{3}}\ell }}} . Notice ℓ e q = 2 3 ℓ {\displaystyle \ell ^{\mathrm {eq} }={\frac {2}{3}}\ell } : a homogeneous rod oscillates as if it were a simple pendulum of two-thirds its length. 
A heavy simple pendulum: combination of a homogeneous rod of mass m r o d {\displaystyle m_{\mathrm {rod} }} and length ℓ {\displaystyle \ell } swinging from its end, and a bob m b o b {\displaystyle m_{\mathrm {bob} }} at the other end. Then the system has a total mass of m b o b + m r o d {\displaystyle m_{\mathrm {bob} }+m_{\mathrm {rod} }} , and the other parameters being m r ⊕ = m b o b ℓ + m r o d ℓ 2 {\displaystyle mr_{\oplus }=m_{\mathrm {bob} }\ell +m_{\mathrm {rod} }{\frac {\ell }{2}}} (by definition of centre-of-mass) and I O = m b o b ℓ 2 + 1 3 m r o d ℓ 2 {\displaystyle I_{O}=m_{\mathrm {bob} }\ell ^{2}+{\frac {1}{3}}m_{\mathrm {rod} }\ell ^{2}} , so the expression reduces to: ω 0 2 = m g r ⊕ I O = ( m b o b ℓ + m r o d ℓ 2 ) g m b o b ℓ 2 + 1 3 m r o d ℓ 2 = g ℓ m b o b + m r o d 2 m b o b + m r o d 3 = g ℓ 1 + m r o d 2 m b o b 1 + m r o d 3 m b o b {\displaystyle {\omega _{0}}^{2}={\frac {mgr_{\oplus }}{I_{O}}}={\frac {\left(m_{\mathrm {bob} }\ell +m_{\mathrm {rod} }{\frac {\ell }{2}}\right)g}{m_{\mathrm {bob} }\ell ^{2}+{\frac {1}{3}}m_{\mathrm {rod} }\ell ^{2}}}={\frac {g}{\ell }}{\frac {m_{\mathrm {bob} }+{\frac {m_{\mathrm {rod} }}{2}}}{m_{\mathrm {bob} }+{\frac {m_{\mathrm {rod} }}{3}}}}={\frac {g}{\ell }}{\frac {1+{\frac {m_{\mathrm {rod} }}{2m_{\mathrm {bob} }}}}{1+{\frac {m_{\mathrm {rod} }}{3m_{\mathrm {bob} }}}}}} Where ℓ e q = ℓ 1 + m r o d 3 m b o b 1 + m r o d 2 m b o b {\displaystyle \ell ^{\mathrm {eq} }=\ell {\frac {1+{\frac {m_{\mathrm {rod} }}{3m_{\mathrm {bob} }}}}{1+{\frac {m_{\mathrm {rod} }}{2m_{\mathrm {bob} }}}}}} . Notice these formulae can be particularized into the two previous cases studied before just by considering the mass of the rod or the bob to be zero respectively. Also notice that the formula does not depend on both the mass of the bob and the rod, but actually on their ratio, m r o d m b o b {\displaystyle {\frac {m_{\mathrm {rod} }}{m_{\mathrm {bob} }}}} . An approximation can be made for m r o d m b o b ≪ 1 {\displaystyle {\frac {m_{\mathrm {rod} }}{m_{\mathrm {bob} }}}\ll 1} : ω 0 2 ≈ g ℓ ( 1 + 1 6 m r o d m b o b + ⋯ ) {\displaystyle {\omega _{0}}^{2}\approx {\frac {g}{\ell }}\left(1+{\frac {1}{6}}{\frac {m_{\mathrm {rod} }}{m_{\mathrm {bob} }}}+\cdots \right)} Notice how similar it is to the angular frequency in a spring-mass system with effective mass. == Damped, driven pendulum == The above discussion focuses on a pendulum bob only acted upon by the force of gravity. Suppose a damping force, e.g. air resistance, as well as a sinusoidal driving force acts on the body. This system is a damped, driven oscillator, and is chaotic. Equation (1) can be written as m l 2 d 2 θ d t 2 = − m g l sin ⁡ θ {\displaystyle ml^{2}{\frac {d^{2}\theta }{dt^{2}}}=-mgl\sin \theta } (see the Torque derivation of Equation (1) above). A damping term and forcing term can be added to the right hand side to get m l 2 d 2 θ d t 2 = − m g l sin ⁡ θ − b d θ d t + a cos ⁡ ( Ω t ) {\displaystyle ml^{2}{\frac {d^{2}\theta }{dt^{2}}}=-mgl\sin \theta -b{\frac {d\theta }{dt}}+a\cos(\Omega t)} where the damping is assumed to be directly proportional to the angular velocity (this is true for low-speed air resistance, see also Drag (physics)). a {\displaystyle a} and b {\displaystyle b} are constants defining the amplitude of forcing and the degree of damping respectively. Ω {\textstyle \Omega } is the angular frequency of the driving oscillations. Dividing through by m l 2 {\textstyle ml^{2}} : d 2 θ d t 2 + b m l 2 d θ d t + g l sin ⁡ θ − a m l 2 cos ⁡ ( Ω t ) = 0. 
{\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+{\frac {b}{ml^{2}}}{\frac {d\theta }{dt}}+{\frac {g}{l}}{\sin \theta }-{\frac {a}{ml^{2}}}\cos(\Omega t)=0.} For a physical pendulum: d 2 θ d t 2 + b I d θ d t + m g r ⊕ I sin ⁡ θ − a I cos ⁡ ( Ω t ) = 0. {\displaystyle {\frac {d^{2}\theta }{dt^{2}}}+{\frac {b}{I}}{\frac {d\theta }{dt}}+{\frac {mgr_{\oplus }}{I}}{\sin \theta }-{\frac {a}{I}}\cos(\Omega t)=0.} This equation exhibits chaotic behaviour. The exact motion of this pendulum can only be found numerically and is highly dependent on initial conditions, e.g. the initial velocity and the starting amplitude. However, the small angle approximation outlined above can still be used under the required conditions to give an approximate analytical solution. == Physical interpretation of the imaginary period == The Jacobian elliptic function that expresses the position of a pendulum as a function of time is a doubly periodic function with a real period and an imaginary period. The real period is, of course, the time it takes the pendulum to go through one full cycle. Paul Appell pointed out a physical interpretation of the imaginary period: if θ0 is the maximum angle of one pendulum and 180° − θ0 is the maximum angle of another, then the real period of each is the magnitude of the imaginary period of the other. == Coupled pendula == Coupled pendulums can affect each other's motion, either through a direct connection (such as a spring connecting the bobs) or through motions in a supporting structure (such as a tabletop). The equations of motion for two identical simple pendulums coupled by a spring connecting the bobs can be obtained using Lagrangian mechanics. The kinetic energy of the system is: E K = 1 2 m L 2 ( θ ˙ 1 2 + θ ˙ 2 2 ) {\displaystyle E_{\text{K}}={\frac {1}{2}}mL^{2}\left({\dot {\theta }}_{1}^{2}+{\dot {\theta }}_{2}^{2}\right)} where m {\displaystyle m} is the mass of the bobs, L {\displaystyle L} is the length of the strings, and θ 1 {\displaystyle \theta _{1}} , θ 2 {\displaystyle \theta _{2}} are the angular displacements of the two bobs from equilibrium. The potential energy of the system is: E p = m g L ( 2 − cos ⁡ θ 1 − cos ⁡ θ 2 ) + 1 2 k L 2 ( θ 2 − θ 1 ) 2 {\displaystyle E_{\text{p}}=mgL(2-\cos \theta _{1}-\cos \theta _{2})+{\frac {1}{2}}kL^{2}(\theta _{2}-\theta _{1})^{2}} where g {\displaystyle g} is the gravitational acceleration, and k {\displaystyle k} is the spring constant. The displacement L ( θ 2 − θ 1 ) {\displaystyle L(\theta _{2}-\theta _{1})} of the spring from its equilibrium position assumes the small angle approximation. 
The Lagrangian is then L = 1 2 m L 2 ( θ ˙ 1 2 + θ ˙ 2 2 ) − m g L ( 2 − cos ⁡ θ 1 − cos ⁡ θ 2 ) − 1 2 k L 2 ( θ 2 − θ 1 ) 2 {\displaystyle {\mathcal {L}}={\frac {1}{2}}mL^{2}\left({\dot {\theta }}_{1}^{2}+{\dot {\theta }}_{2}^{2}\right)-mgL(2-\cos \theta _{1}-\cos \theta _{2})-{\frac {1}{2}}kL^{2}(\theta _{2}-\theta _{1})^{2}} which leads to the following set of coupled differential equations: θ ¨ 1 + g L sin ⁡ θ 1 + k m ( θ 1 − θ 2 ) = 0 θ ¨ 2 + g L sin ⁡ θ 2 − k m ( θ 1 − θ 2 ) = 0 {\displaystyle {\begin{aligned}{\ddot {\theta }}_{1}+{\frac {g}{L}}\sin \theta _{1}+{\frac {k}{m}}(\theta _{1}-\theta _{2})&=0\\{\ddot {\theta }}_{2}+{\frac {g}{L}}\sin \theta _{2}-{\frac {k}{m}}(\theta _{1}-\theta _{2})&=0\end{aligned}}} Adding and subtracting these two equations in turn, and applying the small angle approximation, gives two harmonic oscillator equations in the variables θ 1 + θ 2 {\displaystyle \theta _{1}+\theta _{2}} and θ 1 − θ 2 {\displaystyle \theta _{1}-\theta _{2}} : θ ¨ 1 + θ ¨ 2 + g L ( θ 1 + θ 2 ) = 0 θ ¨ 1 − θ ¨ 2 + ( g L + 2 k m ) ( θ 1 − θ 2 ) = 0 {\displaystyle {\begin{aligned}{\ddot {\theta }}_{1}+{\ddot {\theta }}_{2}+{\frac {g}{L}}(\theta _{1}+\theta _{2})&=0\\{\ddot {\theta }}_{1}-{\ddot {\theta }}_{2}+\left({\frac {g}{L}}+2{\frac {k}{m}}\right)(\theta _{1}-\theta _{2})&=0\end{aligned}}} with the corresponding solutions θ 1 + θ 2 = A cos ⁡ ( ω 1 t + α ) θ 1 − θ 2 = B cos ⁡ ( ω 2 t + β ) {\displaystyle {\begin{aligned}\theta _{1}+\theta _{2}&=A\cos(\omega _{1}t+\alpha )\\\theta _{1}-\theta _{2}&=B\cos(\omega _{2}t+\beta )\end{aligned}}} where ω 1 = g L ω 2 = g L + 2 k m {\displaystyle {\begin{aligned}\omega _{1}&={\sqrt {\frac {g}{L}}}\\\omega _{2}&={\sqrt {{\frac {g}{L}}+2{\frac {k}{m}}}}\end{aligned}}} and A {\displaystyle A} , B {\displaystyle B} , α {\displaystyle \alpha } , β {\displaystyle \beta } are constants of integration. Expressing the solutions in terms of θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} alone: θ 1 = 1 2 A cos ⁡ ( ω 1 t + α ) + 1 2 B cos ⁡ ( ω 2 t + β ) θ 2 = 1 2 A cos ⁡ ( ω 1 t + α ) − 1 2 B cos ⁡ ( ω 2 t + β ) {\displaystyle {\begin{aligned}\theta _{1}&={\frac {1}{2}}A\cos(\omega _{1}t+\alpha )+{\frac {1}{2}}B\cos(\omega _{2}t+\beta )\\\theta _{2}&={\frac {1}{2}}A\cos(\omega _{1}t+\alpha )-{\frac {1}{2}}B\cos(\omega _{2}t+\beta )\end{aligned}}} If the bobs are not given an initial push, then the condition θ ˙ 1 ( 0 ) = θ ˙ 2 ( 0 ) = 0 {\displaystyle {\dot {\theta }}_{1}(0)={\dot {\theta }}_{2}(0)=0} requires α = β = 0 {\displaystyle \alpha =\beta =0} , which gives (after some rearranging): A = θ 1 ( 0 ) + θ 2 ( 0 ) B = θ 1 ( 0 ) − θ 2 ( 0 ) {\displaystyle {\begin{aligned}A&=\theta _{1}(0)+\theta _{2}(0)\\B&=\theta _{1}(0)-\theta _{2}(0)\end{aligned}}} == See also == Harmonograph Conical pendulum Cycloidal pendulum Double pendulum Inverted pendulum Kapitza's pendulum Rayleigh–Lorentz pendulum Elastic pendulum Mathieu function Pendulum equations (software) == References == == Further reading == Baker, Gregory L.; Blackburn, James A. (2005). The Pendulum: A Physics Case Study (PDF). Oxford University Press. Ochs, Karlheinz (2011). "A comprehensive analytical solution of the nonlinear pendulum". European Journal of Physics. 32 (2): 479–490. Bibcode:2011EJPh...32..479O. doi:10.1088/0143-0807/32/2/019. S2CID 53621685. Sala, Kenneth L. (1989). "Transformations of the Jacobian Amplitude Function and its Calculation via the Arithmetic-Geometric Mean". SIAM J. Math. Anal. 20 (6): 1514–1528. doi:10.1137/0520100. 
== External links == Mathworld article on Mathieu Function
Wikipedia/Pendulum_(mechanics)
In classical mechanics, Appell's equation of motion (also known as the Gibbs–Appell equation of motion) is an alternative general formulation of classical mechanics described by Josiah Willard Gibbs in 1879 and Paul Émile Appell in 1900. == Statement == The Gibbs–Appell equation reads Q r = ∂ S ∂ α r , {\displaystyle Q_{r}={\frac {\partial S}{\partial \alpha _{r}}},} where α r = q ¨ r {\displaystyle \alpha _{r}={\ddot {q}}_{r}} is an arbitrary generalized acceleration, or the second time derivative of the generalized coordinates q r {\displaystyle q_{r}} , and Q r {\displaystyle Q_{r}} is its corresponding generalized force. The generalized force gives the work done d W = ∑ r = 1 D Q r d q r , {\displaystyle dW=\sum _{r=1}^{D}Q_{r}dq_{r},} where the index r {\displaystyle r} runs over the D {\displaystyle D} generalized coordinates q r {\displaystyle q_{r}} , which usually correspond to the degrees of freedom of the system. The function S {\displaystyle S} is defined as the mass-weighted sum of the particle accelerations squared, S = 1 2 ∑ k = 1 N m k a k 2 , {\displaystyle S={\frac {1}{2}}\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}^{2}\,,} where the index k {\displaystyle k} runs over the N {\displaystyle N} particles, and a k = r ¨ k = d 2 r k d t 2 {\displaystyle \mathbf {a} _{k}={\ddot {\mathbf {r} }}_{k}={\frac {d^{2}\mathbf {r} _{k}}{dt^{2}}}} is the acceleration of the k {\displaystyle k} -th particle, the second time derivative of its position vector r k {\displaystyle \mathbf {r} _{k}} . Each r k {\displaystyle \mathbf {r} _{k}} is expressed in terms of generalized coordinates, and a k {\displaystyle \mathbf {a} _{k}} is expressed in terms of the generalized accelerations. == Relations to other formulations of classical mechanics == Appell's formulation does not introduce any new physics to classical mechanics and as such is equivalent to other reformulations of classical mechanics, such as Lagrangian mechanics and Hamiltonian mechanics. All classical mechanics is contained within Newton's laws of motion. In some cases, Appell's equation of motion may be more convenient than the commonly used Lagrangian mechanics, particularly when nonholonomic constraints are involved. In fact, Appell's equation leads directly to Lagrange's equations of motion. Moreover, it can be used to derive Kane's equations, which are particularly suited for describing the motion of complex spacecraft. Appell's formulation is an application of Gauss' principle of least constraint. == Derivation == The change in the particle positions rk for an infinitesimal change in the D generalized coordinates is d r k = ∑ r = 1 D d q r ∂ r k ∂ q r {\displaystyle d\mathbf {r} _{k}=\sum _{r=1}^{D}dq_{r}{\frac {\partial \mathbf {r} _{k}}{\partial q_{r}}}} Taking two derivatives with respect to time yields an equivalent equation for the accelerations ∂ a k ∂ α r = ∂ r k ∂ q r {\displaystyle {\frac {\partial \mathbf {a} _{k}}{\partial \alpha _{r}}}={\frac {\partial \mathbf {r} _{k}}{\partial q_{r}}}} The work done by an infinitesimal change dqr in the generalized coordinates is d W = ∑ r = 1 D Q r d q r = ∑ k = 1 N F k ⋅ d r k = ∑ k = 1 N m k a k ⋅ d r k {\displaystyle dW=\sum _{r=1}^{D}Q_{r}dq_{r}=\sum _{k=1}^{N}\mathbf {F} _{k}\cdot d\mathbf {r} _{k}=\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot d\mathbf {r} _{k}} where Newton's second law for the kth particle F k = m k a k {\displaystyle \mathbf {F} _{k}=m_{k}\mathbf {a} _{k}} has been used. 
Substituting the formula for drk and swapping the order of the two summations yields the formulae d W = ∑ r = 1 D Q r d q r = ∑ k = 1 N m k a k ⋅ ∑ r = 1 D d q r ( ∂ r k ∂ q r ) = ∑ r = 1 D d q r ∑ k = 1 N m k a k ⋅ ( ∂ r k ∂ q r ) {\displaystyle dW=\sum _{r=1}^{D}Q_{r}dq_{r}=\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot \sum _{r=1}^{D}dq_{r}\left({\frac {\partial \mathbf {r} _{k}}{\partial q_{r}}}\right)=\sum _{r=1}^{D}dq_{r}\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot \left({\frac {\partial \mathbf {r} _{k}}{\partial q_{r}}}\right)} Therefore, the generalized forces are Q r = ∑ k = 1 N m k a k ⋅ ( ∂ r k ∂ q r ) = ∑ k = 1 N m k a k ⋅ ( ∂ a k ∂ α r ) {\displaystyle Q_{r}=\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot \left({\frac {\partial \mathbf {r} _{k}}{\partial q_{r}}}\right)=\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot \left({\frac {\partial \mathbf {a} _{k}}{\partial \alpha _{r}}}\right)} This equals the derivative of S with respect to the generalized accelerations ∂ S ∂ α r = ∂ ∂ α r 1 2 ∑ k = 1 N m k | a k | 2 = ∑ k = 1 N m k a k ⋅ ( ∂ a k ∂ α r ) {\displaystyle {\frac {\partial S}{\partial \alpha _{r}}}={\frac {\partial }{\partial \alpha _{r}}}{\frac {1}{2}}\sum _{k=1}^{N}m_{k}\left|\mathbf {a} _{k}\right|^{2}=\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot \left({\frac {\partial \mathbf {a} _{k}}{\partial \alpha _{r}}}\right)} yielding Appell's equation of motion ∂ S ∂ α r = Q r . {\displaystyle {\frac {\partial S}{\partial \alpha _{r}}}=Q_{r}.} == Examples == === Euler's equations of rigid body dynamics === Euler's equations provide an excellent illustration of Appell's formulation. Consider a rigid body of N particles joined by rigid rods. The rotation of the body may be described by an angular velocity vector ω {\displaystyle {\boldsymbol {\omega }}} , and the corresponding angular acceleration vector α = d ω d t {\displaystyle {\boldsymbol {\alpha }}={\frac {d{\boldsymbol {\omega }}}{dt}}} The generalized force for a rotation is the torque N {\displaystyle {\textbf {N}}} , since the work done for an infinitesimal rotation δ ϕ {\displaystyle \delta {\boldsymbol {\phi }}} is d W = N ⋅ δ ϕ {\displaystyle dW=\mathbf {N} \cdot \delta {\boldsymbol {\phi }}} . 
The velocity of the k {\displaystyle k} -th particle is given by v k = ω × r k {\displaystyle \mathbf {v} _{k}={\boldsymbol {\omega }}\times \mathbf {r} _{k}} where r k {\displaystyle \mathbf {r} _{k}} is the particle's position in Cartesian coordinates; its corresponding acceleration is a k = d v k d t = α × r k + ω × v k {\displaystyle \mathbf {a} _{k}={\frac {d\mathbf {v} _{k}}{dt}}={\boldsymbol {\alpha }}\times \mathbf {r} _{k}+{\boldsymbol {\omega }}\times \mathbf {v} _{k}} Therefore, the function S {\displaystyle S} may be written as S = 1 2 ∑ k = 1 N m k ( a k ⋅ a k ) = 1 2 ∑ k = 1 N m k { ( α × r k ) 2 + ( ω × v k ) 2 + 2 ( α × r k ) ⋅ ( ω × v k ) } {\displaystyle S={\frac {1}{2}}\sum _{k=1}^{N}m_{k}\left(\mathbf {a} _{k}\cdot \mathbf {a} _{k}\right)={\frac {1}{2}}\sum _{k=1}^{N}m_{k}\left\{\left({\boldsymbol {\alpha }}\times \mathbf {r} _{k}\right)^{2}+\left({\boldsymbol {\omega }}\times \mathbf {v} _{k}\right)^{2}+2\left({\boldsymbol {\alpha }}\times \mathbf {r} _{k}\right)\cdot \left({\boldsymbol {\omega }}\times \mathbf {v} _{k}\right)\right\}} Setting the derivative of S with respect to α {\displaystyle {\boldsymbol {\alpha }}} equal to the torque yields Euler's equations I x x α x − ( I y y − I z z ) ω y ω z = N x {\displaystyle I_{xx}\alpha _{x}-\left(I_{yy}-I_{zz}\right)\omega _{y}\omega _{z}=N_{x}} I y y α y − ( I z z − I x x ) ω z ω x = N y {\displaystyle I_{yy}\alpha _{y}-\left(I_{zz}-I_{xx}\right)\omega _{z}\omega _{x}=N_{y}} I z z α z − ( I x x − I y y ) ω x ω y = N z {\displaystyle I_{zz}\alpha _{z}-\left(I_{xx}-I_{yy}\right)\omega _{x}\omega _{y}=N_{z}} == See also == Principle of stationary action Analytical mechanics == References == == Further reading == Pars, LA (1965). A Treatise on Analytical Dynamics. Woodbridge, Connecticut: Ox Bow Press. pp. 197–227, 631–632. Whittaker, ET (1937). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, with an Introduction to the Problem of Three Bodies (4th ed.). New York: Dover Publications. ISBN. Seeger (1930). "Appell's equations". Journal of the Washington Academy of Sciences. 20: 481–484. Brell, H (1913). "Nachweis der Aquivalenz des verallgemeinerten Prinzipes der kleinsten Aktion mit dem Prinzip des kleinsten Zwanges". Wien. Sitz. 122: 933–944. Connection of Appell's formulation with the principle of least action. PDF copy of Appell's article at Goettingen University PDF copy of a second article on Appell's equations and Gauss's principle
Wikipedia/Appell's_equation_of_motion
A fictitious force, also known as an inertial force or pseudo-force, is a force that appears to act on an object when its motion is described or experienced from a non-inertial frame of reference. Unlike real forces, which result from physical interactions between objects, fictitious forces occur due to the acceleration of the observer’s frame of reference rather than any actual force acting on a body. These forces are necessary for describing motion correctly within an accelerating frame, ensuring that Newton's second law of motion remains applicable. Common examples of fictitious forces include the centrifugal force, which appears to push objects outward in a rotating system; the Coriolis force, which affects moving objects in a rotating frame such as the Earth; and the Euler force, which arises when a rotating system changes its angular velocity. While these forces are not real in the sense of being caused by physical interactions, they are essential for accurately analyzing motion within accelerating reference frames, particularly in disciplines such as classical mechanics, meteorology, and astrophysics. Fictitious forces play a crucial role in understanding everyday phenomena, such as weather patterns influenced by the Coriolis effect and the perceived weightlessness experienced by astronauts in free-fall orbits. They are also fundamental in engineering applications, including navigation systems and rotating machinery. According to the general theory of relativity, gravitational force is a manifestation of the curvature of spacetime near massive objects, so even gravity might be regarded as a fictitious force. == Measurable examples of fictitious forces == Passengers in a vehicle accelerating in the forward direction, for instance, may perceive that they are acted upon by a force pushing them back into their seats. An example in a rotating reference frame is the impression of a force that seems to move objects outward toward the rim of a centrifuge or carousel. The fictitious force called a pseudo force might also be referred to as a body force. It is due to an object's inertia when the reference frame no longer moves inertially but begins to accelerate relative to the free object. In terms of the example of the passenger vehicle, a pseudo force seems to be active just before the body touches the backrest of the seat in the car. A person in the car leaning forward first moves a bit backward in relation to the already accelerating car before touching the backrest. The motion in this short period seems to be the result of a force on the person; i.e., it is a pseudo force. A pseudo force does not arise from any physical interaction between two objects, such as electromagnetism or contact forces. It is only a consequence of the acceleration of the physical object the non-inertial reference frame is connected to, i.e. the vehicle in this case. From the viewpoint of the respective accelerating frame, an acceleration of the inert object appears to be present, apparently requiring a "force" for this to have happened. As stated by Iro: Such an additional force due to nonuniform relative motion of two reference frames is called a pseudo-force. The pseudo force on an object arises as an imaginary influence when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. The pseudo force "explains", within the framework of Newton's second law, the apparent acceleration of an object on which no real force acts, so that the law remains usable in the accelerating frame. 
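The vehicle example can be made quantitative. The following one-dimensional Python sketch (illustrative only, not part of the original article; the mass and acceleration values are arbitrary) tracks a free object from the frame of an accelerating car and shows that the apparent motion is exactly what a pseudo force of −ma would produce:

# A free object (no real force on it) watched from a car that accelerates at a_frame.
m = 70.0        # mass of the object (kg), arbitrary
a_frame = 3.0   # forward acceleration of the car (m/s^2), arbitrary

def x_object(t):
    # Inertial-frame position of the free object: it simply stays at rest.
    return 0.0

def x_car(t):
    # Inertial-frame position of the car's origin, starting from rest.
    return 0.5 * a_frame * t ** 2

for t in (0.0, 1.0, 2.0, 3.0):
    # Position of the object in the car frame: x' = -a_frame * t^2 / 2,
    # i.e. the object appears to accelerate rearward at -a_frame.
    print(t, x_object(t) - x_car(t))

# Newton's second law is restored in the car frame by assigning the pseudo force:
F_pseudo = -m * a_frame   # -210 N, directed toward the rear of the car
print(F_pseudo)

The printed positions fall off as −a t²/2, so an observer in the car who insists on Newton's second law must attribute the force F_pseudo = −ma to the object, in line with the discussion above.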
As a frame may accelerate in any arbitrary way, pseudo forces may be equally arbitrary (but only in direct response to the acceleration of the frame). An example of a pseudo force as defined by Iro is the Coriolis force, perhaps better called the Coriolis effect. The gravitational force would also be a fictitious force (pseudo force) in a field model in which particles distort spacetime due to their mass, such as in the theory of general relativity. Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m. The fictitious force that has been called an inertial force is also referred to as a d'Alembert force, or sometimes as a pseudo force. D'Alembert's principle is just another way of formulating Newton's second law of motion. It defines an inertial force as the negative of the product of mass times acceleration, just for the sake of easier calculations. (A d'Alembert force is not to be confused with a contact force arising from the physical interaction between two objects, which is the subject of Newton's third law – 'action is reaction'. In terms of the example of the passenger vehicle above, a contact force emerges when the body of the passenger touches the backrest of the seat in the car. It is present for as long as the car is accelerated.) Four fictitious forces have been defined for frames accelerated in commonly occurring ways: one caused by any acceleration relative to the origin in a straight line (rectilinear acceleration); two involving rotation, the centrifugal force and the Coriolis force; and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur. == Background == The role of fictitious forces in Newtonian mechanics is described by Tonnelat: For Newton, the appearance of acceleration always indicates the existence of absolute motion – absolute motion of matter where real forces are concerned; absolute motion of the reference system, where so-called fictitious forces, such as inertial forces or those of Coriolis, are concerned. Fictitious forces arise in classical mechanics and special relativity in all non-inertial frames. Inertial frames are privileged over non-inertial frames because they do not have physics whose causes are outside of the system, while non-inertial frames do. Fictitious forces, or physics whose cause is outside of the system, are no longer necessary in general relativity, since these physics are explained with the geodesics of spacetime: "The field of all possible space-time null geodesics or photon paths unifies the absolute local non-rotation standard throughout space-time." == On Earth == The surface of the Earth is a rotating reference frame. To solve classical mechanics problems exactly in an Earthbound reference frame, three fictitious forces must be introduced: the Coriolis force, the centrifugal force (described below) and the Euler force. The Euler force is typically ignored because the variations in the angular velocity of the rotating surface of the Earth are usually insignificant. Both of the other fictitious forces are weak compared to most typical forces in everyday life, but they can be detected under careful conditions. For example, Léon Foucault used his Foucault pendulum to show that the Coriolis force results from the Earth's rotation. If the Earth were to rotate twenty times faster (making each day only ~72 minutes long), people could easily get the impression that such fictitious forces were pulling on them, as on a spinning carousel. 
People in temperate and tropical latitudes would, in fact, need to hold on, in order to avoid being launched into orbit by the centrifugal force. When moving along the equator in a ship heading in an easterly direction, objects appear to be slightly lighter than on the way back. This phenomenon has been observed and is called the Eötvös effect. == Detection of non-inertial reference frame == Observers inside a closed box that is moving with a constant velocity cannot detect their own motion; however, observers within an accelerating reference frame can detect that they are in a non-inertial reference frame from the fictitious forces that arise. For example, for straight-line acceleration Vladimir Arnold presents the following theorem: In a coordinate system K which moves by translation relative to an inertial system k, the motion of a mechanical system takes place as if the coordinate system were inertial, but on every point of mass m an additional "inertial force" acted: F = −ma, where a is the acceleration of the system K. Other accelerations also give rise to fictitious forces, as described mathematically below. The physical explanation of motions in an inertial frame is the simplest possible, requiring no fictitious forces: fictitious forces are zero, providing a means to distinguish inertial frames from others. An example of the detection of a non-inertial, rotating reference frame is the precession of a Foucault pendulum. In the non-inertial frame of the Earth, the fictitious Coriolis force is necessary to explain observations. In an inertial frame outside the Earth, no such fictitious force is necessary. == Example concerning circular motion == The effect of a fictitious force also occurs when a car takes a bend. Observed from a non-inertial frame of reference attached to the car, the fictitious force called the centrifugal force appears. As the car enters a left turn, a suitcase initially on the left rear seat slides across to the right rear seat and then continues until it comes into contact with the closed door on the right. This sliding motion is the phase attributed to the fictitious centrifugal force, since it is the inertia of the suitcase that produces the movement. It may seem that there must be a force responsible for this movement, but actually, this movement arises because of the inertia of the suitcase, which is (still) a 'free object' within an already accelerating frame of reference. After the suitcase has come into contact with the closed door of the car, contact forces come into play. The centripetal force on the car is now also transferred to the suitcase, and Newton's third law applies, with the centripetal force as the action and the so-called reactive centrifugal force as the reaction. The reactive centrifugal force is also due to the inertia of the suitcase. Now, however, the inertia appears as a resistance to a change in its state of motion. Suppose that a few miles further on the car travels around a roundabout at constant speed, lap after lap. The occupants will then feel as if they are being pushed toward the outside of the vehicle, away from the centre of the turn, by the (reactive) centrifugal force. The situation can be viewed from inertial as well as from non-inertial frames. From the viewpoint of an inertial reference frame stationary with respect to the road, the car is accelerating toward the centre of the circle. 
It is accelerating, because the direction of the velocity is changing, despite the car having constant speed. This inward acceleration is called centripetal acceleration; it requires a centripetal force to maintain the circular motion. This force is exerted by the ground upon the wheels, in this case through the friction between the wheels and the road. The car is accelerating due to this unbalanced force, which causes it to move in a circle. (See also banked turn.) From the viewpoint of a rotating frame, moving with the car, a fictitious centrifugal force appears to be present pushing the car toward the outside of the road (and pushing the occupants toward the outside of the car). The centrifugal force balances the friction between wheels and the road, making the car stationary in this non-inertial frame. A classic example of a fictitious force in circular motion is the experiment of rotating spheres tied by a cord and spinning around their centre of mass. In this case, a rotating, non-inertial frame of reference can be identified by the fictitious forces that must be introduced. In an inertial frame, fictitious forces are not necessary to explain the tension in the string joining the spheres. In a rotating frame, Coriolis and centrifugal forces must be introduced to predict the observed tension. In the rotating reference frame perceived on the surface of the Earth, a centrifugal force reduces the apparent force of gravity by about one part in a thousand, depending on latitude. This reduction is zero at the poles and maximum at the equator. The fictitious Coriolis force, which is observed in rotational frames, is ordinarily visible only in very large-scale motion like the projectile motion of long-range guns or the circulation of the Earth's atmosphere (see Rossby number). Neglecting air resistance, an object dropped from a 50-meter-high tower at the equator will fall 7.7 millimetres eastward of the spot below where it is dropped because of the Coriolis force. == Fictitious forces and work == Fictitious forces can be considered to do work, provided that they move an object on a trajectory that changes its energy from potential to kinetic. For example, consider a person in a rotating chair holding a weight in an outstretched hand. If the hand is pulled inward toward the body, then from the perspective of the rotating reference frame the person has done work against the centrifugal force. When the weight is let go, it spontaneously flies outward relative to the rotating reference frame, because the centrifugal force does work on the object, converting its potential energy into kinetic energy. From an inertial viewpoint, of course, the object flies away from the person because it is suddenly allowed to move in a straight line. This illustrates that the work done, like the total potential and kinetic energy of an object, can be different in a non-inertial frame than in an inertial one. == Gravity as a fictitious force == The notion of "fictitious force" also arises in Einstein's general theory of relativity. All fictitious forces are proportional to the mass of the object upon which they act, which is also true for gravity. This led Albert Einstein to wonder whether gravity could be modeled as a fictitious force. He noted that a freefalling observer in a closed box would not be able to detect the force of gravity; hence, freefalling reference frames are equivalent to inertial reference frames (the equivalence principle). 
Developing this insight, Einstein formulated a theory with gravity as a fictitious force, and attributed the apparent acceleration due to gravity to the curvature of spacetime. This idea underlies Einstein's theory of general relativity. See the Eötvös experiment. == Mathematical derivation of fictitious forces == === General derivation === Many problems require use of noninertial reference frames, for example, those involving satellites and particle accelerators. Figure 2 shows a particle with mass m and position vector xA(t) in a particular inertial frame A. Consider a non-inertial frame B whose origin relative to the inertial one is given by XAB(t). Let the position of the particle in frame B be xB(t). What is the force on the particle as expressed in the coordinate system of frame B? To answer this question, let the coordinate axis in B be represented by unit vectors uj with j any of { 1, 2, 3 } for the three coordinate axes. Then x B = ∑ j = 1 3 x j u j . {\displaystyle \mathbf {x} _{\mathrm {B} }=\sum _{j=1}^{3}x_{j}\mathbf {u} _{j}\,.} The interpretation of this equation is that xB is the vector displacement of the particle as expressed in terms of the coordinates in frame B at the time t. From frame A the particle is located at: x A = X A B + ∑ j = 1 3 x j u j . {\displaystyle \mathbf {x} _{\mathrm {A} }=\mathbf {X} _{\mathrm {AB} }+\sum _{j=1}^{3}x_{j}\mathbf {u} _{j}\,.} As an aside, the unit vectors { uj } cannot change magnitude, so derivatives of these vectors express only rotation of the coordinate system B. On the other hand, vector XAB simply locates the origin of frame B relative to frame A, and so cannot include rotation of frame B. Taking a time derivative, the velocity of the particle is: d x A d t = d X A B d t + ∑ j = 1 3 d x j d t u j + ∑ j = 1 3 x j d u j d t . {\displaystyle {\frac {d\mathbf {x} _{\mathrm {A} }}{dt}}={\frac {d\mathbf {X} _{\mathrm {AB} }}{dt}}+\sum _{j=1}^{3}{\frac {dx_{j}}{dt}}\mathbf {u} _{j}+\sum _{j=1}^{3}x_{j}{\frac {d\mathbf {u} _{j}}{dt}}\,.} The second term summation is the velocity of the particle, say vB as measured in frame B. That is: d x A d t = v A B + v B + ∑ j = 1 3 x j d u j d t . {\displaystyle {\frac {d\mathbf {x} _{\mathrm {A} }}{dt}}=\mathbf {v} _{\mathrm {AB} }+\mathbf {v} _{\mathrm {B} }+\sum _{j=1}^{3}x_{j}{\frac {d\mathbf {u} _{j}}{dt}}.} The interpretation of this equation is that the velocity of the particle seen by observers in frame A consists of what observers in frame B call the velocity, namely vB, plus two extra terms related to the rate of change of the frame-B coordinate axes. One of these is simply the velocity of the moving origin vAB. The other is a contribution to velocity due to the fact that different locations in the non-inertial frame have different apparent velocities due to the rotation of the frame; a point seen from a rotating frame has a rotational component of velocity that is greater the further the point is from the origin. To find the acceleration, another time differentiation provides: d 2 x A d t 2 = a A B + d v B d t + ∑ j = 1 3 d x j d t d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 . 
{\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {AB} }+{\frac {d\mathbf {v} _{\mathrm {B} }}{dt}}+\sum _{j=1}^{3}{\frac {dx_{j}}{dt}}{\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}.} Using the same formula already used for the time derivative of xB, the velocity derivative on the right is: d v B d t = ∑ j = 1 3 d v j d t u j + ∑ j = 1 3 v j d u j d t = a B + ∑ j = 1 3 v j d u j d t . {\displaystyle {\frac {d\mathbf {v} _{\mathrm {B} }}{dt}}=\sum _{j=1}^{3}{\frac {dv_{j}}{dt}}\mathbf {u} _{j}+\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}=\mathbf {a} _{\mathrm {B} }+\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}.} Consequently, d 2 x A d t 2 = a A B + a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 . {\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {AB} }+\mathbf {a} _{\mathrm {B} }+2\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}.} (Eq. 1) The interpretation of this equation is as follows: the acceleration of the particle in frame A consists of what observers in frame B call the particle acceleration aB, but in addition, there are three acceleration terms related to the movement of the frame-B coordinate axes: one term related to the acceleration of the origin of frame B, namely aAB, and two terms related to the rotation of frame B. Consequently, observers in B will see the particle motion as possessing "extra" acceleration, which they will attribute to "forces" acting on the particle, but which observers in A say are "fictitious" forces arising simply because observers in B do not recognize the non-inertial nature of frame B. The factor of two in the Coriolis force arises from two equal contributions: (i) the apparent change of an inertially constant velocity with time because rotation makes the direction of the velocity seem to change (a dvB/dt term) and (ii) an apparent change in the velocity of an object when its position changes, putting it nearer to or further from the axis of rotation (the change in ∑ x j d u j / d t {\textstyle \sum x_{j}\,d\mathbf {u} _{j}/dt} due to change in x j ). To put matters in terms of forces, the accelerations are multiplied by the particle mass: F A = F B + m a A B + 2 m ∑ j = 1 3 v j d u j d t + m ∑ j = 1 3 x j d 2 u j d t 2 . {\displaystyle \mathbf {F} _{\mathrm {A} }=\mathbf {F} _{\mathrm {B} }+m\mathbf {a} _{\mathrm {AB} }+2m\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}+m\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\ .} The force observed in frame B, FB = maB, is related to the actual force on the particle, FA, by F B = F A + F f i c t i t i o u s , {\displaystyle \mathbf {F} _{\mathrm {B} }=\mathbf {F} _{\mathrm {A} }+\mathbf {F} _{\mathrm {fictitious} },} where: F f i c t i t i o u s = − m a A B − 2 m ∑ j = 1 3 v j d u j d t − m ∑ j = 1 3 x j d 2 u j d t 2 . {\displaystyle \mathbf {F} _{\mathrm {fictitious} }=-m\mathbf {a} _{\mathrm {AB} }-2m\sum _{j=1}^{3}v_{j}{\frac {d\mathbf {u} _{j}}{dt}}-m\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\,.} Thus, problems may be solved in frame B by assuming that Newton's second law holds (with respect to quantities in that frame) and treating Ffictitious as an additional force. Below are a number of examples applying this result for fictitious forces. More examples can be found in the article on centrifugal force. === Rotating coordinate systems === A common situation in which noninertial reference frames are useful is when the reference frame is rotating. Because such rotational motion is non-inertial, due to the acceleration present in any rotational motion, a fictitious force can always be invoked by using a rotational frame of reference. 
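As a numerical plausibility check of the general expression for Ffictitious just derived (a sketch of ours, not part of the article; all parameter values are arbitrary), the following Python fragment builds a planar frame rotating at constant rate ω, differentiates its basis vectors by finite differences, and confirms that the general formula reproduces the Coriolis and centrifugal terms obtained analytically below in this subsection:

import math

m, omega, t = 2.0, 1.3, 0.8      # mass, rotation rate, sample time (all arbitrary)
xB = (0.7, -0.4)                 # particle coordinates in the rotating frame B
vB = (0.2, 0.5)                  # particle velocity components in frame B

def u(j, tau):
    # Basis vectors of frame B expressed on the inertial axes of frame A.
    c, s = math.cos(omega * tau), math.sin(omega * tau)
    return (c, s) if j == 0 else (-s, c)

def deriv(f, tau, h=1e-5):
    # Central-difference derivative of a tuple-valued function.
    fp, fm = f(tau + h), f(tau - h)
    return tuple((p - q) / (2 * h) for p, q in zip(fp, fm))

du = [deriv(lambda s, j=j: u(j, s), t) for j in range(2)]                        # du_j/dt
ddu = [deriv(lambda s, j=j: deriv(lambda r: u(j, r), s), t) for j in range(2)]   # d2u_j/dt2

# General formula with a_AB = 0 (the frame origin is fixed here):
# F_fict = -2 m sum_j v_j du_j/dt - m sum_j x_j d2u_j/dt2
F = tuple(-2 * m * sum(vB[j] * du[j][i] for j in range(2))
          - m * sum(xB[j] * ddu[j][i] for j in range(2)) for i in range(2))

def to_inertial(w, tau):
    # Convert frame-B components to inertial components.
    return tuple(w[0] * u(0, tau)[i] + w[1] * u(1, tau)[i] for i in range(2))

# With Omega = omega * z-hat: the Coriolis term -2m Omega x vB has frame-B
# components (2 m omega vB2, -2 m omega vB1); the centrifugal term
# -m Omega x (Omega x xB) equals m omega^2 xB.
coriolis = to_inertial((2 * m * omega * vB[1], -2 * m * omega * vB[0]), t)
centrifugal = to_inertial((m * omega ** 2 * xB[0], m * omega ** 2 * xB[1]), t)
print(F)
print(tuple(a + b for a, b in zip(coriolis, centrifugal)))  # agrees to about 1e-5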
Despite this complication, the use of fictitious forces often simplifies the calculations involved. To derive expressions for the fictitious forces, derivatives are needed for the apparent time rate of change of vectors that take into account time-variation of the coordinate axes. If the rotation of frame 'B' is represented by a vector Ω pointed along the axis of rotation with the orientation given by the right-hand rule, and with magnitude given by | Ω | = d θ d t = ω ( t ) , {\displaystyle |{\boldsymbol {\Omega }}|={\frac {d\theta }{dt}}=\omega (t),} then the time derivative of any of the three unit vectors describing frame B is d u j ( t ) d t = Ω × u j ( t ) , {\displaystyle {\frac {d\mathbf {u} _{j}(t)}{dt}}={\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t),} and d 2 u j ( t ) d t 2 = d Ω d t × u j + Ω × d u j ( t ) d t = d Ω d t × u j + Ω × [ Ω × u j ( t ) ] , {\displaystyle {\frac {d^{2}\mathbf {u} _{j}(t)}{dt^{2}}}={\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}+{\boldsymbol {\Omega }}\times {\frac {d\mathbf {u} _{j}(t)}{dt}}={\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}+{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)\right],} as is verified using the properties of the vector cross product. These derivative formulas now are applied to the relationship between acceleration in an inertial frame, and that in a coordinate frame rotating with time-varying angular velocity ω(t). From the previous section, where subscript A refers to the inertial frame and B to the rotating frame, setting aAB = 0 to remove any translational acceleration, and focusing on only rotational properties (see Eq. 1): d 2 x A d t 2 = a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 , {\displaystyle {\frac {d^{2}\mathbf {x} _{\mathrm {A} }}{dt^{2}}}=\mathbf {a} _{\mathrm {B} }+2\sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}{\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}},} a A = a B + 2 ∑ j = 1 3 v j Ω × u j ( t ) + ∑ j = 1 3 x j d Ω d t × u j + ∑ j = 1 3 x j Ω × [ Ω × u j ( t ) ] = a B + 2 Ω × ∑ j = 1 3 v j u j ( t ) + d Ω d t × ∑ j = 1 3 x j u j + Ω × [ Ω × ∑ j = 1 3 x j u j ( t ) ] . {\displaystyle {\begin{aligned}\mathbf {a} _{\mathrm {A} }&=\mathbf {a} _{\mathrm {B} }+\ 2\sum _{j=1}^{3}v_{j}{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)+\sum _{j=1}^{3}x_{j}{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {u} _{j}\ +\sum _{j=1}^{3}x_{j}{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \mathbf {u} _{j}(t)\right]\\&=\mathbf {a} _{\mathrm {B} }+2{\boldsymbol {\Omega }}\times \sum _{j=1}^{3}v_{j}\mathbf {u} _{j}(t)+{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \sum _{j=1}^{3}x_{j}\mathbf {u} _{j}+{\boldsymbol {\Omega }}\times \left[{\boldsymbol {\Omega }}\times \sum _{j=1}^{3}x_{j}\mathbf {u} _{j}(t)\right].\end{aligned}}} Collecting terms, the result is the so-called acceleration transformation formula: a A = a B + 2 Ω × v B + d Ω d t × x B + Ω × ( Ω × x B ) . 
{\displaystyle \mathbf {a} _{\mathrm {A} }=\mathbf {a} _{\mathrm {B} }+2{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }+{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }+{\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} }\right)\,.} The physical acceleration aA due to what observers in the inertial frame A call real external forces on the object is, therefore, not simply the acceleration aB seen by observers in the rotational frame B, but has several additional geometric acceleration terms associated with the rotation of B. As seen in the rotational frame, the acceleration aB of the particle is given by rearrangement of the above equation as: a B = a A − 2 Ω × v B − Ω × ( Ω × x B ) − d Ω d t × x B . {\displaystyle \mathbf {a} _{\mathrm {B} }=\mathbf {a} _{\mathrm {A} }-2{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.} The net force upon the object according to observers in the rotating frame is FB = maB. If their observations are to result in the correct force on the object when using Newton's laws, they must consider that the additional force Ffict is present, so the end result is FB = FA + Ffict. Thus, the fictitious force used by observers in B to get the correct behaviour of the object from Newton's laws equals: F f i c t = − 2 m Ω × v B − m Ω × ( Ω × x B ) − m d Ω d t × x B . {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-m{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-m{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.} Here, the first term is the Coriolis force, the second term is the centrifugal force, and the third term is the Euler force. === Orbiting coordinate systems === As a related example, suppose the moving coordinate system B rotates with a constant angular speed ω in a circle of radius R about the fixed origin of inertial frame A, but maintains its coordinate axes fixed in orientation, as in Figure 3. The acceleration of an observed body is now (see Eq. 1): d 2 x A d t 2 = a A B + a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 = a A B + a B , {\displaystyle {\begin{aligned}{\frac {d^{2}\mathbf {x} _{A}}{dt^{2}}}&=\mathbf {a} _{AB}+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\sum _{j=1}^{3}x_{j}\ {\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\\&=\mathbf {a} _{AB}\ +\mathbf {a} _{B}\ ,\end{aligned}}} where the summations are zero inasmuch as the unit vectors have no time dependence. The origin of the system B is located according to frame A at: X A B = R ( cos ⁡ ( ω t ) , sin ⁡ ( ω t ) ) , {\displaystyle \mathbf {X} _{AB}=R\left(\cos(\omega t),\ \sin(\omega t)\right)\ ,} leading to a velocity of the origin of frame B as: v A B = d d t X A B = Ω × X A B , {\displaystyle \mathbf {v} _{AB}={\frac {d}{dt}}\mathbf {X} _{AB}=\mathbf {\Omega \times X} _{AB}\ ,} leading to an acceleration of the origin of B given by: a A B = d 2 d t 2 X A B = Ω × ( Ω × X A B ) = − ω 2 X A B . 
{\displaystyle \mathbf {a} _{AB}={\frac {d^{2}}{dt^{2}}}\mathbf {X} _{AB}=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)=-\omega ^{2}\mathbf {X} _{AB}\,.} Because the first term, which is Ω × ( Ω × X A B ) , {\displaystyle \mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)\,,} is of the same form as the normal centrifugal force expression: Ω × ( Ω × x B ) , {\displaystyle {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\,,} it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term a "centrifugal force". Whatever terminology is adopted, the observers in frame B must introduce a fictitious force, this time due to the acceleration from the orbital motion of their entire coordinate frame, that is radially outward away from the centre of rotation of the origin of their coordinate system: F f i c t = m ω 2 X A B , {\displaystyle \mathbf {F} _{\mathrm {fict} }=m\omega ^{2}\mathbf {X} _{AB}\,,} and of magnitude: | F f i c t | = m ω 2 R . {\displaystyle |\mathbf {F} _{\mathrm {fict} }|=m\omega ^{2}R\,.} This "centrifugal force" has differences from the case of a rotating frame. In the rotating frame the centrifugal force is related to the distance of the object from the origin of frame B, while in the case of an orbiting frame, the centrifugal force is independent of the distance of the object from the origin of frame B, but instead depends upon the distance of the origin of frame B from its centre of rotation, resulting in the same centrifugal fictitious force for all objects observed in frame B. === Orbiting and rotating === As a combination example, Figure 4 shows a coordinate system B that orbits inertial frame A as in Figure 3, but the coordinate axes in frame B turn so unit vector u1 always points toward the centre of rotation. This example might apply to a test tube in a centrifuge, where vector u1 points along the axis of the tube toward its opening at its top. It also resembles the Earth–Moon system, where the Moon always presents the same face to the Earth. In this example, unit vector u3 retains a fixed orientation, while vectors u1, u2 rotate at the same rate as the origin of coordinates. That is, u 1 = ( − cos ⁡ ω t , − sin ⁡ ω t ) ; u 2 = ( sin ⁡ ω t , − cos ⁡ ω t ) . {\displaystyle \mathbf {u} _{1}=(-\cos \omega t,\ -\sin \omega t)\ ;\ \mathbf {u} _{2}=(\sin \omega t,\ -\cos \omega t)\,.} d d t u 1 = Ω × u 1 = ω u 2 ; d d t u 2 = Ω × u 2 = − ω u 1 . {\displaystyle {\frac {d}{dt}}\mathbf {u} _{1}=\mathbf {\Omega \times u_{1}} =\omega \mathbf {u} _{2}\ ;\ {\frac {d}{dt}}\mathbf {u} _{2}=\mathbf {\Omega \times u_{2}} =-\omega \mathbf {u} _{1}\ \ .} Hence, the acceleration of a moving object is expressed as (see Eq. 
1): d 2 x A d t 2 = a A B + a B + 2 ∑ j = 1 3 v j d u j d t + ∑ j = 1 3 x j d 2 u j d t 2 = Ω × ( Ω × X A B ) + a B + 2 ∑ j = 1 3 v j Ω × u j + ∑ j = 1 3 x j Ω × ( Ω × u j ) = Ω × ( Ω × X A B ) + a B + 2 Ω × v B + Ω × ( Ω × x B ) = Ω × ( Ω × ( X A B + x B ) ) + a B + 2 Ω × v B , {\displaystyle {\begin{aligned}{\frac {d^{2}\mathbf {x} _{A}}{dt^{2}}}&=\mathbf {a} _{AB}+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ {\frac {d\mathbf {u} _{j}}{dt}}+\ \sum _{j=1}^{3}x_{j}\ {\frac {d^{2}\mathbf {u} _{j}}{dt^{2}}}\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)+\mathbf {a} _{B}+2\ \sum _{j=1}^{3}v_{j}\ \mathbf {\Omega \times u_{j}} \ +\ \sum _{j=1}^{3}x_{j}\ {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {u} _{j}\right)\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times X} _{AB}\right)+\mathbf {a} _{B}+2\ {\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ \ +\ {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\\&=\mathbf {\Omega \ \times } \left(\mathbf {\Omega \times } (\mathbf {X} _{AB}+\mathbf {x} _{B})\right)+\mathbf {a} _{B}+2\ {\boldsymbol {\Omega }}\times \mathbf {v} _{B}\ \,,\end{aligned}}} where the angular acceleration term is zero for the constant rate of rotation. Because the first term, which is Ω × ( Ω × ( X A B + x B ) ) , {\displaystyle \mathbf {\Omega \ \times } \left(\mathbf {\Omega \times } (\mathbf {X} _{AB}+\mathbf {x} _{B})\right)\,,} is of the same form as the normal centrifugal force expression: Ω × ( Ω × x B ) , {\displaystyle {\boldsymbol {\Omega }}\times \left({\boldsymbol {\Omega }}\times \mathbf {x} _{B}\right)\,,} it is a natural extension of standard terminology (although there is no standard terminology for this case) to call this term the "centrifugal force". Applying this terminology to the example of a tube in a centrifuge, if the tube is far enough from the center of rotation, |XAB| = R ≫ |xB|, all the matter in the test tube sees the same acceleration (the same centrifugal force). Thus, in this case, the fictitious force is primarily a uniform centrifugal force along the axis of the tube, away from the centre of rotation, with a value |Ffict| = ω2 R, where R is the distance of the matter in the tube from the centre of the centrifuge. It is the standard specification of a centrifuge to use the "effective" radius of the centrifuge to estimate its ability to provide centrifugal force. Thus, the first estimate of centrifugal force in a centrifuge can be based upon the distance of the tubes from the centre of rotation, and corrections applied if needed. Also, the test tube confines motion to the direction down the length of the tube, so vB is opposite to u1 and the Coriolis force is opposite to u2, that is, against the wall of the tube. If the tube is spun for a long enough time, the velocity vB drops to zero as the matter comes to an equilibrium distribution. For more details, see the articles on sedimentation and the Lamm equation. A related problem is that of centrifugal forces for the Earth–Moon–Sun system, where three rotations appear: the daily rotation of the Earth about its axis, the lunar-month rotation of the Earth–Moon system about its centre of mass, and the annual revolution of the Earth–Moon system about the Sun. These three motions influence the tides. === Crossing a carousel === Figure 5 shows another example comparing the observations of an inertial observer with those of an observer on a rotating carousel. 
The carousel rotates at a constant angular velocity represented by the vector Ω with magnitude ω, pointing upward according to the right-hand rule. A rider on the carousel walks radially across it at a constant speed, in what appears to the walker to be the straight line path inclined at 45° in Figure 5. To the stationary observer, however, the walker travels a spiral path. The points identified on both paths in Figure 5 correspond to the same times spaced at equal time intervals. We ask how two observers, one on the carousel and one in an inertial frame, formulate what they see using Newton's laws. ==== Inertial observer ==== The observer at rest describes the path followed by the walker as a spiral. Adopting the coordinate system shown in Figure 5, the trajectory is described by r(t): r ( t ) = R ( t ) u R = [ x ( t ) y ( t ) ] = [ R ( t ) cos ⁡ ( ω t + π / 4 ) R ( t ) sin ⁡ ( ω t + π / 4 ) ] , {\displaystyle \mathbf {r} (t)=R(t)\mathbf {u} _{R}={\begin{bmatrix}x(t)\\y(t)\end{bmatrix}}={\begin{bmatrix}R(t)\cos(\omega t+\pi /4)\\R(t)\sin(\omega t+\pi /4)\end{bmatrix}},} where the added π/4 sets the path angle at 45° to start with (just an arbitrary choice of direction), uR is a unit vector in the radial direction pointing from the centre of the carousel to the walker at the time t. The radial distance R(t) increases steadily with time according to: R ( t ) = s t , {\displaystyle R(t)=st,} with s the speed of walking. According to simple kinematics, the velocity is then the first derivative of the trajectory: v ( t ) = d R d t [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] + ω R ( t ) [ − sin ⁡ ( ω t + π / 4 ) cos ⁡ ( ω t + π / 4 ) ] = d R d t u R + ω R ( t ) u θ , {\displaystyle {\begin{aligned}\mathbf {v} (t)&={\frac {dR}{dt}}{\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}+\omega R(t){\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}\\&={\frac {dR}{dt}}\mathbf {u} _{R}+\omega R(t)\mathbf {u} _{\theta },\end{aligned}}} with uθ a unit vector perpendicular to uR at time t (as can be verified by noticing that the vector dot product with the radial vector is zero) and pointing in the direction of travel. The acceleration is the first derivative of the velocity: a ( t ) = d 2 R d t 2 [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] + 2 d R d t ω [ − sin ⁡ ( ω t + π / 4 ) cos ⁡ ( ω t + π / 4 ) ] − ω 2 R ( t ) [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] = 2 s ω [ − sin ⁡ ( ω t + π / 4 ) cos ⁡ ( ω t + π / 4 ) ] − ω 2 R ( t ) [ cos ⁡ ( ω t + π / 4 ) sin ⁡ ( ω t + π / 4 ) ] = 2 s ω u θ − ω 2 R ( t ) u R . {\displaystyle {\begin{aligned}\mathbf {a} (t)&={\frac {d^{2}R}{dt^{2}}}{\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}+2{\frac {dR}{dt}}\omega {\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}-\omega ^{2}R(t){\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}\\&=2s\omega {\begin{bmatrix}-\sin(\omega t+\pi /4)\\\cos(\omega t+\pi /4)\end{bmatrix}}-\omega ^{2}R(t){\begin{bmatrix}\cos(\omega t+\pi /4)\\\sin(\omega t+\pi /4)\end{bmatrix}}\\&=2s\ \omega \ \mathbf {u} _{\theta }-\omega ^{2}R(t)\ \mathbf {u} _{R}\,.\end{aligned}}} The last term in the acceleration is radially inward of magnitude ω2 R, which is therefore the instantaneous centripetal acceleration of circular motion. The first term is perpendicular to the radial direction, and pointing in the direction of travel. 
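The closed-form acceleration can be checked against a finite-difference second derivative of the spiral trajectory. A minimal sketch, with assumed values for the walking speed s and the carousel rate ω:

```python
# Finite-difference check of a(t) = 2*s*omega*u_theta - omega^2*R(t)*u_R
# for the spiral r(t) = s*t*(cos(omega*t + pi/4), sin(omega*t + pi/4)).
import numpy as np

s, omega = 1.0, 0.5               # assumed walking speed and carousel rate
t, h = 2.0, 1e-4                  # sample time and difference step

def r(t):
    phi = omega*t + np.pi/4
    return s*t * np.array([np.cos(phi), np.sin(phi)])

a_num = (r(t+h) - 2*r(t) + r(t-h)) / h**2    # second central difference

phi = omega*t + np.pi/4
uR     = np.array([ np.cos(phi), np.sin(phi)])
utheta = np.array([-np.sin(phi), np.cos(phi)])
a_formula = 2*s*omega*utheta - omega**2 * (s*t) * uR

print(np.allclose(a_num, a_formula, atol=1e-6))  # True
```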
Its magnitude is 2sω, and it represents the acceleration of the walker as the edge of the carousel is neared, and the arc of the circle travelled in a fixed time increases, as can be seen by the increased spacing between points for equal time steps on the spiral in Figure 5 as the outer edge of the carousel is approached. Applying Newton's laws, multiplying the acceleration by the mass of the walker, the inertial observer concludes that the walker is subject to two forces: the inward radially directed centripetal force and another force perpendicular to the radial direction that is proportional to the speed of the walker. ==== Rotating observer ==== The rotating observer sees the walker travel a straight line from the centre of the carousel to the periphery, as shown in Figure 5. Moreover, the rotating observer sees that the walker moves at a constant speed in the same direction, so applying Newton's law of inertia, there is zero force upon the walker. These conclusions do not agree with the inertial observer. To obtain agreement, the rotating observer has to introduce fictitious forces that appear to exist in the rotating world, even though there is no apparent reason for them, no apparent gravitational mass, electric charge or what have you, that could account for these fictitious forces. To agree with the inertial observer, the forces applied to the walker must be exactly those found above. They can be related to the general formulas already derived, namely: F f i c t = − 2 m Ω × v B − m Ω × ( Ω × x B ) − m d Ω d t × x B . {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m{\boldsymbol {\Omega }}\times \mathbf {v} _{\mathrm {B} }-m{\boldsymbol {\Omega }}\times ({\boldsymbol {\Omega }}\times \mathbf {x} _{\mathrm {B} })-m{\frac {d{\boldsymbol {\Omega }}}{dt}}\times \mathbf {x} _{\mathrm {B} }.} In this example, the velocity seen in the rotating frame is: v B = s u R , {\displaystyle \mathbf {v} _{\mathrm {B} }=s\mathbf {u} _{R},} with uR a unit vector in the radial direction. The position of the walker as seen on the carousel is: x B = R ( t ) u R , {\displaystyle \mathbf {x} _{\mathrm {B} }=R(t)\mathbf {u} _{R},} and the time derivative of Ω is zero for uniform angular rotation. Noticing that Ω × u R = ω u θ {\displaystyle {\boldsymbol {\Omega }}\times \mathbf {u} _{R}=\omega \mathbf {u} _{\theta }} and Ω × u θ = − ω u R , {\displaystyle {\boldsymbol {\Omega }}\times \mathbf {u} _{\theta }=-\omega \mathbf {u} _{R}\,,} we find: F f i c t = − 2 m ω s u θ + m ω 2 R ( t ) u R . {\displaystyle \mathbf {F} _{\mathrm {fict} }=-2m\omega s\mathbf {u} _{\theta }+m\omega ^{2}R(t)\mathbf {u} _{R}.} To obtain a straight-line motion in the rotating world, a force exactly opposite in sign to the fictitious force must be applied to reduce the net force on the walker to zero, so Newton's law of inertia will predict a straight line motion, in agreement with what the rotating observer sees. The fictitious forces that must be combated are the Coriolis force (first term) and the centrifugal force (second term). (There is no Euler force term, because the rate of rotation of the carousel is constant.) By applying forces to counter these two fictitious forces, the rotating observer ends up applying exactly the same forces upon the walker that the inertial observer predicted were needed. Because they differ only by the constant walking velocity, the walker and the rotational observer see the same accelerations.
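The same decomposition can be reproduced by evaluating the general fictitious-force formula with explicit cross products. In this sketch (the mass, speed, and rotation rate are assumed values) the motion is planar, so Ω is placed along the z axis:

```python
# Evaluating F_fict = -2m Omega x vB - m Omega x (Omega x xB) for the
# carousel walker; the z-components vanish for this planar example.
import numpy as np

m, s, omega, t = 70.0, 1.0, 0.5, 2.0   # assumed mass, speed, rate, time
phi = omega*t + np.pi/4
uR     = np.array([ np.cos(phi), np.sin(phi), 0.0])
utheta = np.array([-np.sin(phi), np.cos(phi), 0.0])

Omega = np.array([0.0, 0.0, omega])    # rotation vector, out of the plane
vB = s*uR                              # velocity seen on the carousel
xB = s*t*uR                            # radial position on the carousel

F_fict = -2*m*np.cross(Omega, vB) - m*np.cross(Omega, np.cross(Omega, xB))
# Matches -2*m*omega*s*u_theta + m*omega^2*R(t)*u_R from the text:
print(np.allclose(F_fict, -2*m*omega*s*utheta + m*omega**2*(s*t)*uR))  # True
```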
From the walker's perspective, the fictitious force is experienced as real, and combating this force is necessary to stay on a straight line radial path while holding a constant speed. It is like battling a crosswind while being thrown to the edge of the carousel. === Observation === Notice that this kinematical discussion does not delve into the mechanism by which the required forces are generated. That is the subject of kinetics. In the case of the carousel, the kinetic discussion would involve perhaps a study of the walker's shoes and the friction they need to generate against the floor of the carousel, or perhaps the dynamics of skateboarding if the walker switched to travel by skateboard. Whatever the means of travel across the carousel, the forces calculated above must be realized. A very rough analogy is heating your house: you must have a certain temperature to be comfortable, but whether you heat by burning gas or by burning coal is another problem. Kinematics sets the thermostat, kinetics fires the furnace. == See also == == References == == Further reading == Lev D. Landau and E. M. Lifshitz (1976). Mechanics. Course of Theoretical Physics. Vol. 1 (3rd ed.). Butterworth-Heinemann. pp. 128–130. ISBN 0-7506-2896-0. Keith Symon (1971). Mechanics (3rd ed.). Addison-Wesley. ISBN 0-201-07392-7. Jerry B. Marion (1970). Classical Dynamics of Particles and Systems. Academic Press. ISBN 0-12-472252-0. Marcel J. Sidi (1997). Spacecraft Dynamics and Control: A Practical Engineering Approach. Cambridge University Press. Chapter 4.8. ISBN 0-521-78780-7. == External links == Q and A from Richard C. Brill, Honolulu Community College NASA's David Stern: Lesson Plans for Teachers #23 on Inertial Forces Coriolis Force Motion over a flat surface Java physlet by Brian Fiedler illustrating fictitious forces. The physlet shows both the perspective as seen from a rotating and from a non-rotating point of view. Motion over a parabolic surface Java physlet by Brian Fiedler illustrating fictitious forces. The physlet shows both the perspective as seen from a rotating and as seen from a non-rotating point of view.
Wikipedia/Fictitious_forces
The following is a timeline of the history of classical mechanics: == Antiquity == 4th century BC – Aristotle invents the system of Aristotelian physics, which is later largely disproved 4th century BC – Babylonian astronomers calculate Jupiter's position using the trapezoidal rule 260 BC – Archimedes works out the principle of the lever and connects buoyancy to weight 60 – Hero of Alexandria writes Metrica, Mechanics (on means to lift heavy objects), and Pneumatics (on machines working on pressure) 350 – Themistius states that static friction is larger than kinetic friction == Early mechanics == 6th century – John Philoponus introduces the concept of impetus and the theory was modified by Avicenna in the 11th century and Ibn Malka al-Baghdadi in the 12th century 6th century – John Philoponus says that by observation, two balls of very different weights will fall at nearly the same speed. He therefore tests the equivalence principle 1021 – Al-Biruni uses three orthogonal coordinates to describe a point in space 1100–1138 – Avempace develops the concept of fatigue, which according to Shlomo Pines is a precursor to the Leibnizian idea of force 1100–1165 – Hibat Allah Abu'l-Barakat al-Baghdaadi discovers that force is proportional to acceleration rather than speed, a fundamental law in classical mechanics 1340–1358 – Jean Buridan develops the theory of impetus 14th century – Oxford Calculators and French collaborators prove the mean speed theorem 14th century – Nicole Oresme derives the times-squared law for uniformly accelerated change. Oresme, however, regarded this discovery as a purely intellectual exercise having no relevance to the description of any natural phenomena, and consequently failed to recognise any connection with the motion of accelerating bodies 1500–1528 – Al-Birjandi develops the theory of "circular inertia" to explain Earth's rotation 16th century – Francesco Beato and Luca Ghini experimentally contradict the Aristotelian view of free fall. 16th century – Domingo de Soto suggests that bodies falling through a homogeneous medium are uniformly accelerated. Soto, however, did not anticipate many of the qualifications and refinements contained in Galileo's theory of falling bodies. He did not, for instance, recognise, as Galileo did, that a body would fall with a strictly uniform acceleration only in a vacuum, and that it would otherwise eventually reach a uniform terminal velocity 1581 – Galileo Galilei notices the timekeeping property of the pendulum 1589 – Galileo Galilei uses balls rolling on inclined planes to show that different weights fall with the same acceleration 1638 – Galileo Galilei publishes Dialogues Concerning Two New Sciences (which were materials science and kinematics) where he develops, amongst other things, the Galilean transformation 1644 – René Descartes suggests an early form of the law of conservation of momentum 1645 – Ismaël Bullialdus argues that "gravity" weakens as the inverse square of the distance 1651 – Giovanni Battista Riccioli and Francesco Maria Grimaldi discover the Coriolis effect 1658 – Christiaan Huygens experimentally discovers that balls placed anywhere inside an inverted cycloid reach the lowest point of the cycloid in the same time and thereby experimentally shows that the cycloid is the tautochrone 1668 – John Wallis suggests the law of conservation of momentum 1673 – Christiaan Huygens publishes his Horologium Oscillatorium. Huygens describes in this work the first two laws of motion.
The book is also the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters and then analyzed mathematically. 1676–1689 – Gottfried Leibniz develops the concept of vis viva, a limited theory of conservation of energy 1677 – Baruch Spinoza puts forward a primitive notion of Newton's first law == Newtonian mechanics == 1687 – Isaac Newton publishes his Philosophiæ Naturalis Principia Mathematica, in which he formulates Newton's laws of motion and Newton's law of universal gravitation 1690 – James Bernoulli shows that the cycloid is the solution to the tautochrone problem 1691 – Johann Bernoulli shows that a chain freely suspended from two points will form a catenary 1691 – James Bernoulli shows that the catenary curve has the lowest center of gravity of any chain hung from two fixed points 1696 – Johann Bernoulli shows that the cycloid is the solution to the brachistochrone problem 1710 – Jakob Hermann shows that the Laplace–Runge–Lenz vector is conserved for a case of the inverse-square central force 1714 – Brook Taylor derives the fundamental frequency of a stretched vibrating string in terms of its tension and mass per unit length by solving an ordinary differential equation 1733 – Daniel Bernoulli derives the fundamental frequency and harmonics of a hanging chain by solving an ordinary differential equation 1734 – Daniel Bernoulli solves the ordinary differential equation for the vibrations of an elastic bar clamped at one end 1739 – Leonhard Euler solves the ordinary differential equation for a forced harmonic oscillator and notices the resonance 1742 – Colin Maclaurin discovers his uniformly rotating self-gravitating spheroids 1743 – Jean le Rond d'Alembert publishes his Traite de Dynamique, in which he introduces the concept of generalized forces and D'Alembert's principle 1747 – D'Alembert and Alexis Clairaut publish the first approximate solutions to the three-body problem 1749 – Leonhard Euler derives the equation for the Coriolis acceleration 1759 – Leonhard Euler solves the partial differential equation for the vibration of a rectangular drum 1764 – Leonhard Euler examines the partial differential equation for the vibration of a circular drum and finds one of the Bessel function solutions 1776 – John Smeaton publishes a paper on experiments relating power, work, momentum and kinetic energy, and supporting the conservation of energy == Analytical mechanics == 1788 – Joseph-Louis Lagrange presents Lagrange's equations of motion in the Méchanique Analytique 1798 – Pierre-Simon Laplace publishes his Traité de mécanique céleste vol. 1, with the last, vol. 5, appearing in 1825.
In this, he summarized and extended the work of his predecessors 1803 – Louis Poinsot develops the idea of angular momentum conservation (this result was previously known only in the case of conservation of areal velocity) 1813 – Peter Ewart supports the idea of the conservation of energy in his paper "On the measure of moving force" 1821 – William Hamilton begins his analysis of Hamilton's characteristic function and the Hamilton–Jacobi equation 1829 – Carl Friedrich Gauss introduces Gauss's principle of least constraint 1834 – Carl Jacobi discovers his uniformly rotating self-gravitating ellipsoids 1834 – Louis Poinsot notes an instance of the intermediate axis theorem 1835 – William Hamilton states Hamilton's canonical equations of motion 1838 – Joseph Liouville begins work on Liouville's theorem 1841 – Julius von Mayer, an amateur scientist, writes a paper on the conservation of energy but his lack of academic training leads to a priority dispute. 1847 – Hermann von Helmholtz formally states the law of conservation of energy first half of the 19th century – Cauchy develops his momentum equation and his stress tensor 1851 – Léon Foucault shows the Earth's rotation with a huge pendulum (Foucault pendulum) 1870 – Rudolf Clausius deduces the virial theorem 1890 – Henri Poincaré discovers the sensitivity to initial conditions in the three-body problem. 1898 – Jacques Hadamard discusses the Hadamard billiards. == Modern developments == 1900 – Max Planck introduces the idea of quanta, introducing quantum mechanics 1902 – James Jeans finds the length scale required for gravitational perturbations to grow in a static nearly homogeneous medium 1905 – Albert Einstein first mathematically describes Brownian motion and introduces relativistic mechanics 1915 – Emmy Noether proves Noether's theorem, from which conservation laws are deduced 1915 – Albert Einstein introduces general relativity 1923 – Élie Cartan introduces the geometrized Newtonian gravitation, treating Newtonian gravitation in terms of spacetime. 1931–1932 – Papers by Bernard Koopman and John von Neumann lead to the development of Koopman–von Neumann classical mechanics. 1952 – Parker develops a tensor form of the virial theorem 1954 – Andrey Kolmogorov publishes the first version of the Kolmogorov–Arnold–Moser theorem. 1961 – Edward Norton Lorenz discovers the Lorenz system and establishes the field of chaos theory. 1978 – Vladimir Arnold states the precise form of the Liouville–Arnold theorem 1983 – Mordehai Milgrom proposes modified Newtonian dynamics as an alternative to the dark matter hypothesis 1992 – Udwadia and Kalaba create the Udwadia–Kalaba equation 2003 – John D. Norton introduces Norton's dome == References ==
Wikipedia/Timeline_of_classical_mechanics
The Koopman–von Neumann (KvN) theory is a description of classical mechanics as an operatorial theory similar to quantum mechanics, based on a Hilbert space of complex, square-integrable wavefunctions. As its name suggests, the KvN theory is related to work by Bernard Koopman and John von Neumann. == History == Statistical mechanics describes macroscopic systems in terms of statistical ensembles, such as the macroscopic properties of an ideal gas. Ergodic theory is a branch of mathematics arising from the study of statistical mechanics. The origins of the Koopman–von Neumann theory are tightly connected with the rise of ergodic theory as an independent branch of mathematics, in particular with Ludwig Boltzmann's ergodic hypothesis. In 1931, Koopman observed that the phase space of the classical system can be converted into a Hilbert space. According to this formulation, functions representing physical observables become vectors, with an inner product defined in terms of a natural integration rule over the system's probability density on phase space. This reformulation makes it possible to draw interesting conclusions about the evolution of physical observables from Stone's theorem, which had been proved shortly before. This finding inspired von Neumann to apply the novel formalism to the ergodic problem in 1932. Subsequently, he published several seminal results in modern ergodic theory, including the proof of his mean ergodic theorem. The Koopman–von Neumann treatment was further developed over time by Mário Schenberg in 1952–1953, by Angelo Loinger in 1962, by Giacomo Della Riccia and Norbert Wiener in 1966, and by E. C. George Sudarshan in 1976. == Definition and dynamics == === Derivation starting from the Liouville equation === In the approach of Koopman and von Neumann (KvN), dynamics in phase space is described by a (classical) probability density, recovered from an underlying wavefunction – the Koopman–von Neumann wavefunction – as the square of its absolute value (more precisely, as the amplitude multiplied by its own complex conjugate). This stands in analogy to the Born rule in quantum mechanics. In the KvN framework, observables are represented by commuting self-adjoint operators acting on the Hilbert space of KvN wavefunctions. The commutativity physically implies that all observables are simultaneously measurable. Contrast this with quantum mechanics, where observables need not commute, which underlies the uncertainty principle, the Kochen–Specker theorem, and Bell inequalities. The KvN wavefunction is postulated to evolve according to exactly the same Liouville equation as the classical probability density. From this postulate it can be shown that the probability density dynamics is indeed recovered. === Derivation starting from operator axioms === Conversely, it is possible to start from operator postulates, similar to the Hilbert space axioms of quantum mechanics, and derive the equation of motion by specifying how expectation values evolve.
The relevant axioms are that as in quantum mechanics (i) the states of a system are represented by normalized vectors of a complex Hilbert space, and the observables are given by self-adjoint operators acting on that space, (ii) the expectation value of an observable is obtained in the same manner as the expectation value in quantum mechanics, (iii) the probabilities of measuring certain values of some observables are calculated by the Born rule, and (iv) the state space of a composite system is the tensor product of the subsystems' spaces. These axioms allow us to recover the formalism of both classical and quantum mechanics. Specifically, under the assumption that the classical position and momentum operators commute, the Liouville equation for the KvN wavefunction is recovered from averaged Newton's laws of motion. However, if the coordinate and momentum obey the canonical commutation relation, the Schrödinger equation of quantum mechanics is obtained. === Measurements === In the Hilbert space and operator formulation of classical mechanics, the Koopman–von Neumann wavefunction takes the form of a superposition of eigenstates, and measurement collapses the KvN wavefunction to the eigenstate associated with the measurement result, in analogy to the wave function collapse of quantum mechanics. However, it can be shown that for Koopman–von Neumann classical mechanics non-selective measurements leave the KvN wavefunction unchanged. == KvN vs Liouville mechanics == The KvN dynamical equation and the Liouville equation are first-order linear partial differential equations. One recovers Newton's laws of motion by applying the method of characteristics to either of these equations. Hence, the key difference between KvN and Liouville mechanics lies in weighting individual trajectories: arbitrary weights, underlying the classical wave function, can be utilized in the KvN mechanics, while only positive weights, representing the probability density, are permitted in the Liouville mechanics. == Quantum analogy == Being explicitly based on the Hilbert space language, the KvN classical mechanics adopts many techniques from quantum mechanics, for example, perturbation and diagram techniques as well as functional integral methods. The KvN approach is very general, and it has been extended to dissipative systems, relativistic mechanics, and classical field theories. The KvN approach is fruitful in studies on the quantum-classical correspondence as it reveals that the Hilbert space formulation is not exclusively quantum mechanical. Even Dirac spinors are not exceptionally quantum as they are utilized in the relativistic generalization of the KvN mechanics. Like the better-known phase space formulation of quantum mechanics, the KvN approach can be understood as an attempt to bring classical and quantum mechanics into a common mathematical framework. In fact, the time evolution of the Wigner function approaches, in the classical limit, the time evolution of the KvN wavefunction of a classical particle. However, a mathematical resemblance to quantum mechanics does not imply the presence of hallmark quantum effects. In particular, the impossibility of the double-slit experiment and of the Aharonov–Bohm effect is explicitly demonstrated in the KvN framework.
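As a sketch of how the method of characteristics works here, take a free particle, for which the Liouville/KvN equation reduces to ∂ψ/∂t + (p/m) ∂ψ/∂x = 0. The wavefunction (the Gaussian below is an assumed, purely illustrative initial condition, not from the literature cited here) is simply transported along the Newtonian trajectories x(t) = x0 + (p/m)t, and so is the density |ψ|²:

```python
# Method of characteristics for the free-particle KvN/Liouville equation:
# psi is constant along the classical trajectories x(t) = x0 + (p/m)*t.
import numpy as np

m, t = 1.0, 3.0

def psi0(x, p):                        # assumed initial KvN wavefunction
    return np.exp(-(x**2 + (p - 1)**2)) * np.exp(1j*x)

def psi(x, p, t):                      # exact transported solution
    return psi0(x - p*t/m, p)

x0, p0 = 0.8, 1.2
xt = x0 + p0*t/m                       # Newtonian trajectory, free particle
# The density at the evolved phase-space point equals the initial density:
print(np.isclose(abs(psi(xt, p0, t))**2, abs(psi0(x0, p0))**2))  # True
```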
== See also == Classical mechanics Statistical mechanics Liouville's theorem Quantum mechanics Phase space formulation of quantum mechanics Wigner quasiprobability distribution Dynamical systems Ergodic theory == References == == Further reading == Koopman, B. O. (1931). "Hamiltonian Systems and Transformations in Hilbert Space". Proceedings of the National Academy of Sciences. 17 (5): 315–318. Bibcode:1931PNAS...17..315K. doi:10.1073/pnas.17.5.315. PMC 1076052. PMID 16577368. von Neumann, J. (1932a). "Zur Operatorenmethode In Der Klassischen Mechanik". Annals of Mathematics (in German). 33 (3): 587–642. doi:10.2307/1968537. JSTOR 1968537. von Neumann, J. (1932b). "Zusatze Zur Arbeit 'Zur Operatorenmethode...'". Annals of Mathematics (in German). 33 (4): 789–791. doi:10.2307/1968225. JSTOR 1968225. H.R. Jauslin, D. Sugny, Dynamics of mixed classical-quantum systems, geometric quantization and coherent states, Lecture Note Series, IMS, NUS, Review Vol., August 13, 2009, arXiv:1111.5774 The Legacy of John von Neumann (Proceedings of Symposia in Pure Mathematics, vol 50), edited by James Glimm, John Impagliazzo, Isadore Singer. — Amata Graphics, 2006. — ISBN 0821842196 U. Klein, From Koopman–von Neumann theory to quantum theory, Quantum Stud.: Math. Found. (2018) 5:219–227.
Wikipedia/Koopman–von_Neumann_classical_mechanics
In physics and engineering, kinetics is the branch of classical mechanics that is concerned with the relationship between motion and its causes, specifically forces and torques. Since the mid-20th century, the term "dynamics" (or "analytical dynamics") has largely superseded "kinetics" in physics textbooks, though the term is still used in engineering. In plasma physics, kinetics refers to the study of continua in velocity space. This is usually in the context of non-thermal (non-Maxwellian) velocity distributions, or processes that perturb thermal distributions. These "kinetic plasmas" cannot be adequately described with fluid equations. The term kinetics is also used to refer to chemical kinetics, particularly in chemical physics and physical chemistry. In such uses, a qualifier is often used or implied, for example: "physical kinetics", "crystal growth kinetics", and so on. == References ==
Wikipedia/Kinetics_(physics)
Rotation around a fixed axis or axial rotation is a special case of rotational motion around an axis of rotation fixed, stationary, or static in three-dimensional space. This type of motion excludes the possibility of the instantaneous axis of rotation changing its orientation and cannot describe such phenomena as wobbling or precession. According to Euler's rotation theorem, simultaneous rotation along a number of stationary axes at the same time is impossible; if two rotations are forced at the same time, a new axis of rotation will result. This concept assumes that the rotation is also stable, such that no torque is required to keep it going. The kinematics and dynamics of rotation around a fixed axis of a rigid body are mathematically much simpler than those for free rotation of a rigid body; they are entirely analogous to those of linear motion along a single fixed direction, which is not true for free rotation of a rigid body. The expressions for the kinetic energy of the object, and for the forces on the parts of the object, are also simpler for rotation around a fixed axis, than for general rotational motion. For these reasons, rotation around a fixed axis is typically taught in introductory physics courses after students have mastered linear motion; the full generality of rotational motion is not usually taught in introductory physics classes. == Translation and rotation == A rigid body is an object of a finite extent in which all the distances between the component particles are constant. No truly rigid body exists; external forces can deform any solid. For our purposes, then, a rigid body is a solid which requires large forces to deform it appreciably. A change in the position of a particle in three-dimensional space can be completely specified by three coordinates. A change in the position of a rigid body is more complicated to describe. It can be regarded as a combination of two distinct types of motion: translational motion and circular motion. Purely translational motion occurs when every particle of the body has the same instantaneous velocity as every other particle; then the path traced out by any particle is exactly parallel to the path traced out by every other particle in the body. Under translational motion, the change in the position of a rigid body is specified completely by three coordinates such as x, y, and z giving the displacement of any point, such as the center of mass, fixed to the rigid body. Purely rotational motion occurs if every particle in the body moves in a circle about a single line. This line is called the axis of rotation. Then the radius vectors from the axis to all particles undergo the same angular displacement at the same time. The axis of rotation need not go through the body. In general, any rotation can be specified completely by the three angular displacements with respect to the rectangular-coordinate axes x, y, and z. Any change in the position of the rigid body is thus completely described by three translational and three rotational coordinates. Any displacement of a rigid body may be arrived at by first subjecting the body to a displacement followed by a rotation, or conversely, to a rotation followed by a displacement. 
We already know that for any collection of particles, whether at rest with respect to one another, as in a rigid body, or in relative motion, like the exploding fragments of a shell, the acceleration of the center of mass is given by F n e t = M a c m {\displaystyle F_{\mathrm {net} }=Ma_{\mathrm {cm} }} where M is the total mass of the system and acm is the acceleration of the center of mass. There remains the matter of describing the rotation of the body about the center of mass and relating it to the external forces acting on the body. The kinematics and dynamics of rotational motion around a single axis resemble the kinematics and dynamics of translational motion; rotational motion around a single axis even has a work-energy theorem analogous to that of particle dynamics. == Kinematics == === Angular displacement === Given a particle that moves along the circumference of a circle of radius r {\displaystyle r} , having moved an arc length s {\displaystyle s} , its angular position is θ {\displaystyle \theta } relative to its initial position, where θ = s r {\displaystyle \theta ={\frac {s}{r}}} . In mathematics and physics it is conventional to treat the radian, a unit of plane angle, as 1, often omitting it. Units are converted as follows: 360 ∘ = 2 π rad , 1 rad = 180 ∘ π ≈ 57.3 ∘ . {\displaystyle 360^{\circ }=2\pi {\text{ rad}}\,,\quad 1{\text{ rad}}={\frac {180^{\circ }}{\pi }}\approx 57.3^{\circ }.} An angular displacement is a change in angular position: Δ θ = θ 2 − θ 1 , {\displaystyle \Delta \theta =\theta _{2}-\theta _{1},} where Δ θ {\displaystyle \Delta \theta } is the angular displacement, θ 1 {\displaystyle \theta _{1}} is the initial angular position and θ 2 {\displaystyle \theta _{2}} is the final angular position. === Angular velocity === Change in angular displacement per unit time is called angular velocity with direction along the axis of rotation. The symbol for angular velocity is ω {\displaystyle \omega } and the units are typically rad s−1. Angular speed is the magnitude of angular velocity. ω ¯ = Δ θ Δ t = θ 2 − θ 1 t 2 − t 1 . {\displaystyle {\overline {\omega }}={\frac {\Delta \theta }{\Delta t}}={\frac {\theta _{2}-\theta _{1}}{t_{2}-t_{1}}}.} The instantaneous angular velocity is given by ω ( t ) = d θ d t . {\displaystyle \omega (t)={\frac {d\theta }{dt}}.} Using the formula for angular position and letting v = d s d t {\displaystyle v={\frac {ds}{dt}}} , we have also ω = d θ d t = v r , {\displaystyle \omega ={\frac {d\theta }{dt}}={\frac {v}{r}},} where v {\displaystyle v} is the translational speed of the particle. Angular velocity and frequency are related by ω = 2 π f . {\displaystyle \omega ={2\pi f}\,.} === Angular acceleration === A changing angular velocity indicates the presence of an angular acceleration in a rigid body, typically measured in rad s−2. The average angular acceleration α ¯ {\displaystyle {\overline {\alpha }}} over a time interval Δt is given by α ¯ = Δ ω Δ t = ω 2 − ω 1 t 2 − t 1 . {\displaystyle {\overline {\alpha }}={\frac {\Delta \omega }{\Delta t}}={\frac {\omega _{2}-\omega _{1}}{t_{2}-t_{1}}}.} The instantaneous acceleration α(t) is given by α ( t ) = d ω d t = d 2 θ d t 2 . {\displaystyle \alpha (t)={\frac {d\omega }{dt}}={\frac {d^{2}\theta }{dt^{2}}}.} Thus, the angular acceleration is the rate of change of the angular velocity, just as acceleration is the rate of change of velocity.
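A small worked example of these definitions, with illustrative numbers:

```python
# Angular position from arc length, unit conversion, and omega = v/r.
import math

r, s = 0.5, 1.2                      # radius and arc length, m
theta = s / r                        # angular position, rad
print(theta, math.degrees(theta))    # 2.4 rad, ~137.51 deg

v = 3.0                              # tangential speed, m/s
omega = v / r                        # angular speed, rad/s
f = omega / (2*math.pi)              # frequency from omega = 2*pi*f
print(omega, f)                      # 6.0 rad/s, ~0.95 Hz
```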
The translational acceleration of a point on the rotating object is given by a = r α , {\displaystyle a=r\alpha ,} where r is the radius or distance from the axis of rotation. This is also the tangential component of acceleration: it is tangential to the direction of motion of the point. If this component is 0, the motion is uniform circular motion, and the velocity changes in direction only. The radial acceleration (perpendicular to direction of motion) is given by a R = v 2 r = ω 2 r . {\displaystyle a_{\mathrm {R} }={\frac {v^{2}}{r}}=\omega ^{2}r\,.} It is directed towards the center of the rotational motion, and is often called the centripetal acceleration. The angular acceleration is caused by the torque, which can have a positive or negative value in accordance with the convention of positive and negative angular frequency. The relationship between torque and angular acceleration (how difficult it is to start, stop, or otherwise change rotation) is given by the moment of inertia: τ = I α {\displaystyle {\boldsymbol {\tau }}=I\alpha } . === Equations of kinematics === When the angular acceleration is constant, the five quantities angular displacement θ {\displaystyle \theta } , initial angular velocity ω 1 {\displaystyle \omega _{1}} , final angular velocity ω 2 {\displaystyle \omega _{2}} , angular acceleration α {\displaystyle \alpha } , and time t {\displaystyle t} can be related by four equations of kinematics: ω 2 = ω 1 + α t θ = ω 1 t + 1 2 α t 2 ω 2 2 = ω 1 2 + 2 α θ θ = 1 2 ( ω 2 + ω 1 ) t {\displaystyle {\begin{aligned}\omega _{2}&=\omega _{1}+\alpha t\\\theta &=\omega _{1}t+{\tfrac {1}{2}}\alpha t^{2}\\\omega _{2}^{2}&=\omega _{1}^{2}+2\alpha \theta \\\theta &={\tfrac {1}{2}}\left(\omega _{2}+\omega _{1}\right)t\end{aligned}}} == Dynamics == === Moment of inertia === The moment of inertia of an object, symbolized by I {\displaystyle I} , is a measure of the object's resistance to changes to its rotation. The moment of inertia is measured in kilogram metre² (kg m2). It depends on the object's mass: increasing the mass of an object increases the moment of inertia. It also depends on the distribution of the mass: distributing the mass further from the center of rotation increases the moment of inertia by a greater degree. For a single particle of mass m {\displaystyle m} a distance r {\displaystyle r} from the axis of rotation, the moment of inertia is given by I = m r 2 . {\displaystyle I=mr^{2}.} === Torque === Torque τ {\displaystyle {\boldsymbol {\tau }}} is the twisting effect of a force F applied to a rotating object which is at position r from its axis of rotation. Mathematically, τ = r × F , {\displaystyle {\boldsymbol {\tau }}=\mathbf {r} \times \mathbf {F} ,} where × denotes the cross product. A net torque acting upon an object will produce an angular acceleration of the object according to τ = I α , {\displaystyle {\boldsymbol {\tau }}=I{\boldsymbol {\alpha }},} just as F = ma in linear dynamics. The work done by a torque acting on an object equals the magnitude of the torque times the angle through which the torque is applied: W = τ θ . {\displaystyle W=\tau \theta .} The power of a torque is equal to the work done by the torque per unit time, hence: P = τ ω . {\displaystyle P=\tau \omega .} === Angular momentum === The angular momentum L {\displaystyle \mathbf {L} } is a measure of the difficulty of bringing a rotating object to rest.
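The four constant-acceleration relations are mutually consistent: starting from ω1, α, and t, the remaining two equations must agree, as a quick check with assumed values confirms:

```python
# Consistency check of the four constant-angular-acceleration relations.
omega1, alpha, t = 2.0, 0.5, 4.0    # assumed initial rate, acceleration, time

omega2 = omega1 + alpha*t                      # 4.0 rad/s
theta  = omega1*t + 0.5*alpha*t**2             # 12.0 rad

# The other two relations follow from the first two:
assert abs(omega2**2 - (omega1**2 + 2*alpha*theta)) < 1e-12
assert abs(theta - 0.5*(omega1 + omega2)*t) < 1e-12
print(omega2, theta)
```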
It is given by L = ∑ r × p , {\displaystyle \mathbf {L} =\sum \mathbf {r} \times \mathbf {p} ,} where the sum is taken over all particles in the object. Angular momentum is the product of moment of inertia and angular velocity: L = I ω , {\displaystyle \mathbf {L} =I{\boldsymbol {\omega }},} just as p = mv in linear dynamics. The analog of linear momentum in rotational motion is angular momentum. The greater the angular momentum of the spinning object such as a top, the greater its tendency to continue to spin. The angular momentum of a rotating body is proportional to its mass and to how rapidly it is turning. In addition, the angular momentum depends on how the mass is distributed relative to the axis of rotation: the further away the mass is located from the axis of rotation, the greater the angular momentum. A flat disk such as a record turntable has less angular momentum than a hollow cylinder of the same mass and velocity of rotation. Like linear momentum, angular momentum is a vector quantity, and its conservation implies that the direction of the spin axis tends to remain unchanged. For this reason, the spinning top remains upright whereas a stationary one falls over immediately. The angular momentum equation can be used to relate the moment of the resultant force on a body about an axis (sometimes called torque) and the rate of rotation about that axis. Torque and angular momentum are related according to τ = d L d t , {\displaystyle {\boldsymbol {\tau }}={\frac {d\mathbf {L} }{dt}},} just as F = dp/dt in linear dynamics. In the absence of an external torque, the angular momentum of a body remains constant. The conservation of angular momentum is notably demonstrated in figure skating: when pulling the arms closer to the body during a spin, the moment of inertia is decreased, and so the angular velocity is increased. === Kinetic energy === The kinetic energy K rot {\displaystyle K_{\text{rot}}} due to the rotation of the body is given by K rot = 1 2 I ω 2 , {\displaystyle K_{\text{rot}}={\frac {1}{2}}I\omega ^{2},} just as K trans = 1 2 m v 2 {\displaystyle K_{\text{trans}}={\tfrac {1}{2}}mv^{2}} in linear dynamics. Kinetic energy is the energy of motion. The amount of translational kinetic energy is determined by two variables: the mass of the object ( m {\displaystyle m} ) and the speed of the object ( v {\displaystyle v} ) as shown in the equation above. Kinetic energy must always be either zero or a positive value. While velocity can have either a positive or negative value, velocity squared will always be positive. == Vector expression == The above development is a special case of general rotational motion. In the general case, angular displacement, angular velocity, angular acceleration, and torque are considered to be vectors. An angular displacement is considered to be a vector, pointing along the axis, of magnitude equal to that of Δ θ {\displaystyle \Delta \theta } . A right-hand rule is used to find which way it points along the axis; if the fingers of the right hand are curled to point in the way that the object has rotated, then the thumb of the right hand points in the direction of the vector. The angular velocity vector also points along the axis of rotation in the same way as the angular displacements it causes. If a disk spins counterclockwise as seen from above, its angular velocity vector points upwards.
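Conservation of L = Iω is easily illustrated with the figure-skater example just mentioned; the moments of inertia and initial spin below are assumed, illustrative values:

```python
# Figure-skater illustration: pulling the arms in reduces I, so omega
# must rise to keep L = I*omega fixed (no external torque).
I1, omega1 = 4.0, 3.0          # arms out: kg m^2, rad/s
I2 = 1.6                       # arms in: kg m^2

L = I1 * omega1                # conserved angular momentum, 12 kg m^2/s
omega2 = L / I2                # 7.5 rad/s: the spin speeds up

K1, K2 = 0.5*I1*omega1**2, 0.5*I2*omega2**2
print(omega2, K1, K2)          # kinetic energy rises from 18 J to 45 J,
                               # supplied by the work the skater does
```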
Similarly, the angular acceleration vector points along the axis of rotation in the same direction that the angular velocity would point if the angular acceleration were maintained for a long time. The torque vector points along the axis around which the torque tends to cause rotation. To maintain rotation around a fixed axis, the total torque vector has to be along the axis, so that it only changes the magnitude and not the direction of the angular velocity vector. In the case of a hinge, only the component of the torque vector along the axis has an effect on the rotation, other forces and torques are compensated by the structure. == Mathematical representation == == Examples and applications == === Constant angular speed === The simplest case of rotation around a fixed axis is that of constant angular speed. Then the total torque is zero. For the example of the Earth rotating around its axis, there is very little friction. For a fan, the motor applies a torque to compensate for friction. Similar to the fan, equipment found in the mass production manufacturing industry demonstrates rotation around a fixed axis effectively. For example, a multi-spindle lathe is used to rotate the material on its axis to effectively increase the productivity of cutting, deformation and turning operations. The angle of rotation is a linear function of time, which modulo 360° is a periodic function. An example of this is the two-body problem with circular orbits. === Centripetal force === Internal tensile stress provides the centripetal force that keeps a spinning object together. A rigid body model neglects the accompanying strain. If the body is not rigid this strain will cause it to change shape. This is expressed as the object changing shape due to the "centrifugal force". Celestial bodies rotating about each other often have elliptic orbits. The special case of circular orbits is an example of a rotation around a fixed axis: this axis is the line through the center of mass perpendicular to the plane of motion. The centripetal force is provided by gravity, see also two-body problem. This usually also applies for a spinning celestial body, so it need not be solid to keep together unless the angular speed is too high in relation to its density. (It will, however, tend to become oblate.) For example, a spinning celestial body of water must take at least 3 hours and 18 minutes to rotate, regardless of size, or the water will separate. If the density of the fluid is higher the time can be less. See orbital period. == Plane of rotation == == See also == == References == Fundamentals of Physics Extended 7th Edition by Halliday, Resnick and Walker. ISBN 0-471-23231-9 Concepts of Physics Volume 1, by H. C. Verma, 1st edition, ISBN 81-7709-187-5
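The quoted figure of 3 hours and 18 minutes follows from the standard Newtonian bound for a self-gravitating homogeneous sphere: matter at the equator stays bound while ω²R ≤ GM/R², which gives a minimum period T = √(3π/(Gρ)) independent of size. A quick check for the density of water (this is a textbook estimate, not taken from this article's references):

```python
# Minimum rotation period of a self-gravitating homogeneous sphere:
# omega^2 R <= G M / R^2 with M = (4/3) pi R^3 rho gives
# T = sqrt(3 pi / (G rho)), independent of the radius R.
import math

G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 1000.0           # density of water, kg/m^3

T = math.sqrt(3*math.pi / (G*rho))   # seconds
print(T/3600)                        # ~3.30 hours, i.e. about 3 h 18 min
```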
Wikipedia/Rotational_dynamics
In classical mechanics, Euler's rotation equations are a vectorial quasilinear first-order ordinary differential equation describing the rotation of a rigid body, using a rotating reference frame with angular velocity ω whose axes are fixed to the body. They are named in honour of Leonhard Euler. In the absence of applied torques, one obtains the Euler top. When the torques are due to gravity, there are special cases when the motion of the top is integrable. == Formulation == Their general vector form is I ω ˙ + ω × ( I ω ) = M . {\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}+{\boldsymbol {\omega }}\times \left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .} where M is the applied torque and I is the inertia matrix. The vector ω ˙ {\displaystyle {\dot {\boldsymbol {\omega }}}} is the angular acceleration. Again, note that all quantities are defined in the rotating reference frame. In orthogonal principal axes of inertia coordinates the equations become I 1 ω ˙ 1 + ( I 3 − I 2 ) ω 2 ω 3 = M 1 I 2 ω ˙ 2 + ( I 1 − I 3 ) ω 3 ω 1 = M 2 I 3 ω ˙ 3 + ( I 2 − I 1 ) ω 1 ω 2 = M 3 {\displaystyle {\begin{aligned}I_{1}\,{\dot {\omega }}_{1}+(I_{3}-I_{2})\,\omega _{2}\,\omega _{3}&=M_{1}\\I_{2}\,{\dot {\omega }}_{2}+(I_{1}-I_{3})\,\omega _{3}\,\omega _{1}&=M_{2}\\I_{3}\,{\dot {\omega }}_{3}+(I_{2}-I_{1})\,\omega _{1}\,\omega _{2}&=M_{3}\end{aligned}}} where Mk are the components of the applied torque, Ik are the principal moments of inertia and ωk are the components of the angular velocity. == Derivation == In an inertial frame of reference (subscripted "in"), Euler's second law states that the time derivative of the angular momentum L equals the applied torque: d L in d t = M in {\displaystyle {\frac {d\mathbf {L} _{\text{in}}}{dt}}=\mathbf {M} _{\text{in}}} For point particles such that the internal forces are central forces, this may be derived using Newton's second law. For a rigid body, one has the relation between angular momentum and the moment of inertia Iin given as L in = I in ω {\displaystyle \mathbf {L} _{\text{in}}=\mathbf {I} _{\text{in}}{\boldsymbol {\omega }}} In the inertial frame, the differential equation is not always helpful in solving for the motion of a general rotating rigid body, as both Iin and ω can change during the motion. One may instead change to a coordinate frame fixed in the rotating body, in which the moment of inertia tensor is constant. Using a reference frame such as that at the center of mass, the frame's position drops out of the equations. In any rotating reference frame, the time derivative must be replaced so that the equation becomes ( d L d t ) r o t + ω × L = M {\displaystyle \left({\frac {d\mathbf {L} }{dt}}\right)_{\mathrm {rot} }+{\boldsymbol {\omega }}\times \mathbf {L} =\mathbf {M} } and so the cross product arises, see time derivative in rotating reference frame. The vector components of the torque in the inertial and the rotating frames are related by M in = Q M , {\displaystyle \mathbf {M} _{\text{in}}=\mathbf {Q} \mathbf {M} ,} where Q {\displaystyle \mathbf {Q} } is the rotation tensor (not rotation matrix), an orthogonal tensor related to the angular velocity vector by ω × u = Q ˙ Q − 1 u {\displaystyle {\boldsymbol {\omega }}\times {\boldsymbol {u}}={\dot {\mathbf {Q} }}\mathbf {Q} ^{-1}{\boldsymbol {u}}} for any vector u.
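For illustration, the component equations can be integrated numerically. The sketch below (assumed principal moments and initial spin, with a plain fourth-order Runge–Kutta stepper) starts the body spinning almost exactly about its intermediate axis, which is unstable, so the spin tumbles (the Dzhanibekov effect):

```python
# RK4 integration of I_k * domega_k/dt = M_k - (I_j - I_i)*omega_i*omega_j,
# i.e. the principal-axes component form of Euler's rotation equations.
import numpy as np

I = np.array([1.0, 2.0, 3.0])           # assumed principal moments, kg m^2

def omega_dot(w, M=np.zeros(3)):
    return np.array([
        (M[0] - (I[2]-I[1])*w[1]*w[2]) / I[0],
        (M[1] - (I[0]-I[2])*w[2]*w[0]) / I[1],
        (M[2] - (I[1]-I[0])*w[0]*w[1]) / I[2],
    ])

w = np.array([0.1, 3.0, 0.1])           # spin near the intermediate axis
dt = 1e-3
for _ in range(20000):                  # 20 s of torque-free motion
    k1 = omega_dot(w)
    k2 = omega_dot(w + 0.5*dt*k1)
    k3 = omega_dot(w + 0.5*dt*k2)
    k4 = omega_dot(w + dt*k3)
    w = w + dt*(k1 + 2*k2 + 2*k3 + k4)/6
print(w)   # the components have changed dramatically: the body tumbles
```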
Now L = I ω {\displaystyle \mathbf {L} =\mathbf {I} {\boldsymbol {\omega }}} is substituted and the time derivatives are taken in the rotating frame, while realizing that the particle positions and the inertia tensor do not depend on time. This leads to the general vector form of Euler's equations which are valid in such a frame I ω ˙ + ω × ( I ω ) = M . {\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}+{\boldsymbol {\omega }}\times \left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .} The equations are also derived from Newton's laws in the discussion of the resultant torque. More generally, by the tensor transform rules, any rank-2 tensor T {\displaystyle \mathbf {T} } has a time-derivative T ˙ {\displaystyle \mathbf {\dot {T}} } such that for any vector u {\displaystyle \mathbf {u} } , one has T ˙ u = ω × ( T u ) − T ( ω × u ) {\displaystyle \mathbf {\dot {T}} \mathbf {u} ={\boldsymbol {\omega }}\times (\mathbf {T} \mathbf {u} )-\mathbf {T} ({\boldsymbol {\omega }}\times \mathbf {u} )} . This yields Euler's equations by plugging in d d t ( I ω ) = M . {\displaystyle {\frac {d}{dt}}\left(\mathbf {I} {\boldsymbol {\omega }}\right)=\mathbf {M} .} === Principal axes form === When choosing a frame so that its axes are aligned with the principal axes of the inertia tensor, its component matrix is diagonal, which further simplifies calculations. As described in the moment of inertia article, the angular momentum L can then be written L = L 1 e 1 + L 2 e 2 + L 3 e 3 = ∑ i = 1 3 I i ω i e i {\displaystyle \mathbf {L} =L_{1}\mathbf {e} _{1}+L_{2}\mathbf {e} _{2}+L_{3}\mathbf {e} _{3}=\sum _{i=1}^{3}I_{i}\omega _{i}\mathbf {e} _{i}} In some frames not tied to the body it is also possible to obtain such simple (diagonal tensor) equations for the rate of change of the angular momentum. Then ω must be the angular velocity of the rotation of that frame's axes instead of the rotation of the body. It is, however, still required that the chosen axes are principal axes of inertia. The resulting form of the Euler rotation equations is useful for rotation-symmetric objects that allow some of the principal axes of rotation to be chosen freely. == Special case solutions == === Torque-free precessions === Torque-free precessions are non-trivial solutions for the situation where the torque on the right-hand side is zero. When I is not constant in the external reference frame (i.e. the body is moving and its inertia tensor is not constantly diagonal) then I cannot be pulled through the derivative operator acting on L. In this case I(t) and ω(t) do change together in such a way that the derivative of their product is still zero. This motion can be visualized by Poinsot's construction. == Generalized Euler equations == The Euler equations can be generalized to any simple Lie algebra. The original Euler equations come from fixing the Lie algebra to be s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , with generators t 1 , t 2 , t 3 {\displaystyle {t_{1},t_{2},t_{3}}} satisfying the relation [ t a , t b ] = ϵ a b c t c {\displaystyle [t_{a},t_{b}]=\epsilon _{abc}t_{c}} .
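A torque-free solution can also be checked against its invariants: in the body frame the components of L = Iω change, yet |L| and the kinetic energy ½ ω·Iω stay constant. A self-contained sketch with assumed values and a simple midpoint integrator:

```python
# Torque-free Euler equations in matrix form, I*omega_dot = -omega x (I*omega),
# integrated with a midpoint step; |L| and the kinetic energy are conserved.
import numpy as np

I = np.diag([1.0, 2.0, 3.0])       # assumed principal inertia tensor
w = np.array([1.0, 1.0, 1.0])      # assumed initial angular velocity

def wdot(w):
    return np.linalg.solve(I, -np.cross(w, I @ w))

dt = 1e-4
for _ in range(50000):             # 5 s of motion
    w_half = w + 0.5*dt*wdot(w)
    w = w + dt*wdot(w_half)

L = I @ w
print(np.linalg.norm(L))           # ~sqrt(14) = 3.7417..., unchanged
print(0.5 * w @ I @ w)             # ~3.0, the initial kinetic energy
```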
Then if ω ( t ) = ∑ a ω a ( t ) t a {\displaystyle {\boldsymbol {\omega }}(t)=\sum _{a}\omega _{a}(t)t_{a}} (where t {\displaystyle t} is a time coordinate, not to be confused with basis vectors t a {\displaystyle t_{a}} ) is an s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} -valued function of time, and I = d i a g ( I 1 , I 2 , I 3 ) {\displaystyle \mathbf {I} =\mathrm {diag} (I_{1},I_{2},I_{3})} (with respect to the Lie algebra basis), then the (untorqued) original Euler equations can be written I ω ˙ = [ I ω , ω ] . {\displaystyle \mathbf {I} {\dot {\boldsymbol {\omega }}}=[\mathbf {I} {\boldsymbol {\omega }},{\boldsymbol {\omega }}].} To define I {\displaystyle \mathbf {I} } in a basis-independent way, it must be a self-adjoint map on the Lie algebra g {\displaystyle {\mathfrak {g}}} with respect to the invariant bilinear form on g {\displaystyle {\mathfrak {g}}} . This expression generalizes readily to an arbitrary simple Lie algebra, say in the standard classification of simple Lie algebras. This can also be viewed as a Lax pair formulation of the generalized Euler equations, suggesting their integrability. == See also == Euler angles Dzhanibekov effect Moment of inertia Poinsot's ellipsoid Rigid rotor == References == C. A. Truesdell, III (1991) A First Course in Rational Continuum Mechanics. Vol. 1: General Concepts, 2nd ed., Academic Press. ISBN 0-12-701300-8. Sects. I.8-10. C. A. Truesdell, III and R. A. Toupin (1960) The Classical Field Theories, in S. Flügge (ed.) Encyclopedia of Physics. Vol. III/1: Principles of Classical Mechanics and Field Theory, Springer-Verlag. Sects. 166–168, 196–197, and 294. Landau L.D. and Lifshitz E.M. (1976) Mechanics, 3rd. ed., Pergamon Press. ISBN 0-08-021022-8 (hardcover) and ISBN 0-08-029141-4 (softcover). Goldstein H. (1980) Classical Mechanics, 2nd ed., Addison-Wesley. ISBN 0-201-02918-9 Symon KR. (1971) Mechanics, 3rd. ed., Addison-Wesley. ISBN 0-201-07392-7
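The commutator form can be checked against the component equations directly in the defining representation of so(3), taking (t_a)_{bc} = −ε_{abc}, which satisfies [t_a, t_b] = ε_{abc} t_c. The moments and angular velocity below are assumed, illustrative values:

```python
# Verify that [I omega, omega] reproduces the torque-free component form
# I_1 w1' = (I_2 - I_3) w2 w3 (and cyclic) in the defining rep of so(3).
import numpy as np

eps = np.zeros((3, 3, 3))                # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

t = [-eps[a] for a in range(3)]          # generators (t_a)_{bc} = -eps_{abc}

I = np.array([1.0, 2.0, 3.0])            # assumed principal moments
w = np.array([0.4, -0.7, 1.1])           # assumed angular velocity

A = sum((I*w)[a]*t[a] for a in range(3)) # I*omega as a Lie-algebra element
B = sum(w[a]*t[a] for a in range(3))     # omega
C = A @ B - B @ A                        # the commutator [I omega, omega]

rhs = np.array([(I[1]-I[2])*w[1]*w[2],   # component-form right-hand side
                (I[2]-I[0])*w[2]*w[0],
                (I[0]-I[1])*w[0]*w[1]])
lhs = sum(rhs[a]*t[a] for a in range(3)) # I*omega_dot in the same basis
print(np.allclose(C, lhs))               # True
```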
Wikipedia/Euler's_equations_(rigid_body_dynamics)
Screw theory is the algebraic calculation of pairs of vectors, also known as dual vectors – such as angular and linear velocity, or forces and moments – that arise in the kinematics and dynamics of rigid bodies. Screw theory provides a mathematical formulation for the geometry of lines which is central to rigid body dynamics, where lines form the screw axes of spatial movement and the lines of action of forces. The pair of vectors that form the Plücker coordinates of a line define a unit screw, and general screws are obtained by multiplication by a pair of real numbers and addition of vectors. Important theorems of screw theory include: the transfer principle proves that geometric calculations for points using vectors have parallel geometric calculations for lines obtained by replacing vectors with screws; Chasles' theorem proves that any change between two rigid object poses can be performed by a single screw; Poinsot's theorem proves that rotations about a rigid object's major and minor – but not intermediate – axes are stable. Screw theory is an important tool in robot mechanics, mechanical design, computational geometry and multibody dynamics. This is in part because of the relationship between screws and dual quaternions which have been used to interpolate rigid-body motions. Based on screw theory, an efficient approach has also been developed for the type synthesis of parallel mechanisms (parallel manipulators or parallel robots). == Basic concepts == A spatial displacement of a rigid body can be defined by a rotation about a line and a translation along the same line, called a screw motion. This is known as Chasles' theorem. The six parameters that define a screw motion are the four independent components of the Plücker vector that defines the screw axis, together with the rotation angle about and linear slide along this line, and form a pair of vectors called a screw. For comparison, the six parameters that define a spatial displacement can also be given by three Euler angles that define the rotation and the three components of the translation vector. === Screw === A screw is a six-dimensional vector constructed from a pair of three-dimensional vectors, such as forces and torques and linear and angular velocity, that arise in the study of spatial rigid body movement. The components of the screw define the Plücker coordinates of a line in space and the magnitudes of the vector along the line and moment about this line. === Twist === A twist is a screw used to represent the velocity of a rigid body as an angular velocity around an axis and a linear velocity along this axis. All points in the body have the same component of the velocity along the axis, however the greater the distance from the axis the greater the velocity in the plane perpendicular to this axis. Thus, the helicoidal field formed by the velocity vectors in a moving rigid body flattens out the further the points are radially from the twist axis. The points in a body undergoing a constant twist motion trace helices in the fixed frame. If this screw motion has zero pitch then the trajectories trace circles, and the movement is a pure rotation. If the screw motion has infinite pitch then the trajectories are all straight lines in the same direction. === Wrench === The force and torque vectors that arise in applying Newton's laws to a rigid body can be assembled into a screw called a wrench. 
A force has a point of application and a line of action, therefore it defines the Plücker coordinates of a line in space and has zero pitch. A torque, on the other hand, is a pure moment that is not bound to a line in space and is an infinite pitch screw. The ratio of these two magnitudes defines the pitch of the screw. == Algebra of screws == Let a screw be an ordered pair S = ( S , V ) , {\displaystyle {\mathsf {S}}=(\mathbf {S} ,\mathbf {V} ),} where S and V are three-dimensional real vectors. The sum and difference of these ordered pairs are computed componentwise. Screws are often called dual vectors. Now, introduce the ordered pair of real numbers â = (a, b), called a dual scalar. Let the addition and subtraction of these numbers be componentwise, and define multiplication as a ^ c ^ = ( a , b ) ( c , d ) = ( a c , a d + b c ) . {\displaystyle {\hat {\mathsf {a}}}{\hat {\mathsf {c}}}=(a,b)(c,d)=(ac,ad+bc).} The multiplication of a screw S = (S, V) by the dual scalar â = (a, b) is computed componentwise to be, a ^ S = ( a , b ) ( S , V ) = ( a S , a V + b S ) . {\displaystyle {\hat {\mathsf {a}}}{\mathsf {S}}=(a,b)(\mathbf {S} ,\mathbf {V} )=(a\mathbf {S} ,a\mathbf {V} +b\mathbf {S} ).} Finally, introduce the dot and cross products of screws by the formulas: S ⋅ T = ( S , V ) ⋅ ( T , W ) = ( S ⋅ T , S ⋅ W + V ⋅ T ) , {\displaystyle {\mathsf {S}}\cdot {\mathsf {T}}=(\mathbf {S} ,\mathbf {V} )\cdot (\mathbf {T} ,\mathbf {W} )=(\mathbf {S} \cdot \mathbf {T} ,\,\,\mathbf {S} \cdot \mathbf {W} +\mathbf {V} \cdot \mathbf {T} ),} which is a dual scalar, and S × T = ( S , V ) × ( T , W ) = ( S × T , S × W + V × T ) , {\displaystyle {\mathsf {S}}\times {\mathsf {T}}=(\mathbf {S} ,\mathbf {V} )\times (\mathbf {T} ,\mathbf {W} )=(\mathbf {S} \times \mathbf {T} ,\,\,\mathbf {S} \times \mathbf {W} +\mathbf {V} \times \mathbf {T} ),} which is a screw. The dot and cross products of screws satisfy the identities of vector algebra, and allow computations that directly parallel computations in the algebra of vectors. Let the dual scalar ẑ = (φ, d) define a dual angle, then the infinite series definitions of sine and cosine yield the relations sin ⁡ z ^ = ( sin ⁡ φ , d cos ⁡ φ ) , cos ⁡ z ^ = ( cos ⁡ φ , − d sin ⁡ φ ) , {\displaystyle \sin {\hat {\mathsf {z}}}=(\sin \varphi ,d\cos \varphi ),\,\,\,\cos {\hat {\mathsf {z}}}=(\cos \varphi ,-d\sin \varphi ),} which are also dual scalars. In general, the function of a dual variable is defined to be f(ẑ) = (f(φ), df′(φ)), where df′(φ) is the derivative of f(φ). These definitions allow the following results: Unit screws are Plücker coordinates of a line and satisfy the relation | S | = S ⋅ S = 1 ; {\displaystyle |{\mathsf {S}}|={\sqrt {{\mathsf {S}}\cdot {\mathsf {S}}}}=1;} Let ẑ = (φ, d) be the dual angle, where φ is the angle between the axes of S and T around their common normal, and d is the distance between these axes along the common normal, then S ⋅ T = | S | | T | cos ⁡ z ^ ; {\displaystyle {\mathsf {S}}\cdot {\mathsf {T}}=\left|{\mathsf {S}}\right|\left|{\mathsf {T}}\right|\cos {\hat {\mathsf {z}}};} Let N be the unit screw that defines the common normal to the axes of S and T, and ẑ = (φ, d) is the dual angle between these axes, then S × T = | S | | T | sin ⁡ z ^ N . {\displaystyle {\mathsf {S}}\times {\mathsf {T}}=\left|{\mathsf {S}}\right|\left|{\mathsf {T}}\right|\sin {\hat {\mathsf {z}}}{\mathsf {N}}.} == Wrench == A common example of a screw is the wrench associated with a force acting on a rigid body. 
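These definitions translate directly into code. The sketch below is a hypothetical minimal implementation (the function names are invented for illustration, not a standard library API) of the dual-scalar product and the screw dot and cross products, checking that the Plücker coordinates of a line through the origin form a unit screw:

```python
# Minimal screw algebra: dual scalars (a, b) and screws (S, V) as pairs.
import numpy as np

def dual_mul(a, c):                     # (a,b)(c,d) = (ac, ad + bc)
    return (a[0]*c[0], a[0]*c[1] + a[1]*c[0])

def screw_dot(S, T):                    # returns a dual scalar
    (S1, V1), (S2, V2) = S, T
    return (np.dot(S1, S2), np.dot(S1, V2) + np.dot(V1, S2))

def screw_cross(S, T):                  # returns a screw
    (S1, V1), (S2, V2) = S, T
    return (np.cross(S1, S2), np.cross(S1, V2) + np.cross(V1, S2))

# Unit screw along z through the origin: (direction, moment) Pluecker pair.
Z = (np.array([0.0, 0.0, 1.0]), np.zeros(3))
d = screw_dot(Z, Z)
print(float(d[0]), float(d[1]))         # 1.0 0.0, so |Z| = 1: a unit screw
```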
Let P be the point of application of the force F, and let P also denote the vector locating this point in a fixed frame. The wrench W = (F, P × F) is a screw. The resultant force and moment obtained from all the forces Fi, i = 1, ..., n, acting on a rigid body are simply the sum of the individual wrenches Wi, that is R = ∑ i = 1 n W i = ∑ i = 1 n ( F i , P i × F i ) . {\displaystyle {\mathsf {R}}=\sum _{i=1}^{n}{\mathsf {W}}_{i}=\sum _{i=1}^{n}(\mathbf {F} _{i},\mathbf {P} _{i}\times \mathbf {F} _{i}).} Notice that the case of two equal but opposite forces F and −F acting at points A and B, respectively, yields the resultant R = ( F − F , A × F − B × F ) = ( 0 , ( A − B ) × F ) . {\displaystyle {\mathsf {R}}=(\mathbf {F} -\mathbf {F} ,\mathbf {A} \times \mathbf {F} -\mathbf {B} \times \mathbf {F} )=(0,(\mathbf {A} -\mathbf {B} )\times \mathbf {F} ).} This shows that screws of the form M = ( 0 , M ) , {\displaystyle {\mathsf {M}}=(0,\mathbf {M} ),} can be interpreted as pure moments. == Twist == In order to define the twist of a rigid body, we must consider its movement defined by the parameterized set of spatial displacements, D(t) = ([A(t)], d(t)), where [A] is a rotation matrix and d is a translation vector. This causes a point p that is fixed in moving body coordinates to trace a curve P(t) in the fixed frame given by P ( t ) = [ A ( t ) ] p + d ( t ) . {\displaystyle \mathbf {P} (t)=[A(t)]\mathbf {p} +\mathbf {d} (t).} The velocity of P is V P ( t ) = [ d A ( t ) d t ] p + v ( t ) , {\displaystyle \mathbf {V} _{P}(t)=\left[{\frac {dA(t)}{dt}}\right]\mathbf {p} +\mathbf {v} (t),} where v is the velocity of the origin of the moving frame, that is dd/dt. Now substitute p = [AT](P − d) into this equation to obtain, V P ( t ) = [ Ω ] P + v − [ Ω ] d or V P ( t ) = ω × P + v + d × ω , {\displaystyle \mathbf {V} _{P}(t)=[\Omega ]\mathbf {P} +\mathbf {v} -[\Omega ]\mathbf {d} \quad {\text{or}}\quad \mathbf {V} _{P}(t)=\mathbf {\omega } \times \mathbf {P} +\mathbf {v} +\mathbf {d} \times \mathbf {\omega } ,} where [Ω] = [dA/dt][AT] is the angular velocity matrix and ω is the angular velocity vector. The screw T = ( ω → , v + d × ω → ) , {\displaystyle {\mathsf {T}}=({\vec {\omega }},\mathbf {v} +\mathbf {d} \times {\vec {\omega }}),\!} is the twist of the moving body. The vector V = v + d × ω is the velocity of the point in the body that corresponds with the origin of the fixed frame. There are two important special cases: (i) when d is constant, that is v = 0, the twist is a pure rotation about a line, given by L = ( ω , d × ω ) , {\displaystyle {\mathsf {L}}=(\omega ,\mathbf {d} \times \omega ),} and (ii) when [Ω] = 0, that is, the body does not rotate but only slides in the direction v, the twist is a pure slide given by T = ( 0 , v ) . {\displaystyle {\mathsf {T}}=(0,\mathbf {v} ).} === Revolute joints === For a revolute joint, let the axis of rotation pass through the point q and be directed along the vector ω; then the twist for the joint is given by ξ = { ω q × ω } . {\displaystyle \xi ={\begin{Bmatrix}{\boldsymbol {\omega }}\\q\times {\boldsymbol {\omega }}\end{Bmatrix}}.} === Prismatic joints === For a prismatic joint, let the vector v point along the direction of the slide; then the twist for the joint is given by ξ = { 0 v } . 
{\displaystyle \xi ={\begin{Bmatrix}0\\v\end{Bmatrix}}.} == Coordinate transformation of screws == The coordinate transformations for screws are easily understood by beginning with the coordinate transformations of the Plücker vector of a line, which in turn are obtained from the transformations of the coordinates of points on the line. Let the displacement of a body be defined by D = ([A], d), where [A] is the rotation matrix and d is the translation vector. Consider the line in the body defined by the two points p and q, which has the Plücker coordinates, q = ( q − p , p × q ) , {\displaystyle {\mathsf {q}}=(\mathbf {q} -\mathbf {p} ,\mathbf {p} \times \mathbf {q} ),} then in the fixed frame we have the transformed point coordinates P = [A]p + d and Q = [A]q + d, which yield Q = ( Q − P , P × Q ) = ( [ A ] ( q − p ) , [ A ] ( p × q ) + d × [ A ] ( q − p ) ) {\displaystyle {\mathsf {Q}}=(\mathbf {Q} -\mathbf {P} ,\mathbf {P} \times \mathbf {Q} )=([A](\mathbf {q} -\mathbf {p} ),[A](\mathbf {p} \times \mathbf {q} )+\mathbf {d} \times [A](\mathbf {q} -\mathbf {p} ))} Thus, a spatial displacement defines a transformation for Plücker coordinates of lines given by { Q − P P × Q } = [ A 0 D A A ] { q − p p × q } . {\displaystyle {\begin{Bmatrix}\mathbf {Q} -\mathbf {P} \\\mathbf {P} \times \mathbf {Q} \end{Bmatrix}}={\begin{bmatrix}A&0\\DA&A\end{bmatrix}}{\begin{Bmatrix}\mathbf {q} -\mathbf {p} \\\mathbf {p} \times \mathbf {q} \end{Bmatrix}}.} The matrix [D] is the skew-symmetric matrix that performs the cross product operation, that is [D]y = d × y. The 6×6 matrix obtained from the spatial displacement D = ([A], d) can be assembled into the dual matrix [ A ^ ] = ( [ A ] , [ D A ] ) , {\displaystyle [{\hat {\mathsf {A}}}]=([A],[DA]),} which operates on a screw s = (s, v) to obtain, S = [ A ^ ] s , ( S , V ) = ( [ A ] , [ D A ] ) ( s , v ) = ( [ A ] s , [ A ] v + [ D A ] s ) . {\displaystyle {\mathsf {S}}=[{\hat {\mathsf {A}}}]{\mathsf {s}},\quad (\mathbf {S} ,\mathbf {V} )=([A],[DA])(\mathbf {s} ,\mathbf {v} )=([A]\mathbf {s} ,[A]\mathbf {v} +[DA]\mathbf {s} ).} The dual matrix [Â] = ([A], [DA]) has determinant 1 and is called a dual orthogonal matrix. == Twists as elements of a Lie algebra == Consider the movement of a rigid body defined by the parameterized 4×4 homogeneous transform, P ( t ) = [ T ( t ) ] p = { P 1 } = [ A ( t ) d ( t ) 0 1 ] { p 1 } . {\displaystyle {\textbf {P}}(t)=[T(t)]{\textbf {p}}={\begin{Bmatrix}{\textbf {P}}\\1\end{Bmatrix}}={\begin{bmatrix}A(t)&{\textbf {d}}(t)\\0&1\end{bmatrix}}{\begin{Bmatrix}{\textbf {p}}\\1\end{Bmatrix}}.} This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context. The velocity of this movement is defined by computing the velocity of the trajectories of the points in the body, V P = [ T ˙ ( t ) ] p = { V P 0 } = [ A ˙ ( t ) d ˙ ( t ) 0 0 ] { p 1 } . {\displaystyle {\textbf {V}}_{P}=[{\dot {T}}(t)]{\textbf {p}}={\begin{Bmatrix}{\textbf {V}}_{P}\\0\end{Bmatrix}}={\begin{bmatrix}{\dot {A}}(t)&{\dot {\textbf {d}}}(t)\\0&0\end{bmatrix}}{\begin{Bmatrix}{\textbf {p}}\\1\end{Bmatrix}}.} The dot denotes the derivative with respect to time, and because p is constant its derivative is zero. 
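As a numerical check on the screw transformation just assembled, the sketch below (illustrative Python with our own helper names) builds the 6×6 matrix [[A, 0], [DA, A]] for a displacement and confirms that it agrees with transforming the line's two defining points directly; the derivation of the twist matrix then resumes.

```python
import numpy as np

def skew(d):
    """Skew-symmetric matrix [D] with [D] y = d x y."""
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])

def screw_transform(A, d):
    """6x6 transform [[A, 0], [DA, A]] for the displacement D = ([A], d)."""
    DA = skew(d) @ A
    return np.vstack([np.hstack([A, np.zeros((3, 3))]),
                      np.hstack([DA, A])])

# rotation by 90 degrees about z, followed by a unit slide along x
A = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
d = np.array([1.0, 0.0, 0.0])
X = screw_transform(A, d)

# Plücker coordinates of the line through p and q, transformed two ways
p, q = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
s = np.concatenate([q - p, np.cross(p, q)])
P, Q = A @ p + d, A @ q + d
assert np.allclose(X @ s, np.concatenate([Q - P, np.cross(P, Q)]))
print(X @ s)
```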
Substitute the inverse transform for p into the velocity equation to obtain the velocity of P by operating on its trajectory P(t), that is V P = [ T ˙ ( t ) ] [ T ( t ) ] − 1 P ( t ) = [ S ] P , {\displaystyle {\textbf {V}}_{P}=[{\dot {T}}(t)][T(t)]^{-1}{\textbf {P}}(t)=[S]{\textbf {P}},} where [ S ] = [ Ω − Ω d + d ˙ 0 0 ] = [ Ω d × ω + v 0 0 ] . {\displaystyle [S]={\begin{bmatrix}\Omega &-\Omega {\textbf {d}}+{\dot {\textbf {d}}}\\0&0\end{bmatrix}}={\begin{bmatrix}\Omega &\mathbf {d} \times \omega +\mathbf {v} \\0&0\end{bmatrix}}.} Recall that [Ω] is the angular velocity matrix. The matrix [S] is an element of the Lie algebra se(3) of the Lie group SE(3) of homogeneous transforms. The components of [S] are the components of the twist screw, and for this reason [S] is also often called a twist. From the definition of the matrix [S], we can formulate the ordinary differential equation, [ T ˙ ( t ) ] = [ S ] [ T ( t ) ] , {\displaystyle [{\dot {T}}(t)]=[S][T(t)],} and ask for the movement [T(t)] that has a constant twist matrix [S]. The solution is the matrix exponential [ T ( t ) ] = e [ S ] t . {\displaystyle [T(t)]=e^{[S]t}.} This formulation can be generalized such that given an initial configuration g(0) in SE(n), and a twist ξ in se(n), the homogeneous transformation to a new location and orientation can be computed with the formula, g ( θ ) = exp ⁡ ( ξ θ ) g ( 0 ) , {\displaystyle g(\theta )=\exp(\xi \theta )g(0),} where θ represents the parameters of the transformation. == Screws by reflection == In transformation geometry, the elemental concept of transformation is the reflection (mathematics). In planar transformations a translation is obtained by reflection in parallel lines, and rotation is obtained by reflection in a pair of intersecting lines. To produce a screw transformation from similar concepts one must use planes in space: the parallel planes must be perpendicular to the screw axis, which is the line of intersection of the intersecting planes that generate the rotation of the screw. Thus four reflections in planes effect a screw transformation. The tradition of inversive geometry borrows some of the ideas of projective geometry and provides a language of transformation that does not depend on analytic geometry. == Homography == The combination of a translation with a rotation effected by a screw displacement can be illustrated with the exponential mapping. Since ε2 = 0 for dual numbers, exp(aε) = 1 + aε, all other terms of the exponential series vanishing. Let F = {1 + εr : r ∈ H}, ε2 = 0. Note that F is stable under the rotation q → p−1qp and under the translation (1 + εr)(1 + εs) = 1 + ε(r + s) for any vector quaternions r and s. F is a 3-flat in the eight-dimensional space of dual quaternions. This 3-flat F represents space, and the homography constructed, restricted to F, is a screw displacement of space. Let a be half the angle of the desired turn about axis r, and br half the displacement on the screw axis. Then form z = exp((a + bε)r) and z* = exp((a − bε)r). Now the homography is [ q : 1 ] ( z 0 0 z ∗ ) = [ q z : z ∗ ] ∼ [ ( z ∗ ) − 1 q z : 1 ] . 
{\displaystyle [q:1]{\begin{pmatrix}z&0\\0&z^{*}\end{pmatrix}}=[qz:z^{*}]\thicksim [(z^{*})^{-1}qz:1].} The inverse for z* is 1 exp ⁡ ( a r − b ε r ) = ( e a r e − b r ε ) − 1 = e b r ε e − a r , {\displaystyle {\frac {1}{\exp(ar-b\varepsilon r)}}=(e^{ar}e^{-br\varepsilon })^{-1}=e^{br\varepsilon }e^{-ar},} so, the homography sends q to ( e b ε e − a r ) q ( e a r e b ε r ) = e b ε r ( e − a r q e a r ) e b ε r = e 2 b ε r ( e − a r q e a r ) . {\displaystyle (e^{b\varepsilon }e^{-ar})q(e^{ar}e^{b\varepsilon r})=e^{b\varepsilon r}(e^{-ar}qe^{ar})e^{b\varepsilon r}=e^{2b\varepsilon r}(e^{-ar}qe^{ar}).} Now for any quaternion vector p, p* = −p, let q = 1 + pε ∈ F, where the required rotation and translation are effected. Evidently the group of units of the ring of dual quaternions is a Lie group. A subgroup has Lie algebra generated by the parameters a r and b s, where a, b ∈ R, and r, s ∈ H. These six parameters generate a subgroup of the units, the unit sphere. Of course it includes F and the 3-sphere of versors. == Work of forces acting on a rigid body == Consider the set of forces F1, F2 ... Fn act on the points X1, X2 ... Xn in a rigid body. The trajectories of Xi, i = 1,...,n are defined by the movement of the rigid body with rotation [A(t)] and the translation d(t) of a reference point in the body, given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n , {\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n,} where xi are coordinates in the moving body. The velocity of each point Xi is V i = ω → × ( X i − d ) + v , {\displaystyle \mathbf {V} _{i}={\vec {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+\mathbf {v} ,} where ω is the angular velocity vector and v is the derivative of d(t). The work by the forces over the displacement δri=viδt of each point is given by δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + ⋯ + F n ⋅ V n δ t . {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\cdots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t.} Define the velocities of each point in terms of the twist of the moving body to obtain δ W = ∑ i = 1 n F i ⋅ ( ω → × ( X i − d ) + v ) δ t . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\vec {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+\mathbf {v} )\delta t.} Expand this equation and collect coefficients of ω and v to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d × ω → δ t + ( ∑ i = 1 n F i ) ⋅ v δ t + ( ∑ i = 1 n X i × F i ) ⋅ ω → δ t = ( ∑ i = 1 n F i ) ⋅ ( v + d × ω → ) δ t + ( ∑ i = 1 n X i × F i ) ⋅ ω → δ t . 
{\displaystyle {\begin{aligned}\delta W&=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot \mathbf {d} \times {\vec {\omega }}\delta t+\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot \mathbf {v} \delta t+\left(\sum _{i=1}^{n}\mathbf {X} _{i}\times \mathbf {F} _{i}\right)\cdot {\vec {\omega }}\delta t\\[4pt]&=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot (\mathbf {v} +\mathbf {d} \times {\vec {\omega }})\delta t+\left(\sum _{i=1}^{n}\mathbf {X} _{i}\times \mathbf {F} _{i}\right)\cdot {\vec {\omega }}\delta t.\end{aligned}}} Introduce the twist of the moving body and the wrench acting on it given by T = ( ω → , d × ω → + v ) = ( T , T ∘ ) , W = ( ∑ i = 1 n F i , ∑ i = 1 n X i × F i ) = ( W , W ∘ ) , {\displaystyle {\mathsf {T}}=({\vec {\omega }},\mathbf {d} \times {\vec {\omega }}+\mathbf {v} )=(\mathbf {T} ,\mathbf {T} ^{\circ }),\quad {\mathsf {W}}=\left(\sum _{i=1}^{n}\mathbf {F} _{i},\sum _{i=1}^{n}\mathbf {X} _{i}\times \mathbf {F} _{i}\right)=(\mathbf {W} ,\mathbf {W} ^{\circ }),} then work takes the form δ W = ( W ⋅ T ∘ + W ∘ ⋅ T ) δ t . {\displaystyle \delta W=(\mathbf {W} \cdot \mathbf {T} ^{\circ }+\mathbf {W} ^{\circ }\cdot \mathbf {T} )\delta t.} The 6×6 matrix [Π] is used to simplify the calculation of work using screws, so that δ W = ( W ⋅ T ∘ + W ∘ ⋅ T ) δ t = W [ Π ] T δ t , {\displaystyle \delta W=(\mathbf {W} \cdot \mathbf {T} ^{\circ }+\mathbf {W} ^{\circ }\cdot \mathbf {T} )\delta t={\mathsf {W}}[\Pi ]{\mathsf {T}}\delta t,} where [ Π ] = [ 0 I I 0 ] , {\displaystyle [\Pi ]={\begin{bmatrix}0&I\\I&0\end{bmatrix}},} and [I] is the 3×3 identity matrix. === Reciprocal screws === If the virtual work of a wrench on a twist is zero, then the forces and torque of the wrench are constraint forces relative to the twist. The wrench and twist are said to be reciprocal, that is if δ W = W [ Π ] T δ t = 0 , {\displaystyle \delta W={\mathsf {W}}[\Pi ]{\mathsf {T}}\delta t=0,} then the screws W and T are reciprocal. === Twists in robotics === In the study of robotic systems the components of the twist are often transposed to eliminate the need for the 6×6 matrix [Π] in the calculation of work. In this case the twist is defined to be T ˇ = ( d × ω → + v , ω → ) , {\displaystyle {\check {\mathsf {T}}}=(\mathbf {d} \times {\vec {\omega }}+\mathbf {v} ,{\vec {\omega }}),} so the calculation of work takes the form δ W = W ⋅ T ˇ δ t . {\displaystyle \delta W={\mathsf {W}}\cdot {\check {\mathsf {T}}}\delta t.} In this case, if δ W = W ⋅ T ˇ δ t = 0 , {\displaystyle \delta W={\mathsf {W}}\cdot {\check {\mathsf {T}}}\delta t=0,} then the wrench W is reciprocal to the twist T. == History == The mathematical framework was developed by Sir Robert Stawell Ball in 1876 for application in kinematics and statics of mechanisms (rigid body mechanics). Felix Klein saw screw theory as an application of elliptic geometry and his Erlangen Program. He also worked out elliptic geometry, and a fresh view of Euclidean geometry, with the Cayley–Klein metric. The use of a symmetric matrix for a von Staudt conic and metric, applied to screws, has been described by Harvey Lipkin. Other prominent contributors include Julius Plücker, W. K. Clifford, F. M. Dimentberg, Kenneth H. Hunt, J. R. Phillips. The homography idea in transformation geometry was advanced by Sophus Lie more than a century ago. Even earlier, William Rowan Hamilton displayed the versor form of unit quaternions as exp(a r)= cos a + r sin a. The idea is also in Euler's formula parametrizing the unit circle in the complex plane. 
William Kingdon Clifford initiated the use of dual quaternions for kinematics, followed by Aleksandr Kotelnikov, Eduard Study (Geometrie der Dynamen), and Wilhelm Blaschke. However, the point of view of Sophus Lie has recurred. In 1940, Julian Coolidge described the use of dual quaternions for screw displacements on page 261 of A History of Geometrical Methods. He notes the 1885 contribution of Arthur Buchheim. Coolidge based his description simply on the tools Hamilton had used for real quaternions. == See also == Screw axis Newton–Euler equations – use screws to describe rigid body motions and loading. Twist (differential geometry) Twist (rational trigonometry) == References == == External links == Joe Rooney, "William Kingdon Clifford", Department of Design and Innovation, The Open University, London. Ravi Banavar, notes on Robotics, Geometry and Control.
Wikipedia/Wrench_(screw_theory)
In physics, a couple or torque is a pair of forces that are equal in magnitude but opposite in their direction of action. A couple produces a pure rotational motion without any translational motion. == Simple couple == The simplest kind of couple consists of two equal and opposite forces whose lines of action do not coincide. This is called a "simple couple". The forces have a turning effect or moment called a torque about an axis which is normal (perpendicular) to the plane of the forces. The SI unit for the torque of the couple is the newton metre. If the two forces are F and −F, then the magnitude of the torque is given by the following formula: τ = F d {\displaystyle \tau =Fd} where τ {\displaystyle \tau } is the moment of the couple, F is the magnitude of the force, and d is the perpendicular distance (moment arm) between the two parallel forces. The magnitude of the torque is equal to F • d, with the direction of the torque given by the unit vector e ^ {\displaystyle {\hat {e}}} , which is perpendicular to the plane containing the two forces, positive being a counter-clockwise couple. When d is taken as a vector between the points of action of the forces, then the torque is the cross product of d and F, i.e. τ = d × F . {\displaystyle \mathbf {\tau } =\mathbf {d} \times \mathbf {F} .} == Independence of reference point == The moment of a force is only defined with respect to a certain point P (it is said to be the "moment about P") and, in general, when P is changed, the moment changes. However, the moment (torque) of a couple is independent of the reference point P: any point will give the same moment. In other words, a couple, unlike any more general moments, is a "free vector". (This fact is called Varignon's Second Moment Theorem.) The proof of this claim is as follows: Suppose there is a set of force vectors F1, F2, etc. that form a couple, with position vectors (about some origin P), r1, r2, etc., respectively. The moment about P is M = r 1 × F 1 + r 2 × F 2 + ⋯ {\displaystyle M=\mathbf {r} _{1}\times \mathbf {F} _{1}+\mathbf {r} _{2}\times \mathbf {F} _{2}+\cdots } Now we pick a new reference point P' that differs from P by the vector r. The new moment is M ′ = ( r 1 + r ) × F 1 + ( r 2 + r ) × F 2 + ⋯ {\displaystyle M'=(\mathbf {r} _{1}+\mathbf {r} )\times \mathbf {F} _{1}+(\mathbf {r} _{2}+\mathbf {r} )\times \mathbf {F} _{2}+\cdots } Now the distributive property of the cross product implies M ′ = ( r 1 × F 1 + r 2 × F 2 + ⋯ ) + r × ( F 1 + F 2 + ⋯ ) . {\displaystyle M'=\left(\mathbf {r} _{1}\times \mathbf {F} _{1}+\mathbf {r} _{2}\times \mathbf {F} _{2}+\cdots \right)+\mathbf {r} \times \left(\mathbf {F} _{1}+\mathbf {F} _{2}+\cdots \right).} However, the definition of a force couple means that F 1 + F 2 + ⋯ = 0. {\displaystyle \mathbf {F} _{1}+\mathbf {F} _{2}+\cdots =0.} Therefore, M ′ = r 1 × F 1 + r 2 × F 2 + ⋯ = M {\displaystyle M'=\mathbf {r} _{1}\times \mathbf {F} _{1}+\mathbf {r} _{2}\times \mathbf {F} _{2}+\cdots =M} This proves that the moment is independent of reference point, which is proof that a couple is a free vector. == Forces and couples == A force F applied to a rigid body at a distance d from the center of mass has the same effect as the same force applied directly to the center of mass and a couple Cℓ = Fd. The couple produces an angular acceleration of the rigid body at right angles to the plane of the couple. The force at the center of mass accelerates the body in the direction of the force without change in orientation. 
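The independence result proved above is also easy to confirm numerically. The sketch below is an added illustration (the particular couple, forces ±F acting 2 units apart, is our own choice): it shifts the reference point repeatedly and checks that the computed moment never changes.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_moment(forces, positions):
    """Sum of r_i x F_i about the origin of the given coordinates."""
    return sum(np.cross(r, F) for r, F in zip(positions, forces))

# a couple: equal and opposite forces on distinct lines of action
F = np.array([0.0, 0.0, 3.0])
forces = [F, -F]
positions = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]

M = total_moment(forces, positions)
for _ in range(5):
    r = rng.normal(size=3)                 # move the reference point by r
    shifted = [p + r for p in positions]
    assert np.allclose(total_moment(forces, shifted), M)  # free vector

print("couple moment:", M)                 # magnitude F d = 3 * 2 here
```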
The general theorems are: A single force acting at any point O′ of a rigid body can be replaced by an equal and parallel force F acting at any given point O and a couple with forces parallel to F whose moment is M = Fd, d being the separation of O and O′. Conversely, a couple and a force in the plane of the couple can be replaced by a single force, appropriately located. Any couple can be replaced by another in the same plane of the same direction and moment, having any desired force or any desired arm. == Applications == Couples are very important in engineering and the physical sciences. A few examples are: The forces exerted by one's hand on a screw-driver The forces exerted by the tip of a screwdriver on the head of a screw Drag forces acting on a spinning propeller Forces on an electric dipole in a uniform electric field The reaction control system on a spacecraft Force exerted by hands on steering wheel 'Rocking couples' are a regular imbalance giving rise to vibration == See also == Traction (engineering) Torque Moment (physics) Force == References == H.F. Girvin (1938) Applied Mechanics, §28 Couples, pp 33,4, Scranton Pennsylvania: International Textbook Company.
Wikipedia/Couple_(mechanics)
Inverse dynamics is an inverse problem. It commonly refers to either inverse rigid body dynamics or inverse structural dynamics. Inverse rigid-body dynamics is a method for computing forces and/or moments of force (torques) based on the kinematics (motion) of a body and the body's inertial properties (mass and moment of inertia). Typically it uses link-segment models to represent the mechanical behaviour of interconnected segments, such as the limbs of humans or animals or the joint extensions of robots, where given the kinematics of the various parts, inverse dynamics derives the minimum forces and moments responsible for the individual movements. In practice, inverse dynamics computes these internal moments and forces from measurements of the motion of limbs and external forces such as ground reaction forces, under a special set of assumptions. == Applications == The fields of robotics and biomechanics constitute the major application areas for inverse dynamics. Within robotics, inverse dynamics algorithms are used to calculate the torques that a robot's motors must deliver to make the robot's end-point move in the way prescribed by its current task. The "inverse dynamics problem" for robotics was solved by Eduardo Bayo in 1987. This solution calculates how each of the numerous electric motors that control a robot arm must move to produce a particular action. Humans can perform very complicated and precise movements, such as controlling the tip of a fishing rod well enough to cast the bait accurately. Before the arm moves, the brain calculates the necessary movement of each muscle involved and tells the muscles what to do as the arm swings. In the case of a robot arm, the "muscles" are the electric motors which must turn by a given amount at a given moment. Each motor must be supplied with just the right amount of electric current, at just the right time. Researchers can predict the motion of a robot arm if they know how the motors will move. This is known as the forward dynamics problem. Until this discovery, they had not been able to work backwards to calculate the movements of the motors required to generate a particular complicated motion. Bayo's work began with the application of frequency-domain methods to the inverse dynamics of single-link flexible robots. This approach yielded non-causal exact solutions due to the right-half plane zeros in the hub-torque-to-tip transfer functions. Extending this method to the nonlinear multi-flexible-link case was of particular importance to robotics. When combined with passive joint control in a collaborative effort with a control group, Bayo's inverse dynamics approach led to exponentially stable tip-tracking control for flexible multi-link robots. Similarly, inverse dynamics in biomechanics computes the net turning effect of all the anatomical structures across a joint, in particular the muscles and ligaments, necessary to produce the observed motions of the joint. These moments of force may then be used to compute the amount of mechanical work performed by each moment of force. Each moment of force can perform positive work to increase the speed and/or height of the body or perform negative work to decrease the speed and/or height of the body. The equations of motion necessary for these computations are based on Newtonian mechanics, specifically the Newton–Euler equations of: Force equals mass times linear acceleration, and Moment equals mass moment of inertia times angular acceleration. 
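Those two relations already give a working inverse-dynamics computation for the simplest possible link-segment model. The sketch below is our own minimal example rather than anything from the sources cited here: a uniform rod pivoted at one end, with the joint torque recovered from a sampled angular trajectory by differentiating it twice.

```python
import numpy as np

# uniform rod of mass m and length L, pivoted at one end (planar, gravity g);
# theta is measured from the downward vertical
m, L, g = 1.0, 0.5, 9.81
I_pivot = m * L**2 / 3.0                    # moment of inertia about the pivot

def joint_torque(theta, t):
    """Inverse dynamics: tau = I * alpha + gravitational moment, with the
    angular acceleration estimated numerically from the sampled trajectory."""
    dt = t[1] - t[0]
    alpha = np.gradient(np.gradient(theta, dt), dt)
    return I_pivot * alpha + m * g * (L / 2.0) * np.sin(theta)

t = np.linspace(0.0, 2.0, 2001)
theta = 0.3 * np.sin(2.0 * np.pi * t)       # prescribed joint angle
tau = joint_torque(theta, t)
print("peak joint torque (N m):", np.abs(tau).max())
```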
These equations mathematically model the behavior of a limb in terms of a knowledge domain-independent, link-segment model, such as idealized solids of revolution or a skeleton with fixed-length limbs and perfect pivot joints. From these equations, inverse dynamics derives the torque (moment) level at each joint based on the movement of the attached limb or limbs affected by the joint. This process used to derive the joint moments is known as inverse dynamics because it reverses the forward dynamics equations of motion, the set of differential equations which yield the position and angle trajectories of the idealized skeleton's limbs from the accelerations and forces applied. From joint moments, a biomechanist could infer muscle forces that would lead to those moments based on a model of bone and muscle attachments, etc., thereby estimating muscle activation from kinematic motion. Correctly computing force (or moment) values from inverse dynamics can be challenging because external forces (e.g., ground contact forces) affect motion but are not directly observable from the kinematic motion. In addition, co-activation of muscles can lead to a family of solutions which are not distinguishable from the kinematic motion's characteristics. Furthermore, closed kinematic chains, such as swinging a bat or shooting a hockey puck, require the measurement of internal forces (in the bat or stick) be made before shoulder, elbow or wrist moments and forces can be derived. == See also == Kinematics Inverse kinematics: a problem similar to Inverse dynamics but with different goals and starting assumptions. While inverse dynamics asks for torques that produce a certain time-trajectory of positions and velocities, inverse kinematics only asks for a static set of joint angles such that a certain point (or a set of points) of the character (or robot) is positioned at a certain designated location. It is used in synthesizing the appearance of human motion, particularly in the field of video game design. Another use is in robotics, where joint angles of an arm must be calculated from the desired position of the end effector. Body segment parameters == References == Kirtley, C.; Whittle, M.W; Jefferson, RJ (1985). "Influence of Walking Speed on Gait Parameters". Journal of Biomedical Engineering. 7 (4): 282–8. doi:10.1016/0141-5425(85)90055-X. PMID 4057987. Jensen RK (1989). "Changes in segment inertia proportions between four and twenty years". Journal of Biomechanics. 22 (6–7): 529–36. doi:10.1016/0021-9290(89)90004-3. PMID 2808438. == External links == Inverse dynamics Chris Kirtley's research roundup and tutorials on biomechanical aspects of human gait.
Wikipedia/Inverse_dynamics
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. It therefore forms a branch of operator theory and functional analysis. The theory is named in honour of Erik Ivar Fredholm. == Fredholm equation of the first kind == Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given: g ( x ) = ∫ a b K ( x , y ) f ( y ) d y . {\displaystyle g(x)=\int _{a}^{b}K(x,y)f(y)\,dy.} This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation L g ( x ) = f ( x ) {\displaystyle Lg(x)=f(x)} where the function f is given and g is unknown. Here, L stands for a linear differential operator. For example, one might take L to be an elliptic operator, such as L = d 2 d x 2 {\displaystyle L={\frac {d^{2}}{dx^{2}}}\,} in which case the equation to be solved becomes the Poisson equation. A general method of solving such equations is by means of Green's functions, namely, rather than a direct attack, one first finds the function K = K ( x , y ) {\displaystyle K=K(x,y)} such that for a given pair x,y, L K ( x , y ) = δ ( x − y ) , {\displaystyle LK(x,y)=\delta (x-y),} where δ(x) is the Dirac delta function. The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation, g ( x ) = ∫ K ( x , y ) f ( y ) d y . {\displaystyle g(x)=\int K(x,y)f(y)\,dy.} The function K(x,y) is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises. In the general theory, x and y may be points on any manifold; the real number line or m-dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, the space of square-integrable functions is studied, and Sobolev spaces appear often. The actual function space used is often determined by the solutions of the eigenvalue problem of the differential operator; that is, by the solutions to L ψ n ( x ) = ω n ψ n ( x ) {\displaystyle L\psi _{n}(x)=\omega _{n}\psi _{n}(x)} where the ωn are the eigenvalues, and the ψn(x) are the eigenvectors. The set of eigenvectors span a Banach space, and, when there is a natural inner product, then the eigenvectors span a Hilbert space, at which point the Riesz representation theorem is applied. Examples of such spaces are the orthogonal polynomials that occur as the solutions to a class of second-order ordinary differential equations. Given a Hilbert space as above, the kernel may be written in the form K ( x , y ) = ∑ n ψ n ( x ) ψ n ( y ) ω n . {\displaystyle K(x,y)=\sum _{n}{\frac {\psi _{n}(x)\psi _{n}(y)}{\omega _{n}}}.} In this form, the object K(x,y) is often called the Fredholm operator or the Fredholm kernel. That this is the same kernel as before follows from the completeness of the basis of the Hilbert space, namely, that one has δ ( x − y ) = ∑ n ψ n ( x ) ψ n ( y ) . 
{\displaystyle \delta (x-y)=\sum _{n}\psi _{n}(x)\psi _{n}(y).} Since the ωn are generally increasing, the resulting eigenvalues of the operator K(x,y) are thus seen to be decreasing towards zero. == Inhomogeneous equations == The inhomogeneous Fredholm integral equation f ( x ) = − ω φ ( x ) + ∫ K ( x , y ) φ ( y ) d y {\displaystyle f(x)=-\omega \varphi (x)+\int K(x,y)\varphi (y)\,dy} may be written formally as f = ( K − ω ) φ {\displaystyle f=(K-\omega )\varphi } which has the formal solution φ = 1 K − ω f . {\displaystyle \varphi ={\frac {1}{K-\omega }}f.} A solution of this form is referred to as the resolvent formalism, where the resolvent is defined as the operator R ( ω ) = 1 K − ω I . {\displaystyle R(\omega )={\frac {1}{K-\omega I}}.} Given the collection of eigenvectors and eigenvalues of K, the resolvent may be given a concrete form as R ( ω ; x , y ) = ∑ n ψ n ( y ) ψ n ( x ) ω n − ω {\displaystyle R(\omega ;x,y)=\sum _{n}{\frac {\psi _{n}(y)\psi _{n}(x)}{\omega _{n}-\omega }}} with the solution being φ ( x ) = ∫ R ( ω ; x , y ) f ( y ) d y . {\displaystyle \varphi (x)=\int R(\omega ;x,y)f(y)\,dy.} A necessary and sufficient condition for such a solution to exist is one of Fredholm's theorems. The resolvent is commonly expanded in powers of λ = 1 / ω {\displaystyle \lambda =1/\omega } , in which case it is known as the Liouville-Neumann series. In this case, the integral equation is written as g ( x ) = φ ( x ) − λ ∫ K ( x , y ) φ ( y ) d y {\displaystyle g(x)=\varphi (x)-\lambda \int K(x,y)\varphi (y)\,dy} and the resolvent is written in the alternate form as R ( λ ) = 1 I − λ K . {\displaystyle R(\lambda )={\frac {1}{I-\lambda K}}.} == Fredholm determinant == The Fredholm determinant is commonly defined as det ( I − λ K ) = exp ⁡ [ − ∑ n λ n n Tr K n ] {\displaystyle \det(I-\lambda K)=\exp \left[-\sum _{n}{\frac {\lambda ^{n}}{n}}\operatorname {Tr} \,K^{n}\right]} where Tr K = ∫ K ( x , x ) d x {\displaystyle \operatorname {Tr} \,K=\int K(x,x)\,dx} and Tr K 2 = ∬ K ( x , y ) K ( y , x ) d x d y {\displaystyle \operatorname {Tr} \,K^{2}=\iint K(x,y)K(y,x)\,dx\,dy} and so on. The corresponding zeta function is ζ ( s ) = 1 det ( I − s K ) . {\displaystyle \zeta (s)={\frac {1}{\det(I-sK)}}.} The zeta function can be thought of as the determinant of the resolvent. The zeta function plays an important role in studying dynamical systems. Note that this is the same general type of zeta function as the Riemann zeta function; however, in this case, the corresponding kernel is not known. The existence of such a kernel is known as the Hilbert–Pólya conjecture. == Main results == The classical results of the theory are Fredholm's theorems, one of which is the Fredholm alternative. One of the important results from the general theory is that the kernel is a compact operator when the space of functions are equicontinuous. A related celebrated result is the Atiyah–Singer index theorem, pertaining to index (dim ker – dim coker) of elliptic operators on compact manifolds. == History == Fredholm's 1903 paper in Acta Mathematica is considered to be one of the major landmarks in the establishment of operator theory. David Hilbert developed the abstraction of Hilbert space in association with research on integral equations prompted by Fredholm's (amongst other things). == See also == Green's functions Spectral theory Fredholm alternative == References == Fredholm, E. I. (1903). "Sur une classe d'equations fonctionnelles" (PDF). Acta Mathematica. 27: 365–390. doi:10.1007/bf02421317. Edmunds, D. 
E.; Evans, W. D. (1987). Spectral Theory and Differential Operators. Oxford University Press. ISBN 0-19-853542-2. B. V. Khvedelidze, G. L. Litvinov (2001) [1994], "Fredholm kernel", Encyclopedia of Mathematics, EMS Press Driver, Bruce K. "Compact and Fredholm Operators and the Spectral Theorem" (PDF). Analysis Tools with Applications. pp. 579–600. Mathews, Jon; Walker, Robert L. (1970). Mathematical Methods of Physics (2nd ed.). New York: W. A. Benjamin. ISBN 0-8053-7002-1. McOwen, Robert C. (1980). "Fredholm theory of partial differential equations on complete Riemannian manifolds". Pacific J. Math. 87 (1): 169–185. doi:10.2140/pjm.1980.87.169. Zbl 0457.35084.
Wikipedia/Fredholm_theory
Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) holding times that have finite expectation. A renewal-reward process additionally has a random sequence of rewards incurred at each holding time, which are IID but need not be independent of the holding times. A renewal process has asymptotic properties analogous to the strong law of large numbers and central limit theorem. The renewal function m ( t ) {\displaystyle m(t)} (expected number of arrivals) and reward function g ( t ) {\displaystyle g(t)} (expected reward value) are of key importance in renewal theory. The renewal function satisfies a recursive integral equation, the renewal equation. The key renewal equation gives the limiting value of the convolution of m ′ ( t ) {\displaystyle m'(t)} with a suitable non-negative function. The superposition of renewal processes can be studied as a special case of Markov renewal processes. Applications include calculating the best strategy for replacing worn-out machinery in a factory; comparing the long-term benefits of different insurance policies; and modelling the transmission of infectious disease, where "One of the most widely adopted means of inference of the reproduction number is via the renewal equation". The inspection paradox relates to the fact that observing a renewal interval at time t gives an interval with average value larger than that of an average renewal interval. == Renewal processes == === Introduction === The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent exponentially distributed holding times at each integer i {\displaystyle i} before advancing to the next integer, i + 1 {\displaystyle i+1} . In a renewal process, the holding times need not have an exponential distribution; rather, the holding times may have any distribution on the positive numbers, so long as the holding times are independent and identically distributed (IID) and have finite mean. === Formal definition === Let ( S i ) i ≥ 1 {\displaystyle (S_{i})_{i\geq 1}} be a sequence of positive independent identically distributed random variables with finite expected value 0 < E ⁡ [ S i ] < ∞ . {\displaystyle 0<\operatorname {E} [S_{i}]<\infty .} We refer to the random variable S i {\displaystyle S_{i}} as the " i {\displaystyle i} -th holding time". Define for each n > 0 : J n = ∑ i = 1 n S i , {\displaystyle J_{n}=\sum _{i=1}^{n}S_{i},} each J n {\displaystyle J_{n}} is referred to as the " n {\displaystyle n} -th jump time" and the intervals [ J n , J n + 1 ] {\displaystyle [J_{n},J_{n+1}]} are called "renewal intervals". 
Then ( X t ) t ≥ 0 {\displaystyle (X_{t})_{t\geq 0}} is given by random variable X t = ∑ n = 1 ∞ I { J n ≤ t } = sup { n : J n ≤ t } {\displaystyle X_{t}=\sum _{n=1}^{\infty }\operatorname {\mathbb {I} } _{\{J_{n}\leq t\}}=\sup \left\{\,n:J_{n}\leq t\,\right\}} where I { J n ≤ t } {\displaystyle \operatorname {\mathbb {I} } _{\{J_{n}\leq t\}}} is the indicator function I { J n ≤ t } = { 1 , if J n ≤ t 0 , otherwise {\displaystyle \operatorname {\mathbb {I} } _{\{J_{n}\leq t\}}={\begin{cases}1,&{\text{if }}J_{n}\leq t\\0,&{\text{otherwise}}\end{cases}}} ( X t ) t ≥ 0 {\displaystyle (X_{t})_{t\geq 0}} represents the number of jumps that have occurred by time t, and is called a renewal process. === Interpretation === If one considers events occurring at random times, one may choose to think of the holding times { S i : i ≥ 1 } {\displaystyle \{S_{i}:i\geq 1\}} as the random time elapsed between two consecutive events. For example, if the renewal process is modelling the numbers of breakdown of different machines, then the holding time represents the time between one machine breaking down before another one does. The Poisson process is the unique renewal process with the Markov property, as the exponential distribution is the unique continuous random variable with the property of memorylessness. == Renewal-reward processes == Let W 1 , W 2 , … {\displaystyle W_{1},W_{2},\ldots } be a sequence of IID random variables (rewards) satisfying E ⁡ | W i | < ∞ . {\displaystyle \operatorname {E} |W_{i}|<\infty .\,} Then the random variable Y t = ∑ i = 1 X t W i {\displaystyle Y_{t}=\sum _{i=1}^{X_{t}}W_{i}} is called a renewal-reward process. Note that unlike the S i {\displaystyle S_{i}} , each W i {\displaystyle W_{i}} may take negative values as well as positive values. The random variable Y t {\displaystyle Y_{t}} depends on two sequences: the holding times S 1 , S 2 , … {\displaystyle S_{1},S_{2},\ldots } and the rewards W 1 , W 2 , … {\displaystyle W_{1},W_{2},\ldots } These two sequences need not be independent. In particular, W i {\displaystyle W_{i}} may be a function of S i {\displaystyle S_{i}} . === Interpretation === In the context of the above interpretation of the holding times as the time between successive malfunctions of a machine, the "rewards" W 1 , W 2 , … {\displaystyle W_{1},W_{2},\ldots } (which in this case happen to be negative) may be viewed as the successive repair costs incurred as a result of the successive malfunctions. An alternative analogy is that we have a magic goose which lays eggs at intervals (holding times) distributed as S i {\displaystyle S_{i}} . Sometimes it lays golden eggs of random weight, and sometimes it lays toxic eggs (also of random weight) which require responsible (and costly) disposal. The "rewards" W i {\displaystyle W_{i}} are the successive (random) financial losses/gains resulting from successive eggs (i = 1,2,3,...) and Y t {\displaystyle Y_{t}} records the total financial "reward" at time t. == Renewal function == We define the renewal function as the expected value of the number of jumps observed up to some time t {\displaystyle t} : m ( t ) = E ⁡ [ X t ] . {\displaystyle m(t)=\operatorname {E} [X_{t}].\,} === Elementary renewal theorem === The renewal function satisfies lim t → ∞ 1 t m ( t ) = 1 E ⁡ [ S 1 ] . {\displaystyle \lim _{t\to \infty }{\frac {1}{t}}m(t)={\frac {1}{\operatorname {E} [S_{1}]}}.} === Elementary renewal theorem for renewal reward processes === We define the reward function: g ( t ) = E ⁡ [ Y t ] . 
{\displaystyle g(t)=\operatorname {E} [Y_{t}].\,} The reward function satisfies lim t → ∞ 1 t g ( t ) = E ⁡ [ W 1 ] E ⁡ [ S 1 ] . {\displaystyle \lim _{t\to \infty }{\frac {1}{t}}g(t)={\frac {\operatorname {E} [W_{1}]}{\operatorname {E} [S_{1}]}}.} === Renewal equation === The renewal function satisfies m ( t ) = F S ( t ) + ∫ 0 t m ( t − s ) f S ( s ) d s {\displaystyle m(t)=F_{S}(t)+\int _{0}^{t}m(t-s)f_{S}(s)\,ds} where F S {\displaystyle F_{S}} is the cumulative distribution function of S 1 {\displaystyle S_{1}} and f S {\displaystyle f_{S}} is the corresponding probability density function. == Key renewal theorem == Let X be a renewal process with renewal function m ( t ) {\displaystyle m(t)} and interrenewal mean μ {\displaystyle \mu } . Let g : [ 0 , ∞ ) → [ 0 , ∞ ) {\displaystyle g:[0,\infty )\rightarrow [0,\infty )} be a function satisfying: ∫ 0 ∞ g ( t ) d t < ∞ {\displaystyle \int _{0}^{\infty }g(t)\,dt<\infty } g is monotone and non-increasing The key renewal theorem states that, as t → ∞ {\displaystyle t\rightarrow \infty } : ∫ 0 t g ( t − x ) m ′ ( x ) d x → 1 μ ∫ 0 ∞ g ( x ) d x {\displaystyle \int _{0}^{t}g(t-x)m'(x)\,dx\rightarrow {\frac {1}{\mu }}\int _{0}^{\infty }g(x)\,dx} === Renewal theorem === Considering g ( x ) = I [ 0 , h ] ( x ) {\displaystyle g(x)=\mathbb {I} _{[0,h]}(x)} for any h > 0 {\displaystyle h>0} gives as a special case the renewal theorem: m ( t + h ) − m ( t ) → h μ {\displaystyle m(t+h)-m(t)\rightarrow {\frac {h}{\mu }}} as t → ∞ {\displaystyle t\rightarrow \infty } The result can be proved using integral equations or by a coupling argument. Though a special case of the key renewal theorem, it can be used to deduce the full theorem, by considering step functions and then increasing sequences of step functions. == Asymptotic properties == Renewal processes and renewal-reward processes have properties analogous to the strong law of large numbers, which can be derived from the same theorem. If ( X t ) t ≥ 0 {\displaystyle (X_{t})_{t\geq 0}} is a renewal process and ( Y t ) t ≥ 0 {\displaystyle (Y_{t})_{t\geq 0}} is a renewal-reward process then: lim t → ∞ 1 t X t = 1 E ⁡ [ S 1 ] {\displaystyle \lim _{t\to \infty }{\frac {1}{t}}X_{t}={\frac {1}{\operatorname {E} [S_{1}]}}} lim t → ∞ 1 t Y t = 1 E ⁡ [ S 1 ] E ⁡ [ W 1 ] {\displaystyle \lim _{t\to \infty }{\frac {1}{t}}Y_{t}={\frac {1}{\operatorname {E} [S_{1}]}}\operatorname {E} [W_{1}]} almost surely. Renewal processes additionally have a property analogous to the central limit theorem: X t − t / μ t σ 2 / μ 3 → N ( 0 , 1 ) {\displaystyle {\frac {X_{t}-t/\mu }{\sqrt {t\sigma ^{2}/\mu ^{3}}}}\to {\mathcal {N}}(0,1)} == Inspection paradox == A curious feature of renewal processes is that if we wait some predetermined time t and then observe how large the renewal interval containing t is, we should expect it to be typically larger than a renewal interval of average size. Mathematically the inspection paradox states: for any t > 0 the renewal interval containing t is stochastically larger than the first renewal interval. That is, for all x > 0 and for all t > 0: P ⁡ ( S X t + 1 > x ) ≥ P ⁡ ( S 1 > x ) = 1 − F S ( x ) {\displaystyle \operatorname {P} (S_{X_{t}+1}>x)\geq \operatorname {P} (S_{1}>x)=1-F_{S}(x)} where FS is the cumulative distribution function of the IID holding times Si. A vivid example is the bus waiting time paradox: For a given random distribution of bus arrivals, the average rider at a bus stop observes more delays than the average operator of the buses. 
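A short simulation makes the effect concrete. The sketch below uses assumed parameters of our own choosing: holding times uniform on (0, 2), so E[S] = 1, inspected at t = 100; for this distribution the size-biased mean is E[S²]/E[S] = 4/3, and the sample average of the covering interval lands near that value rather than near 1.

```python
import numpy as np

rng = np.random.default_rng(1)
t_inspect, runs = 100.0, 4000
covering = []

for _ in range(runs):
    total, s = 0.0, 0.0
    while total <= t_inspect:       # run the renewal process past t
        s = rng.uniform(0.0, 2.0)   # uniform(0, 2) holding time, mean 1
        total += s
    covering.append(s)              # the renewal interval containing t

print("E[S] = 1, mean covering interval ~", np.mean(covering))  # about 4/3
```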
The resolution of the paradox is that our sampled distribution at time t is size-biased (see sampling bias), in that the likelihood that an interval is chosen is proportional to its size. However, a renewal interval of average size is not size-biased. == Superposition == Unless the renewal process is a Poisson process, the superposition (sum) of two independent renewal processes is not a renewal process. However, such processes can be described within a larger class of processes called the Markov-renewal processes. The cumulative distribution function of the first inter-event time in the superposition process is given by R ( t ) = 1 − ∑ k = 1 K α k ∑ l = 1 K α l ( 1 − R k ( t ) ) ∏ j = 1 , j ≠ k K α j ∫ t ∞ ( 1 − R j ( u ) ) d u {\displaystyle R(t)=1-\sum _{k=1}^{K}{\frac {\alpha _{k}}{\sum _{l=1}^{K}\alpha _{l}}}(1-R_{k}(t))\prod _{j=1,j\neq k}^{K}\alpha _{j}\int _{t}^{\infty }(1-R_{j}(u))\,{\text{d}}u} where Rk(t) and αk > 0 are the CDF of the inter-event times and the arrival rate of process k. == Example application == Eric the entrepreneur has n machines, each having an operational lifetime uniformly distributed between zero and two years. Eric may let each machine run until it fails with replacement cost €2600; alternatively he may replace a machine at any time while it is still functional at a cost of €200. What is his optimal replacement policy? == See also == == Notes == == References == Cox, David (1970). Renewal Theory. London: Methuen & Co. p. 142. ISBN 0-412-20570-X. Doob, J. L. (1948). "Renewal Theory From the Point of View of the Theory of Probability" (PDF). Transactions of the American Mathematical Society. 63 (3): 422–438. doi:10.2307/1990567. JSTOR 1990567. Feller, William (1971). An Introduction to Probability Theory and Its Applications. Vol. 2 (second ed.). Wiley. Grimmett, G. R.; Stirzaker, D. R. (1992). Probability and Random Processes (second ed.). Oxford University Press. ISBN 0198572220. Smith, Walter L. (1958). "Renewal Theory and Its Ramifications". Journal of the Royal Statistical Society, Series B. 20 (2): 243–302. doi:10.1111/j.2517-6161.1958.tb00294.x. JSTOR 2983891. Wang, Wanli; Schulz, Johannes H. P.; Deng, Weihua; Barkai, Eli (2018). "Renewal theory with fat-tailed distributed sojourn times: Typical versus rare". Phys. Rev. E. 98 (4): 042139. arXiv:1809.05856. Bibcode:2018PhRvE..98d2139W. doi:10.1103/PhysRevE.98.042139. S2CID 54727926.
Wikipedia/Renewal_theory
In mathematics, the Fredholm integral equation is an integral equation whose solution gives rise to Fredholm theory, the study of Fredholm kernels and Fredholm operators. The integral equation was studied by Ivar Fredholm. A useful method to solve such equations, the Adomian decomposition method, is due to George Adomian. == Equation of the first kind == A Fredholm equation is an integral equation in which the term containing the kernel function (defined below) has constants as integration limits. A closely related form is the Volterra integral equation, which has variable integral limits. An inhomogeneous Fredholm equation of the first kind is written as g ( t ) = ∫ a b K ( t , s ) f ( s ) d s , {\displaystyle g(t)=\int _{a}^{b}K(t,s)\,f(s)\,ds,} and the problem is, given the continuous kernel function K {\displaystyle K} and the function g {\displaystyle g} , to find the function f {\displaystyle f} . An important case of these types of equation is the case when the kernel is a function only of the difference of its arguments, namely K ( t , s ) = K ( t − s ) {\displaystyle K(t,s)=K(t-s)} , and the limits of integration are ±∞; then the right hand side of the equation can be rewritten as a convolution of the functions K {\displaystyle K} and f {\displaystyle f} and therefore, formally, the solution is given by f ( s ) = F ω − 1 [ F t [ g ( t ) ] ( ω ) F t [ K ( t ) ] ( ω ) ] = ∫ − ∞ ∞ F t [ g ( t ) ] ( ω ) F t [ K ( t ) ] ( ω ) e 2 π i ω s d ω {\displaystyle f(s)={\mathcal {F}}_{\omega }^{-1}\left[{{\mathcal {F}}_{t}[g(t)](\omega ) \over {\mathcal {F}}_{t}[K(t)](\omega )}\right]=\int _{-\infty }^{\infty }{{\mathcal {F}}_{t}[g(t)](\omega ) \over {\mathcal {F}}_{t}[K(t)](\omega )}e^{2\pi i\omega s}\mathrm {d} \omega } where F t {\displaystyle {\mathcal {F}}_{t}} and F ω − 1 {\displaystyle {\mathcal {F}}_{\omega }^{-1}} are the direct and inverse Fourier transforms, respectively. This case would not typically be included under the umbrella of Fredholm integral equations, a name that is usually reserved for when the integral operator defines a compact operator (convolution operators on non-compact groups are non-compact, since, in general, the spectrum of the operator of convolution with K {\displaystyle K} contains the range of F K {\displaystyle {\mathcal {F}}{K}} , which is usually a non-countable set, whereas compact operators have discrete countable spectra). == Equation of the second kind == An inhomogeneous Fredholm equation of the second kind is given as φ ( t ) = f ( t ) + λ ∫ a b K ( t , s ) φ ( s ) d s . {\displaystyle \varphi (t)=f(t)+\lambda \int _{a}^{b}K(t,s)\,\varphi (s)\,ds.} Given the kernel K ( t , s ) {\displaystyle K(t,s)} , and the function f ( t ) {\displaystyle f(t)} , the problem is typically to find the function φ ( t ) {\displaystyle \varphi (t)} . A standard approach to solving this is to use iteration, amounting to the resolvent formalism; written as a series, the solution is known as the Liouville–Neumann series. == General theory == The general theory underlying the Fredholm equations is known as Fredholm theory. One of the principal results is that the kernel K yields a compact operator. Compactness may be shown by invoking equicontinuity, and more specifically the theorem of Arzelà-Ascoli. As an operator, it has a spectral theory that can be understood in terms of a discrete spectrum of eigenvalues that tend to 0. == Applications == Fredholm equations arise naturally in the theory of signal processing, for example as the famous spectral concentration problem popularized by David Slepian. The operators involved are the same as linear filters. They also commonly arise in linear forward modeling and inverse problems. 
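Second-kind equations are also amenable to direct numerical solution. One standard idea, replacing the integral by a quadrature rule so the equation becomes a dense linear system (the Nyström method), is sketched below with a test kernel and grid of our own choosing; nothing here is specific to the references in this article.

```python
import numpy as np

def solve_fredholm2(K, f, lam, a, b, n=200):
    """Nystrom solution of phi(t) = f(t) + lam * int_a^b K(t,s) phi(s) ds
    using the trapezoidal rule on n nodes."""
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5; w[-1] *= 0.5
    A = np.eye(n) - lam * K(t[:, None], t[None, :]) * w   # (I - lam K W) phi = f
    return t, np.linalg.solve(A, f(t))

# test with K(t,s) = t*s on [0,1]: since int_0^1 s*s ds = 1/3, the choice
# f(t) = t*(1 - lam/3) makes phi(t) = t the exact solution
lam = 0.5
t, phi = solve_fredholm2(lambda t, s: t * s,
                         lambda t: t * (1.0 - lam / 3.0), lam, 0.0, 1.0)
print("max error:", np.abs(phi - t).max())
```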
In physics, the solution of such integral equations allows for experimental spectra to be related to various underlying distributions, for instance the mass distribution of polymers in a polymeric melt, or the distribution of relaxation times in the system. In addition, Fredholm integral equations also arise in fluid mechanics problems involving hydrodynamic interactions near finite-sized elastic interfaces. A specific application of Fredholm equation is the generation of photo-realistic images in computer graphics, in which the Fredholm equation is used to model light transport from the virtual light sources to the image plane. The Fredholm equation is often called the rendering equation in this context. == See also == Liouville–Neumann series Volterra integral equation Fredholm alternative == References == == Further reading == Integral Equations at EqWorld: The World of Mathematical Equations. A.D. Polyanin and A.V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4 Khvedelidze, B.V.; Litvinov, G.L. (2001) [1994], "Fredholm kernel", Encyclopedia of Mathematics, EMS Press Simons, F. J.; Wieczorek, M. A.; Dahlen, F. A. (2006). "Spatiospectral concentration on a sphere". SIAM Review. 48 (3): 504–536. arXiv:math/0408424. Bibcode:2006SIAMR..48..504S. doi:10.1137/S0036144504445765. Slepian, D. (1983). "Some comments on Fourier Analysis, uncertainty and modeling". SIAM Review. 25 (3): 379–393. doi:10.1137/1025078. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 19.1. Fredholm Equations of the Second Kind". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17. Mathews, Jon; Walker, Robert L. (1970), Mathematical methods of physics (2nd ed.), New York: W. A. Benjamin, ISBN 0-8053-7002-1 == External links == IntEQ: a Python package for numerically solving Fredholm integral equations
Wikipedia/Fredholm_integral_equation
In mathematical physics, more specifically the one-dimensional inverse scattering problem, the Marchenko equation (or Gelfand-Levitan-Marchenko equation or GLM equation), named after Israel Gelfand, Boris Levitan and Vladimir Marchenko, is derived by computing the Fourier transform of the scattering relation: K ( r , r ′ ) + g ( r , r ′ ) + ∫ r ∞ K ( r , r ′ ′ ) g ( r ′ ′ , r ′ ) d r ′ ′ = 0 {\displaystyle K(r,r^{\prime })+g(r,r^{\prime })+\int _{r}^{\infty }K(r,r^{\prime \prime })g(r^{\prime \prime },r^{\prime })\mathrm {d} r^{\prime \prime }=0} Where g ( r , r ′ ) {\displaystyle g(r,r^{\prime })\,} is a symmetric kernel, such that g ( r , r ′ ) = g ( r ′ , r ) , {\displaystyle g(r,r^{\prime })=g(r^{\prime },r),\,} which is computed from the scattering data. Solving the Marchenko equation, one obtains the kernel of the transformation operator K ( r , r ′ ) {\displaystyle K(r,r^{\prime })} from which the potential can be read off. This equation is derived from the Gelfand–Levitan integral equation, using the Povzner–Levitan representation. == Application to scattering theory == Suppose that for a potential u ( x ) {\displaystyle u(x)} for the Schrödinger operator L = − d 2 d x 2 + u ( x ) {\displaystyle L=-{\frac {d^{2}}{dx^{2}}}+u(x)} , one has the scattering data ( r ( k ) , { χ 1 , ⋯ , χ N } ) {\displaystyle (r(k),\{\chi _{1},\cdots ,\chi _{N}\})} , where r ( k ) {\displaystyle r(k)} are the reflection coefficients from continuous scattering, given as a function r : R → C {\displaystyle r:\mathbb {R} \rightarrow \mathbb {C} } , and the real parameters χ 1 , ⋯ , χ N > 0 {\displaystyle \chi _{1},\cdots ,\chi _{N}>0} are from the discrete bound spectrum. Then defining F ( x ) = ∑ n = 1 N β n e − χ n x + 1 2 π ∫ R r ( k ) e i k x d k , {\displaystyle F(x)=\sum _{n=1}^{N}\beta _{n}e^{-\chi _{n}x}+{\frac {1}{2\pi }}\int _{\mathbb {R} }r(k)e^{ikx}dk,} where the β n {\displaystyle \beta _{n}} are non-zero constants, solving the GLM equation K ( x , y ) + F ( x + y ) + ∫ x ∞ K ( x , z ) F ( z + y ) d z = 0 {\displaystyle K(x,y)+F(x+y)+\int _{x}^{\infty }K(x,z)F(z+y)dz=0} for K {\displaystyle K} allows the potential to be recovered using the formula u ( x ) = − 2 d d x K ( x , x ) . {\displaystyle u(x)=-2{\frac {d}{dx}}K(x,x).} == See also == Lax pair == Notes == == References == Dunajski, Maciej (2009). Solitons, Instantons, and Twistors. Oxford; New York: OUP Oxford. ISBN 978-0-19-857063-9. OCLC 320199531. Marchenko, V. A. (2011). Sturm–Liouville Operators and Applications (2nd ed.). Providence: American Mathematical Society. ISBN 978-0-8218-5316-0. MR 2798059. Kay, Irvin W. (1955). The inverse scattering problem. New York: Courant Institute of Mathematical Sciences, New York University. OCLC 1046812324. Levinson, Norman (1953). "Certain Explicit Relationships between Phase Shift and Scattering Potential". Physical Review. 89 (4): 755–757. Bibcode:1953PhRv...89..755L. doi:10.1103/PhysRev.89.755. ISSN 0031-899X.
Wikipedia/Marchenko_equation
In mathematics, the inverse scattering transform is a method that solves the initial value problem for a nonlinear partial differential equation using mathematical methods related to wave scattering.: 4960  The direct scattering transform describes how a function scatters waves or generates bound-states.: 39–43  The inverse scattering transform uses wave scattering data to construct the function responsible for wave scattering.: 66–67  The direct and inverse scattering transforms are analogous to the direct and inverse Fourier transforms which are used to solve linear partial differential equations.: 66–67  Using a pair of differential operators, a 3-step algorithm may solve nonlinear differential equations; the initial solution is transformed to scattering data (direct scattering transform), the scattering data evolves forward in time (time evolution), and the scattering data reconstructs the solution forward in time (inverse scattering transform).: 66–67  This algorithm simplifies solving a nonlinear partial differential equation to solving 2 linear ordinary differential equations and an ordinary integral equation, a method ultimately leading to analytic solutions for many otherwise difficult to solve nonlinear partial differential equations.: 72  The inverse scattering problem is equivalent to a Riemann–Hilbert factorization problem, at least in the case of equations of one space dimension. This formulation can be generalized to differential operators of order greater than two and also to periodic problems. In higher space dimensions one has instead a "nonlocal" Riemann–Hilbert factorization problem (with convolution instead of multiplication) or a d-bar problem. == History == The inverse scattering transform arose from studying solitary waves. J.S. Russell described a "wave of translation" or "solitary wave" occurring in shallow water. First J.V. Boussinesq and later D. Korteweg and G. deVries discovered the Korteweg-deVries (KdV) equation, a nonlinear partial differential equation describing these waves. Later, N. Zabusky and M. Kruskal, using numerical methods for investigating the Fermi–Pasta–Ulam–Tsingou problem, found that solitary waves had the elastic properties of colliding particles; the waves' initial and ultimate amplitudes and velocities remained unchanged after wave collisions. These particle-like waves are called solitons and arise in nonlinear equations because of a weak balance between dispersive and nonlinear effects. Gardner, Greene, Kruskal and Miura introduced the inverse scattering transform for solving the Korteweg–de Vries equation. Lax, Ablowitz, Kaup, Newell, and Segur generalized this approach which led to solving other nonlinear equations including the nonlinear Schrödinger equation, sine-Gordon equation, modified Korteweg–De Vries equation, Kadomtsev–Petviashvili equation, the Ishimori equation, Toda lattice equation, and the Dym equation. This approach has also been applied to different types of nonlinear equations including differential-difference, partial difference, multidimensional equations and fractional integrable nonlinear systems. == Description == === Nonlinear partial differential equation === The independent variables are a spatial variable x {\displaystyle x} and a time variable t {\displaystyle t} . Subscripts or differential operators ( ∂ x , ∂ t {\textstyle \partial _{x},\partial _{t}} ) indicate differentiation. 
The function u ( x , t ) {\displaystyle u(x,t)} is a solution of a nonlinear partial differential equation, u t + N ( u ) = 0 {\textstyle u_{t}+N(u)=0} , with initial condition (value) u ( x , 0 ) {\textstyle u(x,0)} .: 72  === Requirements === The differential equation's solution meets the integrability and Faddeev conditions:: 40  Integrability condition: ∫ − ∞ ∞ | u ( x ) | d x < ∞ {\displaystyle \int _{-\infty }^{\infty }\ |u(x)|\ dx\ <\infty } Faddeev condition: ∫ − ∞ ∞ ( 1 + | x | ) | u ( x ) | d x < ∞ {\displaystyle \int _{-\infty }^{\infty }\ (1+|x|)|u(x)|\ dx\ <\infty } === Differential operator pair === The Lax differential operators, L {\textstyle L} and M {\textstyle M} , are linear ordinary differential operators with coefficients that may contain the function u ( x , t ) {\textstyle u(x,t)} or its derivatives. The self-adjoint operator L {\textstyle L} has a time derivative L t {\textstyle L_{t}} and generates an eigenvalue (spectral) equation with eigenfunctions ψ {\textstyle \psi } and time-constant eigenvalues (spectral parameters) λ {\textstyle \lambda } .: 4963 : 98  L ( ψ ) = λ ψ , {\displaystyle L(\psi )=\lambda \psi ,\ } and L t ( ψ ) = d e f ( L ( ψ ) ) t − L ( ψ t ) {\textstyle \ L_{t}(\psi ){\overset {def}{=}}(L(\psi ))_{t}-L(\psi _{t})} The operator M {\textstyle M} describes how the eigenfunctions evolve over time, and generates a new eigenfunction ψ ~ {\textstyle {\widetilde {\psi }}} of operator L {\textstyle L} from eigenfunction ψ {\textstyle \psi } of L {\textstyle L} .: 4963  ψ ~ = ψ t − M ( ψ ) {\displaystyle {\widetilde {\psi }}=\psi _{t}-M(\psi )\ } The Lax operators combine to form a multiplicative operator, not a differential operator, of the eigenfunctions ψ {\textstyle \psi } .: 4963  ( L t + L M − M L ) ψ = 0 {\displaystyle (L_{t}+LM-ML)\psi =0} The Lax operators are chosen to make the multiplicative operator equal to the nonlinear differential equation.: 4963  L t + L M − M L = u t + N ( u ) = 0 {\displaystyle L_{t}+LM-ML=u_{t}+N(u)=0} The AKNS differential operators, developed by Ablowitz, Kaup, Newell, and Segur, are an alternative to the Lax differential operators and achieve a similar result.: 4964  === Direct scattering transform === The direct scattering transform generates initial scattering data; this may include the reflection coefficients, transmission coefficient, eigenvalue data, and normalization constants of the eigenfunction solutions for this differential equation.: 39–48  L ( ψ ) = λ ψ {\displaystyle L(\psi )=\lambda \psi } === Scattering data time evolution === The equations describing how scattering data evolves over time occur as solutions to a first-order linear ordinary differential equation with respect to time. Using varying approaches, this first-order linear differential equation may arise from the linear differential operators (Lax pair, AKNS pair), a combination of the linear differential operators and the nonlinear differential equation, or through additional substitution, integration or differentiation operations. Spatially asymptotic equations ( x → ± ∞ {\textstyle x\to \pm \infty } ) simplify solving these differential equations.: 4967–4968 : 68–72  === Inverse scattering transform === The Marchenko equation combines the scattering data into a linear Fredholm integral equation.
The solution to this integral equation leads to the solution, u(x,t), of the nonlinear differential equation.: 48–57  == Example: Korteweg–De Vries equation == The nonlinear differential Korteweg–De Vries equation is : 4  u t − 6 u u x + u x x x = 0 {\displaystyle u_{t}-6uu_{x}+u_{xxx}=0} === Lax operators === The Lax operators are:: 97–102  L = − ∂ x 2 + u ( x , t ) {\displaystyle L=-\partial _{x}^{2}+u(x,t)\ } and M = − 4 ∂ x 3 + 6 u ∂ x + 3 u x {\textstyle \ M=-4\partial _{x}^{3}+6u\partial _{x}+3u_{x}} The multiplicative operator is: L t + L M − M L = u t − 6 u u x + u x x x = 0 {\displaystyle L_{t}+LM-ML=u_{t}-6uu_{x}+u_{xxx}=0} === Direct scattering transform === The solutions to this differential equation L ( ψ ) = − ψ x x + u ( x , 0 ) ψ = λ ψ {\textstyle L(\psi )=-\psi _{xx}+u(x,0)\psi =\lambda \psi } may include scattering solutions with a continuous range of eigenvalues (continuous spectrum) and bound-state solutions with discrete eigenvalues (discrete spectrum). The scattering data includes transmission coefficients T ( k , 0 ) {\textstyle T(k,0)} , left reflection coefficient R L ( k , 0 ) {\textstyle R_{L}(k,0)} , right reflection coefficient R R ( k , 0 ) {\textstyle R_{R}(k,0)} , discrete eigenvalues − κ 1 2 , … , − κ N 2 {\textstyle -\kappa _{1}^{2},\ldots ,-\kappa _{N}^{2}} , and left and right bound-state normalization (norming) constants.: 4960  c ( 0 ) L j = ( ∫ − ∞ ∞ ψ L 2 ( i k j , x , 0 ) d x ) − 1 / 2 j = 1 , … , N {\displaystyle c(0)_{Lj}=\left(\int _{-\infty }^{\infty }\ \psi _{L}^{2}(ik_{j},x,0)\ dx\right)^{-1/2}\ j=1,\dots ,N} c ( 0 ) R j = ( ∫ − ∞ ∞ ψ R 2 ( i k j , x , 0 ) d x ) − 1 / 2 j = 1 , … , N {\displaystyle c(0)_{Rj}=\left(\int _{-\infty }^{\infty }\ \psi _{R}^{2}(ik_{j},x,0)\ dx\right)^{-1/2}\ j=1,\dots ,N} === Scattering data time evolution === The spatially asymptotic left ψ L ( k , x , t ) {\textstyle \psi _{L}(k,x,t)} and right ψ R ( k , x , t ) {\textstyle \psi _{R}(k,x,t)} Jost functions simplify this step.: 4965–4966  ψ L ( x , k , t ) = e i k x + o ( 1 ) , x → + ∞ ψ L ( x , k , t ) = e i k x T ( k , t ) + R L ( k , t ) e − i k x T ( k , t ) + o ( 1 ) , x → − ∞ ψ R ( x , k , t ) = e − i k x T ( k , t ) + R R ( k , t ) e i k x T ( k , t ) + o ( 1 ) , x → + ∞ ψ R ( x , k , t ) = e − i k x + o ( 1 ) , x → − ∞ {\displaystyle {\begin{aligned}\psi _{L}(x,k,t)&=e^{ikx}+o(1),\ x\to +\infty \\\psi _{L}(x,k,t)&={\frac {e^{ikx}}{T(k,t)}}+{\frac {R_{L}(k,t)e^{-ikx}}{T(k,t)}}+o(1),\ x\to -\infty \\\psi _{R}(x,k,t)&={\frac {e^{-ikx}}{T(k,t)}}+{\frac {R_{R}(k,t)e^{ikx}}{T(k,t)}}+o(1),\ x\to +\infty \\\psi _{R}(x,k,t)&=e^{-ikx}+o(1),\ x\to -\infty \\\end{aligned}}} The dependency constants γ j ( t ) {\textstyle \gamma _{j}(t)} relate the right and left Jost functions and right and left normalization constants.: 4965–4966  γ j ( t ) = ψ L ( x , i κ j , t ) ψ R ( x , i κ j , t ) = ( − 1 ) N − j c R j ( t ) c L j ( t ) {\displaystyle \gamma _{j}(t)={\frac {\psi _{L}(x,i\kappa _{j},t)}{\psi _{R}(x,i\kappa _{j},t)}}=(-1)^{N-j}{\frac {c_{Rj}(t)}{c_{Lj}(t)}}} The Lax M {\textstyle M} differential operator generates an eigenfunction which can be expressed as a time-dependent linear combination of other eigenfunctions.: 4967  ∂ t ψ L ( k , x , t ) − M ψ L ( x , k , t ) = a L ( k , t ) ψ L ( x , k , t ) + b L ( k , t ) ψ R ( x , k , t ) {\displaystyle \partial _{t}\psi _{L}(k,x,t)-M\psi _{L}(x,k,t)=a_{L}(k,t)\psi _{L}(x,k,t)+b_{L}(k,t)\psi _{R}(x,k,t)} ∂ t ψ R ( k , x , t ) − M ψ R ( x , k , t ) = a R ( k , t ) ψ L ( x , k , t ) + b R ( k , t ) ψ R ( x , k , t ) 
{\displaystyle \partial _{t}\psi _{R}(k,x,t)-M\psi _{R}(x,k,t)=a_{R}(k,t)\psi _{L}(x,k,t)+b_{R}(k,t)\psi _{R}(x,k,t)} The solutions to these differential equations, determined using scattering and bound-state spatially asymptotic Jost functions, indicate a time-constant transmission coefficient T ( k , t ) {\textstyle T(k,t)} , but time-dependent reflection coefficients and normalization coefficients.: 4967–4968  R L ( k , t ) = R L ( k , 0 ) e − i 8 k 3 t R R ( k , t ) = R R ( k , 0 ) e + i 8 k 3 t c L j ( t ) = c L j ( 0 ) e + 4 κ j 3 t , j = 1 , … , N c R j ( t ) = c R j ( 0 ) e − 4 κ j 3 t , j = 1 , … , N {\displaystyle {\begin{aligned}R_{L}(k,t)&=R_{L}(k,0)e^{-i8k^{3}t}\\R_{R}(k,t)&=R_{R}(k,0)e^{+i8k^{3}t}\\c_{Lj}(t)&=c_{Lj}(0)e^{+4\kappa _{j}^{3}t},\ j=1,\ldots ,N\\c_{Rj}(t)&=c_{Rj}(0)e^{-4\kappa _{j}^{3}t},\ j=1,\ldots ,N\end{aligned}}} === Inverse scattering transform === The Marchenko kernel is F ( x , t ) {\textstyle F(x,t)} .: 4968–4969  F ( x , t ) = d e f 1 2 π ∫ − ∞ ∞ R R ( k , t ) e i k x d k + ∑ j = 1 N c ( t ) L j 2 e − κ j x {\displaystyle F(x,t){\overset {def}{=}}{\frac {1}{2\pi }}\int _{-\infty }^{\infty }R_{R}(k,t)e^{ikx}\ dk+\sum _{j=1}^{N}c(t)_{Lj}^{2}e^{-\kappa _{j}x}} The Marchenko integral equation is a linear integral equation solved for K ( x , y , t ) {\textstyle K(x,y,t)} .: 4968–4969  K ( x , z , t ) + F ( x + z , t ) + ∫ x ∞ K ( x , y , t ) F ( y + z , t ) d y = 0 {\displaystyle K(x,z,t)+F(x+z,t)+\int _{x}^{\infty }K(x,y,t)F(y+z,t)\ dy=0} The solution to the Marchenko equation, K ( x , y , t ) {\textstyle K(x,y,t)} , generates the solution u ( x , t ) {\textstyle u(x,t)} to the nonlinear partial differential equation.: 4969  u ( x , t ) = − 2 ∂ K ( x , x , t ) ∂ x {\displaystyle u(x,t)=-2{\frac {\partial K(x,x,t)}{\partial x}}} == Examples of integrable equations == Korteweg–de Vries equation nonlinear Schrödinger equation Camassa-Holm equation Sine-Gordon equation Toda lattice Ishimori equation Dym equation == See also == Quantum inverse scattering method Integrable system == Citations == == References == Ablowitz, M. J.; Kaup, D. J.; Newell, A. C.; Segur, H. (1973). "Method for Solving the Sine-Gordon Equation". Physical Review Letters. 30 (25): 1262–1264. Bibcode:1973PhRvL..30.1262A. doi:10.1103/PhysRevLett.30.1262. Ablowitz, M.J.; Kaup, D.J.; Newell, A.C.; Segur, H. (1974). "The Inverse Scattering Transform—Fourier Analysis for Nonlinear Problems". Studies in Applied Mathematics. 53 (4): 249–315. doi:10.1002/sapm1974534249. Ablowitz, Mark J.; Segur, Harvey (1981). Solitons and the Inverse Scattering Transform. SIAM. ISBN 978-0-89871-477-7. Ablowitz, Mark J.; Fokas, A. S. (2003). Complex Variables: Introduction and Applications. Cambridge University Press. pp. 604–620. ISBN 978-0-521-53429-1. Ablowitz, Mark J. (2023). "Nonlinear waves and the Inverse Scattering Transform". Optik. 278: 170710. Bibcode:2023Optik.27870710A. doi:10.1016/j.ijleo.2023.170710. Aktosun, Tuncay (2009). "Inverse Scattering Transform and the Theory of Solitons". Encyclopedia of Complexity and Systems Science. Springer. pp. 4960–4971. doi:10.1007/978-0-387-30440-3_295. ISBN 978-0-387-30440-3. Drazin, P. G.; Johnson, R. S. (1989). Solitons: An Introduction. Cambridge University Press. ISBN 978-0-521-33655-0. Gardner, Clifford S.; Greene, John M.; Kruskal, Martin D.; Miura, Robert M. (1967). "Method for Solving the Korteweg-deVries Equation". Physical Review Letters. 19 (19): 1095–1097. Bibcode:1967PhRvL..19.1095G. doi:10.1103/PhysRevLett.19.1095. Konopelchenko, B.G.; Dubrowsky, V.G. 
(1991). "Localized solitons for the Ishimori equation". In Sattinger, David H.; Tracy, C.A.; Venakides, Stephanos (eds.). Inverse Scattering and Applications. American Mathematical Soc. pp. 77–90. ISBN 978-0-8218-5129-6. Oono, H. (1996). "N-Soliton solution of Harry Dym equation by inverse scattering method.". In Alfinito, E.; Boiti, M.; Martina, L. (eds.). Nonlinear Physics: Theory and Experiment. World Scientific Publishing Company Pte Limited. pp. 241–248. ISBN 978-981-02-2559-9. Osborne, A. R. (1995). "Soliton physics and the periodic inverse scattering transform". Physica D: Nonlinear Phenomena. 86 (1): 81–89. doi:10.1016/0167-2789(95)00089-M. ISSN 0167-2789. == Further reading == Ablowitz, Mark J.; Clarkson, P. A. (12 December 1991). Solitons, Nonlinear Evolution Equations and Inverse Scattering. Cambridge University Press. ISBN 978-0-521-38730-9. Bullough, R. K.; Caudrey, P. J. (11 November 2013). Solitons. Springer Science & Business Media. ISBN 978-3-642-81448-8. Gardner, Clifford S.; Greene, John M.; Kruskal, Martin D.; Miura, Robert M. (1974), "Korteweg-deVries equation and generalization. VI. Methods for exact solution.", Comm. Pure Appl. Math., 27: 97–133, doi:10.1002/cpa.3160270108, MR 0336122 Gelʹfand, Izrailʹ Moiseevich (1955). On the Determination of a Differential Equation from Its Spectral Function. American Mathematical Society. p. 253-304. Marchenko, Vladimir A. (1986). Sturm-Liouville Operators and Applications. Operator Theory: Advances and Applications. Vol. 22. Basel: Birkhäuser. doi:10.1007/978-3-0348-5485-6. ISBN 978-3-0348-5486-3. Shaw, J. K. (1 May 2004). Mathematical Principles of Optical Fiber Communication. SIAM. ISBN 978-0-89871-556-9. == External links == "Introductory mathematical paper on IST" (PDF). (300 KiB) Inverse Scattering Transform and the Theory of Solitons
Wikipedia/Inverse_scattering_transform
The Wiener–Hopf method is a mathematical technique widely used in applied mathematics. It was initially developed by Norbert Wiener and Eberhard Hopf as a method to solve systems of integral equations, but has found wider use in solving two-dimensional partial differential equations with mixed boundary conditions on the same boundary. In general, the method works by exploiting the complex-analytical properties of transformed functions. Typically, the standard Fourier transform is used, but examples exist using other transforms, such as the Mellin transform. In general, the governing equations and boundary conditions are transformed and these transforms are used to define a pair of complex functions (typically denoted with '+' and '−' subscripts) which are respectively analytic in the upper and lower halves of the complex plane, and have growth no faster than polynomials in these regions. These two functions will also coincide on some region of the complex plane, typically, a thin strip containing the real line. Analytic continuation guarantees that these two functions define a single function analytic in the entire complex plane, and Liouville's theorem implies that this function is an unknown polynomial, which is often zero or constant. Analysis of the conditions at the edges and corners of the boundary allows one to determine the degree of this polynomial. == Wiener–Hopf decomposition == The fundamental equation that appears in the Wiener–Hopf method is of the form A ( α ) Ξ + ( α ) + B ( α ) Ψ − ( α ) + C ( α ) = 0 , {\displaystyle A(\alpha )\Xi _{+}(\alpha )+B(\alpha )\Psi _{-}(\alpha )+C(\alpha )=0,} where A {\displaystyle A} , B {\displaystyle B} , C {\displaystyle C} are known holomorphic functions, the functions Ξ + ( α ) {\displaystyle \Xi _{+}(\alpha )} , Ψ − ( α ) {\displaystyle \Psi _{-}(\alpha )} are unknown and the equation holds in a strip τ − < I m ( α ) < τ + {\displaystyle \tau _{-}<{\mathfrak {Im}}(\alpha )<\tau _{+}} in the complex α {\displaystyle \alpha } plane. Finding Ξ + ( α ) {\displaystyle \Xi _{+}(\alpha )} , Ψ − ( α ) {\displaystyle \Psi _{-}(\alpha )} is what is called the Wiener–Hopf problem. The key step in many Wiener–Hopf problems is to decompose an arbitrary function Φ {\displaystyle \Phi } into two functions Φ ± {\displaystyle \Phi _{\pm }} with the desired properties outlined above. In general, this can be done by writing Φ + ( α ) = 1 2 π i ∫ C 1 Φ ( z ) d z z − α {\displaystyle \Phi _{+}(\alpha )={\frac {1}{2\pi i}}\int _{C_{1}}\Phi (z){\frac {dz}{z-\alpha }}} and Φ − ( α ) = − 1 2 π i ∫ C 2 Φ ( z ) d z z − α , {\displaystyle \Phi _{-}(\alpha )=-{\frac {1}{2\pi i}}\int _{C_{2}}\Phi (z){\frac {dz}{z-\alpha }},} where the contours C 1 {\displaystyle C_{1}} and C 2 {\displaystyle C_{2}} are parallel to the real line, but pass below and above the point z = α {\displaystyle z=\alpha } , respectively. Similarly, arbitrary scalar functions may be decomposed into a product of +/− functions, i.e. K ( α ) = K + ( α ) K − ( α ) {\displaystyle K(\alpha )=K_{+}(\alpha )K_{-}(\alpha )} , by first taking the logarithm, and then performing a sum decomposition. Product decompositions of matrix functions (which occur in coupled multi-modal systems such as elastic waves) are considerably more problematic since the logarithm is not well defined, and any decomposition might be expected to be non-commutative. A small subclass of commutative decompositions was obtained by Khrapkov, and various approximate methods have also been developed.
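The additive splitting can be seen at work on a small numerical example. In the sketch below, the test function, contour offsets, and truncation radius are assumptions chosen for illustration: Φ(z) = 1/(z² + 4) is analytic in the strip |Im z| < 2, the two Cauchy integrals are evaluated along lines below and above a real point α, and the result is compared against the split read off from partial fractions.

```python
import numpy as np

s = np.linspace(-100.0, 100.0, 200001)      # truncated contour parameter

def cauchy(alpha, offset):
    # (1/(2*pi*i)) * integral of Phi(z)/(z - alpha) dz along Im z = offset
    z = s + 1j * offset
    f = 1.0 / (z**2 + 4.0) / (z - alpha)
    return np.sum((f[1:] + f[:-1]) / 2.0) * (s[1] - s[0]) / (2j * np.pi)

alpha = 0.3                                 # test point inside the strip
phi_plus = cauchy(alpha, -1.0)              # contour C1 below alpha
phi_minus = -cauchy(alpha, +1.0)            # contour C2 above alpha

# Partial fractions: 1/(z^2+4) = (1/(4i))(1/(z-2i) - 1/(z+2i)); the term
# whose pole sits in the lower half-plane is the upper-analytic '+' piece.
exact_plus = -1.0 / (4j * (alpha + 2j))
exact_minus = 1.0 / (4j * (alpha - 2j))
print(abs(phi_plus + phi_minus - 1.0 / (alpha**2 + 4.0)))    # ~ 0
print(abs(phi_plus - exact_plus), abs(phi_minus - exact_minus))
```

The residual is dominated by the discarded 1/z³ tails of the truncated contours and shrinks as the truncation radius grows.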
== Example == Consider the linear partial differential equation L x y f ( x , y ) = 0 , {\displaystyle {\boldsymbol {L}}_{xy}f(x,y)=0,} where L x y {\displaystyle {\boldsymbol {L}}_{xy}} is a linear operator which contains derivatives with respect to x and y, subject to the mixed conditions on y = 0, for some prescribed function g(x), f = g ( x ) for x ≤ 0 , f y = 0 when x > 0 {\displaystyle f=g(x){\text{ for }}x\leq 0,\quad f_{y}=0{\text{ when }}x>0} and decay at infinity i.e. f → 0 as x → ∞ {\displaystyle {\boldsymbol {x}}\rightarrow \infty } . Taking a Fourier transform with respect to x results in the following ordinary differential equation L y f ^ ( k , y ) − P ( k , y ) f ^ ( k , y ) = 0 , {\displaystyle {\boldsymbol {L}}_{y}{\widehat {f}}(k,y)-P(k,y){\widehat {f}}(k,y)=0,} where L y {\displaystyle {\boldsymbol {L}}_{y}} is a linear operator containing y derivatives only, P(k,y) is a known function of y and k and f ^ ( k , y ) = ∫ − ∞ ∞ f ( x , y ) e − i k x d x . {\displaystyle {\widehat {f}}(k,y)=\int _{-\infty }^{\infty }f(x,y)e^{-ikx}\,{\textrm {d}}x.} If a particular solution of this ordinary differential equation which satisfies the necessary decay at infinity is denoted F(k,y), a general solution can be written as f ^ ( k , y ) = C ( k ) F ( k , y ) , {\displaystyle {\widehat {f}}(k,y)=C(k)F(k,y),} where C(k) is an unknown function to be determined by the boundary conditions on y=0. The key idea is to split f ^ {\displaystyle {\widehat {f}}} into two separate functions, f ^ + {\displaystyle {\widehat {f}}_{+}} and f ^ − {\displaystyle {\widehat {f}}_{-}} which are analytic in the lower- and upper-halves of the complex plane, respectively, f ^ + ( k , y ) = ∫ 0 ∞ f ( x , y ) e − i k x d x , {\displaystyle {\widehat {f}}_{+}(k,y)=\int _{0}^{\infty }f(x,y)e^{-ikx}\,{\textrm {d}}x,} f ^ − ( k , y ) = ∫ − ∞ 0 f ( x , y ) e − i k x d x . {\displaystyle {\widehat {f}}_{-}(k,y)=\int _{-\infty }^{0}f(x,y)e^{-ikx}\,{\textrm {d}}x.} The boundary conditions then give g ^ ( k ) + f ^ + ( k , 0 ) = f ^ − ( k , 0 ) + f ^ + ( k , 0 ) = f ^ ( k , 0 ) = C ( k ) F ( k , 0 ) {\displaystyle {\widehat {g\,}}(k)+{\widehat {f}}_{+}(k,0)={\widehat {f}}_{-}(k,0)+{\widehat {f}}_{+}(k,0)={\widehat {f}}(k,0)=C(k)F(k,0)} and, on taking derivatives with respect to y {\displaystyle y} , f ^ − ′ ( k , 0 ) = f ^ − ′ ( k , 0 ) + f ^ + ′ ( k , 0 ) = f ^ ′ ( k , 0 ) = C ( k ) F ′ ( k , 0 ) . {\displaystyle {\widehat {f}}'_{-}(k,0)={\widehat {f}}'_{-}(k,0)+{\widehat {f}}'_{+}(k,0)={\widehat {f}}'(k,0)=C(k)F'(k,0).} Eliminating C ( k ) {\displaystyle C(k)} yields g ^ ( k ) + f ^ + ( k , 0 ) = f ^ − ′ ( k , 0 ) / K ( k ) , {\displaystyle {\widehat {g\,}}(k)+{\widehat {f}}_{+}(k,0)={\widehat {f}}'_{-}(k,0)/K(k),} where K ( k ) = F ′ ( k , 0 ) F ( k , 0 ) . {\displaystyle K(k)={\frac {F'(k,0)}{F(k,0)}}.} Now K ( k ) {\displaystyle K(k)} can be decomposed into the product of functions K − {\displaystyle K^{-}} and K + {\displaystyle K^{+}} which are analytic in the upper and lower half-planes respectively. To be precise, K ( k ) = K + ( k ) K − ( k ) , {\displaystyle K(k)=K^{+}(k)K^{-}(k),} where log ⁡ K − = 1 2 π i ∫ − ∞ ∞ log ⁡ ( K ( z ) ) z − k d z , Im ⁡ k > 0 , {\displaystyle \log K^{-}={\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {\log(K(z))}{z-k}}\,{\textrm {d}}z,\quad \operatorname {Im} k>0,} log ⁡ K + = − 1 2 π i ∫ − ∞ ∞ log ⁡ ( K ( z ) ) z − k d z , Im ⁡ k < 0. 
{\displaystyle \log K^{+}=-{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {\log(K(z))}{z-k}}\,{\textrm {d}}z,\quad \operatorname {Im} k<0.} (Note that this sometimes involves scaling K {\displaystyle K} so that it tends to 1 {\displaystyle 1} as k → ∞ {\displaystyle k\rightarrow \infty } .) We also decompose K + g ^ {\displaystyle K^{+}{\widehat {g\,}}} into the sum of two functions G + {\displaystyle G^{+}} and G − {\displaystyle G^{-}} which are analytic in the lower and upper half-planes respectively, i.e., K + ( k ) g ^ ( k ) = G + ( k ) + G − ( k ) . {\displaystyle K^{+}(k){\widehat {g\,}}(k)=G^{+}(k)+G^{-}(k).} This can be done in the same way that we factorised K ( k ) . {\displaystyle K(k).} Consequently, G + ( k ) + K + ( k ) f ^ + ( k , 0 ) = f ^ − ′ ( k , 0 ) / K − ( k ) − G − ( k ) . {\displaystyle G^{+}(k)+K^{+}(k){\widehat {f}}_{+}(k,0)={\widehat {f}}'_{-}(k,0)/K^{-}(k)-G^{-}(k).} Now, as the left-hand side of the above equation is analytic in the lower half-plane, whilst the right-hand side is analytic in the upper half-plane, analytic continuation guarantees existence of an entire function which coincides with the left- or right-hand sides in their respective half-planes. Furthermore, since it can be shown that the functions on either side of the above equation decay at large k, an application of Liouville's theorem shows that this entire function is identically zero, therefore f ^ + ( k , 0 ) = − G + ( k ) K + ( k ) , {\displaystyle {\widehat {f}}_{+}(k,0)=-{\frac {G^{+}(k)}{K^{+}(k)}},} and so C ( k ) = K + ( k ) g ^ ( k ) − G + ( k ) K + ( k ) F ( k , 0 ) . {\displaystyle C(k)={\frac {K^{+}(k){\widehat {g\,}}(k)-G^{+}(k)}{K^{+}(k)F(k,0)}}.} == See also == Wiener filter Riemann–Hilbert problem == Notes == == References == "Category:Wiener-Hopf - WikiWaves". wikiwaves.org. Retrieved 2020-05-19. "Wiener-Hopf method", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Fornberg, Bengt; Piret, Cécile. Complex Variables and Analytic Functions: An Illustrated Introduction. Philadelphia. ISBN 978-1-61197-597-0. OCLC 1124781689. Noble, Ben (1958). Methods Based on the Wiener-Hopf Technique for the Solution of Partial Differential Equations. New York, N.Y: Taylor & Francis US. ISBN 978-0-8284-0332-0.
Wikipedia/Wiener–Hopf_method
The boundary element method (BEM) is a numerical computational method of solving linear partial differential equations which have been formulated as integral equations (i.e. in boundary integral form). It is used in many areas of engineering and science, including fluid mechanics, acoustics, electromagnetics (where the technique is known as the method of moments, abbreviated MoM), fracture mechanics, and contact mechanics. == Mathematical basis == The integral equation may be regarded as an exact solution of the governing partial differential equation. The boundary element method attempts to use the given boundary conditions to fit boundary values into the integral equation, rather than values throughout the space defined by a partial differential equation. Once this is done, in the post-processing stage, the integral equation can then be used again to calculate numerically the solution directly at any desired point in the interior of the solution domain. BEM is applicable to problems for which Green's functions can be calculated. These usually involve fields in linear homogeneous media. This places considerable restrictions on the range and generality of problems to which boundary elements can usefully be applied. Nonlinearities can be included in the formulation, although they will generally introduce volume integrals which then require the volume to be discretised before solution can be attempted, removing one of the most often cited advantages of BEM. A useful technique for treating the volume integral without discretising the volume is the dual-reciprocity method. The technique approximates part of the integrand using radial basis functions (local interpolating functions) and converts the volume integral into a boundary integral after collocating at selected points distributed throughout the volume domain (including the boundary). In the dual-reciprocity BEM, although there is no need to discretize the volume into meshes, unknowns at chosen points inside the solution domain are involved in the linear algebraic equations approximating the problem being considered. The Green's function elements connecting pairs of source and field patches defined by the mesh form a matrix, which is solved numerically. Unless the Green's function is well behaved, at least for pairs of patches near each other, the Green's function must be integrated over either or both the source patch and the field patch. The form of the method in which the integrals over the source and field patches are the same is called "Galerkin's method". Galerkin's method is the obvious approach for problems which are symmetrical with respect to exchanging the source and field points. In frequency domain electromagnetics, this is assured by electromagnetic reciprocity. The cost of computation involved in naive Galerkin implementations is typically quite severe. One must loop over each pair of elements (so we get n² interactions) and for each pair of elements we loop through Gauss points in the elements producing a multiplicative factor proportional to the number of Gauss points squared. Also, the function evaluations required are typically quite expensive, involving trigonometric/hyperbolic function calls. Nonetheless, the principal source of the computational cost is this double-loop over elements producing a fully populated matrix. The Green's functions, or fundamental solutions, are often problematic to integrate as they are based on a solution of the system equations subject to a singularity load (e.g. the electrical field arising from a point charge).
Integrating such singular fields is not easy. For simple element geometries (e.g. planar triangles) analytical integration can be used. For more general elements, it is possible to design purely numerical schemes that adapt to the singularity, but at great computational cost. Of course, when the source point and target element (where the integration is done) are far apart, the local gradient surrounding the point need not be quantified exactly and it becomes possible to integrate easily due to the smooth decay of the fundamental solution. It is this feature that is typically employed in schemes designed to accelerate boundary element problem calculations. Derivation of closed-form Green's functions is of particular interest in the boundary element method, especially in electromagnetics. Specifically in the analysis of layered media, derivation of the spatial-domain Green's function necessitates the inversion of the analytically-derivable spectral-domain Green's function through a Sommerfeld path integral. This integral cannot be evaluated analytically and its numerical integration is costly due to its oscillatory and slowly-converging behaviour. For a robust analysis, spatial Green's functions are approximated as complex exponentials with methods such as Prony's method or generalized pencil of function, and the integral is evaluated with the Sommerfeld identity. This method is known as the discrete complex image method. == Comparison to other methods == The boundary element method is often more efficient than other methods, including finite elements, in terms of computational resources for problems where there is a small surface/volume ratio. Conceptually, it works by constructing a "mesh" over the modelled surface. However, for many problems boundary element methods are significantly less efficient than volume-discretisation methods (finite element method, finite difference method, finite volume method). A good example of application of the boundary element method is efficient calculation of natural frequencies of liquid sloshing in tanks. The boundary element method is one of the most effective methods for numerical simulation of contact problems, in particular for simulation of adhesive contacts. Boundary element formulations typically give rise to fully populated matrices. This means that the storage requirements and computational time will tend to grow according to the square of the problem size. By contrast, finite element matrices are typically banded (elements are only locally connected) and the storage requirements for the system matrices typically grow roughly linearly with the problem size. Compression techniques (e.g. multipole expansions or adaptive cross approximation/hierarchical matrices) can be used to ameliorate these problems, though at the cost of added complexity and with a success rate that depends heavily on the nature of the problem being solved and the geometry involved. == See also == Analytic element method Computational electromagnetics Meshfree methods Immersed boundary method Stretched grid method Modified radial integration method == References == == Bibliography == Ang, Whye-Teong (2007), A Beginner's Course in Boundary Element Methods, Boca Raton, Fl: Universal Publishers, ISBN 978-1-58112-974-8. Ang, Whye-Teong (2013), Hypersingular Integral Equations in Fracture Analysis, Oxford: Woodhead Publishing, ISBN 978-0-85709-479-7. Banerjee, Prasanta Kumar (1994), The Boundary Element Methods in Engineering (2nd ed.), London, etc.: McGraw-Hill, ISBN 978-0-07-707769-3.
Beer, Gernot; Smith, Ian; Duenser, Christian (8 April 2008), The Boundary Element Method with Programming: For Engineers and Scientists, Berlin – Heidelberg – New York: Springer-Verlag, pp. XIV+494, ISBN 978-3-211-71574-1 Cheng, Alexander H.-D.; Cheng, Daisy T. (2005), "Heritage and early history of the boundary element method", Engineering Analysis with Boundary Elements, 29 (3): 268–302, doi:10.1016/j.enganabound.2004.12.001, Zbl 1182.65005, available also here. Gibson, Walton C (2008), The Method of Moments in Electromagnetics, Boca Raton, Florida: Chapman & Hall/CRC Press, pp. xv+272, ISBN 978-1-4200-6145-1, MR 2503144, Zbl 1175.78002. Katsikadelis, John T. (2002), Boundary Elements Theory and Applications, Amsterdam: Elsevier, pp. XIV+336, ISBN 978-0-080-44107-8. Wrobel, L. C.; Aliabadi, M. H. (2002), The Boundary Element Method, New York: John Wiley & Sons, p. 1066, ISBN 978-0-470-84139-6 (in two volumes). == Further reading == Constanda, Christian; Doty, Dale; Hamill, William (2016). Boundary Integral Equation Methods and Numerical Solutions: Thin Plates on an Elastic Foundation. New York: Springer. ISBN 978-3-319-26307-6. == External links == An Online Resource for Boundary Elements What lies beneath the surface? A guide to the Boundary Element Method and Green's functions for the students and professionals An introductory BEM course (with a chapter on Green's functions) Boundary elements for plane crack problems Electromagnetic Modeling web site at Clemson University (includes list of currently available software) Concept Analyst Boundary Element Analysis software Klimpke, Bruce A Hybrid Magnetic Field Solver Using a Combined Finite Element/Boundary Element Field Solver, U.K. Magnetics Society Conference, 2003 which compares FEM and BEM methods as well as hybrid approaches === Free software === Bembel A 3D, isogeometric, higher-order, open-source BEM software for Laplace, Helmholtz and Maxwell problems utilizing a fast multipole method for compression and reduction of computational cost boundary-element-method.com An open-source BEM software for solving acoustics / Helmholtz and Laplace problems Puma-EM An open-source and high-performance Method of Moments / Multilevel Fast Multipole Method parallel program AcouSTO Acoustics Simulation TOol, a free and open-source parallel BEM solver for the Kirchhoff-Helmholtz Integral Equation (KHIE) FastBEM Free fast multipole boundary element programs for solving 2D/3D potential, elasticity, Stokes flow and acoustic problems ParaFEM Includes the free and open-source parallel BEM solver for elasticity problems described in Gernot Beer, Ian Smith, Christian Duenser, The Boundary Element Method with Programming: For Engineers and Scientists, Springer, ISBN 978-3-211-71574-1 (2008) Boundary Element Template Library (BETL) A general purpose C++ software library for the discretisation of boundary integral operators Nemoh An open source hydrodynamics BEM software dedicated to the computation of first-order wave loads on offshore structures (added mass, radiation damping, diffraction forces) Bempp, An open-source BEM software for 3D Laplace, Helmholtz and Maxwell problems MNPBEM, An open-source Matlab toolbox to solve Maxwell's equations for arbitrarily shaped nanostructures Contact Mechanics and Tribology Simulator, Free, BEM based software MultiFEBE, BEM-FEM solver for computational mechanics, allowing coupling of 2D and 3D viscoelastic or poroelastic media with beam and shell structural elements (for dynamic soil-structure interaction problems, for 
instance). BE-STATIK Free BE-Programs for 2D potential, elasticity and plate bending problems (Kirchhoff).
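As a concrete miniature of the workflow sketched under Mathematical basis, the following assembles the dense single-layer collocation matrix for an interior Laplace problem on a disk and then reuses the solved densities to evaluate the field at an interior point. Everything here (radius, panel count, test harmonic function, midpoint quadrature) is an illustrative assumption; a production BEM code would use higher-order elements and proper near-singular quadrature.

```python
import numpy as np

# Interior Laplace Dirichlet problem on a disk of radius 0.5 via the
# single-layer potential u(x) = sum_j sigma_j * int_panel_j G(x, y) ds(y),
# with the 2D Green's function G(x, y) = -ln|x - y| / (2*pi).
N, a = 200, 0.5
nodes = a * np.exp(2j * np.pi * np.arange(N + 1) / N)
mid = (nodes[:-1] + nodes[1:]) / 2            # collocation points (midpoints)
h = np.abs(np.diff(nodes))                    # panel lengths

d = np.abs(mid[:, None] - mid[None, :])
np.fill_diagonal(d, 1.0)                      # placeholder, replaced below
A = -np.log(d) / (2 * np.pi) * h[None, :]     # fully populated BEM matrix
np.fill_diagonal(A, h / (2 * np.pi) * (1 - np.log(h / 2)))  # analytic self term

g = mid.real**2 - mid.imag**2                 # boundary trace of u = x^2 - y^2
sigma = np.linalg.solve(A, g)                 # fit the boundary condition

p = 0.1 + 0.2j                                # post-processing: interior point
u_p = np.sum(-np.log(np.abs(p - mid)) / (2 * np.pi) * h * sigma)
print(u_p, p.real**2 - p.imag**2)             # both close to -0.03
```

Note that A is dense, which is exactly the quadratic cost discussed above and the target of fast-multipole and hierarchical-matrix compression.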
Wikipedia/Boundary_element_method
In actuarial science and applied probability, ruin theory (sometimes risk theory or collective risk theory) uses mathematical models to describe an insurer's vulnerability to insolvency/ruin. In such models, key quantities of interest are the probability of ruin, the distribution of surplus immediately prior to ruin, and the deficit at the time of ruin. == Classical model == The theoretical foundation of ruin theory, known as the Cramér–Lundberg model (or classical compound-Poisson risk model, classical risk process or Poisson risk process), was introduced in 1903 by the Swedish actuary Filip Lundberg. Lundberg's work was republished in the 1930s by Harald Cramér. The model describes an insurance company that experiences two opposing cash flows: incoming cash premiums and outgoing claims. Premiums arrive at a constant rate c > 0 {\textstyle c>0} from customers and claims arrive according to a Poisson process N t {\displaystyle N_{t}} with intensity λ {\textstyle \lambda } ; the claim sizes are independent and identically distributed non-negative random variables ξ i {\displaystyle \xi _{i}} with distribution F {\textstyle F} and mean μ {\textstyle \mu } (they form a compound Poisson process). So for an insurer that starts with initial surplus x {\textstyle x} , the aggregate assets X t {\displaystyle X_{t}} are given by: X t = x + c t − ∑ i = 1 N t ξ i for t ≥ 0. {\displaystyle X_{t}=x+ct-\sum _{i=1}^{N_{t}}\xi _{i}\quad {\text{ for }}t\geq 0.} The central object of the model is to investigate the probability that the insurer's surplus level eventually falls below zero (making the firm bankrupt). This quantity, called the probability of ultimate ruin, is defined as ψ ( x ) = P x { τ < ∞ } {\displaystyle \psi (x)=\mathbb {P} ^{x}\{\tau <\infty \}} , where the time of ruin is τ = inf { t > 0 : X ( t ) < 0 } {\displaystyle \tau =\inf\{t>0\,:\,X(t)<0\}} with the convention that inf ∅ = ∞ {\displaystyle \inf \varnothing =\infty } . This can be computed exactly using the Pollaczek–Khinchine formula as (the ruin function here is equivalent to the tail function of the stationary distribution of waiting time in an M/G/1 queue) ψ ( x ) = ( 1 − λ μ c ) ∑ n = 0 ∞ ( λ μ c ) n ( 1 − F l ∗ n ( x ) ) {\displaystyle \psi (x)=\left(1-{\frac {\lambda \mu }{c}}\right)\sum _{n=0}^{\infty }\left({\frac {\lambda \mu }{c}}\right)^{n}(1-F_{l}^{\ast n}(x))} where F l {\displaystyle F_{l}} is the integrated tail distribution of F {\displaystyle F} , F l ( x ) = 1 μ ∫ 0 x ( 1 − F ( u ) ) d u {\displaystyle F_{l}(x)={\frac {1}{\mu }}\int _{0}^{x}\left(1-F(u)\right){\text{d}}u} and ⋅ ∗ n {\displaystyle \cdot ^{\ast n}} denotes the n {\displaystyle n} -fold convolution. In the case where the claim sizes are exponentially distributed, this simplifies to ψ ( x ) = λ μ c e − ( 1 μ − λ c ) x . {\displaystyle \psi (x)={\frac {\lambda \mu }{c}}e^{-\left({\frac {1}{\mu }}-{\frac {\lambda }{c}}\right)x}.} == Sparre Andersen model == E. Sparre Andersen extended the classical model in 1957 by allowing claim inter-arrival times to have arbitrary distribution functions. X t = x + c t − ∑ i = 1 N t ξ i for t ≥ 0 , {\displaystyle X_{t}=x+ct-\sum _{i=1}^{N_{t}}\xi _{i}\quad {\text{ for }}t\geq 0,} where the claim number process ( N t ) t ≥ 0 {\displaystyle (N_{t})_{t\geq 0}} is a renewal process and ( ξ i ) i ∈ N {\displaystyle (\xi _{i})_{i\in \mathbb {N} }} are independent and identically distributed random variables.
The model furthermore assumes that ξ i > 0 {\displaystyle \xi _{i}>0} almost surely and that ( N t ) t ≥ 0 {\displaystyle (N_{t})_{t\geq 0}} and ( ξ i ) i ∈ N {\displaystyle (\xi _{i})_{i\in \mathbb {N} }} are independent. The model is also known as the renewal risk model. == Expected discounted penalty function == Michael R. Powers and Gerber and Shiu analyzed the behavior of the insurer's surplus through the expected discounted penalty function, which is commonly referred to as the Gerber–Shiu function in the ruin literature and named after actuarial scientists Elias S.W. Shiu and Hans-Ulrich Gerber. It is arguable whether the function should have been called the Powers–Gerber–Shiu function due to the contribution of Powers. In Powers' notation, this is defined as m ( x ) = E x [ e − δ τ K τ ] {\displaystyle m(x)=\mathbb {E} ^{x}[e^{-\delta \tau }K_{\tau }]} , where δ {\displaystyle \delta } is the discounting force of interest, K τ {\displaystyle K_{\tau }} is a general penalty function reflecting the economic costs to the insurer at the time of ruin, and the expectation E x {\displaystyle \mathbb {E} ^{x}} corresponds to the probability measure P x {\displaystyle \mathbb {P} ^{x}} . The function is called expected discounted cost of insolvency by Powers. In Gerber and Shiu's notation, it is given as m ( x ) = E x [ e − δ τ w ( X τ − , X τ ) I ( τ < ∞ ) ] {\displaystyle m(x)=\mathbb {E} ^{x}[e^{-\delta \tau }w(X_{\tau -},X_{\tau })\mathbb {I} (\tau <\infty )]} , where δ {\displaystyle \delta } is the discounting force of interest and w ( X τ − , X τ ) {\displaystyle w(X_{\tau -},X_{\tau })} is a penalty function capturing the economic costs to the insurer at the time of ruin (assumed to depend on the surplus prior to ruin X τ − {\displaystyle X_{\tau -}} and the deficit at ruin X τ {\displaystyle X_{\tau }} ), and the expectation E x {\displaystyle \mathbb {E} ^{x}} corresponds to the probability measure P x {\displaystyle \mathbb {P} ^{x}} . Here the indicator function I ( τ < ∞ ) {\displaystyle \mathbb {I} (\tau <\infty )} emphasizes that the penalty is exercised only when ruin occurs. It is quite intuitive to interpret the expected discounted penalty function. Since the function measures the actuarial present value of the penalty that occurs at τ {\displaystyle \tau } , the penalty function is multiplied by the discounting factor e − δ τ {\displaystyle e^{-\delta \tau }} , and then averaged over the probability distribution of the waiting time to τ {\displaystyle \tau } . While Gerber and Shiu applied this function to the classical compound-Poisson model, Powers argued that an insurer's surplus is better modeled by a family of diffusion processes. There are a great variety of ruin-related quantities that fall into the category of the expected discounted penalty function. Other finance-related quantities belonging to the class of the expected discounted penalty function include the perpetual American put option, the contingent claim at optimal exercise time, and more. == Recent developments == Compound-Poisson risk model with constant interest Compound-Poisson risk model with stochastic interest Brownian-motion risk model General diffusion-process model Markov-modulated risk model == See also == Financial risk Volterra integral equation § Application: Ruin theory Chance-constrained portfolio selection == References == == Further reading == Gerber, H.U. (1979). An Introduction to Mathematical Risk Theory.
Philadelphia: S.S. Heubner Foundation Monograph Series 8. Asmussen S., Albrecher H. (2010). Ruin Probabilities, 2nd Edition. Singapore: World Scientific Publishing Co.
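A small simulation makes the exponential-claim formula tangible. In the sketch below, the parameter values, horizon, and path count are illustrative assumptions; ruin is checked only at claim epochs, which suffices because the surplus increases between claims.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, c, x0 = 1.0, 1.0, 1.5, 5.0     # intensity, mean claim, premium, surplus
T, n_paths = 200.0, 4000                # a long horizon proxies 'ultimate' ruin

ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)        # next Poisson arrival
        if t > T:
            break
        claims += rng.exponential(mu)          # exponential claim size
        if x0 + c * t - claims < 0:            # surplus below zero: ruin
            ruined += 1
            break

psi_exact = (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * x0)
print(ruined / n_paths, psi_exact)             # both close to 0.126
```

The net profit condition c > λμ holds here (1.5 > 1), so the surplus drifts upward and the truncation at time T introduces only a negligible bias.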
Wikipedia/Ruin_theory
In mathematics, the Volterra integral equations are a special type of integral equations. They are divided into two groups referred to as the first and the second kind. A linear Volterra equation of the first kind is f ( t ) = ∫ a t K ( t , s ) x ( s ) d s {\displaystyle f(t)=\int _{a}^{t}K(t,s)\,x(s)\,ds} where f is a given function and x is an unknown function to be solved for. A linear Volterra equation of the second kind is x ( t ) = f ( t ) + ∫ a t K ( t , s ) x ( s ) d s . {\displaystyle x(t)=f(t)+\int _{a}^{t}K(t,s)x(s)\,ds.} In operator theory, and in Fredholm theory, the corresponding operators are called Volterra operators. A useful method to solve such equations, the Adomian decomposition method, is due to George Adomian. A linear Volterra integral equation is a convolution equation if x ( t ) = f ( t ) + ∫ t 0 t K ( t − s ) x ( s ) d s . {\displaystyle x(t)=f(t)+\int _{t_{0}}^{t}K(t-s)x(s)\,ds.} The function K {\displaystyle K} in the integral is called the kernel. Such equations can be analyzed and solved by means of Laplace transform techniques. For a weakly singular kernel of the form K ( t , s ) = ( t 2 − s 2 ) − α {\displaystyle K(t,s)=(t^{2}-s^{2})^{-\alpha }} with 0 < α < 1 {\displaystyle 0<\alpha <1} , Volterra integral equation of the first kind can conveniently be transformed into a classical Abel integral equation. The Volterra integral equations were introduced by Vito Volterra and then studied by Traian Lalescu in his 1908 thesis, Sur les équations de Volterra, written under the direction of Émile Picard. In 1911, Lalescu wrote the first book ever on integral equations. Volterra integral equations find application in demography as Lotka's integral equation, the study of viscoelastic materials, in actuarial science through the renewal equation, and in fluid mechanics to describe the flow behavior near finite-sized boundaries. == Conversion of Volterra equation of the first kind to the second kind == A linear Volterra equation of the first kind can always be reduced to a linear Volterra equation of the second kind, assuming that K ( t , t ) ≠ 0 {\displaystyle K(t,t)\neq 0} . Taking the derivative of the first kind Volterra equation gives us: d f d t = ∫ a t ∂ K ∂ t x ( s ) d s + K ( t , t ) x ( t ) {\displaystyle {df \over {dt}}=\int _{a}^{t}{\partial K \over {\partial t}}x(s)ds+K(t,t)x(t)} Dividing through by K ( t , t ) {\displaystyle K(t,t)} yields: x ( t ) = 1 K ( t , t ) d f d t − ∫ a t 1 K ( t , t ) ∂ K ∂ t x ( s ) d s {\displaystyle x(t)={1 \over {K(t,t)}}{df \over {dt}}-\int _{a}^{t}{1 \over {K(t,t)}}{\partial K \over {\partial t}}x(s)ds} Defining f ~ ( t ) = 1 K ( t , t ) d f d t {\textstyle {\widetilde {f}}(t)={1 \over {K(t,t)}}{df \over {dt}}} and K ~ ( t , s ) = − 1 K ( t , t ) ∂ K ∂ t {\textstyle {\widetilde {K}}(t,s)=-{1 \over {K(t,t)}}{\partial K \over {\partial t}}} completes the transformation of the first kind equation into a linear Volterra equation of the second kind. 
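The reduction can be sanity-checked symbolically on a concrete example; the kernel and solution below are assumptions chosen so that K(t, t) = 1.

```python
import sympy as sp

t, s = sp.symbols('t s')
K = sp.exp(t - s)                      # example kernel with K(t, t) = 1
x = sp.cos(s)                          # chosen 'true' solution
f = sp.integrate(K * x, (s, 0, t))     # first-kind right-hand side f(t)

# Second-kind form from the derivation above (K(t,t) = 1 here):
#   x(t) = f'(t) - int_0^t (dK/dt)(t, s) x(s) ds
rhs = sp.diff(f, t) - sp.integrate(sp.diff(K, t) * x, (s, 0, t))
print(sp.simplify(rhs - sp.cos(t)))    # 0: the reduced equation reproduces x
```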
== Numerical solution using trapezoidal rule == A standard method for computing the numerical solution of a linear Volterra equation of the second kind is the trapezoidal rule, which for equally-spaced subintervals Δ x {\displaystyle \Delta x} is given by: ∫ a b f ( x ) d x ≈ Δ x 2 [ f ( x 0 ) + 2 ∑ i = 1 n − 1 f ( x i ) + f ( x n ) ] {\displaystyle \int _{a}^{b}f(x)dx\approx {\Delta x \over {2}}\left[f(x_{0})+2\sum _{i=1}^{n-1}f(x_{i})+f(x_{n})\right]} Assuming equal spacing for the subintervals, the integral component of the Volterra equation may be approximated by: ∫ a t K ( t , s ) x ( s ) d s ≈ Δ s 2 [ K ( t , s 0 ) x ( s 0 ) + 2 K ( t , s 1 ) x ( s 1 ) + ⋯ + 2 K ( t , s n − 1 ) x ( s n − 1 ) + K ( t , s n ) x ( s n ) ] {\displaystyle \int _{a}^{t}K(t,s)x(s)ds\approx {\Delta s \over {2}}\left[K(t,s_{0})x(s_{0})+2K(t,s_{1})x(s_{1})+\cdots +2K(t,s_{n-1})x(s_{n-1})+K(t,s_{n})x(s_{n})\right]} Defining x i = x ( s i ) {\displaystyle x_{i}=x(s_{i})} , f i = f ( t i ) {\displaystyle f_{i}=f(t_{i})} , and K i j = K ( t i , s j ) {\displaystyle K_{ij}=K(t_{i},s_{j})} , we have the system of linear equations: x 0 = f 0 x 1 = f 1 + Δ s 2 ( K 10 x 0 + K 11 x 1 ) x 2 = f 2 + Δ s 2 ( K 20 x 0 + 2 K 21 x 1 + K 22 x 2 ) ⋮ x n = f n + Δ s 2 ( K n 0 x 0 + 2 K n 1 x 1 + ⋯ + 2 K n , n − 1 x n − 1 + K n n x n ) {\displaystyle {\begin{aligned}x_{0}&=f_{0}\\x_{1}&=f_{1}+{\Delta s \over {2}}\left(K_{10}x_{0}+K_{11}x_{1}\right)\\x_{2}&=f_{2}+{\Delta s \over {2}}\left(K_{20}x_{0}+2K_{21}x_{1}+K_{22}x_{2}\right)\\&\vdots \\x_{n}&=f_{n}+{\Delta s \over {2}}\left(K_{n0}x_{0}+2K_{n1}x_{1}+\cdots +2K_{n,n-1}x_{n-1}+K_{nn}x_{n}\right)\end{aligned}}} This is equivalent to the matrix equation: x = f + M x ⟹ x = ( I − M ) − 1 f {\displaystyle x=f+Mx\implies x=(I-M)^{-1}f} For well-behaved kernels, the trapezoidal rule tends to work well. == Application: Ruin theory == One area where Volterra integral equations appear is in ruin theory, the study of the risk of insolvency in actuarial science. The objective is to quantify the probability of ruin ψ ( u ) = P [ τ ( u ) < ∞ ] {\displaystyle \psi (u)=\mathbb {P} [\tau (u)<\infty ]} , where u {\displaystyle u} is the initial surplus and τ ( u ) {\displaystyle \tau (u)} is the time of ruin. In the classical model of ruin theory, the net cash position X t {\displaystyle X_{t}} is a function of the initial surplus, premium income earned at rate c {\displaystyle c} , and outgoing claims ξ {\displaystyle \xi } : X t = u + c t − ∑ i = 1 N t ξ i , t ≥ 0 {\displaystyle X_{t}=u+ct-\sum _{i=1}^{N_{t}}\xi _{i},\quad t\geq 0} where N t {\displaystyle N_{t}} is a Poisson process for the number of claims with intensity λ {\displaystyle \lambda } . Under these circumstances, the ruin probability may be represented by a Volterra integral equation of the form: ψ ( u ) = λ c ∫ u ∞ S ( x ) d x + λ c ∫ 0 u ψ ( u − x ) S ( x ) d x {\displaystyle \psi (u)={\lambda \over {c}}\int _{u}^{\infty }S(x)dx+{\lambda \over {c}}\int _{0}^{u}\psi (u-x)S(x)dx} where S ( ⋅ ) {\displaystyle S(\cdot )} is the survival function of the claims distribution. == See also == Fredholm integral equation Integral equation Integro-differential equation == References == == Further reading == Traian Lalescu, Introduction à la théorie des équations intégrales. Avec une préface de É. Picard, Paris: A. Hermann et Fils, 1912. VII + 152 pp. "Volterra equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Volterra Integral Equation of the First Kind". MathWorld. Weisstein, Eric W. 
"Volterra Integral Equation of the Second Kind". MathWorld. Integral Equations: Exact Solutions at EqWorld: The World of Mathematical Equations Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 19.2. Volterra Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. == External links == IntEQ: a Python package for numerically solving Volterra integral equations
Wikipedia/Volterra_integral_equation
The electric-field integral equation is a relationship that allows the calculation of an electric field (E) generated by an electric current distribution (J). == Derivation == All quantities are considered in the frequency domain, with a time-dependency e j ω t {\displaystyle e^{j\omega t}} that is assumed and suppressed throughout. Beginning with the Maxwell equations relating the electric and magnetic field, and assuming a linear, homogeneous medium with permeability μ {\displaystyle \mu } and permittivity ε {\displaystyle \varepsilon \,} : ∇ × E = − j ω μ H ∇ × H = j ω ε E + J {\displaystyle {\begin{aligned}\nabla \times \mathbf {E} &=-j\omega \mu \mathbf {H} \\[1ex]\nabla \times \mathbf {H} &=j\omega \varepsilon \mathbf {E} +\mathbf {J} \end{aligned}}} The magnetic field is divergenceless, ∇ ⋅ H = 0 {\displaystyle \nabla \cdot \mathbf {H} =0\,} , and by vector calculus any divergenceless vector can be written as the curl of another vector, hence ∇ × A = H {\displaystyle \nabla \times \mathbf {A} =\mathbf {H} } where A is called the magnetic vector potential. Substituting this into the above we get ∇ × ( E + j ω μ A ) = 0 {\displaystyle \nabla \times (\mathbf {E} +j\omega \mu \mathbf {A} )=0} and any curl-free vector can be written as the gradient of a scalar, hence E + j ω μ A = − ∇ Φ {\displaystyle \mathbf {E} +j\omega \mu \mathbf {A} =-\nabla \Phi } where Φ {\displaystyle \Phi } is the electric scalar potential. These relationships now allow us to write ∇ × ∇ × A − k 2 A = J − j ω ε ∇ Φ {\displaystyle \nabla \times \nabla \times \mathbf {A} -k^{2}\mathbf {A} =\mathbf {J} -j\omega \varepsilon \nabla \Phi } where k = ω μ ε {\displaystyle k=\omega {\sqrt {\mu \varepsilon }}} , which can be rewritten by a vector identity as ∇ ( ∇ ⋅ A ) − ∇ 2 A − k 2 A = J − j ω ε ∇ Φ {\displaystyle \nabla (\nabla \cdot \mathbf {A} )-\nabla ^{2}\mathbf {A} -k^{2}\mathbf {A} =\mathbf {J} -j\omega \varepsilon \nabla \Phi } As we have only specified the curl of A, we are free to define the divergence, and choose the following: ∇ ⋅ A = − j ω ε Φ {\displaystyle \nabla \cdot \mathbf {A} =-j\omega \varepsilon \Phi \,} which is called the Lorenz gauge condition. The previous expression for A now reduces to ∇ 2 A + k 2 A = − J {\displaystyle \nabla ^{2}\mathbf {A} +k^{2}\mathbf {A} =-\mathbf {J} \,} which is the vector Helmholtz equation.
The solution of this equation for A is A ( r ) = 1 4 π ∫ J ( r ′ ) G ( r , r ′ ) d r ′ {\displaystyle \mathbf {A} (\mathbf {r} )={\frac {1}{4\pi }}\int \mathbf {J} (\mathbf {r} ^{\prime })\ G(\mathbf {r} ,\mathbf {r} ^{\prime })\,d\mathbf {r} ^{\prime }} where G ( r , r ′ ) {\displaystyle G(\mathbf {r} ,\mathbf {r} ^{\prime })} is the three-dimensional homogeneous Green's function given by G ( r , r ′ ) = e − j k | r − r ′ | | r − r ′ | {\displaystyle G(\mathbf {r} ,\mathbf {r} ^{\prime })={\frac {e^{-jk\left|\mathbf {r} -\mathbf {r} ^{\prime }\right|}}{\left|\mathbf {r} -\mathbf {r} ^{\prime }\right|}}} We can now write what is called the electric field integral equation (EFIE), relating the electric field E to the vector potential A E = − j ω μ A + 1 j ω ε ∇ ( ∇ ⋅ A ) {\displaystyle \mathbf {E} =-j\omega \mu \mathbf {A} +{\frac {1}{j\omega \varepsilon }}\nabla (\nabla \cdot \mathbf {A} )\,} We can further represent the EFIE in the dyadic form as E = − j ω μ ∫ V d r ′ G ( r , r ′ ) ⋅ J ( r ′ ) {\displaystyle \mathbf {E} =-j\omega \mu \int _{V}d\mathbf {r} ^{\prime }\mathbf {G} (\mathbf {r} ,\mathbf {r} ^{\prime })\cdot \mathbf {J} (\mathbf {r} ^{\prime })\,} where G ( r , r ′ ) {\displaystyle \mathbf {G} (\mathbf {r} ,\mathbf {r} ^{\prime })\,} here is the dyadic homogeneous Green's Function given by G ( r , r ′ ) = 1 4 π [ I + ∇ ∇ k 2 ] G ( r , r ′ ) {\displaystyle \mathbf {G} (\mathbf {r} ,\mathbf {r} ^{\prime })={\frac {1}{4\pi }}\left[\mathbf {I} +{\frac {\nabla \nabla }{k^{2}}}\right]G(\mathbf {r} ,\mathbf {r} ^{\prime })} == Interpretation == The EFIE describes a radiated field E given a set of sources J, and as such it is the fundamental equation used in antenna analysis and design. It is a very general relationship that can be used to compute the radiated field of any sort of antenna once the current distribution on it is known. The most important aspect of the EFIE is that it allows us to solve the radiation/scattering problem in an unbounded region, or one whose boundary is located at infinity. For closed surfaces, it is possible to use the Magnetic Field Integral Equation or the Combined Field Integral Equation, both of which result in a set of equations with improved condition number compared to the EFIE. However, the MFIE and CFIE can still contain resonances. In scattering problems, it is desirable to determine an unknown scattered field E s {\displaystyle E_{s}} that is due to a known incident field E i {\displaystyle E_{i}} . Unfortunately, the EFIE relates the scattered field to J, not the incident field, so we do not know what J is. This sort of problem can be solved by imposing the boundary conditions on the incident and scattered field, allowing one to write the EFIE in terms of E i {\displaystyle E_{i}} and J alone. Once this has been done, the integral equation can then be solved by a numerical technique appropriate to integral equations such as the method of moments. == Notes == By the Helmholtz theorem a vector field is described completely by its divergence and curl. As the divergence was not defined, we are justified by choosing the Lorenz Gauge condition above provided that we consistently use this definition of the divergence of A in all subsequent analysis. 
However, other choices for ∇ ⋅ A {\displaystyle \nabla \cdot \mathbf {A} } are just as valid and lead to other equations, which all describe the same phenomena, and the solutions of the equations for any choice of ∇ ⋅ A {\displaystyle \nabla \cdot \mathbf {A} } lead to the same electromagnetic fields and the same physical predictions about the fields and the way charges are accelerated by them. It is natural to think that if a quantity exhibits this degree of freedom in its choice, then it should not be interpreted as a real physical quantity. After all, if we can freely choose ∇ ⋅ A {\displaystyle \nabla \cdot \mathbf {A} } to be anything, then A {\displaystyle \mathbf {A} } is not unique. One may ask: what is the "true" value of A {\displaystyle \mathbf {A} } measured in an experiment? If A {\displaystyle \mathbf {A} } is not unique, then the only logical answer must be that we can never measure the value of A {\displaystyle \mathbf {A} } . On this basis, it is often stated that it is not a real physical quantity and it is believed that the fields E {\displaystyle \mathbf {E} } and B {\displaystyle \mathbf {B} } are the true physical quantities. However, there is at least one experiment in which the values of E {\displaystyle \mathbf {E} } and B {\displaystyle \mathbf {B} } are both zero at the location of a charged particle, but it is nevertheless affected by the presence of a local magnetic vector potential; see the Aharonov–Bohm effect for details. Nevertheless, even in the Aharonov–Bohm experiment, the divergence of A {\displaystyle \mathbf {A} } never enters the calculations; only ∇ × A {\displaystyle \nabla \times \mathbf {A} } along the path of the particle determines the measurable effect. == References ==
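As a numerical check of the field formula E = −jωμA + (1/jωε)∇(∇·A), the sketch below evaluates it by finite differences for the vector potential of a z-directed Hertzian dipole with unit current moment (the standard textbook source, assumed here for illustration) and compares the broadside field magnitude against the far-field value ηk/(4πr).

```python
import numpy as np

eps0, mu0 = 8.8541878128e-12, 4e-7 * np.pi
omega = 2 * np.pi * 300e6                 # 300 MHz: wavelength about 1 m
k = omega * np.sqrt(mu0 * eps0)
eta = np.sqrt(mu0 / eps0)

def Az(x, y, z):
    # A_z = exp(-j k r) / (4 pi r) for unit current moment I*dl = 1
    r = np.sqrt(x**2 + y**2 + z**2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def E_field(x, y, z, d=1e-3):
    # div A = dAz/dz; grad(div A) by nested central differences of step d
    divA = lambda x, y, z: (Az(x, y, z + d) - Az(x, y, z - d)) / (2 * d)
    g = np.array([(divA(x + d, y, z) - divA(x - d, y, z)) / (2 * d),
                  (divA(x, y + d, z) - divA(x, y - d, z)) / (2 * d),
                  (divA(x, y, z + d) - divA(x, y, z - d)) / (2 * d)])
    E = g / (1j * omega * eps0)
    E[2] += -1j * omega * mu0 * Az(x, y, z)
    return E

r = 20.0                              # field point on the x-axis (theta = 90 deg)
E = E_field(r, 0.0, 0.0)
print(abs(E[2]), eta * k / (4 * np.pi * r))   # agree to a few parts in 1e4
```

At kr ≈ 126 the 1/r² and 1/r³ near-field terms contribute only at the 10⁻⁴ level, which is why the finite-difference result sits so close to the far-field amplitude.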
Wikipedia/Electric-field_integral_equation
The calculus ratiocinator is a theoretical universal logical calculation framework, a concept described in the writings of Gottfried Leibniz, usually paired with his more frequently mentioned characteristica universalis, a universal conceptual language. == Two views == There are two contrasting points of view on what Leibniz meant by calculus ratiocinator. The first is associated with computer software, the second is associated with computer hardware. === Analytic view === The received point of view in analytic philosophy and formal logic, is that the calculus ratiocinator anticipates mathematical logic—an "algebra of logic". The analytic point of view understands that the calculus ratiocinator is a formal inference engine or computer program, which can be designed so as to grant primacy to calculations. That logic began with Frege's 1879 Begriffsschrift and C.S. Peirce's writings on logic in the 1880s. Frege intended his "concept script" to be a calculus ratiocinator as well as a universal characteristics. That part of formal logic relevant to the calculus comes under the heading of proof theory. From this perspective the calculus ratiocinator is only a part (or a subset) of the universal characteristics, and a complete universal characteristics includes a "logical calculus". === Synthetic view === A contrasting point of view stems from synthetic philosophy and fields such as cybernetics, electronic engineering, and general systems theory. It is little appreciated in analytic philosophy. The synthetic view understands the calculus ratiocinator as referring to a "calculating machine". The cybernetician Norbert Wiener considered Leibniz's calculus ratiocinator a forerunner to the modern day digital computer: "The history of the modern computing machine goes back to Leibniz and Pascal. Indeed, the general idea of a computing machine is nothing but a mechanization of Leibniz's calculus ratiocinator." "...like his predecessor Pascal, [Leibniz] was interested in the construction of computing machines in the Metal. ... just as the calculus of arithmetic lends itself to a mechanization progressing through the abacus and the desk computing machine to the ultra-rapid computing machines of the present day, so the calculus ratiocinator of Leibniz contains the germs of the machina ratiocinatrix, the reasoning machine." Leibniz constructed just such a machine for mathematical calculations, which was also called a "stepped reckoner". As a computing machine, the ideal calculus ratiocinator would perform Leibniz's integral and differential calculus. In this way the meaning of the word, "ratiocinator" is clarified and can be understood as a mechanical instrument that combines and compares ratios. Hartley Rogers saw a link between the two, defining the calculus ratiocinator as "an algorithm which, when applied to the symbols of any formula of the characteristica universalis, would determine whether or not that formula were true as a statement of science". A classic discussion of the calculus ratiocinator is that of Louis Couturat, who maintained that the characteristica universalis — and thus the calculus ratiocinator — were inseparable from Leibniz's encyclopedic project. Hence the characteristics, calculus ratiocinator, and encyclopedia form three pillars of Leibniz's project. == See also == Algebraic logic § Calculus of relations Mathesis universalis == References == == Bibliography == == External links == Language as Calculus versus Language as Universal Medium
Wikipedia/Calculus_ratiocinator
In logic, a truth function is a function that accepts truth values as input and produces a unique truth value as output. In other words: the input and output of a truth function are all truth values; a truth function will always output exactly one truth value, and inputting the same truth value(s) will always output the same truth value. The typical example is in propositional logic, wherein a compound statement is constructed using individual statements connected by logical connectives; if the truth value of the compound statement is entirely determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and any logical connectives used are said to be truth functional. Classical propositional logic is a truth-functional logic, in that every statement has exactly one truth value which is either true or false, and every logical connective is truth functional (with a corresponding truth table); thus every compound statement is a truth function. On the other hand, modal logic is non-truth-functional. == Overview == A logical connective is truth-functional if the truth-value of a compound sentence is a function of the truth-value of its sub-sentences. A class of connectives is truth-functional if each of its members is. For example, the connective "and" is truth-functional since a sentence like "Apples are fruits and carrots are vegetables" is true if, and only if, each of its sub-sentences "apples are fruits" and "carrots are vegetables" is true, and it is false otherwise. Some connectives of a natural language, such as English, are not truth-functional. Connectives of the form "x believes that ..." are typical examples of connectives that are not truth-functional. If, for example, Mary mistakenly believes that Al Gore was President of the USA on April 20, 2000, but she does not believe that the moon is made of green cheese, then the sentence "Mary believes that Al Gore was President of the USA on April 20, 2000" is true while "Mary believes that the moon is made of green cheese" is false. In both cases, each component sentence (i.e. "Al Gore was president of the USA on April 20, 2000" and "the moon is made of green cheese") is false, but each compound sentence formed by prefixing the phrase "Mary believes that" differs in truth-value. That is, the truth-value of a sentence of the form "Mary believes that..." is not determined solely by the truth-value of its component sentence, and hence the (unary) connective (or simply operator since it is unary) is non-truth-functional. The class of classical logic connectives (e.g. &, →) used in the construction of formulas is truth-functional. Their values for various truth-values as arguments are usually given by truth tables. Truth-functional propositional calculus is a formal system whose formulae may be interpreted as either true or false. == Table of binary truth functions == In two-valued logic, there are sixteen possible truth functions, also called Boolean functions, of two inputs P and Q. Any of these functions corresponds to a truth table of a certain logical connective in classical logic, including several degenerate cases such as a function not depending on one or both of its arguments. Truth and falsehood are denoted as 1 and 0, respectively, in the following truth tables for the sake of brevity.
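Since the sixteen truth tables themselves are not reproduced in this text, a minimal Python sketch of how they can be generated may be useful. All names here (ROWS, truth_function, and the sample connectives) are illustrative choices of this sketch, not notation from the article.

from itertools import product

# The four input rows (P, Q), with truth = 1 and falsehood = 0,
# in the order (1,1), (1,0), (0,1), (0,0).
ROWS = list(product((1, 0), repeat=2))

def truth_function(outputs):
    """Build a binary truth function from its column of four outputs."""
    table = dict(zip(ROWS, outputs))
    return lambda p, q: table[(p, q)]

# All 16 binary truth functions: every possible 4-bit output column.
all_functions = [truth_function([(n >> i) & 1 for i in range(4)])
                 for n in range(16)]
assert len(all_functions) == 16

# A few familiar members of the family, for comparison.
AND = truth_function([1, 0, 0, 0])
OR = truth_function([1, 1, 1, 0])
IMP = truth_function([1, 0, 1, 1])    # the conditional P -> Q
NAND = truth_function([0, 1, 1, 1])

for p, q in ROWS:
    print(p, q, AND(p, q), OR(p, q), IMP(p, q), NAND(p, q))

Running the loop prints one row of the combined truth table per line; the degenerate cases mentioned above show up in the enumeration as output columns that ignore one or both inputs (for example, the constant columns 0000 and 1111).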
== Functional completeness == Because a function may be expressed as a composition, a truth-functional logical calculus does not need to have dedicated symbols for all of the above-mentioned functions to be functionally complete. This is expressed in a propositional calculus as logical equivalence of certain compound statements. For example, classical logic has ¬P ∨ Q equivalent to P → Q. The conditional operator "→" is therefore not necessary for a classical-based logical system if "¬" (not) and "∨" (or) are already in use. A minimal set of operators that can express every statement expressible in the propositional calculus is called a minimal functionally complete set. A minimally complete set of operators is achieved by NAND alone {↑} and NOR alone {↓}. The following are the minimal functionally complete sets of operators whose arities do not exceed 2: One element {↑}, {↓}. Two elements { ∨ , ¬ } {\displaystyle \{\vee ,\neg \}} , { ∧ , ¬ } {\displaystyle \{\wedge ,\neg \}} , { → , ¬ } {\displaystyle \{\to ,\neg \}} , { ← , ¬ } {\displaystyle \{\gets ,\neg \}} , { → , ⊥ } {\displaystyle \{\to ,\bot \}} , { ← , ⊥ } {\displaystyle \{\gets ,\bot \}} , { → , ↮ } {\displaystyle \{\to ,\nleftrightarrow \}} , { ← , ↮ } {\displaystyle \{\gets ,\nleftrightarrow \}} , { → , ↛ } {\displaystyle \{\to ,\nrightarrow \}} , { → , ↚ } {\displaystyle \{\to ,\nleftarrow \}} , { ← , ↛ } {\displaystyle \{\gets ,\nrightarrow \}} , { ← , ↚ } {\displaystyle \{\gets ,\nleftarrow \}} , { ↛ , ¬ } {\displaystyle \{\nrightarrow ,\neg \}} , { ↚ , ¬ } {\displaystyle \{\nleftarrow ,\neg \}} , { ↛ , ⊤ } {\displaystyle \{\nrightarrow ,\top \}} , { ↚ , ⊤ } {\displaystyle \{\nleftarrow ,\top \}} , { ↛ , ↔ } {\displaystyle \{\nrightarrow ,\leftrightarrow \}} , { ↚ , ↔ } {\displaystyle \{\nleftarrow ,\leftrightarrow \}} . Three elements { ∨ , ↔ , ⊥ } {\displaystyle \{\lor ,\leftrightarrow ,\bot \}} , { ∨ , ↔ , ↮ } {\displaystyle \{\lor ,\leftrightarrow ,\nleftrightarrow \}} , { ∨ , ↮ , ⊤ } {\displaystyle \{\lor ,\nleftrightarrow ,\top \}} , { ∧ , ↔ , ⊥ } {\displaystyle \{\land ,\leftrightarrow ,\bot \}} , { ∧ , ↔ , ↮ } {\displaystyle \{\land ,\leftrightarrow ,\nleftrightarrow \}} , { ∧ , ↮ , ⊤ } {\displaystyle \{\land ,\nleftrightarrow ,\top \}} . == Algebraic properties == Some truth functions possess properties which may be expressed in the theorems containing the corresponding connective. Some of those properties that a binary truth function (or a corresponding logical connective) may have are: associativity: Within an expression containing two or more of the same associative connectives in a row, the order of the operations does not matter as long as the sequence of the operands is not changed. commutativity: The operands of the connective may be swapped without affecting the truth-value of the expression. distributivity: A connective denoted by · distributes over another connective denoted by +, if a · (b + c) = (a · b) + (a · c) for all operands a, b, c. idempotence: Whenever the operands of the operation are the same, the connective gives the operand as the result. In other words, the operation is both truth-preserving and falsehood-preserving (see below). absorption: A pair of connectives ∧ , ∨ {\displaystyle \land ,\lor } satisfies the absorption law if a ∧ ( a ∨ b ) = a ∨ ( a ∧ b ) = a {\displaystyle a\land (a\lor b)=a\lor (a\land b)=a} for all operands a, b. 
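To make the one-element complete sets concrete before turning to the completeness criterion below, the following Python sketch (with ad hoc names nand, not_, or_, and_, implies) derives the other connectives from NAND and spot-checks by brute force the equivalence of ¬P ∨ Q and P → Q cited above; it is an illustration, not a proof of minimality.

from itertools import product

def nand(p, q):
    # The Sheffer stroke: false exactly when both inputs are true.
    return 0 if (p and q) else 1

# Derived connectives, following the usual definitions from NAND.
def not_(p): return nand(p, p)
def or_(p, q): return nand(not_(p), not_(q))
def and_(p, q): return not_(nand(p, q))
def implies(p, q): return or_(not_(p), q)    # P -> Q as ¬P ∨ Q

# The conditional should be false exactly when P = 1 and Q = 0.
for p, q in product((0, 1), repeat=2):
    assert implies(p, q) == (0 if (p, q) == (1, 0) else 1)
print("NAND alone expresses ¬, ∨, ∧ and →")

The same exercise works with NOR in place of NAND, mirroring the two one-element sets {↑} and {↓} listed above.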
A set of truth functions is functionally complete if and only if for each of the following five properties it contains at least one member lacking it: monotonic: If f(a1, ..., an) ≤ f(b1, ..., bn) for all a1, ..., an, b1, ..., bn ∈ {0,1} such that a1 ≤ b1, a2 ≤ b2, ..., an ≤ bn. E.g., ∨ , ∧ , ⊤ , ⊥ {\displaystyle \vee ,\wedge ,\top ,\bot } . affine: For each variable, changing its value either always or never changes the truth-value of the operation, for all fixed values of all other variables. E.g., ¬ , ↔ {\displaystyle \neg ,\leftrightarrow } , ↮ , ⊤ , ⊥ {\displaystyle \not \leftrightarrow ,\top ,\bot } . self dual: To read the truth-value assignments for the operation from top to bottom on its truth table is the same as taking the complement of reading it from bottom to top; in other words, f(¬a1, ..., ¬an) = ¬f(a1, ..., an). E.g., ¬ {\displaystyle \neg } . truth-preserving: The interpretation under which all variables are assigned a truth value of true produces a truth value of true as a result of these operations. E.g., ∨ , ∧ , ⊤ , → , ↔ , ⊂ {\displaystyle \vee ,\wedge ,\top ,\rightarrow ,\leftrightarrow ,\subset } . (see validity) falsehood-preserving: The interpretation under which all variables are assigned a truth value of false produces a truth value of false as a result of these operations. E.g., ∨ , ∧ , ↮ , ⊥ , ⊄ , ⊅ {\displaystyle \vee ,\wedge ,\nleftrightarrow ,\bot ,\not \subset ,\not \supset } . (see validity) === Arity === A concrete function may also be referred to as an operator. In two-valued logic there are 2 nullary operators (constants), 4 unary operators, 16 binary operators, 256 ternary operators, and 2 2 n {\displaystyle 2^{2^{n}}} n-ary operators. In three-valued logic there are 3 nullary operators (constants), 27 unary operators, 19683 binary operators, 7625597484987 ternary operators, and 3 3 n {\displaystyle 3^{3^{n}}} n-ary operators. In k-valued logic, there are k nullary operators, k k {\displaystyle k^{k}} unary operators, k k 2 {\displaystyle k^{k^{2}}} binary operators, k k 3 {\displaystyle k^{k^{3}}} ternary operators, and k k n {\displaystyle k^{k^{n}}} n-ary operators. An n-ary operator in k-valued logic is a function from Z k n → Z k {\displaystyle \mathbb {Z} _{k}^{n}\to \mathbb {Z} _{k}} . Therefore, the number of such operators is | Z k | | Z k n | = k k n {\displaystyle |\mathbb {Z} _{k}|^{|\mathbb {Z} _{k}^{n}|}=k^{k^{n}}} , which is how the above numbers were derived. However, some of the operators of a particular arity are actually degenerate forms that perform a lower-arity operation on some of the inputs and ignore the rest of the inputs. Out of the 256 ternary Boolean operators cited above, ( 3 2 ) ⋅ 16 − ( 3 1 ) ⋅ 4 + ( 3 0 ) ⋅ 2 {\displaystyle {\binom {3}{2}}\cdot 16-{\binom {3}{1}}\cdot 4+{\binom {3}{0}}\cdot 2} of them are such degenerate forms of binary or lower-arity operators, using the inclusion–exclusion principle. The ternary operator f ( x , y , z ) = ¬ x {\displaystyle f(x,y,z)=\lnot x} is one such operator which is actually a unary operator applied to one input, and ignoring the other two inputs. "Not" is a unary operator; it takes a single term (¬P). The rest are binary operators, taking two terms to make a compound statement (P ∧ Q, P ∨ Q, P → Q, P ↔ Q). The set of logical operators Ω may be partitioned into disjoint subsets as follows: Ω = Ω 0 ∪ Ω 1 ∪ … ∪ Ω j ∪ … ∪ Ω m . 
{\displaystyle \Omega =\Omega _{0}\cup \Omega _{1}\cup \ldots \cup \Omega _{j}\cup \ldots \cup \Omega _{m}\,.} In this partition, Ω j {\displaystyle \Omega _{j}} is the set of operator symbols of arity j. In the more familiar propositional calculi, Ω {\displaystyle \Omega } is typically partitioned as follows: nullary operators: Ω 0 = { ⊥ , ⊤ } {\displaystyle \Omega _{0}=\{\bot ,\top \}} unary operators: Ω 1 = { ¬ } {\displaystyle \Omega _{1}=\{\lnot \}} binary operators: Ω 2 ⊃ { ∧ , ∨ , → , ↔ } {\displaystyle \Omega _{2}\supset \{\land ,\lor ,\rightarrow ,\leftrightarrow \}} == Principle of compositionality == Instead of using truth tables, logical connective symbols can be interpreted by means of an interpretation function and a functionally complete set of truth-functions (Gamut 1991), as detailed by the principle of compositionality of meaning. Let I be an interpretation function, let Φ, Ψ be any two sentences, and let the truth function fnand be defined as: fnand(T,T) = F; fnand(T,F) = fnand(F,T) = fnand(F,F) = T Then, for convenience, fnot, for, fand, and so on are defined by means of fnand: fnot(x) = fnand(x,x) for(x,y) = fnand(fnot(x), fnot(y)) fand(x,y) = fnot(fnand(x,y)) or, alternatively, fnot, for, fand, and so on are defined directly: fnot(T) = F; fnot(F) = T; for(T,T) = for(T,F) = for(F,T) = T; for(F,F) = F fand(T,T) = T; fand(T,F) = fand(F,T) = fand(F,F) = F Thus if S is a sentence that is a string of symbols consisting of logical symbols v1...vn representing logical connectives, and non-logical symbols c1...cn, then the truth-value of I ( S ) {\displaystyle I(S)} is determined entirely by the truth-values of c1...cn, i.e. of I(c1)...I(cn), provided I(v1)...I(vn) have been given by interpreting v1 to vn by means of fnand (or any other functionally complete set of truth-functions). In other words, as expected and required, S is true or false only under an interpretation of all its non-logical symbols. == Definition == Using the functions defined above, we can give a formal definition of a proposition's truth function. Let PROP be the set of all propositional variables, P R O P = { p 1 , p 2 , … } {\displaystyle PROP=\{p_{1},p_{2},\dots \}} We define a truth assignment to be any function ϕ : P R O P → { T , F } {\displaystyle \phi :PROP\to \{T,F\}} . A truth assignment is therefore an association of each propositional variable with a particular truth value. This is effectively the same as a particular row of a proposition's truth table. For a truth assignment, ϕ {\displaystyle \phi } , we define its extended truth assignment, ϕ ¯ {\displaystyle {\overline {\phi }}} , as follows. This extends ϕ {\displaystyle \phi } to a new function ϕ ¯ {\displaystyle {\overline {\phi }}} which has domain equal to the set of all propositional formulas. The range of ϕ ¯ {\displaystyle {\overline {\phi }}} is still { T , F } {\displaystyle \{T,F\}} . If A ∈ P R O P {\displaystyle A\in PROP} then ϕ ¯ ( A ) = ϕ ( A ) {\displaystyle {\overline {\phi }}(A)=\phi (A)} . If A and B are any propositional formulas, then ϕ ¯ ( ¬ A ) = f not ( ϕ ¯ ( A ) ) {\displaystyle {\overline {\phi }}(\neg A)=f_{\text{not}}({\overline {\phi }}(A))} . ϕ ¯ ( A ∧ B ) = f and ( ϕ ¯ ( A ) , ϕ ¯ ( B ) ) {\displaystyle {\overline {\phi }}(A\land B)=f_{\text{and}}({\overline {\phi }}(A),{\overline {\phi }}(B))} . ϕ ¯ ( A ∨ B ) = f or ( ϕ ¯ ( A ) , ϕ ¯ ( B ) ) {\displaystyle {\overline {\phi }}(A\lor B)=f_{\text{or}}({\overline {\phi }}(A),{\overline {\phi }}(B))} . 
ϕ ¯ ( A → B ) = ϕ ¯ ( ¬ A ∨ B ) {\displaystyle {\overline {\phi }}(A\to B)={\overline {\phi }}(\neg A\lor B)} . ϕ ¯ ( A ↔ B ) = ϕ ¯ ( ( A → B ) ∧ ( B → A ) ) {\displaystyle {\overline {\phi }}(A\leftrightarrow B)={\overline {\phi }}((A\to B)\land (B\to A))} . Finally, now that we have defined the extended truth assignment, we can use this to define the truth-function of a proposition. For a proposition, A, its truth function, f A {\displaystyle f_{A}} , has domain equal to the set of all truth assignments, and range equal to { T , F } {\displaystyle \{T,F\}} . It is defined, for each truth assignment ϕ {\displaystyle \phi } , by f A ( ϕ ) = ϕ ¯ ( A ) {\displaystyle f_{A}(\phi )={\overline {\phi }}(A)} . The value given by ϕ ¯ ( A ) {\displaystyle {\overline {\phi }}(A)} is the same as the one displayed in the final column of the truth table of A, on the row identified with ϕ {\displaystyle \phi } . == Computer science == Logical operators are implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates. NAND and NOR gates with 3 or more inputs rather than the usual 2 inputs are fairly common, although they are logically equivalent to a cascade of 2-input gates. All other operators are implemented by breaking them down into a logically equivalent combination of 2 or more of the above logic gates. The "logical equivalence" of "NAND alone", "NOR alone", and "NOT and AND" is similar to Turing equivalence. The fact that all truth functions can be expressed with NOR alone is demonstrated by the Apollo guidance computer. == See also == == Notes == == References == This article incorporates material from TruthFunction on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. == Further reading == Józef Maria Bocheński (1959), A Précis of Mathematical Logic, translated from the French and German versions by Otto Bird, Dordrecht, South Holland: D. Reidel. Alonzo Church (1944), Introduction to Mathematical Logic, Princeton, NJ: Princeton University Press. See the Introduction for a history of the truth function concept.
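As a purely illustrative rendering of the Definition section above, the following Python sketch implements the extended truth assignment recursively; the tuple encoding of formulas and the function names are assumptions of this sketch, not part of the article.

from itertools import product

# Formulas as nested tuples: ('var', 'p'), ('not', A), ('and', A, B),
# ('or', A, B), ('imp', A, B), ('iff', A, B).

def extend(phi):
    """Extend a truth assignment phi (a dict from variables to booleans)
    to all propositional formulas, following the inductive clauses."""
    def ev(formula):
        tag = formula[0]
        if tag == 'var':
            return phi[formula[1]]
        if tag == 'not':
            return not ev(formula[1])
        if tag == 'and':
            return ev(formula[1]) and ev(formula[2])
        if tag == 'or':
            return ev(formula[1]) or ev(formula[2])
        if tag == 'imp':    # A -> B handled as ¬A ∨ B
            return (not ev(formula[1])) or ev(formula[2])
        if tag == 'iff':    # A <-> B handled as (A -> B) ∧ (B -> A)
            return ev(('imp', formula[1], formula[2])) and ev(('imp', formula[2], formula[1]))
        raise ValueError(f"unknown connective: {tag}")
    return ev

# The truth function f_A of A = (p ∧ q) -> p, row by row: each truth
# assignment corresponds to one row of A's truth table.
A = ('imp', ('and', ('var', 'p'), ('var', 'q')), ('var', 'p'))
for vp, vq in product((True, False), repeat=2):
    print(vp, vq, extend({'p': vp, 'q': vq})(A))    # always True: a tautology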
Wikipedia/Truth_function
In computer science, a production or production rule is a rewrite rule that replaces some symbols with other symbols. A finite set of productions P {\displaystyle P} is the main component in the specification of a formal grammar (specifically a generative grammar). The other components are a finite set N {\displaystyle N} of nonterminal symbols, a finite set (known as an alphabet) Σ {\displaystyle \Sigma } of terminal symbols that is disjoint from N {\displaystyle N} and a distinguished symbol S ∈ N {\displaystyle S\in N} that is the start symbol. In an unrestricted grammar, a production is of the form u → v {\displaystyle u\to v} , where u {\displaystyle u} and v {\displaystyle v} are arbitrary strings of terminals and nonterminals, and u {\displaystyle u} may not be the empty string. If v {\displaystyle v} is the empty string, this is denoted by the symbol ϵ {\displaystyle \epsilon } , or λ {\displaystyle \lambda } (rather than leaving the right-hand side blank). So productions are members of the cartesian product V ∗ N V ∗ × V ∗ = ( V ∗ ∖ Σ ∗ ) × V ∗ {\displaystyle V^{*}NV^{*}\times V^{*}=(V^{*}\setminus \Sigma ^{*})\times V^{*}} , where V := N ∪ Σ {\displaystyle V:=N\cup \Sigma } is the vocabulary, ∗ {\displaystyle {}^{*}} is the Kleene star operator, V ∗ N V ∗ {\displaystyle V^{*}NV^{*}} indicates concatenation, ∪ {\displaystyle \cup } denotes set union, and ∖ {\displaystyle \setminus } denotes set minus or set difference. If we do not allow the start symbol to occur in v {\displaystyle v} (the word on the right side), we have to replace V ∗ {\displaystyle V^{*}} by ( V ∖ { S } ) ∗ {\displaystyle (V\setminus \{S\})^{*}} on the right side of the cartesian product symbol. The other types of formal grammar in the Chomsky hierarchy impose additional restrictions on what constitutes a production. Notably in a context-free grammar, the left-hand side of a production must be a single nonterminal symbol. So productions are of the form: N → ( N ∪ Σ ) ∗ {\displaystyle N\to (N\cup \Sigma )^{*}} == Grammar generation == To generate a string in the language, one begins with a string consisting of only a single start symbol, and then successively applies the rules (any number of times, in any order) to rewrite this string. This stops when a string containing only terminals is obtained. The language consists of all the strings that can be generated in this manner. Any particular sequence of legal choices taken during this rewriting process yields one particular string in the language. If there are multiple different ways of generating this single string, then the grammar is said to be ambiguous. For example, assume the alphabet consists of a {\displaystyle a} and b {\displaystyle b} , with the start symbol S {\displaystyle S} , and we have the following rules: 1. S → a S b {\displaystyle S\rightarrow aSb} 2. S → b a {\displaystyle S\rightarrow ba} then we start with S {\displaystyle S} , and can choose a rule to apply to it. If we choose rule 1, we replace S {\displaystyle S} with a S b {\displaystyle aSb} and obtain the string a S b {\displaystyle aSb} . If we choose rule 1 again, we replace S {\displaystyle S} with a S b {\displaystyle aSb} and obtain the string a a S b b {\displaystyle aaSbb} . This process is repeated until we only have symbols from the alphabet (i.e., a {\displaystyle a} and b {\displaystyle b} ). If we now choose rule 2, we replace S {\displaystyle S} with b a {\displaystyle ba} and obtain the string a a b a b b {\displaystyle aababb} , and are done. 
We can write this series of choices more briefly, using symbols: S ⇒ a S b ⇒ a a S b b ⇒ a a b a b b {\displaystyle S\Rightarrow aSb\Rightarrow aaSbb\Rightarrow aababb} . The language of the grammar is the set of all the strings that can be generated using this process: { b a , a b a b , a a b a b b , a a a b a b b b , … } {\displaystyle \{ba,abab,aababb,aaababbb,\dotsc \}} . == See also == Formal grammar Finite automata Generative grammar L-system Rewrite rule Backus–Naur form (A compact form for writing the productions of a context-free grammar.) Phrase structure rule Post canonical system (Emil Post's production systems, a model of computation.) == References ==
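A brief Python sketch of the rewriting process illustrated above may be helpful; the rule table, the leftmost-rewriting strategy, and the names RULES and generate are choices of this illustration, not prescribed by the article.

from collections import deque

# The example grammar above: S -> aSb | ba, with terminals a, b.
RULES = {'S': ['aSb', 'ba']}

def generate(start='S', limit=6):
    """Return the first few terminal strings derivable from `start`,
    found by breadth-first application of the rules."""
    seen, queue, results = set(), deque([start]), []
    while queue and len(results) < limit:
        s = queue.popleft()
        if all(c.islower() for c in s):    # only terminals left
            results.append(s)
            continue
        # Rewrite the leftmost nonterminal in every possible way.
        i = next(i for i, c in enumerate(s) if c.isupper())
        for rhs in RULES[s[i]]:
            t = s[:i] + rhs + s[i + 1:]
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return results

print(generate())    # ['ba', 'abab', 'aababb', 'aaababbb', ...]

The printed prefix of the language matches the set given above; because every rule here has a single nonterminal on the left, this sketch covers only context-free grammars, not the unrestricted case.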
Wikipedia/Production_(computer_science)
First-order logic, also called predicate logic, predicate calculus, or quantificational logic, is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables. Rather than propositions such as "all humans are mortal", in first-order logic one can have expressions in the form "for all x, if x is a human, then x is mortal", where "for all x" is a quantifier, x is a variable, and "... is a human" and "... is mortal" are predicates. This distinguishes it from propositional logic, which does not use quantifiers or relations;: 161  in this sense, propositional logic is the foundation of first-order logic. A theory about a topic, such as set theory, a theory for groups, or a formal theory of arithmetic, is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. "Theory" is sometimes understood in a more formal sense as just a set of sentences in first-order logic. The term "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which quantification over predicates, functions, or both, is permitted.: 56  In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound (all provable statements are true in all models) and complete (all statements which are true in all models are provable). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem. First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures, i.e. categorical axiom systems, can be obtained in stronger logics such as second-order logic. The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001). == Introduction == While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate evaluates to true or false for an entity or entities in the domain of discourse. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences themselves are viewed as the individuals of study, and might be denoted, for example, by variables such as p and q. 
They are not viewed as an application of a predicate, such as isPhil {\displaystyle {\text{isPhil}}} , to any particular objects in the domain of discourse; rather, each is viewed purely as an utterance which is either true or false. However, in first-order logic, these two sentences may be framed as statements that a certain individual or non-logical object has a property. In this example, both sentences happen to have the common form isPhil ( x ) {\displaystyle {\text{isPhil}}(x)} for some individual x {\displaystyle x} : in the first sentence the value of the variable x is "Socrates", and in the second sentence it is "Plato". Due to the ability to speak about non-logical individuals along with the original logical connectives, first-order logic includes propositional logic.: 29–30  The truth of a formula such as "x is a philosopher" depends on which object is denoted by x and on the interpretation of the predicate "is a philosopher". Consequently, "x is a philosopher" alone does not have a definite truth value of true or false, and is akin to a sentence fragment. Relationships between predicates can be stated using logical connectives. For example, the first-order formula "if x is a philosopher, then x is a scholar" is a conditional statement with "x is a philosopher" as its hypothesis, and "x is a scholar" as its conclusion, which again needs specification of x in order to have a definite truth value. Quantifiers can be applied to variables in a formula. The variable x in the previous formula can be universally quantified, for instance, with the first-order sentence "For every x, if x is a philosopher, then x is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if x is a philosopher, then x is a scholar" holds for all choices of x. The negation of the sentence "For every x, if x is a philosopher, then x is a scholar" is logically equivalent to the sentence "There exists x such that x is a philosopher and x is not a scholar". The existential quantifier "there exists" expresses the idea that the claim "x is a philosopher and x is not a scholar" holds for some choice of x. The predicates "is a philosopher" and "is a scholar" each take a single variable. In general, predicates can take several variables. In the first-order sentence "Socrates is the teacher of Plato", the predicate "is the teacher of" takes two variables. An interpretation (or model) of a first-order formula specifies what each predicate means, and the entities that can instantiate the variables. These entities form the domain of discourse or universe, which is usually required to be a nonempty set. For example, consider the sentence "There exists x such that x is a philosopher." This sentence is seen as being true in an interpretation such that the domain of discourse consists of all human beings, and that the predicate "is a philosopher" is understood as "was the author of the Republic." It is true, as witnessed by Plato in that text. There are two key parts of first-order logic. The syntax determines which finite sequences of symbols are well-formed expressions in first-order logic, while the semantics determines the meanings behind these expressions. == Syntax == Unlike natural languages, such as English, the language of first-order logic is completely formal, so that it can be mechanically determined whether a given expression is well formed. 
There are two key types of well-formed expressions: terms, which intuitively represent objects, and formulas, which intuitively express statements that can be true or false. The terms and formulas of first-order logic are strings of symbols, where all the symbols together form the alphabet of the language. === Alphabet === As with all formal languages, the nature of the symbols themselves is outside the scope of formal logic; they are often regarded simply as letters and punctuation symbols. It is common to divide the symbols of the alphabet into logical symbols, which always have the same meaning, and non-logical symbols, whose meaning varies by interpretation. For example, the logical symbol ∧ {\displaystyle \land } always represents "and"; it is never interpreted as "or", which is represented by the logical symbol ∨ {\displaystyle \lor } . However, a non-logical predicate symbol such as Phil(x) could be interpreted to mean "x is a philosopher", "x is a man named Philip", or any other unary predicate depending on the interpretation at hand. ==== Logical symbols ==== Logical symbols are a set of characters that vary by author, but usually include the following: Quantifier symbols: ∀ for universal quantification, and ∃ for existential quantification Logical connectives: ∧ for conjunction, ∨ for disjunction, → for implication, ↔ for biconditional, ¬ for negation. Some authors use Cpq instead of → and Epq instead of ↔, especially in contexts where → is used for other purposes. Moreover, the horseshoe ⊃ may replace →; the triple-bar ≡ may replace ↔; a tilde (~), Np, or Fp may replace ¬; a double bar ‖ {\displaystyle \|} , + {\displaystyle +} , or Apq may replace ∨; and an ampersand &, Kpq, or the middle dot ⋅ may replace ∧, especially if these symbols are not available for technical reasons. Parentheses, brackets, and other punctuation symbols. The choice of such symbols varies depending on context. An infinite set of variables, often denoted by lowercase letters at the end of the alphabet x, y, z, ... . Subscripts are often used to distinguish variables: x0, x1, x2, ... . An equality symbol (sometimes, identity symbol) = (see § Equality and its axioms below). Not all of these symbols are required in first-order logic. Either one of the quantifiers along with negation, conjunction (or disjunction), variables, brackets, and equality suffices. Other logical symbols include the following: Truth constants: T, or ⊤ for "true" and F, or ⊥ for "false". Without any such logical operators of valence 0, these two constants can only be expressed using quantifiers. Additional logical connectives such as the Sheffer stroke, Dpq (NAND), and exclusive or, Jpq. ==== Non-logical symbols ==== Non-logical symbols represent predicates (relations), functions and constants. It used to be standard practice to use a fixed, infinite set of non-logical symbols for all purposes: For every integer n ≥ 0, there is a collection of n-ary, or n-place, predicate symbols. Because they represent relations between n elements, they are also called relation symbols. For each arity n, there is an infinite supply of them: Pn0, Pn1, Pn2, Pn3, ... For every integer n ≥ 0, there are infinitely many n-ary function symbols: f n0, f n1, f n2, f n3, ... When the arity of a predicate symbol or function symbol is clear from context, the superscript n is often omitted. In this traditional approach, there is only one language of first-order logic. This approach is still common, especially in philosophically oriented books. 
A more recent practice is to use different non-logical symbols according to the application one has in mind. Therefore, it has become necessary to name the set of all non-logical symbols used in a particular application. This choice is made via a signature. Typical signatures in mathematics are {1, ×} or just {×} for groups, or {0, 1, +, ×, <} for ordered fields. There are no restrictions on the number of non-logical symbols. The signature can be empty, finite, or infinite, even uncountable. Uncountable signatures occur for example in modern proofs of the Löwenheim–Skolem theorem. Though signatures might in some cases imply how non-logical symbols are to be interpreted, interpretation of the non-logical symbols in the signature is separate (and not necessarily fixed). Signatures concern syntax rather than semantics. In this approach, every non-logical symbol is of one of the following types: A predicate symbol (or relation symbol) with some valence (or arity, number of arguments) greater than or equal to 0. These are often denoted by uppercase letters such as P, Q and R. Examples: In P(x), P is a predicate symbol of valence 1. One possible interpretation is "x is a man". In Q(x,y), Q is a predicate symbol of valence 2. Possible interpretations include "x is greater than y" and "x is the father of y". Relations of valence 0 can be identified with propositional variables, which can stand for any statement. One possible interpretation of R is "Socrates is a man". A function symbol, with some valence greater than or equal to 0. These are often denoted by lowercase roman letters such as f, g and h. Examples: f(x) may be interpreted as "the father of x". In arithmetic, it may stand for "-x". In set theory, it may stand for "the power set of x". In arithmetic, g(x,y) may stand for "x+y". In set theory, it may stand for "the union of x and y". Function symbols of valence 0 are called constant symbols, and are often denoted by lowercase letters at the beginning of the alphabet such as a, b and c. The symbol a may stand for Socrates. In arithmetic, it may stand for 0. In set theory, it may stand for the empty set. The traditional approach can be recovered in the modern approach, by simply specifying the "custom" signature to consist of the traditional sequences of non-logical symbols. === Formation rules === The formation rules define the terms and formulas of first-order logic. When terms and formulas are represented as strings of symbols, these rules can be used to write a formal grammar for terms and formulas. These rules are generally context-free (each production has a single symbol on the left side), except that the set of symbols may be allowed to be infinite and there may be many start symbols, for example the variables in the case of terms. ==== Terms ==== The set of terms is inductively defined by the following rules: Variables. Any variable symbol is a term. Functions. If f is an n-ary function symbol, and t1, ..., tn are terms, then f(t1,...,tn) is a term. In particular, symbols denoting individual constants are nullary function symbols, and thus are terms. Only expressions which can be obtained by finitely many applications of rules 1 and 2 are terms. For example, no expression involving a predicate symbol is a term. ==== Formulas ==== The set of formulas (also called well-formed formulas or WFFs) is inductively defined by the following rules: Predicate symbols. If P is an n-ary predicate symbol and t1, ..., tn are terms then P(t1,...,tn) is a formula. Equality. 
If the equality symbol is considered part of logic, and t1 and t2 are terms, then t1 = t2 is a formula. Negation. If φ {\displaystyle \varphi } is a formula, then ¬ φ {\displaystyle \lnot \varphi } is a formula. Binary connectives. If φ {\displaystyle \varphi } and ψ {\displaystyle \psi } are formulas, then ( φ → ψ {\displaystyle \varphi \rightarrow \psi } ) is a formula. Similar rules apply to other binary logical connectives. Quantifiers. If φ {\displaystyle \varphi } is a formula and x is a variable, then ∀ x φ {\displaystyle \forall x\varphi } (for all x, φ {\displaystyle \varphi } holds) and ∃ x φ {\displaystyle \exists x\varphi } (there exists x such that φ {\displaystyle \varphi } ) are formulas. Only expressions which can be obtained by finitely many applications of rules 1–5 are formulas. The formulas obtained from the first two rules are said to be atomic formulas. For example: ∀ x ∀ y ( P ( f ( x ) ) → ¬ ( P ( x ) → Q ( f ( y ) , x , z ) ) ) {\displaystyle \forall x\forall y(P(f(x))\rightarrow \neg (P(x)\rightarrow Q(f(y),x,z)))} is a formula, if f is a unary function symbol, P a unary predicate symbol, and Q a ternary predicate symbol. However, ∀ x x → {\displaystyle \forall x\,x\rightarrow } is not a formula, although it is a string of symbols from the alphabet. The role of the parentheses in the definition is to ensure that any formula can only be obtained in one way—by following the inductive definition (i.e., there is a unique parse tree for each formula). This property is known as unique readability of formulas. There are many conventions for where parentheses are used in formulas. For example, some authors use colons or full stops instead of parentheses, or change the places in which parentheses are inserted. Each author's particular definition must be accompanied by a proof of unique readability. ==== Notational conventions ==== For convenience, conventions have been developed about the precedence of the logical operators, to avoid the need to write parentheses in some cases. These rules are similar to the order of operations in arithmetic. A common convention is: ¬ {\displaystyle \lnot } is evaluated first ∧ {\displaystyle \land } and ∨ {\displaystyle \lor } are evaluated next Quantifiers are evaluated next → {\displaystyle \to } is evaluated last. Moreover, extra punctuation not required by the definition may be inserted—to make formulas easier to read. Thus the formula: ¬ ∀ x P ( x ) → ∃ x ¬ P ( x ) {\displaystyle \lnot \forall xP(x)\to \exists x\lnot P(x)} might be written as: ( ¬ [ ∀ x P ( x ) ] ) → ∃ x [ ¬ P ( x ) ] . {\displaystyle (\lnot [\forall xP(x)])\to \exists x[\lnot P(x)].} === Free and bound variables === In a formula, a variable may occur free or bound (or both). One formalization of this notion is due to Quine: first the concept of a variable occurrence is defined, then whether an occurrence is free or bound, and finally whether a variable symbol overall is free or bound. In order to distinguish different occurrences of the identical symbol x, each occurrence of a variable symbol x in a formula φ is identified with the initial substring of φ up to the point at which said instance of the symbol x appears.: 297  Then, an occurrence of x is said to be bound if that occurrence of x lies within the scope of at least one of ∃ x {\displaystyle \exists x} or ∀ x {\displaystyle \forall x} . 
Finally, x is bound in φ if all occurrences of x in φ are bound.: 142–143  Intuitively, a variable symbol is free in a formula if at no point is it quantified;: 142–143  in ∀y P(x, y), the sole occurrence of variable x is free while that of y is bound. The free and bound variable occurrences in a formula are defined inductively as follows. Atomic formulas If φ is an atomic formula, then x occurs free in φ if and only if x occurs in φ. Moreover, there are no bound variables in any atomic formula. Negation x occurs free in ¬φ if and only if x occurs free in φ. x occurs bound in ¬φ if and only if x occurs bound in φ. Binary connectives x occurs free in (φ → ψ) if and only if x occurs free in either φ or ψ. x occurs bound in (φ → ψ) if and only if x occurs bound in either φ or ψ. The same rule applies to any other binary connective in place of →. Quantifiers x occurs free in ∀y φ, if and only if x occurs free in φ and x is a different symbol from y. Also, x occurs bound in ∀y φ, if and only if x is y or x occurs bound in φ. The same rule holds with ∃ in place of ∀. For example, in ∀x ∀y (P(x) → Q(x,f(x),z)), x and y occur only bound, z occurs only free, and w is neither because it does not occur in the formula. Free and bound variables of a formula need not be disjoint sets: in the formula P(x) → ∀x Q(x), the first occurrence of x, as argument of P, is free while the second one, as argument of Q, is bound. A formula in first-order logic with no free variable occurrences is called a first-order sentence. These are the formulas that will have well-defined truth values under an interpretation. For example, whether a formula such as Phil(x) is true must depend on what x represents. But the sentence ∃x Phil(x) will be either true or false in a given interpretation. === Example: ordered abelian groups === In mathematics, the language of ordered abelian groups has one constant symbol 0, one unary function symbol −, one binary function symbol +, and one binary relation symbol ≤. Then: The expressions +(x, y) and +(x, +(y, −(z))) are terms. These are usually written as x + y and x + y − z. The expressions +(x, y) = 0 and ≤(+(x, +(y, −(z))), +(x, y)) are atomic formulas. These are usually written as x + y = 0 and x + y − z ≤ x + y. The expression ∀ x ∀ y [ ≤ ( + ( x , y ) , z ) ] → ∀ x ∀ y [ + ( x , y ) = 0 ] {\displaystyle \forall x\forall y\,[\mathop {\leq } (\mathop {+} (x,y),z)]\to \forall x\,\forall y\,[\mathop {+} (x,y)=0]} is a formula, which is usually written as ∀ x ∀ y ( x + y ≤ z ) → ∀ x ∀ y ( x + y = 0 ) . {\displaystyle \forall x\forall y(x+y\leq z)\to \forall x\forall y(x+y=0).} This formula has one free variable, z. The axioms for ordered abelian groups can be expressed as a set of sentences in the language. For example, the axiom stating that the group is commutative is usually written ( ∀ x ) ( ∀ y ) [ x + y = y + x ] . {\displaystyle (\forall x)(\forall y)[x+y=y+x].} == Semantics == An interpretation of a first-order language assigns a denotation to each non-logical symbol (predicate symbol, function symbol, or constant symbol) in that language. It also determines a domain of discourse that specifies the range of the quantifiers. The result is that each term is assigned an object that it represents, each predicate is assigned a property of objects, and each sentence is assigned a truth value. In this way, an interpretation provides semantic meaning to the terms, predicates, and formulas of the language. 
The study of the interpretations of formal languages is called formal semantics. What follows is a description of the standard or Tarskian semantics for first-order logic. (It is also possible to define game semantics for first-order logic, but aside from requiring the axiom of choice, game semantics agree with Tarskian semantics for first-order logic, so game semantics will not be elaborated herein.) === First-order structures === The most common way of specifying an interpretation (especially in mathematics) is to specify a structure (also called a model; see below). The structure consists of a domain of discourse D and an interpretation function I mapping non-logical symbols to predicates, functions, and constants. The domain of discourse D is a nonempty set of "objects" of some kind. Intuitively, given an interpretation, a first-order formula becomes a statement about these objects; for example, ∃ x P ( x ) {\displaystyle \exists xP(x)} states the existence of some object in D for which the predicate P is true (or, more precisely, for which the predicate assigned to the predicate symbol P by the interpretation is true). For example, one can take D to be the set of integers. Non-logical symbols are interpreted as follows: The interpretation of an n-ary function symbol is a function from Dn to D. For example, if the domain of discourse is the set of integers, a function symbol f of arity 2 can be interpreted as the function that gives the sum of its arguments. In other words, the symbol f is associated with the function ⁠ I ( f ) {\displaystyle I(f)} ⁠ which, in this interpretation, is addition. The interpretation of a constant symbol (a function symbol of arity 0) is a function from D0 (a set whose only member is the empty tuple) to D, which can be simply identified with an object in D. For example, an interpretation may assign the value I ( c ) = 10 {\displaystyle I(c)=10} to the constant symbol c {\displaystyle c} . The interpretation of an n-ary predicate symbol is a set of n-tuples of elements of D, giving the arguments for which the predicate is true. For example, an interpretation I ( P ) {\displaystyle I(P)} of a binary predicate symbol P may be the set of pairs of integers such that the first one is less than the second. According to this interpretation, the predicate P would be true if its first argument is less than its second argument. Equivalently, predicate symbols may be assigned Boolean-valued functions from Dn to { t r u e , f a l s e } {\displaystyle \{\mathrm {true,false} \}} . === Evaluation of truth values === A formula evaluates to true or false given an interpretation and a variable assignment μ that associates an element of the domain of discourse with each variable. The reason that a variable assignment is required is to give meanings to formulas with free variables, such as y = x {\displaystyle y=x} . The truth value of this formula changes depending on the values that x and y denote. First, the variable assignment μ can be extended to all terms of the language, with the result that each term maps to a single element of the domain of discourse. The following rules are used to make this assignment: Variables. Each variable x evaluates to μ(x) Functions. 
Given terms t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} that have been evaluated to elements d 1 , … , d n {\displaystyle d_{1},\ldots ,d_{n}} of the domain of discourse, and an n-ary function symbol f, the term f ( t 1 , … , t n ) {\displaystyle f(t_{1},\ldots ,t_{n})} evaluates to ( I ( f ) ) ( d 1 , … , d n ) {\displaystyle (I(f))(d_{1},\ldots ,d_{n})} . Next, each formula is assigned a truth value. The inductive definition used to make this assignment is called the T-schema. Atomic formulas (1). A formula P ( t 1 , … , t n ) {\displaystyle P(t_{1},\ldots ,t_{n})} is assigned the value true or false depending on whether ⟨ v 1 , … , v n ⟩ ∈ I ( P ) {\displaystyle \langle v_{1},\ldots ,v_{n}\rangle \in I(P)} , where v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} are the evaluations of the terms t 1 , … , t n {\displaystyle t_{1},\ldots ,t_{n}} and I ( P ) {\displaystyle I(P)} is the interpretation of P {\displaystyle P} , which by assumption is a subset of D n {\displaystyle D^{n}} . Atomic formulas (2). A formula t 1 = t 2 {\displaystyle t_{1}=t_{2}} is assigned true if t 1 {\displaystyle t_{1}} and t 2 {\displaystyle t_{2}} evaluate to the same object of the domain of discourse (see the section on equality below). Logical connectives. A formula in the form ¬ φ {\displaystyle \neg \varphi } , φ → ψ {\displaystyle \varphi \rightarrow \psi } , etc. is evaluated according to the truth table for the connective in question, as in propositional logic. Existential quantifiers. A formula ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} is true according to M and μ {\displaystyle \mu } if there exists an evaluation μ ′ {\displaystyle \mu '} of the variables that differs from μ {\displaystyle \mu } at most regarding the evaluation of x and such that φ is true according to the interpretation M and the variable assignment μ ′ {\displaystyle \mu '} . This formal definition captures the idea that ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} is true if and only if there is a way to choose a value for x such that φ(x) is satisfied. Universal quantifiers. A formula ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} is true according to M and μ {\displaystyle \mu } if φ(x) is true for every pair composed by the interpretation M and some variable assignment μ ′ {\displaystyle \mu '} that differs from μ {\displaystyle \mu } at most on the value of x. This captures the idea that ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} is true if every possible choice of a value for x causes φ(x) to be true. If a formula does not contain free variables, and so is a sentence, then the initial variable assignment does not affect its truth value. In other words, a sentence is true according to M and μ {\displaystyle \mu } if and only if it is true according to M and every other variable assignment μ ′ {\displaystyle \mu '} . There is a second common approach to defining truth values that does not rely on variable assignment functions. Instead, given an interpretation M, one first adds to the signature a collection of constant symbols, one for each element of the domain of discourse in M; say that for each d in the domain the constant symbol cd is fixed. The interpretation is extended so that each new constant symbol is assigned to its corresponding element of the domain. One now defines truth for quantified formulas syntactically, as follows: Existential quantifiers (alternate). 
A formula ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} is true according to M if there is some d in the domain of discourse such that φ ( c d ) {\displaystyle \varphi (c_{d})} holds. Here φ ( c d ) {\displaystyle \varphi (c_{d})} is the result of substituting cd for every free occurrence of x in φ. Universal quantifiers (alternate). A formula ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} is true according to M if, for every d in the domain of discourse, φ ( c d ) {\displaystyle \varphi (c_{d})} is true according to M. This alternate approach gives exactly the same truth values to all sentences as the approach via variable assignments. === Validity, satisfiability, and logical consequence === If a sentence φ evaluates to true under a given interpretation M, one says that M satisfies φ; this is denoted M ⊨ φ {\displaystyle M\vDash \varphi } . A sentence is satisfiable if there is some interpretation under which it is true. This is a bit different from the symbol ⊨ {\displaystyle \vDash } from model theory, where M ⊨ ϕ {\displaystyle M\vDash \phi } denotes satisfiability in a model, i.e. "there is a suitable assignment of values in M {\displaystyle M} 's domain to variable symbols of ϕ {\displaystyle \phi } ". Satisfiability of formulas with free variables is more complicated, because an interpretation on its own does not determine the truth value of such a formula. The most common convention is that a formula φ with free variables x 1 {\displaystyle x_{1}} , ..., x n {\displaystyle x_{n}} is said to be satisfied by an interpretation if the formula φ remains true regardless which individuals from the domain of discourse are assigned to its free variables x 1 {\displaystyle x_{1}} , ..., x n {\displaystyle x_{n}} . This has the same effect as saying that a formula φ is satisfied if and only if its universal closure ∀ x 1 … ∀ x n ϕ ( x 1 , … , x n ) {\displaystyle \forall x_{1}\dots \forall x_{n}\phi (x_{1},\dots ,x_{n})} is satisfied. A formula is logically valid (or simply valid) if it is true in every interpretation. These formulas play a role similar to tautologies in propositional logic. A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ. === Algebraizations === An alternate approach to the semantics of first-order logic proceeds via abstract algebra. This approach generalizes the Lindenbaum–Tarski algebras of propositional logic. There are three ways of eliminating quantified variables from first-order logic that do not involve replacing quantifiers with other variable binding term operators: Cylindric algebra, by Alfred Tarski, et al.; Polyadic algebra, by Paul Halmos; Predicate functor logic, primarily by Willard Quine. These algebras are all lattices that properly extend the two-element Boolean algebra. Tarski and Givant (1987) showed that the fragment of first-order logic that has no atomic sentence lying in the scope of more than three quantifiers has the same expressive power as relation algebra.: 32–33  This fragment is of great interest because it suffices for Peano arithmetic and most axiomatic set theory, including the canonical Zermelo–Fraenkel set theory (ZFC). 
They also prove that first-order logic with a primitive ordered pair is equivalent to a relation algebra with two ordered pair projection functions.: 803  === First-order theories, models, and elementary classes === A first-order theory of a particular signature is a set of axioms, which are sentences consisting of symbols from that signature. The set of axioms is often finite or recursively enumerable, in which case the theory is called effective. Some authors require theories to also include all logical consequences of the axioms. The axioms are considered to hold within the theory and from them other sentences that hold within the theory can be derived. A first-order structure that satisfies all sentences in a given theory is said to be a model of the theory. An elementary class is the set of all structures satisfying a particular theory. These classes are a main subject of study in model theory. Many theories have an intended interpretation, a certain model that is kept in mind when studying the theory. For example, the intended interpretation of Peano arithmetic consists of the usual natural numbers with their usual operations. However, the Löwenheim–Skolem theorem shows that most first-order theories will also have other, nonstandard models. A theory is consistent (within a deductive system) if it is not possible to prove a contradiction from the axioms of the theory. A theory is complete if, for every formula in its signature, either that formula or its negation is a logical consequence of the axioms of the theory. Gödel's incompleteness theorem shows that effective first-order theories that include a sufficient portion of the theory of the natural numbers can never be both consistent and complete. === Empty domains === The definition above requires that the domain of discourse of any interpretation must be nonempty. There are settings, such as inclusive logic, where empty domains are permitted. Moreover, if a class of algebraic structures includes an empty structure (for example, there is an empty poset), that class can only be an elementary class in first-order logic if empty domains are permitted or the empty structure is removed from the class. There are several difficulties with empty domains, however: Many common rules of inference are valid only when the domain of discourse is required to be nonempty. One example is the rule stating that φ ∨ ∃ x ψ {\displaystyle \varphi \lor \exists x\psi } implies ∃ x ( φ ∨ ψ ) {\displaystyle \exists x(\varphi \lor \psi )} when x is not a free variable in φ {\displaystyle \varphi } . This rule, which is used to put formulas into prenex normal form, is sound in nonempty domains, but unsound if the empty domain is permitted. The definition of truth in an interpretation that uses a variable assignment function cannot work with empty domains, because there are no variable assignment functions whose range is empty. (Similarly, one cannot assign interpretations to constant symbols.) This truth definition requires that one must select a variable assignment function (μ above) before truth values for even atomic formulas can be defined. Then the truth value of a sentence is defined to be its truth value under any variable assignment, and it is proved that this truth value does not depend on which assignment is chosen. This technique does not work if there are no assignment functions at all; it must be changed to accommodate empty domains. Thus, when the empty domain is permitted, it must often be treated as a special case. 
Most authors, however, simply exclude the empty domain by definition. == Deductive systems == A deductive system is used to demonstrate, on a purely syntactic basis, that one formula is a logical consequence of another formula. There are many such systems for first-order logic, including Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method, and resolution. These share the common property that a deduction is a finite syntactic object; the format of this object, and the way it is constructed, vary widely. These finite deductions themselves are often called derivations in proof theory. They are also often called proofs but, unlike natural-language mathematical proofs, are completely formalized. A deductive system is sound if any formula that can be derived in the system is logically valid. Conversely, a deductive system is complete if every logically valid formula is derivable. All of the systems discussed in this article are both sound and complete. They also share the property that it is possible to effectively verify that a purportedly valid deduction is actually a deduction; such deduction systems are called effective. A key property of deductive systems is that they are purely syntactic, so that derivations can be verified without considering any interpretation. Thus, a sound argument is correct in every possible interpretation of the language, regardless of whether that interpretation is about mathematics, economics, or some other area. In general, logical consequence in first-order logic is only semidecidable: if a sentence A logically implies a sentence B then this can be discovered (for example, by searching for a proof until one is found, using some effective, sound, complete proof system). However, if A does not logically imply B, this does not mean that A logically implies the negation of B. There is no effective procedure that, given formulas A and B, always correctly decides whether A logically implies B. === Rules of inference === A rule of inference states that, given a particular formula (or set of formulas) with a certain property as a hypothesis, another specific formula (or set of formulas) can be derived as a conclusion. The rule is sound (or truth-preserving) if it preserves validity in the sense that whenever any interpretation satisfies the hypothesis, that interpretation also satisfies the conclusion. For example, one common rule of inference is the rule of substitution. If t is a term and φ is a formula possibly containing the variable x, then φ[t/x] is the result of replacing all free instances of x by t in φ. The substitution rule states that for any φ and any term t, one can conclude φ[t/x] from φ provided that no free variable of t becomes bound during the substitution process. (If some free variable of t becomes bound, then to substitute t for x it is first necessary to change the bound variables of φ to differ from the free variables of t.) To see why the restriction on bound variables is necessary, consider the logically valid formula φ given by ∃ x ( x = y ) {\displaystyle \exists x(x=y)} , in the signature (0, 1, +, ×, =) of arithmetic. If t is the term "x + 1", the formula φ[t/y] is ∃ x ( x = x + 1 ) {\displaystyle \exists x(x=x+1)} , which will be false in many interpretations. The problem is that the free variable x of t became bound during the substitution. 
The intended replacement can be obtained by renaming the bound variable x of φ to something else, say z, so that the formula after substitution is ∃ z ( z = x + 1 ) {\displaystyle \exists z(z=x+1)} , which is again logically valid. The substitution rule demonstrates several common aspects of rules of inference. It is entirely syntactical; one can tell whether it was correctly applied without appeal to any interpretation. It has (syntactically defined) limitations on when it can be applied, which must be respected to preserve the correctness of derivations. Moreover, as is often the case, these limitations are necessary because of interactions between free and bound variables that occur during syntactic manipulations of the formulas involved in the inference rule. === Hilbert-style systems and natural deduction === A deduction in a Hilbert-style deductive system is a list of formulas, each of which is a logical axiom, a hypothesis that has been assumed for the derivation at hand or follows from previous formulas via a rule of inference. The logical axioms consist of several axiom schemas of logically valid formulas; these encompass a significant amount of propositional logic. The rules of inference enable the manipulation of quantifiers. Typical Hilbert-style systems have a small number of rules of inference, along with several infinite schemas of logical axioms. It is common to have only modus ponens and universal generalization as rules of inference. Natural deduction systems resemble Hilbert-style systems in that a deduction is a finite list of formulas. However, natural deduction systems have no logical axioms; they compensate by adding additional rules of inference that can be used to manipulate the logical connectives in formulas in the proof. === Sequent calculus === The sequent calculus was developed to study the properties of natural deduction systems. Instead of working with one formula at a time, it uses sequents, which are expressions of the form: A 1 , … , A n ⊢ B 1 , … , B k , {\displaystyle A_{1},\ldots ,A_{n}\vdash B_{1},\ldots ,B_{k},} where A1, ..., An, B1, ..., Bk are formulas and the turnstile symbol ⊢ {\displaystyle \vdash } is used as punctuation to separate the two halves. Intuitively, a sequent expresses the idea that ( A 1 ∧ ⋯ ∧ A n ) {\displaystyle (A_{1}\land \cdots \land A_{n})} implies ( B 1 ∨ ⋯ ∨ B k ) {\displaystyle (B_{1}\lor \cdots \lor B_{k})} . === Tableaux method === Unlike the methods just described the derivations in the tableaux method are not lists of formulas. Instead, a derivation is a tree of formulas. To show that a formula A is provable, the tableaux method attempts to demonstrate that the negation of A is unsatisfiable. The tree of the derivation has ¬ A {\displaystyle \lnot A} at its root; the tree branches in a way that reflects the structure of the formula. For example, to show that C ∨ D {\displaystyle C\lor D} is unsatisfiable requires showing that C and D are each unsatisfiable; this corresponds to a branching point in the tree with parent C ∨ D {\displaystyle C\lor D} and children C and D. === Resolution === The resolution rule is a single rule of inference that, together with unification, is sound and complete for first-order logic. As with the tableaux method, a formula is proved by showing that the negation of the formula is unsatisfiable. Resolution is commonly used in automated theorem proving. 
The resolution method works only with formulas that are disjunctions of literals, that is, of atomic formulas and their negations; arbitrary formulas must first be converted to this clausal form through Skolemization and conversion to conjunctive normal form. The resolution rule states that from the hypotheses {\displaystyle A_{1}\lor \cdots \lor A_{k}\lor C} and {\displaystyle B_{1}\lor \cdots \lor B_{l}\lor \lnot C} , the conclusion {\displaystyle A_{1}\lor \cdots \lor A_{k}\lor B_{1}\lor \cdots \lor B_{l}} can be obtained. === Provable identities === Many identities can be proved, which establish equivalences between particular formulas. These identities allow for rearranging formulas by moving quantifiers across other connectives and are useful for putting formulas in prenex normal form. Some provable identities include:
{\displaystyle \lnot \forall x\,P(x)\Leftrightarrow \exists x\,\lnot P(x)}
{\displaystyle \lnot \exists x\,P(x)\Leftrightarrow \forall x\,\lnot P(x)}
{\displaystyle \forall x\,\forall y\,P(x,y)\Leftrightarrow \forall y\,\forall x\,P(x,y)}
{\displaystyle \exists x\,\exists y\,P(x,y)\Leftrightarrow \exists y\,\exists x\,P(x,y)}
{\displaystyle \forall x\,P(x)\land \forall x\,Q(x)\Leftrightarrow \forall x\,(P(x)\land Q(x))}
{\displaystyle \exists x\,P(x)\lor \exists x\,Q(x)\Leftrightarrow \exists x\,(P(x)\lor Q(x))}
{\displaystyle P\land \exists x\,Q(x)\Leftrightarrow \exists x\,(P\land Q(x))} (where {\displaystyle x} must not occur free in {\displaystyle P} )
{\displaystyle P\lor \forall x\,Q(x)\Leftrightarrow \forall x\,(P\lor Q(x))} (where {\displaystyle x} must not occur free in {\displaystyle P} )
== Equality and its axioms == There are several different conventions for using equality (or identity) in first-order logic. The most common convention, known as first-order logic with equality, includes the equality symbol as a primitive logical symbol which is always interpreted as the real equality relation between members of the domain of discourse, such that the "two" given members are the same member. This approach also adds certain axioms about equality to the deductive system employed. These equality axioms are:: 198–200 
Reflexivity. For each variable x, x = x.
Substitution for functions. For all variables x and y, and any function symbol f, x = y → f(..., x, ...) = f(..., y, ...).
Substitution for formulas. For any variables x and y and any formula φ(z) with a free variable z, x = y → (φ(x) → φ(y)).
These are axiom schemas, each of which specifies an infinite set of axioms. The third schema is known as Leibniz's law, "the principle of substitutivity", "the indiscernibility of identicals", or "the replacement property". The second schema, involving the function symbol f, is (equivalent to) a special case of the third schema, using the formula φ(z): f(..., x, ...) = f(..., z, ...). Then x = y → (f(..., x, ...) = f(..., x, ...) → f(..., x, ...) = f(..., y, ...)). Since x = y is given, and f(..., x, ...) = f(..., x, ...) is true by reflexivity, we have f(..., x, ...) = f(..., y, ...). Many other properties of equality are consequences of the axioms above, for example:
Symmetry. If x = y then y = x.
Transitivity. If x = y and y = z then x = z.
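These axioms can be checked mechanically against a candidate interpretation of = over a finite domain. Below is a minimal Python sketch, instantiated for unary function and relation symbols (higher arities are analogous); the dict-and-set representation of the structure and the name check_equality_axioms are assumptions of the sketch.

```python
from itertools import product

def check_equality_axioms(domain, eq, funcs, rels):
    """Brute-force check of the equality axioms on a finite structure.
    eq:    candidate interpretation of '=', a set of pairs
    funcs: unary functions, given as dicts domain -> domain
    rels:  unary relations, given as subsets of the domain"""
    reflexive = all((x, x) in eq for x in domain)
    # Substitution for functions: x = y -> f(x) = f(y).
    fun_subst = all((f[x], f[y]) in eq for f in funcs for (x, y) in eq)
    # Substitution for formulas, on atomic formulas R(z): x = y -> (R(x) -> R(y)).
    rel_subst = all(y in R for R in rels for (x, y) in eq if x in R)
    return reflexive and fun_subst and rel_subst

# Example: interpret '=' as sameness of parity on {0, 1, 2, 3}.
domain = [0, 1, 2, 3]
eq = {(a, b) for a, b in product(domain, repeat=2) if a % 2 == b % 2}
succ = {x: (x + 1) % 4 for x in domain}   # respects the parity relation
evens = {0, 2}
print(check_equality_axioms(domain, eq, [succ], [evens]))   # True
```

Symmetry and transitivity need not be tested separately, since, as just noted, they follow from reflexivity and the substitution schema. A relation passing this test is precisely a congruence, which is the situation discussed in the next section for first-order logic without equality.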
=== First-order logic without equality === An alternate approach considers the equality relation to be a non-logical symbol. This convention is known as first-order logic without equality. If an equality relation is included in the signature, the axioms of equality must now be added to the theories under consideration, if desired, instead of being considered rules of logic. The main difference between this method and first-order logic with equality is that an interpretation may now interpret two distinct individuals as "equal" (although, by Leibniz's law, these will satisfy exactly the same formulas under any interpretation). That is, the equality relation may now be interpreted by an arbitrary equivalence relation on the domain of discourse that is congruent with respect to the functions and relations of the interpretation. When this second convention is followed, the term normal model is used to refer to an interpretation where no distinct individuals a and b satisfy a = b. In first-order logic with equality, only normal models are considered, and so there is no term for a model other than a normal model. When first-order logic without equality is studied, it is necessary to amend the statements of results such as the Löwenheim–Skolem theorem so that only normal models are considered. First-order logic without equality is often employed in the context of second-order arithmetic and other higher-order theories of arithmetic, where the equality relation between sets of natural numbers is usually omitted. === Defining equality within a theory === If a theory has a binary formula A(x,y) which satisfies reflexivity and Leibniz's law, the theory is said to have equality, or to be a theory with equality. The theory may not have all instances of the above schemas as axioms, but rather as derivable theorems. For example, in theories with no function symbols and a finite number of relations, it is possible to define equality in terms of the relations, by defining the two terms s and t to be equal if any relation is unchanged by changing s to t in any argument. Some theories allow other ad hoc definitions of equality: In the theory of partial orders with one relation symbol ≤, one could define s = t to be an abbreviation for s ≤ t ∧ {\displaystyle \wedge } t ≤ s. In set theory with one relation ∈, one may define s = t to be an abbreviation for ∀x (s ∈ x ↔ t ∈ x) ∧ {\displaystyle \wedge } ∀x (x ∈ s ↔ x ∈ t). This definition of equality then automatically satisfies the axioms for equality. In this case, one should replace the usual axiom of extensionality, which can be stated as ∀ x ∀ y [ ∀ z ( z ∈ x ⇔ z ∈ y ) ⇒ x = y ] {\displaystyle \forall x\forall y[\forall z(z\in x\Leftrightarrow z\in y)\Rightarrow x=y]} , with an alternative formulation ∀ x ∀ y [ ∀ z ( z ∈ x ⇔ z ∈ y ) ⇒ ∀ z ( x ∈ z ⇔ y ∈ z ) ] {\displaystyle \forall x\forall y[\forall z(z\in x\Leftrightarrow z\in y)\Rightarrow \forall z(x\in z\Leftrightarrow y\in z)]} , which says that if sets x and y have the same elements, then they also belong to the same sets. == Metalogical properties == One motivation for the use of first-order logic, rather than higher-order logic, is that first-order logic has many metalogical properties that stronger logics do not have. These results concern general properties of first-order logic itself, rather than properties of individual theories. They provide fundamental tools for the construction of models of first-order theories. 
=== Completeness and undecidability === Gödel's completeness theorem, proved by Kurt Gödel in 1929, establishes that there are sound, complete, effective deductive systems for first-order logic, and thus the first-order logical consequence relation is captured by finite provability. Naively, the statement that a formula φ logically implies a formula ψ depends on every model of φ; these models will in general be of arbitrarily large cardinality, and so logical consequence cannot be effectively verified by checking every model. However, it is possible to enumerate all finite derivations and search for a derivation of ψ from φ. If ψ is logically implied by φ, such a derivation will eventually be found. Thus first-order logical consequence is semidecidable: it is possible to make an effective enumeration of all pairs of sentences (φ,ψ) such that ψ is a logical consequence of φ. Unlike propositional logic, first-order logic is undecidable (although semidecidable), provided that the language has at least one predicate of arity at least 2 (other than equality). This means that there is no decision procedure that determines whether arbitrary formulas are logically valid. This result was established independently by Alonzo Church and Alan Turing in 1936 and 1937, respectively, giving a negative answer to the Entscheidungsproblem posed by David Hilbert and Wilhelm Ackermann in 1928. Their proofs demonstrate a connection between the unsolvability of the decision problem for first-order logic and the unsolvability of the halting problem. There are systems weaker than full first-order logic for which the logical consequence relation is decidable. These include propositional logic and monadic predicate logic, which is first-order logic restricted to unary predicate symbols and no function symbols. Other logics with no function symbols which are decidable are the guarded fragment of first-order logic, as well as two-variable logic. The Bernays–Schönfinkel class of first-order formulas is also decidable. Decidable subsets of first-order logic are also studied in the framework of description logics. === The Löwenheim–Skolem theorem === The Löwenheim–Skolem theorem shows that if a first-order theory of cardinality λ has an infinite model, then it has models of every infinite cardinality greater than or equal to λ. One of the earliest results in model theory, it implies that it is not possible to characterize countability or uncountability in a first-order language with a countable signature. That is, there is no first-order formula φ(x) such that an arbitrary structure M satisfies φ if and only if the domain of discourse of M is countable (or, in the second case, uncountable). The Löwenheim–Skolem theorem implies that infinite structures cannot be categorically axiomatized in first-order logic. For example, there is no first-order theory whose only model is the real line: any first-order theory with an infinite model also has a model of cardinality larger than the continuum. Since the real line is infinite, any theory satisfied by the real line is also satisfied by some nonstandard models. When the Löwenheim–Skolem theorem is applied to first-order set theories, the nonintuitive consequences are known as Skolem's paradox. === The compactness theorem === The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. 
This implies that if a formula is a logical consequence of an infinite set of first-order axioms, then it is a logical consequence of some finite number of those axioms. This theorem was proved first by Kurt Gödel as a consequence of the completeness theorem, but many additional proofs have been obtained over time. It is a central tool in model theory, providing a fundamental method for constructing models. The compactness theorem has a limiting effect on which collections of first-order structures are elementary classes. For example, the compactness theorem implies that any theory that has arbitrarily large finite models has an infinite model. Thus, the class of all finite graphs is not an elementary class (the same holds for many other algebraic structures). There are also more subtle limitations of first-order logic that are implied by the compactness theorem. For example, in computer science, many situations can be modeled as a directed graph of states (nodes) and connections (directed edges). Validating such a system may require showing that no "bad" state can be reached from any "good" state. Thus, one seeks to determine if the good and bad states are in different connected components of the graph. However, the compactness theorem can be used to show that connected graphs are not an elementary class in first-order logic, and there is no formula φ(x,y) of first-order logic, in the logic of graphs, that expresses the idea that there is a path from x to y. Connectedness can, however, be expressed in second-order logic, but not with only existential set quantifiers, since {\displaystyle \Sigma _{1}^{1}} also enjoys compactness. === Lindström's theorem === Per Lindström showed that the metalogical properties just discussed actually characterize first-order logic in the sense that no stronger logic can also have those properties (Ebbinghaus and Flum 1994, Chapter XIII). Lindström defined a class of abstract logical systems and gave a rigorous definition of the relative strength of a member of this class. He established two theorems for systems of this type: A logical system satisfying Lindström's definition that contains first-order logic and satisfies both the Löwenheim–Skolem theorem and the compactness theorem must be equivalent to first-order logic. A logical system satisfying Lindström's definition that has a semidecidable logical consequence relation and satisfies the Löwenheim–Skolem theorem must be equivalent to first-order logic. == Limitations == Although first-order logic is sufficient for formalizing much of mathematics and is commonly used in computer science and other fields, it has certain limitations. These include limitations on its expressiveness and limitations of the fragments of natural languages that it can describe. For instance, first-order logic is undecidable, meaning a sound, complete and terminating decision algorithm for provability is impossible. This has led to the study of interesting decidable fragments, such as C2: first-order logic with two variables and the counting quantifiers {\displaystyle \exists ^{\geq n}} and {\displaystyle \exists ^{\leq n}} . === Expressiveness === The Löwenheim–Skolem theorem shows that if a first-order theory has any infinite model, then it has infinite models of every cardinality. In particular, no first-order theory with an infinite model can be categorical.
Thus, there is no first-order theory whose only model has the set of natural numbers as its domain, or whose only model has the set of real numbers as its domain. Many extensions of first-order logic, including infinitary logics and higher-order logics, are more expressive in the sense that they do permit categorical axiomatizations of the natural numbers or real numbers. This expressiveness comes at a metalogical cost, however: by Lindström's theorem, the compactness theorem and the downward Löwenheim–Skolem theorem cannot hold in any logic stronger than first-order. === Formalizing natural languages === First-order logic is able to formalize many simple quantifier constructions in natural language, such as "every person who lives in Perth lives in Australia". Hence, first-order logic is used as a basis for knowledge representation languages, such as FO(.). Still, there are complicated features of natural language that cannot be expressed in first-order logic. "Any logical system which is appropriate as an instrument for the analysis of natural language needs a much richer structure than first-order predicate logic". == Restrictions, extensions, and variations == There are many variations of first-order logic. Some of these are inessential in the sense that they merely change notation without affecting the semantics. Others change the expressive power more significantly, by extending the semantics through additional quantifiers or other new logical symbols. For example, infinitary logics permit formulas of infinite size, and modal logics add symbols for possibility and necessity. === Restricted languages === First-order logic can be studied in languages with fewer logical symbols than were described above: Because ∃ x φ ( x ) {\displaystyle \exists x\varphi (x)} can be expressed as ¬ ∀ x ¬ φ ( x ) {\displaystyle \neg \forall x\neg \varphi (x)} , and ∀ x φ ( x ) {\displaystyle \forall x\varphi (x)} can be expressed as ¬ ∃ x ¬ φ ( x ) {\displaystyle \neg \exists x\neg \varphi (x)} , either of the two quantifiers ∃ {\displaystyle \exists } and ∀ {\displaystyle \forall } can be dropped. Since φ ∨ ψ {\displaystyle \varphi \lor \psi } can be expressed as ¬ ( ¬ φ ∧ ¬ ψ ) {\displaystyle \lnot (\lnot \varphi \land \lnot \psi )} and φ ∧ ψ {\displaystyle \varphi \land \psi } can be expressed as ¬ ( ¬ φ ∨ ¬ ψ ) {\displaystyle \lnot (\lnot \varphi \lor \lnot \psi )} , either ∨ {\displaystyle \vee } or ∧ {\displaystyle \wedge } can be dropped. In other words, it is sufficient to have ¬ {\displaystyle \neg } and ∨ {\displaystyle \vee } , or ¬ {\displaystyle \neg } and ∧ {\displaystyle \wedge } , as the only logical connectives. Similarly, it is sufficient to have only ¬ {\displaystyle \neg } and → {\displaystyle \rightarrow } as logical connectives, or to have only the Sheffer stroke (NAND) or the Peirce arrow (NOR) operator. It is possible to entirely avoid function symbols and constant symbols, rewriting them via predicate symbols in an appropriate way. For example, instead of using a constant symbol 0 {\displaystyle \;0} one may use a predicate 0 ( x ) {\displaystyle \;0(x)} (interpreted as x = 0 {\displaystyle \;x=0} ) and replace every predicate such as P ( 0 , y ) {\displaystyle \;P(0,y)} with ∀ x ( 0 ( x ) → P ( x , y ) ) {\displaystyle \forall x\;(0(x)\rightarrow P(x,y))} . A function such as f ( x 1 , x 2 , . . . , x n ) {\displaystyle f(x_{1},x_{2},...,x_{n})} will similarly be replaced by a predicate F ( x 1 , x 2 , . . . 
, x n , y ) {\displaystyle F(x_{1},x_{2},...,x_{n},y)} interpreted as y = f ( x 1 , x 2 , . . . , x n ) {\displaystyle y=f(x_{1},x_{2},...,x_{n})} . This change requires adding additional axioms to the theory at hand, so that interpretations of the predicate symbols used have the correct semantics. Restrictions such as these are useful as a technique to reduce the number of inference rules or axiom schemas in deductive systems, which leads to shorter proofs of metalogical results. The cost of the restrictions is that it becomes more difficult to express natural-language statements in the formal system at hand, because the logical connectives used in the natural language statements must be replaced by their (longer) definitions in terms of the restricted collection of logical connectives. Similarly, derivations in the limited systems may be longer than derivations in systems that include additional connectives. There is thus a trade-off between the ease of working within the formal system and the ease of proving results about the formal system. It is also possible to restrict the arities of function symbols and predicate symbols, in sufficiently expressive theories. One can in principle dispense entirely with functions of arity greater than 2 and predicates of arity greater than 1 in theories that include a pairing function. This is a function of arity 2 that takes pairs of elements of the domain and returns an ordered pair containing them. It is also sufficient to have two predicate symbols of arity 2 that define projection functions from an ordered pair to its components. In either case it is necessary that the natural axioms for a pairing function and its projections are satisfied. === Many-sorted logic === Ordinary first-order interpretations have a single domain of discourse over which all quantifiers range. Many-sorted first-order logic allows variables to have different sorts, which have different domains. This is also called typed first-order logic, and the sorts called types (as in data type), but it is not the same as first-order type theory. Many-sorted first-order logic is often used in the study of second-order arithmetic. When there are only finitely many sorts in a theory, many-sorted first-order logic can be reduced to single-sorted first-order logic.: 296–299  One introduces into the single-sorted theory a unary predicate symbol for each sort in the many-sorted theory and adds an axiom saying that these unary predicates partition the domain of discourse. For example, if there are two sorts, one adds predicate symbols P 1 ( x ) {\displaystyle P_{1}(x)} and P 2 ( x ) {\displaystyle P_{2}(x)} and the axiom: ∀ x ( P 1 ( x ) ∨ P 2 ( x ) ) ∧ ¬ ∃ x ( P 1 ( x ) ∧ P 2 ( x ) ) {\displaystyle \forall x(P_{1}(x)\lor P_{2}(x))\land \lnot \exists x(P_{1}(x)\land P_{2}(x))} . Then the elements satisfying P 1 {\displaystyle P_{1}} are thought of as elements of the first sort, and elements satisfying P 2 {\displaystyle P_{2}} as elements of the second sort. One can quantify over each sort by using the corresponding predicate symbol to limit the range of quantification. For example, to say there is an element of the first sort satisfying formula φ ( x ) {\displaystyle \varphi (x)} , one writes: ∃ x ( P 1 ( x ) ∧ φ ( x ) ) {\displaystyle \exists x(P_{1}(x)\land \varphi (x))} . === Additional quantifiers === Additional quantifiers can be added to first-order logic. Sometimes it is useful to say that "P(x) holds for exactly one x", which can be expressed as ∃!x P(x). 
This notation, called uniqueness quantification, may be taken to abbreviate a formula such as ∃x (P(x) ∧ {\displaystyle \wedge } ∀y (P(y) → (x = y))). First-order logic with extra quantifiers has new quantifiers Qx,..., with meanings such as "there are many x such that ...". Also see branching quantifiers and the plural quantifiers of George Boolos and others. Bounded quantifiers are often used in the study of set theory or arithmetic. === Infinitary logics === Infinitary logic allows infinitely long sentences. For example, one may allow a conjunction or disjunction of infinitely many formulas, or quantification over infinitely many variables. Infinitely long sentences arise in areas of mathematics including topology and model theory. Infinitary logic generalizes first-order logic to allow formulas of infinite length. The most common way in which formulas can become infinite is through infinite conjunctions and disjunctions. However, it is also possible to admit generalized signatures in which function and relation symbols are allowed to have infinite arities, or in which quantifiers can bind infinitely many variables. Because an infinite formula cannot be represented by a finite string, it is necessary to choose some other representation of formulas; the usual representation in this context is a tree. Thus, formulas are, essentially, identified with their parse trees, rather than with the strings being parsed. The most commonly studied infinitary logics are denoted Lαβ, where α and β are each either cardinal numbers or the symbol ∞. In this notation, ordinary first-order logic is Lωω. In the logic L∞ω, arbitrary conjunctions or disjunctions are allowed when building formulas, and there is an unlimited supply of variables. More generally, the logic that permits conjunctions or disjunctions with less than κ constituents is known as Lκω. For example, Lω1ω permits countable conjunctions and disjunctions. The set of free variables in a formula of Lκω can have any cardinality strictly less than κ, yet only finitely many of them can be in the scope of any quantifier when a formula appears as a subformula of another. In other infinitary logics, a subformula may be in the scope of infinitely many quantifiers. For example, in Lκ∞, a single universal or existential quantifier may bind arbitrarily many variables simultaneously. Similarly, the logic Lκλ permits simultaneous quantification over fewer than λ variables, as well as conjunctions and disjunctions of size less than κ. === Non-classical and modal logics === Intuitionistic first-order logic uses intuitionistic rather than classical reasoning; for example, ¬¬φ need not be equivalent to φ and ¬ ∀x.φ is in general not equivalent to ∃ x.¬φ . First-order modal logic allows one to describe other possible worlds as well as this contingently true world which we inhabit. In some versions, the set of possible worlds varies depending on which possible world one inhabits. Modal logic has extra modal operators with meanings which can be characterized informally as, for example "it is necessary that φ" (true in all possible worlds) and "it is possible that φ" (true in some possible world). With standard first-order logic we have a single domain, and each predicate is assigned one extension. With first-order modal logic we have a domain function that assigns each possible world its own domain, so that each predicate gets an extension only relative to these possible worlds. 
This allows us to model cases where, for example, Alex is a philosopher, but might have been a mathematician, and might not have existed at all. In the first possible world P(a) is true, in the second P(a) is false, and in the third possible world there is no a in the domain at all. First-order fuzzy logics are first-order extensions of propositional fuzzy logics rather than classical propositional calculus. === Fixpoint logic === Fixpoint logic extends first-order logic by adding the closure under the least fixed points of positive operators. === Higher-order logics === The characteristic feature of first-order logic is that individuals can be quantified, but not predicates. Thus ∃ a ( Phil ( a ) ) {\displaystyle \exists a({\text{Phil}}(a))} is a legal first-order formula, but ∃ Phil ( Phil ( a ) ) {\displaystyle \exists {\text{Phil}}({\text{Phil}}(a))} is not, in most formalizations of first-order logic. Second-order logic extends first-order logic by adding the latter type of quantification. Other higher-order logics allow quantification over even higher types than second-order logic permits. These higher types include relations between relations, functions from relations to relations between relations, and other higher-type objects. Thus the "first" in first-order logic describes the type of objects that can be quantified. Unlike first-order logic, for which only one semantics is studied, there are several possible semantics for second-order logic. The most commonly employed semantics for second-order and higher-order logic is known as full semantics. The combination of additional quantifiers and the full semantics for these quantifiers makes higher-order logic stronger than first-order logic. In particular, the (semantic) logical consequence relation for second-order and higher-order logic is not semidecidable; there is no effective deduction system for second-order logic that is sound and complete under full semantics. Second-order logic with full semantics is more expressive than first-order logic. For example, it is possible to create axiom systems in second-order logic that uniquely characterize the natural numbers and the real line. The cost of this expressiveness is that second-order and higher-order logics have fewer attractive metalogical properties than first-order logic. For example, the Löwenheim–Skolem theorem and compactness theorem of first-order logic become false when generalized to higher-order logics with full semantics. == Automated theorem proving and formal methods == Automated theorem proving refers to the development of computer programs that search and find derivations (formal proofs) of mathematical theorems. Finding derivations is a difficult task because the search space can be very large; an exhaustive search of every possible derivation is theoretically possible but computationally infeasible for many systems of interest in mathematics. Thus complicated heuristic functions are developed to attempt to find a derivation in less time than a blind search. The related area of automated proof verification uses computer programs to check that human-created proofs are correct. Unlike complicated automated theorem provers, verification systems may be small enough that their correctness can be checked both by hand and through automated software verification. This validation of the proof verifier is needed to give confidence that any derivation labeled as "correct" is actually correct. 
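The division of labor between searching for a derivation and verifying one can be made concrete with a toy checker for Hilbert-style propositional derivations. Below is a minimal Python sketch; the nested-tuple encoding of formulas and the name check_derivation are assumptions of the sketch, not the interface of any actual verifier.

```python
def check_derivation(lines, axioms, hypotheses):
    """Verify a purported derivation: every line must be an axiom,
    a hypothesis, or follow from two earlier lines by modus ponens.
    Formulas are nested tuples; ('->', a, b) encodes 'a implies b'."""
    verified = set()
    for line in lines:
        by_mp = any(('->', a, line) in verified for a in verified)
        if line in axioms or line in hypotheses or by_mp:
            verified.add(line)      # the check is purely syntactic
        else:
            return False            # line is not justified: reject
    return True

# Derive q from the hypotheses p and p -> q.
hyps = {'p', ('->', 'p', 'q')}
print(check_derivation(['p', ('->', 'p', 'q'), 'q'],
                       axioms=set(), hypotheses=hyps))    # True
print(check_derivation(['q'], axioms=set(), hypotheses=hyps))  # False
```

Verification here is a single pass over the derivation, whereas producing the list of lines in the first place may require search; this asymmetry is what makes small trusted proof-checking kernels feasible.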
Some proof verifiers, such as Metamath, insist on having a complete derivation as input. Others, such as Mizar and Isabelle, take a well-formatted proof sketch (which may still be very long and detailed) and fill in the missing pieces by doing simple proof searches or applying known decision procedures: the resulting derivation is then verified by a small core "kernel". Many such systems are primarily intended for interactive use by human mathematicians: these are known as proof assistants. They may also use formal logics that are stronger than first-order logic, such as type theory. Because a full derivation of any nontrivial result in a first-order deductive system will be extremely long for a human to write, results are often formalized as a series of lemmas, for which derivations can be constructed separately. Automated theorem provers are also used to implement formal verification in computer science. In this setting, theorem provers are used to verify the correctness of programs and of hardware such as processors with respect to a formal specification. Because such analysis is time-consuming and thus expensive, it is usually reserved for projects in which a malfunction would have grave human or financial consequences. For the problem of model checking, efficient algorithms are known to decide whether an input finite structure satisfies a first-order formula, in addition to computational complexity bounds: see Model checking § First-order logic. == See also == == Notes == == References == Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York, NY: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6 Andrews, Peter B. (2002); An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, 2nd ed., Berlin: Kluwer Academic Publishers. Available from Springer. Avigad, Jeremy; Donnelly, Kevin; Gray, David; and Raff, Paul (2007); "A formally verified proof of the prime number theorem", ACM Transactions on Computational Logic, vol. 9 no. 1 doi:10.1145/1297658.1297660 Barwise, Jon (1977). "An Introduction to First-Order Logic". In Barwise, Jon (ed.). Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Amsterdam, NL: North-Holland (published 1982). ISBN 978-0-444-86388-1. Monk, J. Donald (1976). Mathematical Logic. New York, NY: Springer New York. doi:10.1007/978-1-4684-9452-5. ISBN 978-1-4684-9454-9. Barwise, Jon; and Etchemendy, John (2000); Language Proof and Logic, Stanford, CA: CSLI Publications (Distributed by the University of Chicago Press) Bocheński, Józef Maria (2007); A Précis of Mathematical Logic, Dordrecht, NL: D. Reidel, translated from the French and German editions by Otto Bird Ferreirós, José (2001); The Road to Modern Logic — An Interpretation, Bulletin of Symbolic Logic, Volume 7, Issue 4, 2001, pp. 441–484, doi:10.2307/2687794, JSTOR 2687794 Gamut, L. T. F. 
(1991), Logic, Language, and Meaning, Volume 2: Intensional Logic and Logical Grammar, Chicago, Illinois: University of Chicago Press, ISBN 0-226-28088-8 Hilbert, David; and Ackermann, Wilhelm (1950); Principles of Mathematical Logic, Chelsea (English translation of Grundzüge der theoretischen Logik, 1928 German first edition) Hodges, Wilfrid (2001); "Classical Logic I: First-Order Logic", in Goble, Lou (ed.); The Blackwell Guide to Philosophical Logic, Blackwell Ebbinghaus, Heinz-Dieter; Flum, Jörg; and Thomas, Wolfgang (1994); Mathematical Logic, Undergraduate Texts in Mathematics, Berlin, DE/New York, NY: Springer-Verlag, Second Edition, ISBN 978-0-387-94258-2 Tarski, Alfred and Givant, Steven (1987); A Formalization of Set Theory without Variables. Vol.41 of American Mathematical Society colloquium publications, Providence RI: American Mathematical Society, ISBN 978-0821810415 == External links == "Predicate calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Stanford Encyclopedia of Philosophy (2000): Shapiro, S., "Classical Logic". Covers syntax, model theory, and metatheory for first-order logic in the natural deduction style. Magnus, P. D.; forall x: an introduction to formal logic. Covers formal semantics and proof theory for first-order logic. Metamath: an ongoing online project to reconstruct mathematics as a huge first-order theory, using first-order logic and the axiomatic set theory ZFC. Principia Mathematica modernized. Podnieks, Karl; Introduction to mathematical logic Cambridge Mathematical Tripos notes (typeset by John Fremlin). These notes cover part of a past Cambridge Mathematical Tripos course taught to undergraduate students (usually) within their third year. The course is entitled "Logic, Computation and Set Theory" and covers Ordinals and cardinals, Posets and Zorn's Lemma, Propositional logic, Predicate logic, Set theory and Consistency issues related to ZFC and other set theories. Tree Proof Generator can validate or invalidate formulas of first-order logic through the semantic tableaux method.
Wikipedia/Predicate_calculus
In logic and computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem. It was introduced in 1961 by Martin Davis, George Logemann and Donald W. Loveland and is a refinement of the earlier Davis–Putnam algorithm, which is a resolution-based procedure developed by Davis and Hilary Putnam in 1960. Especially in older publications, the Davis–Logemann–Loveland algorithm is often referred to as the "Davis–Putnam method" or the "DP algorithm". Other common names that maintain the distinction are DLL and DPLL. == Implementations and applications == The SAT problem is important both from theoretical and practical points of view. In complexity theory it was the first problem proved to be NP-complete, and can appear in a broad variety of applications such as model checking, automated planning and scheduling, and diagnosis in artificial intelligence. As such, writing efficient SAT solvers has been a research topic for many years. GRASP (1996-1999) was an early implementation using DPLL. In the international SAT competitions, implementations based around DPLL such as zChaff and MiniSat were in the first places of the competitions in 2004 and 2005. Another application that often involves DPLL is automated theorem proving or satisfiability modulo theories (SMT), which is a SAT problem in which propositional variables are replaced with formulas of another mathematical theory. == The algorithm == The basic backtracking algorithm runs by choosing a literal, assigning a truth value to it, simplifying the formula and then recursively checking if the simplified formula is satisfiable; if this is the case, the original formula is satisfiable; otherwise, the same recursive check is done assuming the opposite truth value. This is known as the splitting rule, as it splits the problem into two simpler sub-problems. The simplification step essentially removes all clauses that become true under the assignment from the formula, and all literals that become false from the remaining clauses. The DPLL algorithm enhances over the backtracking algorithm by the eager use of the following rules at each step: Unit propagation If a clause is a unit clause, i.e. it contains only a single unassigned literal, this clause can only be satisfied by assigning the necessary value to make this literal true. Thus, no choice is necessary. Unit propagation consists in removing every clause containing a unit clause's literal and in discarding the complement of a unit clause's literal from every clause containing that complement. In practice, this often leads to deterministic cascades of units, thus avoiding a large part of the naive search space. Pure literal elimination If a propositional variable occurs with only one polarity in the formula, it is called pure. A pure literal can always be assigned in a way that makes all clauses containing it true. Thus, when it is assigned in such a way, these clauses do not constrain the search anymore, and can be deleted. Unsatisfiability of a given partial assignment is detected if one clause becomes empty, i.e. if all its variables have been assigned in a way that makes the corresponding literals false. Satisfiability of the formula is detected either when all variables are assigned without generating the empty clause, or, in modern implementations, if all clauses are satisfied. 
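The procedure just described — the splitting rule together with eager unit propagation and pure literal elimination — can be written out compactly. Below is a minimal recursive Python sketch; the clause representation (frozensets of nonzero integers, with negation indicated by sign) is an assumption of the sketch, and the helper names unit_propagate and pure_literal_assign are chosen to match the discussion that follows.

```python
def simplify(l, phi):
    """Set literal l true: drop clauses containing l, delete -l elsewhere."""
    return frozenset(c - {-l} for c in phi if l not in c)

def unit_propagate(l, phi):
    return simplify(l, phi)        # a unit clause {l} forces l to be true

def pure_literal_assign(l, phi):
    return simplify(l, phi)        # a pure literal l can safely be set true

def dpll(phi):
    """Decide satisfiability of a CNF formula phi: a frozenset of clauses,
    each clause a frozenset of integer literals, -n meaning 'not n'."""
    # Unit propagation.
    while True:
        unit = next((c for c in phi if len(c) == 1), None)
        if unit is None:
            break
        phi = unit_propagate(next(iter(unit)), phi)
    # Pure literal elimination.
    literals = {l for c in phi for l in c}
    for l in [l for l in literals if -l not in literals]:
        phi = pure_literal_assign(l, phi)
    # Stopping conditions.
    if not phi:
        return True                # no clauses left: satisfiable
    if frozenset() in phi:
        return False               # empty clause: conflict
    # Splitting rule on an arbitrarily chosen branching literal.
    l = next(iter(next(iter(phi))))
    return dpll(simplify(l, phi)) or dpll(simplify(-l, phi))

# (p or q), (not p or q), (p or not q), (not p or not q): unsatisfiable
clauses = [{1, 2}, {-1, 2}, {1, -2}, {-1, -2}]
print(dpll(frozenset(map(frozenset, clauses))))    # False
```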
Unsatisfiability of the complete formula can only be detected after exhaustive search. The DPLL algorithm can be summarized by the recursive procedure sketched above, where Φ is the CNF formula. In that sketch, unit_propagate(l, Φ) and pure_literal_assign(l, Φ) are functions that return the result of applying unit propagation and the pure literal rule, respectively, to the literal l and the formula Φ. In other words, they make every occurrence of l true and every occurrence of not l false in the formula Φ, and simplify the resulting formula; simplify(l, Φ) computes this simplified result of substituting "true" for l in Φ, often written Φ ∧ {l}. The or in the return statement is a short-circuiting operator. The algorithm terminates in one of two cases. Either the CNF formula Φ is empty, i.e., it contains no clause; then it is satisfied by any assignment, as all its clauses are vacuously true. Otherwise, when the formula contains an empty clause, that clause is vacuously false, because a disjunction requires at least one member that is true for the overall set to be true; in this case, the existence of such a clause implies that the formula (evaluated as a conjunction of all clauses) cannot evaluate to true and must be unsatisfiable. The dpll function as sketched only returns whether the final assignment satisfies the formula or not. In a real implementation, the partial satisfying assignment typically is also returned on success; it can be derived by keeping track of branching literals and of the literal assignments made during unit propagation and pure literal elimination. The Davis–Logemann–Loveland algorithm depends on the choice of branching literal, which is the literal considered in the backtracking step. As a result, this is not exactly an algorithm, but rather a family of algorithms, one for each possible way of choosing the branching literal. Efficiency is strongly affected by this choice: there exist instances for which the running time is constant or exponential depending on the choice of the branching literals. Such choice functions are also called heuristic functions or branching heuristics. === Formalization === A sequent-calculus-like notation can be used to formalize many rewriting algorithms, including DPLL. The following are the five rules a DPLL solver can apply in order to either find or fail to find a satisfying assignment, i.e. a list of literals {\displaystyle A=(l_{1},\neg l_{2},l_{3},...)} . If a clause in the formula {\displaystyle \Phi } has exactly one literal {\displaystyle l} unassigned in {\displaystyle A} , with the negations of all its other literals already in {\displaystyle A} , extend {\displaystyle A} with {\displaystyle l} . This rule represents the idea that a currently false clause with only one unset variable left forces that variable to be set in such a way as to make the entire clause true; otherwise the formula will not be satisfied. {\displaystyle {\frac {\begin{array}{c}\{l_{1},\dots ,l_{n},l\}\in \Phi \;\;\;\neg l_{1},\dots ,\neg l_{n}\in A\;\;\;\;\;l,\neg l\notin A\end{array}}{A:=A\;l}}{\text{ (Propagate)}}} If a literal {\displaystyle l} appears in the formula {\displaystyle \Phi } but its negation {\displaystyle \neg l} does not, and {\displaystyle l} and {\displaystyle \neg l} are not in {\displaystyle A} , extend {\displaystyle A} with {\displaystyle l} .
This rule represents the idea that if a variable appears only purely positively or purely negatively in a formula, all the instances can be set to true or false to make their corresponding clauses true. l literal of Φ ¬ l not literal of Φ l , ¬ l ∉ A A := A l (Pure) {\displaystyle {\frac {\begin{array}{c}l{\text{ literal of }}\Phi \;\;\;\neg l{\text{ not literal of }}\Phi \;\;\;\;\;l,\neg l\notin A\end{array}}{A:=A\;l}}{\text{ (Pure)}}} If a literal l {\displaystyle l} is in the set of literals of Φ {\displaystyle \Phi } and neither l {\displaystyle l} nor ¬ l {\displaystyle \neg l} is in A {\displaystyle A} , then decide on the truth value of l {\displaystyle l} and extend A {\displaystyle A} with the decision literal ∙ l {\displaystyle \bullet l} . This rule represents the idea that if you aren't forced to do an assignment, you must choose a variable to assign and make note which assignment was a choice so you can go back if the choice didn't result in a satisfying assignment. l ∈ Lits ( Φ ) l , ¬ l ∉ A A := A ∙ l (Decide) {\displaystyle {\frac {\begin{array}{c}l\in {\text{Lits}}(\Phi )\;\;\;l,\neg l\notin A\end{array}}{A:=A\;\bullet \;l}}{\text{ (Decide)}}} If a clause { l 1 , … , l n } {\displaystyle \{l_{1},\dots ,l_{n}\}} is in Φ {\displaystyle \Phi } , and their negations ¬ l 1 , … , ¬ l n {\displaystyle \neg l_{1},\dots ,\neg l_{n}} are in A {\displaystyle A} , and A {\displaystyle A} can be represented as A = A ′ ∙ l N {\displaystyle A=A'\;\bullet \;l\;N} where ∙ ∉ N {\displaystyle \bullet \notin N} , then backtrack by setting A {\displaystyle A} to A ′ ¬ l {\displaystyle A'\;\neg l} . This rule represents the idea that if you reach a contradiction in trying to find a valid assignment, you need to go back to where you previously did a decision between two assignments and pick the other one. { l 1 , … , l n } ∈ Φ ¬ l 1 , … , ¬ l n ∈ A A = A ′ ∙ l N ∙ ∉ N A := A ′ ¬ l (Backtrack) {\displaystyle {\frac {\begin{array}{c}\{l_{1},\dots ,l_{n}\}\in \Phi \;\;\;\neg l_{1},\dots ,\neg l_{n}\in A\;\;\;\;\;A=A'\;\bullet \;l\;N\;\;\;\;\;\bullet \notin N\end{array}}{A:=A'\;\neg l}}{\text{ (Backtrack)}}} If a clause { l 1 , … , l n } {\displaystyle \{l_{1},\dots ,l_{n}\}} is in Φ {\displaystyle \Phi } , and their negations ¬ l 1 , … , ¬ l n {\displaystyle \neg l_{1},\dots ,\neg l_{n}} are in A {\displaystyle A} , and there is no conflict marker ∙ {\displaystyle \bullet } in A {\displaystyle A} , then the DPLL algorithm fails. This rule represents the idea that if you reach a contradiction but there wasn't anything you could have done differently on the way there, the formula is unsatisfiable. { l 1 , … , l n } ∈ Φ ¬ l 1 , … , ¬ l n ∈ A ∙ ∉ A Fail (Fail) {\displaystyle {\frac {\begin{array}{c}\{l_{1},\dots ,l_{n}\}\in \Phi \;\;\;\neg l_{1},\dots ,\neg l_{n}\in A\;\;\;\;\;\bullet \notin A\end{array}}{\text{Fail}}}{\text{ (Fail)}}} === Visualization === Davis, Logemann, Loveland (1961) had developed this algorithm. Some properties of this original algorithm are: It is based on search. It is the basis for almost all modern SAT solvers. It does not use learning or non-chronological backtracking (introduced in 1996). An example with visualization of a DPLL algorithm having chronological backtracking: == Related algorithms == Since 1986, (Reduced ordered) binary decision diagrams have also been used for SAT solving. In 1989-1990, Stålmarck's method for formula verification was presented and patented. It has found some use in industrial applications. 
DPLL has been extended for automated theorem proving for fragments of first-order logic by way of the DPLL(T) algorithm. In the 2010-2019 decade, work on improving the algorithm has found better policies for choosing the branching literals and new data structures to make the algorithm faster, especially the part on unit propagation. However, the main improvement has been a more powerful algorithm, Conflict-Driven Clause Learning (CDCL), which is similar to DPLL but after reaching a conflict "learns" the root causes (assignments to variables) of the conflict, and uses this information to perform non-chronological backtracking (aka backjumping) in order to avoid reaching the same conflict again. Most state-of-the-art SAT solvers are based on the CDCL framework as of 2019. == Relation to other notions == Runs of DPLL-based algorithms on unsatisfiable instances correspond to tree resolution refutation proofs. == See also == Proof complexity Herbrandization == References == General Davis, Martin; Putnam, Hilary (1960). "A Computing Procedure for Quantification Theory". Journal of the ACM. 7 (3): 201–215. doi:10.1145/321033.321034. S2CID 31888376. Davis, Martin; Logemann, George; Loveland, Donald (1962). "A Machine Program for Theorem Proving". Communications of the ACM. 5 (7): 394–397. doi:10.1145/368273.368557. hdl:2027/mdp.39015095248095. S2CID 15866917. Ouyang, Ming (1998). "How Good Are Branching Rules in DPLL?". Discrete Applied Mathematics. 89 (1–3): 281–286. doi:10.1016/S0166-218X(98)00045-6. John Harrison (2009). Handbook of practical logic and automated reasoning. Cambridge University Press. pp. 79–90. ISBN 978-0-521-89957-4. Specific == Further reading == Malay Ganai; Aarti Gupta; Dr. Aarti Gupta (2007). SAT-based scalable formal verification solutions. Springer. pp. 23–32. ISBN 978-0-387-69166-4. Gomes, Carla P.; Kautz, Henry; Sabharwal, Ashish; Selman, Bart (2008). "Satisfiability Solvers". In Van Harmelen, Frank; Lifschitz, Vladimir; Porter, Bruce (eds.). Handbook of knowledge representation. Foundations of Artificial Intelligence. Vol. 3. Elsevier. pp. 89–134. doi:10.1016/S1574-6526(07)03002-7. ISBN 978-0-444-52211-5.
Wikipedia/DPLL_algorithm
In mathematical logic, a proof calculus or a proof system is built to prove statements. == Overview == A proof system includes the components: Formal language: The set L of formulas admitted by the system, for example, propositional logic or first-order logic. Rules of inference: List of rules that can be employed to prove theorems from axioms and theorems. Axioms: Formulas in L assumed to be valid. All theorems are derived from axioms. A formal proof of a well-formed formula in a proof system is a finite sequence of well-formed formulas, each of which is an axiom of the system or follows from earlier formulas in the sequence by one of its rules of inference, and which ends with the formula in question; the existence of such a proof shows that the formula is a theorem of the system. Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determined and can be used for radically different logics. For example, a paradigmatic case is the sequent calculus, which can be used to express the consequence relations of both intuitionistic logic and relevance logic. Thus, loosely speaking, a proof calculus is a template or design pattern, characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term. == Examples of proof calculi == The most widely known proof calculi are those classical calculi that are still in widespread use: The class of Hilbert systems, of which the most famous example is the 1928 Hilbert–Ackermann system of first-order logic; Gerhard Gentzen's calculus of natural deduction, which is the first formalism of structural proof theory, and which is the cornerstone of the formulae-as-types correspondence relating logic to functional programming; Gentzen's sequent calculus, which is the most studied formalism of structural proof theory. Many other proof calculi were, or might have been, seminal, but are not widely used today. Aristotle's syllogistic calculus, presented in the Organon, readily admits formalisation. There is still some modern interest in syllogisms, carried out under the aegis of term logic. Gottlob Frege's two-dimensional notation of the Begriffsschrift (1879) is usually regarded as introducing the modern concept of quantifier to logic. C.S. Peirce's existential graph easily might have been seminal, had history worked out differently. Modern research in logic teems with rival proof calculi: Several systems have been proposed that replace the usual textual syntax with some graphical syntax; proof nets and cirquent calculus are among such systems. Recently, many logicians interested in structural proof theory have proposed calculi with deep inference, for instance display logic, hypersequents, the calculus of structures, and bunched implication. == See also == Method of analytic tableaux Proof procedure Propositional proof system Resolution (logic) == References ==
Wikipedia/Proof_calculus
In classical propositional logic, material implication is a valid rule of replacement that allows a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that P implies Q is logically equivalent to not-{\displaystyle P} or {\displaystyle Q} and that either form can replace the other in logical proofs. In other words, if {\displaystyle P} is true, then {\displaystyle Q} must also be true, while if {\displaystyle Q} is not true, then {\displaystyle P} cannot be true either; additionally, when {\displaystyle P} is not true, {\displaystyle Q} may be either true or false. {\displaystyle P\to Q\Leftrightarrow \neg P\lor Q,} where " {\displaystyle \Leftrightarrow } " is a metalogical symbol representing "can be replaced in a proof with", P and Q are any given logical statements, and {\displaystyle \neg P\lor Q} can be read as "(not P) or Q". To illustrate this, consider the following statements: {\displaystyle P} : Sam ate an orange for lunch. {\displaystyle Q} : Sam ate a fruit for lunch. Then {\displaystyle P\to Q} says that "Sam ate an orange for lunch" implies "Sam ate a fruit for lunch". Logically, if Sam did not eat a fruit for lunch, then Sam also cannot have eaten an orange for lunch (by contraposition). However, merely saying that Sam did not eat an orange for lunch provides no information on whether or not Sam ate a fruit (of any kind) for lunch. == Partial proof == Suppose we are given that {\displaystyle P\to Q} . Then we have {\displaystyle \neg P\lor P} by the law of excluded middle (i.e. either {\displaystyle P} must be true, or {\displaystyle P} must not be true). Subsequently, since {\displaystyle P\to Q} , {\displaystyle P} can be replaced by {\displaystyle Q} in the statement, and thus it follows that {\displaystyle \neg P\lor Q} (i.e. either {\displaystyle Q} must be true, or {\displaystyle P} must not be true). Suppose, conversely, we are given {\displaystyle \neg P\lor Q} . Then if {\displaystyle P} is true, that rules out the first disjunct, so we have {\displaystyle Q} . In short, {\displaystyle P\to Q} . However, if {\displaystyle P} is false, then this entailment fails, because the first disjunct {\displaystyle \neg P} is true, which puts no constraint on the second disjunct {\displaystyle Q} . Hence, nothing can be said about {\displaystyle P\to Q} . In sum, the equivalence in the case of false {\displaystyle P} is only conventional, and hence the formal proof of equivalence is only partial. This can also be expressed with a truth table, in which the columns for the two forms agree in every row:
P Q | P → Q | ¬P ∨ Q
T T |   T   |   T
T F |   F   |   F
F T |   T   |   T
F F |   T   |   T
== Example == An example: we are given the conditional fact that if it is a bear, then it can swim. Then, each of the four possibilities in the truth table is compared with that fact.
If it is a bear, then it can swim — T
If it is a bear, then it cannot swim — F
If it is not a bear, then it can swim — T, because it does not contradict our initial fact.
If it is not a bear, then it cannot swim — T (as above)
Thus, the conditional fact can be converted to {\displaystyle \neg P\vee Q} , which is "it is not a bear" or "it can swim", where {\displaystyle P} is the statement "it is a bear" and {\displaystyle Q} is the statement "it can swim".
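With only two propositional variables, the whole comparison can be generated mechanically. A minimal Python sketch follows; the helper name implies is illustrative.

```python
from itertools import product

def implies(p, q):
    """Material conditional: false exactly when p is true and q is false."""
    return not (p and not q)

print("P      Q      P->Q   not-P or Q")
for p, q in product([True, False], repeat=2):
    print(p, q, implies(p, q), (not p) or q)
# The last two columns agree in all four rows, confirming that
# P -> Q and not-P or Q are equivalent in classical two-valued logic.
```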
== The equivalence does not hold in intuitionistic logic == Intuitionistic logic does not treat {\displaystyle P\to Q} as equivalent to {\displaystyle \neg P\lor Q} , because the direction {\displaystyle P\to Q{\cancel {\Rightarrow }}\neg P\lor Q} fails. Given {\displaystyle P\to Q} , one can constructively transform a proof of {\displaystyle P} into a proof of {\displaystyle Q} . In particular, {\displaystyle P\to P} holds in intuitionistic logic. If {\displaystyle P\to Q\Rightarrow \neg P\lor Q} held, then, taking {\displaystyle Q} to be {\displaystyle P} , the formula {\displaystyle \neg P\lor P} could be derived. However, the latter is the law of excluded middle, which is not accepted by intuitionistic logic (one cannot assert {\displaystyle \neg P\lor P} without knowing which case applies). == References ==
Wikipedia/Material_implication_(rule_of_inference)
In proof theory, the semantic tableau (plural: tableaux), also called an analytic tableau, truth tree, or simply tree, is a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic. An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics. A method of truth trees contains a fixed set of rules for producing trees from a given logical formula, or set of logical formulas. Those trees will have more formulas at each branch, and in some cases, a branch can come to contain both a formula and its negation, which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. In virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close. == History == In his Symbolic Logic Part II, Charles Lutwidge Dodgson (also known by his literary pseudonym, Lewis Carroll) introduced the Method of Trees, the earliest modern use of a truth tree. The method of semantic tableaux was invented by the Dutch logician Evert Willem Beth (Beth 1955) and simplified, for classical logic, by Raymond Smullyan (Smullyan 1968, 1995). Smullyan's simplification, "one-sided tableaux", is described here. Smullyan's method has been generalized to arbitrary many-valued propositional and first-order logics by Walter Carnielli (Carnielli 1987). Tableaux can be intuitively seen as sequent systems upside-down. This symmetrical relation between tableaux and sequent systems was formally established in (Carnielli 1991). == Propositional logic == === Definitions === Assume an infinite set {\displaystyle PV} of propositional variables and define the set {\displaystyle \Phi } of formulae by induction, represented by the following grammar: {\displaystyle \Phi ::=PV\mid \neg \Phi \mid (\Phi \to \Phi )\mid (\Phi \lor \Phi )\mid (\Phi \land \Phi )} . That is, the basic connectives are: negation {\displaystyle \neg } , implication {\displaystyle \to } , disjunction {\displaystyle \lor } , and conjunction {\displaystyle \land } . The truth or falsehood of a formula is called its truth value. A formula, or set of formulas, is said to be satisfiable if there is a possible assignment of truth-values to the propositional variables such that the entire formula, which combines the variables with connectives, is itself true as well. Such an assignment is said to satisfy the formula. === General method === A tableau checks whether a given set of formulae is satisfiable or not. It can be used to check either validity or entailment: a formula is valid if its negation is unsatisfiable, and formulae {\displaystyle A_{1},\ldots ,A_{n}} imply {\displaystyle B} if {\displaystyle \{A_{1},\ldots ,A_{n},\neg B\}} is unsatisfiable.
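Both reductions can be phrased directly in code. Below is a minimal Python sketch that uses a brute-force truth-table test for satisfiability; the nested-tuple encoding of formulae and all function names are assumptions of the sketch, and the tableau procedure developed in the rest of this article could play the role of satisfiable instead.

```python
from itertools import product

def eval_f(f, v):
    """Evaluate formula f under valuation v (dict: variable -> bool).
    Formulas: 'p' | ('not', f) | ('and', f, g) | ('or', f, g) | ('->', f, g)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not eval_f(f[1], v)
    if f[0] == 'and':
        return eval_f(f[1], v) and eval_f(f[2], v)
    if f[0] == 'or':
        return eval_f(f[1], v) or eval_f(f[2], v)
    return (not eval_f(f[1], v)) or eval_f(f[2], v)   # '->'

def variables(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(g) for g in f[1:]))

def satisfiable(formulas):
    vs = sorted(set().union(*(variables(f) for f in formulas)))
    return any(all(eval_f(f, dict(zip(vs, bits))) for f in formulas)
               for bits in product([True, False], repeat=len(vs)))

def valid(f):
    return not satisfiable([('not', f)])         # valid iff negation unsat

def entails(premises, conclusion):
    return not satisfiable(list(premises) + [('not', conclusion)])

print(valid(('or', 'p', ('not', 'p'))))          # True: excluded middle
print(entails(['p', ('->', 'p', 'q')], 'q'))     # True: modus ponens
```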
For any formulae X {\displaystyle X} , Y {\displaystyle Y} the following facts hold:
If a conjunction X ∧ Y {\displaystyle X\land Y} is true, then X {\displaystyle X} , Y {\displaystyle Y} are both true; if it is false, then either X {\displaystyle X} is false or Y {\displaystyle Y} is false.
If a disjunction X ∨ Y {\displaystyle X\lor Y} is true, then either X {\displaystyle X} is true or Y {\displaystyle Y} is true; if it is false, then X {\displaystyle X} , Y {\displaystyle Y} are both false.
If a conditional X → Y {\displaystyle X\to Y} is true, then either X {\displaystyle X} is false or Y {\displaystyle Y} is true; if it is false, then X {\displaystyle X} is true and Y {\displaystyle Y} is false.
If a negation ¬ X {\displaystyle \neg X} is true, then X {\displaystyle X} is false; if it is false, then X {\displaystyle X} is true.
The method of analytic tableaux is based on these facts. The main principle of propositional tableaux is to attempt to "break" complex formulae into smaller ones until complementary pairs of literals are produced or no further expansion is possible. The method works on a tree whose nodes are labeled with formulae. At each step, this tree is modified; in the propositional case, the only allowed changes are additions of a node as descendant of a leaf. The procedure starts by generating the tree made of a chain of all formulae in the set whose unsatisfiability is to be proved. Then, the following procedure may be repeatedly applied nondeterministically:
Pick an open leaf node. (The leaf node in the initial chain is marked open.)
Pick a formula on the branch above the selected leaf to which an expansion rule is applicable.
Apply the expansion rule for that formula, which corresponds to expanding the tree below the selected leaf node (the rules are detailed below).
For every newly created node that is a literal or a negated literal whose complement appears in a prior node on the same branch, mark the branch as closed. Mark all other newly created nodes as open.
If a branch of the tableau contains a formula
T ( X ∧ Y ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X\land Y)} , add to its leaf the chain of two nodes containing the formulae T ( X ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X)} and T ( Y ) {\displaystyle {\boldsymbol {\mathsf {T}}}(Y)} ;
F ( X ∧ Y ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X\land Y)} , create two sibling children to its leaf, containing the formulae F ( X ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X)} and F ( Y ) {\displaystyle {\boldsymbol {\mathsf {F}}}(Y)} respectively;
T ( X ∨ Y ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X\lor Y)} , create two sibling children to its leaf, containing the formulae T ( X ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X)} and T ( Y ) {\displaystyle {\boldsymbol {\mathsf {T}}}(Y)} respectively;
F ( X ∨ Y ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X\lor Y)} , add to its leaf the chain of two nodes containing the formulae F ( X ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X)} and F ( Y ) {\displaystyle {\boldsymbol {\mathsf {F}}}(Y)} ;
T ( X → Y ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X\to Y)} , create two sibling children to its leaf, containing the formulae F ( X ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X)} and T ( Y ) {\displaystyle {\boldsymbol {\mathsf {T}}}(Y)} respectively;
F ( X → Y ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X\to Y)} , add to its leaf the chain of two nodes containing the formulae T ( X ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X)} and F ( Y ) {\displaystyle {\boldsymbol {\mathsf {F}}}(Y)} ;
T ( ¬ X ) {\displaystyle {\boldsymbol {\mathsf {T}}}(\neg X)} , add to its leaf the node containing the formula F ( X ) {\displaystyle {\boldsymbol {\mathsf {F}}}(X)} ;
F ( ¬ X ) {\displaystyle {\boldsymbol {\mathsf {F}}}(\neg X)} , add to its leaf the node containing the formula T ( X ) {\displaystyle {\boldsymbol {\mathsf {T}}}(X)} .
The breakdown process terminates after a finite number of steps, because each application of a rule eliminates a connective, and there are only finitely many connectives in any formula. Note: In systems based on the grammar Φ ::= ⊥ ∣ P V ∣ ( Φ → Φ ) ∣ ( Φ ∨ Φ ) ∣ ( Φ ∧ Φ ) {\displaystyle \Phi ::=\bot \mid PV\mid (\Phi \to \Phi )\mid (\Phi \lor \Phi )\mid (\Phi \land \Phi )} , that do not treat negation as primitive but define it in terms of implication and falsity ( ¬ Φ = def Φ → ⊥ {\displaystyle \neg \Phi \,{\overset {\text{def}}{=}}\,\Phi \to \bot } ), the tableau rules for ¬ {\displaystyle \neg } are replaced by rules that handle ⊥ {\displaystyle \bot } directly; in particular, any branch containing T ( ⊥ ) {\displaystyle {\boldsymbol {\mathsf {T}}}(\bot )} is immediately closed. The principle of tableau is that formulae in nodes of the same branch are considered in conjunction while the different branches are considered to be disjoined. As a result, a tableau is a tree-like representation of a formula that is a disjunction of conjunctions. This formula is equivalent to the set whose unsatisfiability is to be proved. The procedure modifies the tableau in such a way that the formula represented by the resulting tableau is equivalent to the original one. One of these conjunctions may contain a pair of complementary literals, in which case that conjunction is proved to be unsatisfiable. If all conjunctions are proved unsatisfiable, the original set of formulae is unsatisfiable. === Closure === Every tableau can be considered as a graphical representation of a formula, which is equivalent to the set the tableau is built from. This formula is as follows: each branch of the tableau represents the conjunction of its formulae; the tableau represents the disjunction of its branches.
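As a minimal sketch of how these rules drive a satisfiability test, the following Python function reuses the toy formula classes from the earlier snippet; signs are encoded as booleans, non-branching rules chain both components onto the branch, branching rules try each child in turn, and a branch closes on a complementary pair of literals. This is an illustration of the rules above, not an efficient prover.

```python
def tableau_satisfiable(formulas):
    # The input set is satisfiable iff some fully expanded branch stays open.
    return _expand([(True, f) for f in formulas], frozenset())

def _expand(todo, literals):
    if not todo:
        return True                      # open, fully expanded branch
    (sign, f), rest = todo[0], todo[1:]
    if isinstance(f, Var):
        if (not sign, f.name) in literals:
            return False                 # complementary pair: branch closes
        return _expand(rest, literals | {(sign, f.name)})
    if isinstance(f, Not):               # T(~X) yields F(X), and vice versa
        return _expand(rest + [(not sign, f.sub)], literals)
    if isinstance(f, And):
        if sign:                         # T(X and Y): chain both components
            return _expand(rest + [(True, f.left), (True, f.right)], literals)
        return any(_expand(rest + [p], literals)      # F(X and Y): two children
                   for p in [(False, f.left), (False, f.right)])
    if isinstance(f, Or):
        if sign:                         # T(X or Y): two sibling children
            return any(_expand(rest + [p], literals)
                       for p in [(True, f.left), (True, f.right)])
        return _expand(rest + [(False, f.left), (False, f.right)], literals)
    if isinstance(f, Imp):
        if sign:                         # T(X -> Y): children F(X) and T(Y)
            return any(_expand(rest + [p], literals)
                       for p in [(False, f.left), (True, f.right)])
        return _expand(rest + [(True, f.left), (False, f.right)], literals)
    raise ValueError(f"unexpected formula: {f!r}")

# Example: {p -> q, p, ~q} is unsatisfiable, so p -> q and p entail q.
assert not tableau_satisfiable([Imp(Var("p"), Var("q")), Var("p"), Not(Var("q"))])
```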
The expansion rules transform a tableau into one having an equivalent represented formula. Since the tableau is initialized as a single branch containing the formulae of the input set, all subsequent tableaux obtained from it represent formulae which are equivalent to that set (in the variant where the initial tableau is the single node labeled true, the formulae represented by tableaux are consequences of the original set.) The method of tableaux works by starting with the initial set of formulae and then adding to the tableau simpler and simpler formulae until contradiction is shown in the simple form of opposite literals. Since the formula represented by a tableau is the disjunction of the formulae represented by its branches, contradiction is obtained when every branch contains a pair of opposite literals. Once a branch contains a literal and its negation, its corresponding formula is unsatisfiable. As a result, this branch can now be "closed", as there is no need to further expand it. If all branches of a tableau are closed, the formula represented by the tableau is unsatisfiable; therefore, the original set is unsatisfiable as well. Obtaining a tableau where all branches are closed is a way of proving the unsatisfiability of the original set. In the propositional case, one can also prove that satisfiability is established by the impossibility of finding a closed tableau, provided that every expansion rule has been applied everywhere it could be applied. In particular, if a tableau contains some open (non-closed) branches and every formula that is not a literal has been used by a rule to generate a new node on every branch the formula is in, the set is satisfiable. This rule takes into account that a formula may occur in more than one branch (this is the case if there is at least one branching point "below" the node). In this case, the rule for expanding the formula has to be applied so that its conclusion(s) are appended to all of these branches that are still open, before one can conclude that the tableau cannot be further expanded and that the formula is therefore satisfiable. === Propositional tableau with uniform notation === The above rules for propositional tableau can be simplified by using uniform notation. In uniform notation, each formula is either of type α {\displaystyle \alpha } (alpha) or of type β {\displaystyle \beta } (beta). Each formula of type alpha is assigned the two components α 1 , α 2 {\displaystyle \alpha _{1},\alpha _{2}} , and each formula of type beta is assigned the two components β 1 , β 2 {\displaystyle \beta _{1},\beta _{2}} . Formulae of type alpha can be thought of as being conjunctive, as both α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} are implied by α {\displaystyle \alpha } being true. Formulae of type beta can be thought of as being disjunctive, as either β 1 {\displaystyle \beta _{1}} or β 2 {\displaystyle \beta _{2}} is implied by β {\displaystyle \beta } being true. The type and components of any given propositional formula are determined as follows: the formulae of type alpha are X ∧ Y (with components X and Y), ¬ ( X ∨ Y ) (with components ¬ X and ¬ Y ), and ¬ ( X → Y ) (with components X and ¬ Y ); the formulae of type beta are X ∨ Y (with components X and Y), ¬ ( X ∧ Y ) (with components ¬ X and ¬ Y ), and X → Y (with components ¬ X and Y ). When constructing a propositional tableau using this notation, whenever one encounters a formula of type alpha, its two components α 1 , α 2 {\displaystyle \alpha _{1},\alpha _{2}} are added to the current branch that is being expanded.
Whenever one encounters a formula of type beta on some branch θ {\displaystyle \theta } , one can split θ {\displaystyle \theta } into two branches, one with the set { θ {\displaystyle \theta } , β 1 {\displaystyle \beta _{1}} } of formulae, and the other with the set { θ {\displaystyle \theta } , β 2 {\displaystyle \beta _{2}} } of formulae. === Set-labeled tableau === A variant of tableau is to label nodes with sets of formulae rather than single formulae. In this case, the initial tableau is a single node labeled with the set to be proved satisfiable. The formulae in a set are therefore considered to be in conjunction. The rules of expansion of the tableau can now work on the leaves of the tableau, ignoring all internal nodes. For conjunction, the rule is based on the equivalence of a set containing a conjunction A ∧ B {\displaystyle A\land B} with the set containing both A {\displaystyle A} and B {\displaystyle B} in place of it. In particular, if a leaf is labeled with X ∪ { A ∧ B } {\displaystyle X\cup \{A\land B\}} , a node can be appended to it with label X ∪ { A , B } {\displaystyle X\cup \{A,B\}} : ( ∧ ) X ∪ { A ∧ B } X ∪ { A , B } {\displaystyle (\land ){\frac {X\cup \{A\land B\}}{X\cup \{A,B\}}}} For disjunction, a set X ∪ { A ∨ B } {\displaystyle X\cup \{A\lor B\}} is equivalent to the disjunction of the two sets X ∪ { A } {\displaystyle X\cup \{A\}} and X ∪ { B } {\displaystyle X\cup \{B\}} . As a result, if the first set labels a leaf, two children can be appended to it, labeled with the latter two formulae. ( ∨ ) X ∪ { A ∨ B } X ∪ { A } | X ∪ { B } {\displaystyle (\lor ){\frac {X\cup \{A\lor B\}}{X\cup \{A\}|X\cup \{B\}}}} Finally, if a set contains both a literal and its negation, this branch can be closed: ( i d ) X ∪ { p , ¬ p } c l o s e d {\displaystyle (id){\frac {X\cup \{p,\neg p\}}{closed}}} A tableau for a given finite set X is a finite (upside down) tree with root X in which all child nodes are obtained by applying the tableau rules to their parents. A branch in such a tableau is closed if its leaf node contains "closed". A tableau is closed if all its branches are closed. A tableau is open if at least one branch is not closed. Below are two closed tableaux for the set X = { r ∧ ¬ r , p ∧ ( ( ¬ p ∨ q ) ∧ ¬ q ) } {\displaystyle X=\{r\land \neg r,\;p\land ((\neg p\lor q)\land \neg q)\}} Each rule application is marked at the right hand side. Both achieve the same effect; the first closes faster. The only difference is the order in which the reduction is performed. 
r ∧ ¬ r , p ∧ ( ( ¬ p ∨ q ) ∧ ¬ q ) r , ¬ r , p ∧ ( ( ¬ p ∨ q ) ∧ ¬ q ) ( ∧ ) c l o s e d ( i d ) {\displaystyle {\dfrac {\quad {\dfrac {\quad r\land \neg r,\;p\land ((\neg p\lor q)\land \neg q)\quad }{r,\;\neg r,\;p\land ((\neg p\lor q)\land \neg q)}}(\land )}{closed}}(id)} and the second, longer one, with the rules applied in a different order: r ∧ ¬ r , p ∧ ( ( ¬ p ∨ q ) ∧ ¬ q ) r ∧ ¬ r , p , ( ( ¬ p ∨ q ) ∧ ¬ q ) ( ∧ ) r ∧ ¬ r , p , ( ¬ p ∨ q ) , ¬ q ( ∧ ) r ∧ ¬ r , p , ¬ p , ¬ q c l o s e d ( i d ) r ∧ ¬ r , p , q , ¬ q c l o s e d ( i d ) ( ∨ ) {\displaystyle {\dfrac {\quad {\dfrac {\quad {\dfrac {\quad r\land \neg r,\;p\land ((\neg p\lor q)\land \neg q)\quad }{r\land \neg r,\;p,\;((\neg p\lor q)\land \neg q)}}(\land )\quad }{r\land \neg r,\;p,\;(\neg p\lor q),\;\neg q}}(\land )}{\quad {\dfrac {\quad r\land \neg r,\;p,\;\neg p,\;\neg q\quad }{closed}}(id)\quad \quad {\dfrac {\quad r\land \neg r,\;p,\;q,\;\neg q\quad }{closed}}(id)}}(\lor )} The first tableau closes after only one rule application while the second one misses the mark and takes much longer to close. Clearly, one would prefer to always find the shortest closed tableau, but it can be shown that no single algorithm can find the shortest closed tableau for all input sets of formulae. The three rules ( ∧ ) {\displaystyle (\land )} , ( ∨ ) {\displaystyle (\lor )} and ( i d ) {\displaystyle (id)} given above are then enough to decide whether a given set X ′ {\displaystyle X'} of formulae in negation normal form is jointly satisfiable: Just apply all possible rules in all possible orders until we find a closed tableau for X ′ {\displaystyle X'} or until we exhaust all possibilities and conclude that every tableau for X ′ {\displaystyle X'} is open. In the first case, X ′ {\displaystyle X'} is jointly unsatisfiable and in the second case the leaf node of the open branch gives an assignment to the atomic formulae and negated atomic formulae which makes X ′ {\displaystyle X'} jointly satisfiable. Classical logic actually has the rather nice property that we need to investigate only (any) one tableau completely: if it closes then X ′ {\displaystyle X'} is unsatisfiable and if it is open then X ′ {\displaystyle X'} is satisfiable. But this property is not generally enjoyed by other logics. These rules suffice for all of classical logic by taking an initial set of formulae X and replacing each member C by its logically equivalent negation normal form C′, giving a set of formulae X′. We know that X is satisfiable if and only if X′ is satisfiable, so it suffices to search for a closed tableau for X′ using the procedure outlined above. By setting X = { ¬ A } {\displaystyle X=\{\neg A\}} one can test whether the formula A is a tautology of classical logic: If the tableau for { ¬ A } {\displaystyle \{\neg A\}} closes then ¬ A {\displaystyle \neg A} is unsatisfiable and so A is a tautology since no assignment of truth values will ever make A false. Otherwise any open leaf of any open branch of any open tableau for { ¬ A } {\displaystyle \{\neg A\}} gives an assignment that falsifies A. == First-order logic tableau == Tableaux are extended to first-order predicate logic by two rules for dealing with universal and existential quantifiers, respectively. Two different sets of rules can be used; both employ a form of Skolemization for handling existential quantifiers, but differ on the handling of universal quantifiers.
The set of formulae to check for validity is here supposed to contain no free variables; this is not a limitation as free variables are implicitly universally quantified, so universal quantifiers over these variables can be added, resulting in a formula with no free variables. === First-order tableau without unification === A first-order formula ∀ x . γ ( x ) {\displaystyle \forall x.\gamma (x)} implies all formulae γ ( t ) {\displaystyle \gamma (t)} where t {\displaystyle t} is a ground term. The following inference rule is therefore correct: ( ∀ ) ∀ x . γ ( x ) γ ( t ) {\displaystyle (\forall ){\frac {\forall x.\gamma (x)}{\gamma (t)}}} where t {\displaystyle t} is an arbitrary ground term Contrary to the rules for the propositional connectives, multiple applications of this rule to the same formula may be necessary. As an example, the set { ¬ P ( a ) ∨ ¬ P ( b ) , ∀ x . P ( x ) } {\displaystyle \{\neg P(a)\lor \neg P(b),\forall x.P(x)\}} can only be proved unsatisfiable if both P ( a ) {\displaystyle P(a)} and P ( b ) {\displaystyle P(b)} are generated from ∀ x . P ( x ) {\displaystyle \forall x.P(x)} . Existential quantifiers are dealt with by means of Skolemization. In particular, a formula with a leading existential quantifier like ∃ x . δ ( x ) {\displaystyle \exists x.\delta (x)} generates its Skolemization δ ( c ) {\displaystyle \delta (c)} , where c {\displaystyle c} is a new constant symbol. ( ∃ ) ∃ x . δ ( x ) δ ( c ) {\displaystyle (\exists ){\frac {\exists x.\delta (x)}{\delta (c)}}} where c {\displaystyle c} is a new constant symbol The Skolem term c {\displaystyle c} is a constant (a function of arity 0) because the quantification over x {\displaystyle x} does not occur within the scope of any universal quantifier. If the original formula contained some universal quantifiers such that the quantification over x {\displaystyle x} was within their scope, these quantifiers must already have been removed by applications of the rule for universal quantifiers. The rule for existential quantifiers introduces new constant symbols. These symbols can be used by the rule for universal quantifiers, so that ∀ y . γ ( y ) {\displaystyle \forall y.\gamma (y)} can generate γ ( c ) {\displaystyle \gamma (c)} even if c {\displaystyle c} was not in the original formula but is a Skolem constant created by the rule for existential quantifiers. The above two rules for universal and existential quantifiers are correct, and so are the propositional rules: if a set of formulae generates a closed tableau, this set is unsatisfiable. Completeness can also be proved: if a set of formulae is unsatisfiable, there exists a closed tableau built from it by these rules. However, actually finding such a closed tableau requires a suitable policy of application of rules. Otherwise, an unsatisfiable set can generate an infinitely growing tableau. As an example, the set { ¬ P ( f ( c ) ) , ∀ x . P ( x ) } {\displaystyle \{\neg P(f(c)),\forall x.P(x)\}} is unsatisfiable, but a closed tableau is never obtained if one unwisely keeps applying the rule for universal quantifiers to ∀ x . P ( x ) {\displaystyle \forall x.P(x)} , generating for example P ( c ) , P ( f ( c ) ) , P ( f ( f ( c ) ) ) , … {\displaystyle P(c),P(f(c)),P(f(f(c))),\ldots } . A closed tableau can always be found by ruling out this and similar "unfair" policies of application of tableau rules.
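The two quantifier rules can be sketched as follows; the tuple encoding of quantified formulae and the substitute callback (assumed to perform capture-avoiding substitution) are illustrative assumptions.

```python
# Quantifier rules of the calculus without unification, on a toy encoding
# where a quantified formula is ("forall" | "exists", variable, body).
import itertools

_fresh = itertools.count()

def gamma_rule(formula, ground_term, substitute):
    # (forall): from "forall x. body" derive body[x := t] for an arbitrary
    # ground term t; unlike the propositional rules, this may have to be
    # applied several times to the same formula with different terms.
    _, var, body = formula
    return substitute(body, var, ground_term)

def delta_rule(formula, substitute):
    # (exists): from "exists x. body" derive body[x := c] for a brand-new
    # (Skolem) constant c occurring nowhere else in the tableau.
    _, var, body = formula
    skolem_constant = ("const", f"c{next(_fresh)}")
    return substitute(body, var, skolem_constant)
```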
The rule for universal quantifiers ( ∀ ) {\displaystyle (\forall )} is the only non-deterministic rule, as it does not specify which term to instantiate with. Moreover, while the other rules need to be applied only once for each formula and each path the formula is in, this one may require multiple applications. Application of this rule can however be restricted by delaying the application of the rule until no other rule is applicable and by restricting the application of the rule to ground terms that already appear in the path of the tableau. The variant of tableaux with unification shown below aims at solving the problem of non-determinism. === First-order tableau with unification === The main problem of tableau without unification is how to choose a ground term t {\displaystyle t} for the universal quantifier rule. Indeed, every possible ground term can be used, but clearly most of them might be useless for closing the tableau. A solution to this problem is to "delay" the choice of the term to the time when the consequent of the rule allows closing at least one branch of the tableau. This can be done by using a variable instead of a term, so that ∀ x . γ ( x ) {\displaystyle \forall x.\gamma (x)} generates γ ( x ′ ) {\displaystyle \gamma (x')} , and then allowing substitutions to later replace x ′ {\displaystyle x'} with a term. The rule for universal quantifiers becomes: ( ∀ ) ∀ x . γ ( x ) γ ( x ′ ) {\displaystyle (\forall ){\frac {\forall x.\gamma (x)}{\gamma (x')}}} where x ′ {\displaystyle x'} is a variable not occurring anywhere else in the tableau While the initial set of formulae is supposed not to contain free variables, a formula of the tableau may contain the free variables generated by this rule. These free variables are implicitly considered universally quantified. This rule employs a variable instead of a ground term. What is gained by this change is that these variables can then be given a value when a branch of the tableau can be closed, solving the problem of generating terms that might be useless. As an example, { ¬ P ( a ) , ∀ x . P ( x ) } {\displaystyle \{\neg P(a),\forall x.P(x)\}} can be proved unsatisfiable by first generating P ( x 1 ) {\displaystyle P(x_{1})} ; the negation of this literal is unifiable with ¬ P ( a ) {\displaystyle \neg P(a)} , the most general unifier being the substitution that replaces x 1 {\displaystyle x_{1}} with a {\displaystyle a} ; applying this substitution results in replacing P ( x 1 ) {\displaystyle P(x_{1})} with P ( a ) {\displaystyle P(a)} , which closes the tableau. This rule closes at least one branch of the tableau—the one containing the considered pair of literals. However, the substitution has to be applied to the whole tableau, not only to these two literals. This is expressed by saying that the free variables of the tableau are rigid: if an occurrence of a variable is replaced by something else, all other occurrences of the same variable must be replaced in the same way. Formally, the free variables are (implicitly) universally quantified and all formulae of the tableau are within the scope of these quantifiers. Existential quantifiers are dealt with by Skolemization. Contrary to the tableau without unification, Skolem terms may not be simple constants. Indeed, formulae in a tableau with unification may contain free variables, which are implicitly considered universally quantified. As a result, a formula like ∃ x .
δ ( x ) {\displaystyle \exists x.\delta (x)} may be within the scope of universal quantifiers; if this is the case, the Skolem term is not a simple constant but a term made of a new function symbol and the free variables of the formula. ( ∃ ) ∃ x . δ ( x ) δ ( f ( x 1 , … , x n ) ) {\displaystyle (\exists ){\frac {\exists x.\delta (x)}{\delta (f(x_{1},\ldots ,x_{n}))}}} where f {\displaystyle f} is a new function symbol and x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} the free variables of δ {\displaystyle \delta } This rule incorporates a simplification over a rule where x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the free variables of the branch, not of δ {\displaystyle \delta } alone. This rule can be further simplified by the reuse of a function symbol if it has already been used in a formula that is identical to δ {\displaystyle \delta } up to variable renaming. The formula represented by a tableau is obtained in a way that is similar to the propositional case, with the additional assumption that free variables are considered universally quantified. As for the propositional case, formulae in each branch are conjoined and the resulting formulae are disjoined. In addition, all free variables of the resulting formula are universally quantified. All these quantifiers have the whole formula in their scope. In other words, if F {\displaystyle F} is the formula obtained by disjoining the conjunction of the formulae in each branch, and x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the free variables in it, then ∀ x 1 , … , x n . F {\displaystyle \forall x_{1},\ldots ,x_{n}.F} is the formula represented by the tableau. The following considerations apply: The assumption that free variables are universally quantified is what makes the application of a most general unifier a sound rule: since γ ( x ′ ) {\displaystyle \gamma (x')} means that γ {\displaystyle \gamma } is true for every possible value of x ′ {\displaystyle x'} , then γ ( t ) {\displaystyle \gamma (t)} is true for the term t {\displaystyle t} that the most general unifier replaces x ′ {\displaystyle x'} with. Free variables in a tableau are rigid: all occurrences of the same variable have to be replaced with the same term. Every variable can be considered a symbol representing a term that is yet to be decided. This is a consequence of free variables being assumed universally quantified over the whole formula represented by the tableau: if the same variable occurs free in two different nodes, both occurrences are in the scope of the same quantifier. As an example, if the formulae in two nodes are A ( x ) {\displaystyle A(x)} and B ( x ) {\displaystyle B(x)} , where x {\displaystyle x} is free in both, the formula represented by the tableau is something in the form ∀ x . ( . . . A ( x ) . . . B ( x ) . . . ) {\displaystyle \forall x.(...A(x)...B(x)...)} . This formula implies that ( . . . A ( x ) . . . B ( x ) . . . ) {\displaystyle (...A(x)...B(x)...)} is true for any value of x {\displaystyle x} , but does not in general imply ( . . . A ( t ) . . . B ( t ′ ) . . . ) {\displaystyle (...A(t)...B(t')...)} for two different terms t {\displaystyle t} and t ′ {\displaystyle t'} , as these two terms may in general take different values. This means that x {\displaystyle x} cannot be replaced by two different terms in A ( x ) {\displaystyle A(x)} and B ( x ) {\displaystyle B(x)} . Free variables in a formula to check for validity are also considered universally quantified.
However, these variables cannot be left free when building a tableau, because tableau rules work on the negation of the formula but still treat free variables as universally quantified. For example, P ( x ) → P ( c ) {\displaystyle P(x)\to P(c)} is not valid (it is not true in the model where D = { 1 , 2 } , P ( 1 ) = ⊥ , P ( 2 ) = ⊤ , c = 1 {\displaystyle D=\{1,2\},P(1)=\bot ,P(2)=\top ,c=1} , and the interpretation where x = 2 {\displaystyle x=2} ). Consequently, { P ( x ) , ¬ P ( c ) } {\displaystyle \{P(x),\neg P(c)\}} is satisfiable (it is satisfied by the same model and interpretation). However, a closed tableau could be generated with P ( x ) {\displaystyle P(x)} and ¬ P ( c ) {\displaystyle \neg P(c)} , and substituting x {\displaystyle x} with c {\displaystyle c} would generate a closure. A correct procedure is to first make universal quantifiers explicit, thus generating ∀ x . ( P ( x ) → P ( c ) ) {\displaystyle \forall x.(P(x)\to P(c))} . The following two variants are also correct. Applying to the whole tableau a substitution to the free variables of the tableau is a correct rule, provided that this substitution is free for the formula representing the tableau. In other words, applying such a substitution leads to a tableau whose formula is still a consequence of the input set. Using most general unifiers automatically ensures that the condition of freeness for the tableau is met. While in general every variable has to be replaced with the same term in the whole tableau, there are some special cases in which this is not necessary. Tableaux with unification can be proved complete: if a set of formulae is unsatisfiable, it has a tableau-with-unification proof. However, actually finding such a proof may be a difficult problem. Contrary to the case without unification, applying a substitution can modify the existing part of a tableau; while applying a substitution closes at least one branch, it may make other branches impossible to close (even if the set is unsatisfiable). A solution to this problem is delayed instantiation: no substitution is applied until one that closes all branches at the same time is found. With this variant, a proof for an unsatisfiable set can always be found by a suitable policy of application of the other rules. This method however requires the whole tableau to be kept in memory: the general method closes branches, which can then be discarded, while this variant does not close any branch until the end. The problem that some tableaux that can be generated are impossible to close even if the set is unsatisfiable is common to other sets of tableau expansion rules: even if some specific sequences of application of these rules allow constructing a closed tableau (if the set is unsatisfiable), some other sequences lead to tableaux that cannot be closed. General solutions for these cases are outlined in the "Searching for a closed tableau" section. == Tableau calculi and their properties == A tableau calculus is a set of rules that allows building and modification of a tableau. Propositional tableau rules, tableau rules without unification, and tableau rules with unification, are all tableau calculi. Some important properties a tableau calculus may or may not possess are completeness, destructiveness, and proof confluence. A tableau calculus is called complete if it allows building a tableau proof for every given unsatisfiable set of formulae. The tableau calculi mentioned above can be proved complete.
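The calculi with unification hinge on computing most general unifiers and applying them rigidly to the whole tableau. Below is a small Robinson-style unification sketch on an assumed toy term encoding (variables as strings, compound terms as tuples whose first element is the functor); it is an illustration, not the interface of any particular prover.

```python
def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str):                       # s is an unbound variable
        return None if occurs(s, t, subst) else {**subst, s: t}
    if isinstance(t, str):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):           # unify arguments pairwise
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                  # functor clash: not unifiable

def walk(term, subst):
    # Follow variable bindings until an unbound variable or a compound term.
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    # Occurs check: binding var to a term containing var would create a cycle.
    term = walk(term, subst)
    if term == var:
        return True
    return isinstance(term, tuple) and any(occurs(var, a, subst) for a in term[1:])

# Example matching the text: P(x1) against P(a) yields {x1 := a}.
assert unify(("P", "x1"), ("P", ("a",))) == {"x1": ("a",)}
```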
A remarkable difference between tableau with unification and the other two calculi is that the latter two calculi only modify a tableau by adding new nodes to it, while the former one allows substitutions to modify the existing part of the tableau. More generally, tableau calculi are classed as destructive or non-destructive depending on whether they only add new nodes to the tableau or not. Tableau with unification is therefore destructive, while propositional tableau and tableau without unification are non-destructive. Proof confluence is the property of a tableau calculus being able to obtain a proof for an arbitrary unsatisfiable set from an arbitrary tableau, assuming that this tableau has itself been obtained by applying the rules of the calculus. In other words, in a proof confluent tableau calculus, from an unsatisfiable set one can apply whatever set of rules and still obtain a tableau from which a closed one can be obtained by applying some other rules. == Proof procedures == A tableau calculus is simply a set of rules that prescribes how a tableau can be modified. A proof procedure is a method for actually finding a proof (if one exists). In other words, a tableau calculus is a set of rules, while a proof procedure is a policy of application of these rules. Even if a calculus is complete, not every possible choice of application of rules leads to a proof of an unsatisfiable set. For example, { P ( f ( c ) ) , R ( c ) , ¬ P ( f ( c ) ) ∨ ¬ R ( c ) , ∀ x . Q ( x ) } {\displaystyle \{P(f(c)),R(c),\neg P(f(c))\lor \neg R(c),\forall x.Q(x)\}} is unsatisfiable, but both tableaux with unification and tableaux without unification allow the rule for the universal quantifiers to be applied repeatedly to the last formula, while simply applying the rule for disjunction to the third one would directly lead to closure. For proof procedures, a definition of completeness has been given: a proof procedure is strongly complete if it allows finding a closed tableau for any given unsatisfiable set of formulae. Proof confluence of the underlying calculus is relevant to completeness: proof confluence is the guarantee that a closed tableau can always be generated from an arbitrary partially constructed tableau (if the set is unsatisfiable). Without proof confluence, the application of a 'wrong' rule may make it impossible to complete the tableau by applying other rules. Propositional tableaux and tableaux without unification have strongly complete proof procedures. In particular, a complete proof procedure is one that applies the rules in a fair way. This is because the only way such calculi can fail to generate a closed tableau from an unsatisfiable set is by not applying some applicable rules. For propositional tableaux, fairness amounts to expanding every formula in every branch. More precisely, for every formula and every branch the formula is in, the rule having the formula as a precondition has been used to expand the branch. A fair proof procedure for propositional tableaux is strongly complete. For first-order tableaux without unification, the condition of fairness is similar, with the exception that the rule for universal quantifiers might require more than one application. Fairness amounts to expanding every universal quantifier infinitely often. In other words, a fair policy of application of rules cannot keep applying other rules forever without, every so often, expanding every universal quantifier in every branch that is still open.
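One way to picture a fair policy is a FIFO queue of pending rule applications in which γ-formulae (universal quantifiers) are re-enqueued after each use, so that every open branch returns to them again and again. The sketch below is schematic, with assumed callbacks apply_rule and is_gamma and a step bound so the loop terminates even on satisfiable inputs.

```python
from collections import deque

def fair_expand(initial_tasks, apply_rule, is_gamma, max_steps=100_000):
    # Each task stands for "expand this formula on this branch"; a FIFO queue
    # guarantees that no pending task is postponed forever (fairness).
    queue = deque(initial_tasks)
    for _ in range(max_steps):
        if not queue:
            return                       # every branch closed or fully expanded
        task = queue.popleft()
        queue.extend(apply_rule(task))   # expansion may create new tasks
        if is_gamma(task):
            queue.append(task)           # universal quantifiers stay pending,
                                         # so each is expanded again eventually
```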
== Searching for a closed tableau == If a tableau calculus is complete, every unsatisfiable set of formulae has an associated closed tableau. While this tableau can always be obtained by applying some of the rules of the calculus, the problem of which rules to apply for a given formula still remains. As a result, completeness does not automatically imply the existence of a feasible policy of application of rules that always leads to a closed tableau for every given unsatisfiable set of formulae. While a fair proof procedure is complete for ground tableau and tableau without unification, this is not the case for tableau with unification. A general solution for this problem is that of searching the space of tableaux until a closed one is found (if any exists, that is, the set is unsatisfiable). In this approach, one starts with an empty tableau and then recursively applies every possible applicable rule. This procedure visits an (implicit) tree whose nodes are labeled with tableaux, and such that the tableau in a node is obtained from the tableau in its parent by applying one of the valid rules. Since each branch can be infinite, this tree has to be visited breadth-first rather than depth-first. This requires a large amount of space, as the breadth of the tree can grow exponentially. A method that may visit some nodes more than once but works in polynomial space is to visit in a depth-first manner with iterative deepening: one first visits the tree depth-first up to a certain depth, then increases the depth and performs the visit again. This particular procedure uses the depth (which is also the number of tableau rules that have been applied) for deciding when to stop at each step. Various other parameters (such as the size of the tableau labeling a node) have been used instead. === Reducing search === The size of the search tree depends on the number of (children) tableaux that can be generated from a given (parent) one. Reducing the number of such tableaux therefore reduces the required search. A way of reducing this number is to disallow the generation of some tableaux based on their internal structure. An example is the condition of regularity: if a branch contains a literal, using an expansion rule that generates the same literal is useless because the branch containing two copies of the literal would have the same set of formulae as the original one. This expansion can be disallowed because if a closed tableau exists, it can be found without it. This restriction is structural because it can be checked by looking only at the structure of the tableau to expand. Different methods for reducing search disallow the generation of some tableaux on the grounds that a closed tableau can still be found by expanding the other ones. These restrictions are called global. As an example of a global restriction, one may employ a rule that specifies which of the open branches is to be expanded. As a result, if a tableau has for example two non-closed branches, the rule specifies which one is to be expanded, disallowing the expansion of the second one. This restriction reduces the search space because one possible choice is now forbidden; completeness is however not harmed, as the second branch will still be expanded if the first one is eventually closed.
As an example, a tableau with root ¬ a ∧ ¬ b {\displaystyle \neg a\land \neg b} , child a ∨ b {\displaystyle a\lor b} , and two leaves a {\displaystyle a} and b {\displaystyle b} can be closed in two ways: applying ( ∧ ) {\displaystyle (\land )} first on the branch ending in a {\displaystyle a} and then on the branch ending in b {\displaystyle b} , or vice versa. There is clearly no need to follow both possibilities; one may consider only the case in which ( ∧ ) {\displaystyle (\land )} is first applied on the branch ending in a {\displaystyle a} and disregard the case in which it is first applied on the branch ending in b {\displaystyle b} . This is a global restriction because what allows neglecting this second expansion is the presence of the other tableau, where expansion is applied on the branch ending in a {\displaystyle a} first and on the branch ending in b {\displaystyle b} afterwards. == Clause tableaux == When applied to sets of clauses (rather than of arbitrary formulae), tableaux methods allow for a number of efficiency improvements. A first-order clause is a formula ∀ x 1 , … , x n L 1 ∨ ⋯ ∨ L m {\displaystyle \forall x_{1},\ldots ,x_{n}L_{1}\lor \cdots \lor L_{m}} that does not contain free variables and such that each L i {\displaystyle L_{i}} is a literal. The universal quantifiers are often omitted for clarity, so that for example P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle P(x,y)\lor Q(f(x))} actually means ∀ x , y . P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle \forall x,y.P(x,y)\lor Q(f(x))} . Note that, if taken literally, these two formulae are not the same as for satisfiability: rather, the satisfiability of P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle P(x,y)\lor Q(f(x))} is the same as that of ∃ x , y . P ( x , y ) ∨ Q ( f ( x ) ) {\displaystyle \exists x,y.P(x,y)\lor Q(f(x))} . That free variables are universally quantified is not a consequence of the definition of first-order satisfiability; it is rather used as an implicit common assumption when dealing with clauses. The only expansion rules that are applicable to a clause are ( ∀ ) {\displaystyle (\forall )} and ( ∨ ) {\displaystyle (\lor )} ; these two rules can be replaced by their combination without losing completeness. In particular, the following rule corresponds to applying in sequence the rules ( ∀ ) {\displaystyle (\forall )} and ( ∨ ) {\displaystyle (\lor )} of the first-order calculus with unification. ( C ) L 1 ∨ ⋯ ∨ L n L 1 ′ | ⋯ | L n ′ {\displaystyle (C){\frac {L_{1}\lor \cdots \lor L_{n}}{L_{1}'|\cdots |L_{n}'}}} where L 1 ′ ∨ ⋯ ∨ L n ′ {\displaystyle L_{1}'\lor \cdots \lor L_{n}'} is obtained by replacing every variable with a new one in L 1 ∨ ⋯ ∨ L n {\displaystyle L_{1}\lor \cdots \lor L_{n}} When the set to be checked for satisfiability is only composed of clauses, this and the unification rules are sufficient to prove unsatisfiability. In other words, the tableau calculus composed of ( C ) {\displaystyle (C)} and ( σ ) {\displaystyle (\sigma )} is complete. Since the clause expansion rule only generates literals and never new clauses, the clauses to which it can be applied are only clauses of the input set. As a result, the clause expansion rule can be further restricted to the case where the clause is in the input set.
( C ) L 1 ∨ ⋯ ∨ L n L 1 ′ | ⋯ | L n ′ {\displaystyle (C){\frac {L_{1}\lor \cdots \lor L_{n}}{L_{1}'|\cdots |L_{n}'}}} where L 1 ′ ∨ ⋯ ∨ L n ′ {\displaystyle L_{1}'\lor \cdots \lor L_{n}'} is obtained by replacing every variable with a new one in L 1 ∨ ⋯ ∨ L n {\displaystyle L_{1}\lor \cdots \lor L_{n}} , which is a clause of the input set Since this rule directly exploits the clauses in the input set there is no need to initialize the tableau to the chain of the input clauses. The initial tableau can therefore be a single node labeled t r u e {\displaystyle true} ; this label is often omitted as implicit. As a result of this further simplification, every node of the tableau (apart from the root) is labeled with a literal. A number of optimizations can be used for clause tableaux. These optimizations are aimed at reducing the number of possible tableaux to be explored when searching for a closed tableau as described in the "Searching for a closed tableau" section above. === Connection tableau === Connection is a condition over tableaux that forbids expanding a branch using clauses that are unrelated to the literals that are already in the branch. Connection can be defined in two ways:
strong connectedness: when expanding a branch, use an input clause only if it contains a literal that can be unified with the negation of the literal in the current leaf
weak connectedness: allow the use of clauses that contain a literal that unifies with the negation of a literal on the branch
Both conditions apply only to branches that contain more than just the root. The second definition allows for the use of a clause containing a literal that unifies with the negation of a literal in the branch, while the first further constrains that literal to be in the leaf of the current branch. If clause expansion is restricted by connectedness (either strong or weak), its application produces a tableau in which a substitution can be applied to one of the new leaves, closing its branch. In particular, this is the leaf containing the literal of the clause that unifies with the negation of a literal in the branch (or the negation of the literal in the parent, in case of strong connectedness). Both conditions of connectedness lead to a complete first-order calculus: if a set of clauses is unsatisfiable, it has a closed connected (strongly or weakly) tableau. Such a closed tableau can be found by searching in the space of tableaux as explained in the "Searching for a closed tableau" section. During this search, connectedness eliminates some possible choices of expansion, thus reducing search. In other words, while the tableau in a node of the tree can in general be expanded in several different ways, connection may allow only a few of them, thus reducing the number of resulting tableaux that need to be further expanded. This can be seen in the following (propositional) example. The tableau made of a chain t r u e − a {\displaystyle true-a} for the set of clauses { a , ¬ a ∨ b , ¬ c ∨ d , ¬ b } {\displaystyle \{a,\neg a\lor b,\neg c\lor d,\neg b\}} can in general be expanded using each of the four input clauses, but connection only allows the expansion that uses ¬ a ∨ b {\displaystyle \neg a\lor b} . This means that the tree of tableaux has four leaves in general but only one if connectedness is imposed: connectedness leaves only one tableau to try to expand, instead of the four that would otherwise have to be considered.
In spite of this reduction of choices, the completeness theorem implies that a closed tableau can be found if the set is unsatisfiable. The connectedness conditions, when applied to the propositional (clausal) case, make the resulting calculus non-confluent. As an example, { a , b , ¬ b } {\displaystyle \{a,b,\neg b\}} is unsatisfiable, but applying ( C ) {\displaystyle (C)} to a {\displaystyle a} generates the chain t r u e − a {\displaystyle true-a} , which is not closed and to which no other expansion rule can be applied without violating either strong or weak connectedness. In the case of weak connectedness, confluence holds provided that the clause used for expanding the root is relevant to unsatisfiability, that is, it is contained in a minimally unsatisfiable subset of the set of clauses. Unfortunately, the problem of checking whether a clause meets this condition is itself a hard problem. In spite of non-confluence, a closed tableau can be found using search, as presented in the "Searching for a closed tableau" section above. While search is made necessary, connectedness reduces the possible choices of expansion, thus making search more efficient. === Regular tableaux === A tableau is regular if no literal occurs twice in the same branch. Enforcing this condition allows for a reduction of the possible choices of tableau expansion, as the clauses that would generate a non-regular tableau cannot be expanded. These disallowed expansion steps are however useless. If B {\displaystyle B} is a branch containing a literal L {\displaystyle L} , and C {\displaystyle C} is a clause whose expansion violates regularity, then C {\displaystyle C} contains L {\displaystyle L} . In order to close the tableau, one needs to expand and close, among others, the branch B − L {\displaystyle B-L} , in which L {\displaystyle L} occurs twice. However, the formulae in this branch are exactly the same as the formulae of B {\displaystyle B} alone. As a result, the same expansion steps that close B − L {\displaystyle B-L} also close B {\displaystyle B} . This means that expanding C {\displaystyle C} was unnecessary; moreover, if C {\displaystyle C} contained other literals, its expansion generated other leaves that needed to be closed. In the propositional case, the expansions needed to close these leaves are completely useless; in the first-order case, they may only affect the rest of the tableau because of some unifications; these can however be combined with the substitutions used to close the rest of the tableau. == Tableaux for modal logics == In a modal logic, a model comprises a set of possible worlds, each one associated with a truth evaluation; an accessibility relation specifies when a world is accessible from another one. A modal formula may specify not only conditions over a possible world, but also on the ones that are accessible from it. As an example, ◻ A {\displaystyle \Box A} is true in a world if A {\displaystyle A} is true in all worlds that are accessible from it. As for propositional logic, tableaux for modal logics are based on recursively breaking formulae into their basic components. Expanding a modal formula may however require stating conditions over different worlds. As an example, if ¬ ◻ A {\displaystyle \neg \Box A} is true in a world then there exists a world accessible from it where A {\displaystyle A} is false. However, one cannot simply add the following rule to the propositional ones.
¬ ◻ A ¬ A {\displaystyle {\frac {\neg \Box A}{\neg A}}} In propositional tableaux all formulae refer to the same truth evaluation, but the precondition of the rule above holds in one world while the consequence holds in another. Not taking this into account would generate incorrect results. For example, the formula a ∧ ¬ ◻ a {\displaystyle a\land \neg \Box a} states that a {\displaystyle a} is true in the current world and that a {\displaystyle a} is false in some world that is accessible from it. Simply applying ( ∧ ) {\displaystyle (\land )} and the expansion rule above would produce a {\displaystyle a} and ¬ a {\displaystyle \neg a} , but these two formulae should not in general generate a contradiction, as they hold in different worlds. Modal tableau calculi do contain rules of the kind shown above, but include mechanisms to avoid the incorrect interaction of formulae referring to different worlds. Technically, tableaux for modal logics check the satisfiability of a set of formulae: they check whether there exists a model M {\displaystyle M} and world w {\displaystyle w} such that the formulae in the set are true in that model and world. In the example above, while a {\displaystyle a} states the truth of a {\displaystyle a} in w {\displaystyle w} , the formula ¬ ◻ a {\displaystyle \neg \Box a} states the truth of ¬ a {\displaystyle \neg a} in some world w ′ {\displaystyle w'} that is accessible from w {\displaystyle w} and which may in general be different from w {\displaystyle w} . Tableaux calculi for modal logic take into account that formulae may refer to different worlds. This fact has an important consequence: formulae that hold in a world may imply conditions over different successors of that world. Unsatisfiability may then be proved from the subset of formulae referring to a single successor. This holds if a world may have more than one successor, which is true for most modal logics. If this is the case, a formula like ¬ ◻ A ∧ ¬ ◻ B {\displaystyle \neg \Box A\land \neg \Box B} is true if there exist a successor where ¬ A {\displaystyle \neg A} holds and a successor where ¬ B {\displaystyle \neg B} holds. Conversely, if one can show unsatisfiability of ¬ A {\displaystyle \neg A} in an arbitrary successor, the formula is proved unsatisfiable without checking for worlds where ¬ B {\displaystyle \neg B} holds. At the same time, if one can show unsatisfiability of ¬ B {\displaystyle \neg B} , there is no need to check ¬ A {\displaystyle \neg A} . As a result, while there are two possible ways to expand ¬ ◻ A ∧ ¬ ◻ B {\displaystyle \neg \Box A\land \neg \Box B} , one of these two ways is always sufficient to prove unsatisfiability if the formula is unsatisfiable. For example, one may expand the tableau by considering an arbitrary world where ¬ A {\displaystyle \neg A} holds. If this expansion leads to unsatisfiability, the original formula is unsatisfiable. However, it is also possible that unsatisfiability cannot be proved this way, and that the world where ¬ B {\displaystyle \neg B} holds should have been considered instead. As a result, one can always prove unsatisfiability by expanding either ¬ ◻ A {\displaystyle \neg \Box A} only or ¬ ◻ B {\displaystyle \neg \Box B} only; however, if the wrong choice is made the resulting tableau may not be closable. Expanding either subformula leads to tableau calculi that are complete but not proof-confluent. Searching as described in the "Searching for a closed tableau" section may therefore be necessary.
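To make the world-relative reading concrete, here is a small Kripke-model evaluator in Python; the encoding of models (worlds, accessibility dictionary, valuation) and of formulae as nested tuples is an illustrative assumption.

```python
def holds(model, w, f):
    # Truth of formula f at world w of a Kripke model.
    worlds, access, val = model
    kind = f[0]
    if kind == "var":
        return f[1] in val[w]
    if kind == "not":
        return not holds(model, w, f[1])
    if kind == "and":
        return holds(model, w, f[1]) and holds(model, w, f[2])
    if kind == "box":                    # true iff true in every accessible world
        return all(holds(model, v, f[1]) for v in access.get(w, ()))
    raise ValueError(f"unknown connective: {kind}")

# The example from the text: a & ~Box a is satisfiable, with a true in the
# current world and false in an accessible one.
model = ({"w0", "w1"}, {"w0": ["w1"]}, {"w0": {"a"}, "w1": set()})
f = ("and", ("var", "a"), ("not", ("box", ("var", "a"))))
assert holds(model, "w0", f)
```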
Depending on whether the precondition and consequence of a tableau expansion rule refer to the same world or not, the rule is called static or transitional. While rules for propositional connectives are all static, not all rules for modal connectives are transitional: for example, in every modal logic including axiom T, it holds that ◻ A {\displaystyle \Box A} implies A {\displaystyle A} in the same world. As a result, the corresponding (modal) tableau expansion rule is static, as both its precondition and consequence refer to the same world. === Formula-deleting tableau === A method for preventing formulae that refer to different worlds from interacting in the wrong way is to make sure that all formulae of a branch refer to the same world. This condition is initially true as all formulae in the set to be checked for consistency are assumed to refer to the same world. When expanding a branch, two situations are possible: either the new formulae refer to the same world as the others in the branch or not. In the first case, the rule is applied normally. In the second case, all formulae of the branch that do not also hold in the new world are deleted from the branch, and possibly added to all other branches that still refer to the old world. As an example, in S5 every formula ◻ A {\displaystyle \Box A} that is true in a world is also true in all accessible worlds (that is, in all accessible worlds both A {\displaystyle A} and ◻ A {\displaystyle \Box A} are true). Therefore, when applying ¬ ◻ B ¬ B {\displaystyle {\frac {\neg \Box B}{\neg B}}} , whose consequence holds in a different world, one deletes all formulae from the branch, but can keep all formulae ◻ A {\displaystyle \Box A} , as these hold in the new world as well. In order to retain completeness, the deleted formulae are then added to all other branches that still refer to the old world. === World-labeled tableau === A different mechanism for ensuring the correct interaction between formulae referring to different worlds is to switch from formulae to labeled formulae: instead of writing A {\displaystyle A} , one would write w : A {\displaystyle w:A} to make it explicit that A {\displaystyle A} holds in world w {\displaystyle w} . All propositional expansion rules are adapted to this variant by stating that they all refer to formulae with the same world label. For example, w : A ∧ B {\displaystyle w:A\land B} generates two nodes labeled with w : A {\displaystyle w:A} and w : B {\displaystyle w:B} ; a branch is closed only if it contains two opposite literals of the same world, like w : a {\displaystyle w:a} and w : ¬ a {\displaystyle w:\neg a} ; no closure is generated if the two world labels are different, like in w : a {\displaystyle w:a} and w ′ : ¬ a {\displaystyle w':\neg a} . A modal expansion rule may have a consequence that refers to different worlds. For example, the rule for ¬ ◻ A {\displaystyle \neg \Box A} would be written as follows w : ¬ ◻ A w ′ : ¬ A {\displaystyle {\frac {w:\neg \Box A}{w':\neg A}}} The precondition and consequent of this rule refer to worlds w {\displaystyle w} and w ′ {\displaystyle w'} , respectively. The various calculi use different methods for keeping track of the accessibility of the worlds used as labels. Some include pseudo-formulae like w R w ′ {\displaystyle wRw'} to denote that w ′ {\displaystyle w'} is accessible from w {\displaystyle w} .
Some others use sequences of integers as world labels, this notation implicitly representing the accessibility relation (for example, ( 1 , 4 , 2 , 3 ) {\displaystyle (1,4,2,3)} is accessible from ( 1 , 4 , 2 ) {\displaystyle (1,4,2)} ). === Set-labeling tableaux === The problem of interaction between formulae holding in different worlds can be overcome by using set-labeling tableaux. These are trees whose nodes are labeled with sets of formulae; the expansion rules explain how to attach new nodes to a leaf, based only on the label of the leaf (and not on the label of other nodes in the branch). Tableaux for modal logics are used to verify the satisfiability of a set of modal formulae in a given modal logic. Given a set of formulae S {\displaystyle S} , they check the existence of a model M {\displaystyle M} and a world w {\displaystyle w} such that M , w ⊨ S {\displaystyle M,w\models S} . The expansion rules depend on the particular modal logic used. A tableau system for the basic modal logic K can be obtained by adding to the propositional tableau rules the following one: ( K ) ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B A 1 ; … ; A n ; ¬ B {\displaystyle (K){\frac {\Box A_{1};\ldots ;\Box A_{n};\neg \Box B}{A_{1};\ldots ;A_{n};\neg B}}} Intuitively, the precondition of this rule expresses the truth of all formulae A 1 , … , A n {\displaystyle A_{1},\ldots ,A_{n}} at all accessible worlds, and the truth of ¬ B {\displaystyle \neg B} at some accessible world. The consequence of this rule is the set of formulae that must be true at one such world where ¬ B {\displaystyle \neg B} is true. More technically, modal tableau methods check the existence of a model M {\displaystyle M} and a world w {\displaystyle w} that make a set of formulae true. If ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B {\displaystyle \Box A_{1};\ldots ;\Box A_{n};\neg \Box B} are true in w {\displaystyle w} , there must be a world w ′ {\displaystyle w'} that is accessible from w {\displaystyle w} and that makes A 1 ; … ; A n ; ¬ B {\displaystyle A_{1};\ldots ;A_{n};\neg B} true. This rule therefore amounts to deriving a set of formulae that must be satisfied in such w ′ {\displaystyle w'} . While the preconditions ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B {\displaystyle \Box A_{1};\ldots ;\Box A_{n};\neg \Box B} are assumed satisfied by M , w {\displaystyle M,w} , the consequences A 1 ; … ; A n ; ¬ B {\displaystyle A_{1};\ldots ;A_{n};\neg B} are assumed satisfied in M , w ′ {\displaystyle M,w'} : same model but possibly different worlds. Set-labeled tableaux do not explicitly keep track of the world where each formula is assumed true: two nodes may or may not refer to the same world. However, the formulae labeling any given node are assumed true at the same world. As a result of the possibly different worlds where formulae are assumed true, a formula in a node is not automatically valid in all its descendants, as every application of the modal rule corresponds to a move from a world to another one. This condition is automatically captured by set-labeling tableaux, as expansion rules are based only on the leaf where they are applied and not on its ancestors. Notably, ( K ) {\displaystyle (K)} does not directly extend to multiple negated boxed formulae such as in ◻ A 1 ; … ; ◻ A n ; ¬ ◻ B 1 ; ¬ ◻ B 2 {\displaystyle \Box A_{1};\ldots ;\Box A_{n};\neg \Box B_{1};\neg \Box B_{2}} : while there exists an accessible world where B 1 {\displaystyle B_{1}} is false and one in which B 2 {\displaystyle B_{2}} is false, these two worlds are not necessarily the same.
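A sketch of (K) as an operation on a node label, reusing the tuple encoding of the modal snippet above: since the witnessing worlds for different negated boxes need not coincide, the rule is applied separately for each ¬◻B, producing one candidate successor label per choice. Note that formulae that are neither boxed nor negated boxes are silently discarded here, which builds the thinning step discussed below into the operation.

```python
def k_rule(label):
    # label: list of formulae assumed true at one world. For each ~Box B in it,
    # (K) yields the set {A1, ..., An, ~B}, to be satisfied at some successor,
    # where Box A1, ..., Box An are all the boxed formulae of the label.
    boxed_bodies = [f[1] for f in label if f[0] == "box"]
    negated_boxes = [f[1][1] for f in label
                     if f[0] == "not" and f[1][0] == "box"]
    return [boxed_bodies + [("not", b)] for b in negated_boxes]

# Example: from {Box b, Box (b -> c), ~Box c}, (K) derives {b, b -> c, ~c}
# for a successor world.
label = [("box", ("var", "b")),
         ("box", ("imp", ("var", "b"), ("var", "c"))),
         ("not", ("box", ("var", "c")))]
assert k_rule(label) == [[("var", "b"),
                          ("imp", ("var", "b"), ("var", "c")),
                          ("not", ("var", "c"))]]
```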
Unlike the propositional rules, ( K ) {\displaystyle (K)} states conditions over the whole label of the node rather than over a single formula. For example, it cannot be applied to a node labeled by a ; ◻ b ; ◻ ( b → c ) ; ¬ ◻ c {\displaystyle a;\Box b;\Box (b\to c);\neg \Box c} ; while this set is inconsistent and this could be easily proved by applying ( K ) {\displaystyle (K)} , this rule cannot be applied because of the formula a {\displaystyle a} , which is not even relevant to the inconsistency. Removal of such formulae is made possible by the rule: ( θ ) A 1 ; … ; A n ; B 1 ; … ; B m A 1 ; … ; A n {\displaystyle (\theta ){\frac {A_{1};\ldots ;A_{n};B_{1};\ldots ;B_{m}}{A_{1};\ldots ;A_{n}}}} The addition of this rule (thinning rule) makes the resulting calculus non-confluent: a tableau for an inconsistent set may be impossible to close, even if a closed tableau for the same set exists. Rule ( θ ) {\displaystyle (\theta )} is non-deterministic: the set of formulae to be removed (or to be kept) can be chosen arbitrarily; this creates the problem of choosing a set of formulae to discard that is not so large that it makes the resulting set satisfiable and not so small that it makes the necessary expansion rules inapplicable. Having a large number of possible choices makes the problem of searching for a closed tableau harder. This non-determinism can be avoided by restricting the usage of ( θ ) {\displaystyle (\theta )} so that it is only applied before a modal expansion rule, and so that it only removes the formulae that make that other rule inapplicable. This condition can also be formulated by merging the two rules into a single one. The resulting rule produces the same result as the old ones, but implicitly discards all formulae that made the modal rule inapplicable. This way of eliminating the explicit ( θ ) {\displaystyle (\theta )} rule has been proved to preserve completeness for many modal logics. Axiom T expresses reflexivity of the accessibility relation: every world is accessible from itself. The corresponding tableau expansion rule is: ( T ) A 1 ; … ; A n ; ◻ B A 1 ; … ; A n ; ◻ B ; B {\displaystyle (T){\frac {A_{1};\ldots ;A_{n};\Box B}{A_{1};\ldots ;A_{n};\Box B;B}}} This rule relates conditions over the same world: if ◻ B {\displaystyle \Box B} is true in a world, by reflexivity B {\displaystyle B} is also true in the same world. This rule is static, not transitional, as both its precondition and consequent refer to the same world. This rule copies ◻ B {\displaystyle \Box B} from the precondition to the consequent, in spite of this formula having been "used" to generate B {\displaystyle B} . This is correct, as the considered world is the same, so ◻ B {\displaystyle \Box B} also holds there. This "copying" is necessary in some cases. It is for example necessary to prove the inconsistency of ◻ ( a ∧ ¬ ◻ a ) {\displaystyle \Box (a\land \neg \Box a)} : the only applicable rules are in order ( T ) , ( ∧ ) , ( θ ) , ( K ) {\displaystyle (T),(\land ),(\theta ),(K)} , from which one is blocked if ◻ a {\displaystyle \Box a} is not copied. === Auxiliary tableaux === A different method for dealing with formulae holding in alternate worlds is to start a different tableau for each new world that is introduced in the tableau. For example, ¬ ◻ A {\displaystyle \neg \Box A} implies that A {\displaystyle A} is false in an accessible world, so one starts a new tableau rooted by ¬ A {\displaystyle \neg A} .
This new tableau is attached to the node of the original tableau where the expansion rule has been applied; a closure of this tableau immediately generates a closure of all branches where that node is, regardless of whether the same node is associated with other auxiliary tableaux. The expansion rules for the auxiliary tableaux are the same as for the original one; therefore, an auxiliary tableau can in turn have other (sub-)auxiliary tableaux. === Global assumptions === The above modal tableaux establish the consistency of a set of formulae, and can be used for solving the local logical consequence problem. This is the problem of telling whether, for each model M {\displaystyle M} , if A {\displaystyle A} is true in a world w {\displaystyle w} , then B {\displaystyle B} is also true in the same world. This is the same as checking whether B {\displaystyle B} is true in a world of a model, under the assumption that A {\displaystyle A} is also true in the same world of the same model. A related problem is the global consequence problem, where the assumption is that a formula (or set of formulae) G {\displaystyle G} is true in all possible worlds of the model. The problem is that of checking whether, in all models M {\displaystyle M} where G {\displaystyle G} is true in all worlds, B {\displaystyle B} is also true in all worlds. Local and global assumptions differ on models where the assumed formula is true in some worlds but not in others. As an example, { P , ¬ ◻ ( P ∧ Q ) } {\displaystyle \{P,\neg \Box (P\land Q)\}} entails ¬ ◻ Q {\displaystyle \neg \Box Q} globally but not locally. Local entailment does not hold in a model consisting of two worlds making P {\displaystyle P} and ¬ P , Q {\displaystyle \neg P,Q} true, respectively, and where the second is accessible from the first; in the first world, the assumptions are true but ¬ ◻ Q {\displaystyle \neg \Box Q} is false. This counterexample works because P {\displaystyle P} can be assumed true in a world and false in another one. If however the same assumption is considered global, ¬ P {\displaystyle \neg P} is not allowed in any world of the model. These two problems can be combined, so that one can check whether B {\displaystyle B} is a local consequence of A {\displaystyle A} under the global assumption G {\displaystyle G} . Tableaux calculi can deal with global assumptions by a rule allowing their addition to every node, regardless of the world it refers to. == Notations == The following conventions are sometimes used. === Uniform notation === When writing tableaux expansion rules, formulae are often denoted using a convention, so that for example α is always considered to be α 1 ∧ α 2 {\displaystyle \alpha _{1}\land \alpha _{2}} . The following table provides the notation for formulae in propositional, first-order, and modal logic. Each label in the first column is taken to stand for any formula in the other columns of its row. An overlined formula such as α 1 ¯ {\displaystyle {\overline {\alpha _{1}}}} indicates that α 1 {\displaystyle \alpha _{1}} is the negation of whatever formula appears in its place, so that for example in formula ¬ ( a ∨ b ) {\displaystyle \neg (a\lor b)} the subformula α 1 {\displaystyle \alpha _{1}} is the negation of a. Since every label indicates many equivalent formulae, this notation allows writing a single rule for all these equivalent formulae.
For example, the conjunction expansion rule is formulated as: ( α ) α α 1 α 2 {\displaystyle (\alpha ){\frac {\alpha }{\begin{array}{c}\alpha _{1}\\\alpha _{2}\end{array}}}} == See also == Resolution (logic) == Notes == == References == Beth, Evert W. (1955). "Semantic entailment and formal derivability". Mededelingen van de Koninklijke Nederlandse Akademie van Wetenschappen, Afdeling Letterkunde. 18 (13): 309–42. Reprinted in Hintikka, Jaakko, ed. (1969). The Philosophy of Mathematics. Oxford University Press. ISBN 978-0-19-875011-6. Bostock, David (1997). Intermediate Logic. Oxford University Press. ISBN 978-0-19-156707-0. Carnielli, Walter A. (1987). "Systematization of Finite Many-Valued Logics Through the Method of Tableaux". The Journal of Symbolic Logic. 52 (2): 473–493. doi:10.2307/2274395. JSTOR 2274395. S2CID 42822367. Carnielli, Walter A. (1991). "On sequents and tableaux for many-valued logics" (PDF). The Journal of Non-Classical Logics. 8 (1): 59–76. Archived from the original (PDF) on 2016-03-05. Retrieved 2014-10-11. D'Agostino, M.; Gabbay, D.; Haehnle, R.; Posegga, J., eds. (1999). Handbook of Tableau Methods. Kluwer. ISBN 978-94-017-1754-0. Fitting, Melvin (1996) [1990]. First-order logic and automated theorem proving (2nd ed.). New York: Springer. doi:10.1007/978-1-4612-2360-3. ISBN 978-1-4612-7515-2. S2CID 10411039. Girle, Rod (2014). Modal Logics and Philosophy (2nd ed.). Taylor & Francis. ISBN 978-1-317-49217-7. Goré, Rajeev. "Tableau Methods for Modal and Temporal Logics". Handbook of Tableau Methods. pp. 297–396. Hähnle, Reiner (2001). "3. Tableaux and Related Methods". In Robinson, Alan J.A.; Voronkov, Andrei (eds.). Handbook of Automated Reasoning. Elsevier. pp. 101–179. ISBN 978-0-08-053279-0. Howson, Colin (11 October 2005) [1997]. Logic with trees: an introduction to symbolic logic. Routledge. ISBN 978-1-134-78550-6. Jarmużek, Tomasz (2020). Hartman, Jan (ed.). "Tableau Methods for Propositional Logic and Term Logic" (PDF). Series: Studies in Philosophy, History of Ideas and Modern Societies. 20. Translated by Jaskólski, Sławomir. Berlin, Bern, Bruxelles, New York, Oxford, Warszawa, Wien: Peter Lang: 228. doi:10.3726/b18008. ISBN 9783631846537. ISSN 2191-1878. Jeffrey, Richard (2006) [1967]. Formal Logic: Its Scope and Limits (4th ed.). Hackett. ISBN 978-0-87220-813-1. Letz, Reinhold; Stenz, Gernot. "28. Model Elimination and Connection Tableau Procedures". Handbook of Automated Reasoning. pp. 2015–2114. Robinson, John Alan; Voronkov, Andrei, eds. (2001). Handbook of Automated Reasoning. Vol. 1. MIT Press. pp. 203 ff. ISBN 0444829490. Smullyan, Raymond (1995) [1968]. First-Order Logic. Dover. ISBN 978-0-486-68370-6. Smullyan, Raymond (2014). A Beginner's Guide to Mathematical Logic. Dover. ISBN 978-0486492377. The Encyclopedia of Philosophy, ed. (11 December 2023). "Modern Logic: The Boolean Period: Carroll". The Encyclopedia of Philosophy. Retrieved 26 December 2023. Zeman, Joseph Jay (1973). Modal Logic: The Lewis-modal Systems. Clarendon Press. ISBN 978-0-19-824374-8. OCLC 641504. == External links == TABLEAUX: an annual international conference on automated reasoning with analytic tableaux and related methods JAR: Journal of Automated Reasoning The tableaux package: an interactive prover for propositional and first-order logic using tableaux Tree proof generator: another interactive prover for propositional and first-order logic using tableaux LoTREC: a generic tableaux-based prover for modal logics from IRIT/Toulouse University Intro to Truth Trees on YouTube
Wikipedia/Method_of_analytic_tableaux
Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of excluded middle and double negation elimination, which are fundamental inference rules in classical logic. Formalized intuitionistic logic was originally developed by Arend Heyting to provide a formal basis for L. E. J. Brouwer's programme of intuitionism. From a proof-theoretic perspective, Heyting's calculus is a restriction of classical logic in which the law of excluded middle and double negation elimination have been removed. Excluded middle and double negation elimination can still be proved for some propositions on a case-by-case basis, but they do not hold universally as they do in classical logic. The standard explanation of intuitionistic logic is the BHK interpretation. Several systems of semantics for intuitionistic logic have been studied. One of these semantics mirrors classical Boolean-valued semantics but uses Heyting algebras in place of Boolean algebras. Another semantics uses Kripke models. These, however, are technical means for studying Heyting's deductive system rather than formalizations of Brouwer's original informal semantic intuitions. Semantical systems claiming to capture such intuitions, by offering meaningful concepts of "constructive truth" (rather than merely validity or provability), include Kurt Gödel's dialectica interpretation, Stephen Cole Kleene's realizability, Yurii Medvedev's logic of finite problems, and Giorgi Japaridze's computability logic. Yet such semantics persistently induce logics properly stronger than Heyting's logic. Some authors have argued that this might be an indication of inadequacy of Heyting's calculus itself, deeming the latter incomplete as a constructive logic. == Mathematical constructivism == In the semantics of classical logic, propositional formulae are assigned truth values from the two-element set { ⊤ , ⊥ } {\displaystyle \{\top ,\bot \}} ("true" and "false" respectively), regardless of whether we have direct evidence for either case. This is referred to as the 'law of excluded middle', because it excludes the possibility of any truth value besides 'true' or 'false'. In contrast, propositional formulae in intuitionistic logic are not assigned a definite truth value and are only considered "true" when we have direct evidence, that is, a proof. We can also say, instead of the propositional formula being "true" due to direct evidence, that it is inhabited by a proof in the Curry–Howard sense. Operations in intuitionistic logic therefore preserve justification, with respect to evidence and provability, rather than truth-valuation. Intuitionistic logic is a commonly-used tool in developing approaches to constructivism in mathematics. The use of constructivist logics in general has been a controversial topic among mathematicians and philosophers (see, for example, the Brouwer–Hilbert controversy). A common objection to their use is the above-cited lack of two central rules of classical logic, the law of excluded middle and double negation elimination. David Hilbert considered them to be so important to the practice of mathematics that he wrote: Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists.
To prohibit existence statements and the principle of excluded middle is tantamount to relinquishing the science of mathematics altogether. Intuitionistic logic has found practical use in mathematics despite the challenges presented by the inability to utilize these rules. One reason for this is that its restrictions produce proofs that have the disjunction and existence properties, making it also suitable for other forms of mathematical constructivism. Informally, this means that if there is a constructive proof that an object exists, that constructive proof may be used as an algorithm for generating an example of that object, a principle known as the Curry–Howard correspondence between proofs and algorithms. One reason that this particular aspect of intuitionistic logic is so valuable is that it enables practitioners to utilize a wide range of computerized tools, known as proof assistants. These tools assist their users in the generation and verification of large-scale proofs, whose size usually precludes the usual human-based checking that goes into publishing and reviewing a mathematical proof. As such, the use of proof assistants (such as Agda or Coq) is enabling modern mathematicians and logicians to develop and prove extremely complex systems, beyond those that are feasible to create and check solely by hand. One example of a proof that was impossible to satisfactorily verify without formal verification is the famous proof of the four color theorem. This theorem stumped mathematicians for more than a hundred years, until a proof was developed that ruled out large classes of possible counterexamples, yet still left open enough possibilities that a computer program was needed to finish the proof. That proof was controversial for some time, but, later, it was verified using Coq. == Syntax == The syntax of formulas of intuitionistic logic is similar to propositional logic or first-order logic. However, intuitionistic connectives are not definable in terms of each other in the same way as in classical logic, hence their choice matters. In intuitionistic propositional logic (IPL) it is customary to use →, ∧, ∨, ⊥ as the basic connectives, treating ¬A as an abbreviation for (A → ⊥). In intuitionistic first-order logic both quantifiers ∃, ∀ are needed. === Hilbert-style calculus === Intuitionistic logic can be defined using the following Hilbert-style calculus. This is similar to a way of axiomatizing classical propositional logic. 
In propositional logic, the inference rule is modus ponens MP: from ϕ → ψ {\displaystyle \phi \to \psi } and ϕ {\displaystyle \phi } infer ψ {\displaystyle \psi } and the axioms are THEN-1: ψ → ( ϕ → ψ ) {\displaystyle \psi \to (\phi \to \psi )} THEN-2: ( χ → ( ϕ → ψ ) ) → ( ( χ → ϕ ) → ( χ → ψ ) ) {\displaystyle {\big (}\chi \to (\phi \to \psi ){\big )}\to {\big (}(\chi \to \phi )\to (\chi \to \psi ){\big )}} AND-1: ϕ ∧ χ → ϕ {\displaystyle \phi \land \chi \to \phi } AND-2: ϕ ∧ χ → χ {\displaystyle \phi \land \chi \to \chi } AND-3: ϕ → ( χ → ( ϕ ∧ χ ) ) {\displaystyle \phi \to {\big (}\chi \to (\phi \land \chi ){\big )}} OR-1: ϕ → ϕ ∨ χ {\displaystyle \phi \to \phi \lor \chi } OR-2: χ → ϕ ∨ χ {\displaystyle \chi \to \phi \lor \chi } OR-3: ( ϕ → ψ ) → ( ( χ → ψ ) → ( ( ϕ ∨ χ ) → ψ ) ) {\displaystyle (\phi \to \psi )\to {\Big (}(\chi \to \psi )\to {\big (}(\phi \lor \chi )\to \psi ){\Big )}} FALSE: ⊥ → ϕ {\displaystyle \bot \to \phi } To make this a system of first-order predicate logic, the generalization rules ∀ {\displaystyle \forall } -GEN: from ψ → ϕ {\displaystyle \psi \to \phi } infer ψ → ( ∀ x ϕ ) {\displaystyle \psi \to (\forall x\ \phi )} , if x {\displaystyle x} is not free in ψ {\displaystyle \psi } ∃ {\displaystyle \exists } -GEN: from ϕ → ψ {\displaystyle \phi \to \psi } infer ( ∃ x ϕ ) → ψ {\displaystyle (\exists x\ \phi )\to \psi } , if x {\displaystyle x} is not free in ψ {\displaystyle \psi } are added, along with the axioms PRED-1: ( ∀ x ϕ ( x ) ) → ϕ ( t ) {\displaystyle (\forall x\ \phi (x))\to \phi (t)} , if the term t {\displaystyle t} is free for substitution for the variable x {\displaystyle x} in ϕ {\displaystyle \phi } (i.e., if no occurrence of any variable in t {\displaystyle t} becomes bound in ϕ ( t ) {\displaystyle \phi (t)} ) PRED-2: ϕ ( t ) → ( ∃ x ϕ ( x ) ) {\displaystyle \phi (t)\to (\exists x\ \phi (x))} , with the same restriction as for PRED-1 ==== Negation ==== If one wishes to include a connective ¬ {\displaystyle \neg } for negation rather than consider it an abbreviation for ϕ → ⊥ {\displaystyle \phi \to \bot } , it is enough to add: NOT-1': ( ϕ → ⊥ ) → ¬ ϕ {\displaystyle (\phi \to \bot )\to \neg \phi } NOT-2': ¬ ϕ → ( ϕ → ⊥ ) {\displaystyle \neg \phi \to (\phi \to \bot )} There are a number of alternatives available if one wishes to omit the connective ⊥ {\displaystyle \bot } (false). For example, one may replace the three axioms FALSE, NOT-1', and NOT-2' with the two axioms NOT-1: ( ϕ → χ ) → ( ( ϕ → ¬ χ ) → ¬ ϕ ) {\displaystyle (\phi \to \chi )\to {\big (}(\phi \to \neg \chi )\to \neg \phi {\big )}} NOT-2: χ → ( ¬ χ → ψ ) {\displaystyle \chi \to (\neg \chi \to \psi )} as at Propositional calculus § Axioms. Alternatives to NOT-1 are ( ϕ → ¬ χ ) → ( χ → ¬ ϕ ) {\displaystyle (\phi \to \neg \chi )\to (\chi \to \neg \phi )} or ( ϕ → ¬ ϕ ) → ¬ ϕ {\displaystyle (\phi \to \neg \phi )\to \neg \phi } . ==== Equivalence ==== The connective ↔ {\displaystyle \leftrightarrow } for equivalence may be treated as an abbreviation, with ϕ ↔ χ {\displaystyle \phi \leftrightarrow \chi } standing for ( ϕ → χ ) ∧ ( χ → ϕ ) {\displaystyle (\phi \to \chi )\land (\chi \to \phi )} . 
Alternatively, one may add the axioms IFF-1: ( ϕ ↔ χ ) → ( ϕ → χ ) {\displaystyle (\phi \leftrightarrow \chi )\to (\phi \to \chi )} IFF-2: ( ϕ ↔ χ ) → ( χ → ϕ ) {\displaystyle (\phi \leftrightarrow \chi )\to (\chi \to \phi )} IFF-3: ( ϕ → χ ) → ( ( χ → ϕ ) → ( ϕ ↔ χ ) ) {\displaystyle (\phi \to \chi )\to ((\chi \to \phi )\to (\phi \leftrightarrow \chi ))} IFF-1 and IFF-2 can, if desired, be combined into a single axiom ( ϕ ↔ χ ) → ( ( ϕ → χ ) ∧ ( χ → ϕ ) ) {\displaystyle (\phi \leftrightarrow \chi )\to ((\phi \to \chi )\land (\chi \to \phi ))} using conjunction. === Sequent calculus === Gerhard Gentzen discovered that a simple restriction of his system LK (his sequent calculus for classical logic) results in a system that is sound and complete with respect to intuitionistic logic. He called this system LJ. In LK any number of formulas is allowed to appear on the conclusion side of a sequent; in contrast LJ allows at most one formula in this position. Other derivatives of LK are limited to intuitionistic derivations but still allow multiple conclusions in a sequent. LJ' is one example. == Theorems == The theorems of the pure logic are the statements provable from the axioms and inference rules. For example, using THEN-1 in THEN-2 reduces it to ( χ → ( ϕ → ψ ) ) → ( ϕ → ( χ → ψ ) ) {\displaystyle {\big (}\chi \to (\phi \to \psi ){\big )}\to {\big (}\phi \to (\chi \to \psi ){\big )}} . A formal proof of the latter using the Hilbert system is given on that page. With ⊥ {\displaystyle \bot } for ψ {\displaystyle \psi } , this in turn implies ( χ → ¬ ϕ ) → ( ϕ → ¬ χ ) {\displaystyle (\chi \to \neg \phi )\to (\phi \to \neg \chi )} . In words: "If χ {\displaystyle \chi } being the case implies that ϕ {\displaystyle \phi } is absurd, then if ϕ {\displaystyle \phi } does hold, one has that χ {\displaystyle \chi } is not the case." Due to the symmetry of the statement, one in fact obtains ( χ → ¬ ϕ ) ↔ ( ϕ → ¬ χ ) {\displaystyle (\chi \to \neg \phi )\leftrightarrow (\phi \to \neg \chi )} When explaining the theorems of intuitionistic logic in terms of classical logic, it can be understood as a weakening thereof: it is more conservative in what it allows a reasoner to infer, while not permitting any new inferences that could not be made under classical logic. Each theorem of intuitionistic logic is a theorem in classical logic, but not conversely. Many tautologies in classical logic are not theorems in intuitionistic logic – in particular, as said above, one of intuitionistic logic's chief aims is to not affirm the law of the excluded middle so as to vitiate the use of non-constructive proof by contradiction, which can be used to furnish existence claims without providing explicit examples of the objects that it proves exist. === Double negations === A double negation does not affirm the law of the excluded middle (PEM); while it is not necessarily the case that PEM is upheld in any context, no counterexample can be given either. Such a counterexample would be an inference (inferring the negation of the law for a certain proposition) that is disallowed under classical logic, and is thus not available in a strict weakening like intuitionistic logic. Formally, it is a simple theorem that ( ( ψ ∨ ( ψ → φ ) ) → φ ) ↔ φ {\displaystyle {\big (}(\psi \lor (\psi \to \varphi ))\to \varphi {\big )}\leftrightarrow \varphi } for any two propositions.
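The theorems stated so far can be verified mechanically in a proof assistant whose base logic is intuitionistic. A short sketch in Lean 4 (the theorem names are assumptions of this sketch; Lean's core logic, without the classical axioms, is constructive), covering the swapped implication, the contraposition equivalence, and the theorem just stated:

-- Sketch only: names are illustrative assumptions.
theorem imp_swap (χ ϕ ψ : Prop) : (χ → (ϕ → ψ)) → (ϕ → (χ → ψ)) :=
  fun h hϕ hχ => h hχ hϕ

theorem contrapose_neg (χ ϕ : Prop) : (χ → ¬ϕ) ↔ (ϕ → ¬χ) :=
  ⟨fun h hϕ hχ => h hχ hϕ, fun h hχ hϕ => h hϕ hχ⟩

theorem em_absorption (ψ φ : Prop) : ((ψ ∨ (ψ → φ)) → φ) ↔ φ :=
  ⟨fun h => h (Or.inr fun hψ => h (Or.inl hψ)), fun hφ _ => hφ⟩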
By considering any φ {\displaystyle \varphi } established to be false this indeed shows that the double negation of the law ¬ ¬ ( ψ ∨ ¬ ψ ) {\displaystyle \neg \neg (\psi \lor \neg \psi )} is retained as a tautology already in minimal logic. This means any ¬ ( ψ ∨ ¬ ψ ) {\displaystyle \neg (\psi \lor \neg \psi )} is established to be inconsistent and the propositional calculus is in turn always compatible with classical logic. When assuming the law of excluded middle implies a proposition, then by applying contraposition twice and using the double-negated excluded middle, one may prove double-negated variants of various strictly classical tautologies. The situation is more intricate for predicate logic formulas, when some quantified expressions are being negated. ==== Double negation and implication ==== Akin to the above, from modus ponens in the form ψ → ( ( ψ → φ ) → φ ) {\displaystyle \psi \to ((\psi \to \varphi )\to \varphi )} follows ψ → ¬ ¬ ψ {\displaystyle \psi \to \neg \neg \psi } . The relation between them may always be used to obtain new formulas: A weakened premise makes for a strong implication, and vice versa. For example, note that if ( ¬ ¬ ψ ) → ϕ {\displaystyle (\neg \neg \psi )\to \phi } holds, then so does ψ → ϕ {\displaystyle \psi \to \phi } , but the schema in the other direction would imply the double-negation elimination principle. Propositions for which double-negation elimination is possible are also called stable. Intuitionistic logic proves stability only for restricted types of propositions. A formula for which excluded middle holds can be proven stable using the disjunctive syllogism, which is discussed more thoroughly below. The converse does however not hold in general, unless the excluded middle statement at hand is stable itself. An implication ψ → ¬ ϕ {\displaystyle \psi \to \neg \phi } can be proven to be equivalent to ¬ ¬ ψ → ¬ ϕ {\displaystyle \neg \neg \psi \to \neg \phi } , whatever the propositions. As a special case, it follows that propositions of negated form ( ψ = ¬ ϕ {\displaystyle \psi =\neg \phi } here) are stable, i.e. ¬ ¬ ¬ ϕ → ¬ ϕ {\displaystyle \neg \neg \neg \phi \to \neg \phi } is always valid. In general, ¬ ¬ ψ → ϕ {\displaystyle \neg \neg \psi \to \phi } is stronger than ψ → ϕ {\displaystyle \psi \to \phi } , which is stronger than ¬ ¬ ( ψ → ϕ ) {\displaystyle \neg \neg (\psi \to \phi )} , which itself implies the three equivalent statements ψ → ( ¬ ¬ ϕ ) {\displaystyle \psi \to (\neg \neg \phi )} , ( ¬ ¬ ψ ) → ( ¬ ¬ ϕ ) {\displaystyle (\neg \neg \psi )\to (\neg \neg \phi )} and ¬ ϕ → ¬ ψ {\displaystyle \neg \phi \to \neg \psi } . Using the disjunctive syllogism, the previous four are indeed equivalent. This also gives an intuitionistically valid derivation of ¬ ¬ ( ¬ ¬ ϕ → ϕ ) {\displaystyle \neg \neg (\neg \neg \phi \to \phi )} , as it is thus equivalent to an identity. When ψ {\displaystyle \psi } expresses a claim, then its double-negation ¬ ¬ ψ {\displaystyle \neg \neg \psi } merely expresses the claim that a refutation of ψ {\displaystyle \psi } would be inconsistent. Having proven such a mere double-negation also still aids in negating other statements through negation introduction, as then ( ϕ → ¬ ψ ) → ¬ ϕ {\displaystyle (\phi \to \neg \psi )\to \neg \phi } . A double-negated existential statement does not denote existence of an entity with a property, but rather the absurdity of assumed non-existence of any such entity. 
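Several of the double-negation principles of this section can likewise be checked constructively; a sketch in Lean 4 (names are assumptions of this sketch):

-- Sketch only: double-negation introduction, stability of negations,
-- and the double negation of excluded middle (valid already in minimal logic).
theorem dn_intro (ψ : Prop) : ψ → ¬¬ψ :=
  fun hψ hn => hn hψ

theorem neg_stable (ϕ : Prop) : ¬¬¬ϕ → ¬ϕ :=
  fun h3 hϕ => h3 (fun hn => hn hϕ)

theorem dn_em (ψ : Prop) : ¬¬(ψ ∨ ¬ψ) :=
  fun h => h (Or.inr fun hψ => h (Or.inl hψ))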
Also, all the principles in the next section involving quantifiers explain the use of implications with hypothetical existence as premise. ==== Formula translation ==== Weakening statements by adding two negations before existential quantifiers (and atoms) is also the core step in the double-negation translation. It constitutes an embedding of classical first-order logic into intuitionistic logic: a first-order formula is provable in classical logic if and only if its Gödel–Gentzen translation is provable intuitionistically. For example, any theorem of classical propositional logic of the form ψ → ϕ {\displaystyle \psi \to \phi } has a proof consisting of an intuitionistic proof of ψ → ¬ ¬ ϕ {\displaystyle \psi \to \neg \neg \phi } followed by one application of double-negation elimination. Intuitionistic logic can thus be seen as a means of extending classical logic with constructive semantics. === Non-interdefinability of operators === Already minimal logic easily proves the following theorems, relating conjunction resp. disjunction to the implication using negation. Firstly, ( ϕ ∨ ψ ) → ¬ ( ¬ ϕ ∧ ¬ ψ ) {\displaystyle (\phi \lor \psi )\to \neg (\neg \phi \land \neg \psi )} In words: " ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } each imply that it is not the case that both ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } fail to hold together." And here the logically negative conclusion ¬ ( ¬ ϕ ∧ ¬ ψ ) {\displaystyle \neg (\neg \phi \land \neg \psi )} is in fact equivalent to ¬ ϕ → ¬ ¬ ψ {\displaystyle \neg \phi \to \neg \neg \psi } . The alternative implied theorem, ( ϕ ∨ ψ ) → ( ¬ ϕ → ¬ ¬ ψ ) {\displaystyle (\phi \lor \psi )\to (\neg \phi \to \neg \neg \psi )} , represents a weakened variant of the disjunctive syllogism. Secondly, ( ϕ ∧ ψ ) → ¬ ( ¬ ϕ ∨ ¬ ψ ) {\displaystyle (\phi \land \psi )\to \neg (\neg \phi \lor \neg \psi )} In words: " ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } both together imply that neither ϕ {\displaystyle \phi } nor ψ {\displaystyle \psi } fail to hold." And here the logically negative conclusion ¬ ( ¬ ϕ ∨ ¬ ψ ) {\displaystyle \neg (\neg \phi \lor \neg \psi )} is in fact equivalent to ¬ ( ϕ → ¬ ψ ) {\displaystyle \neg (\phi \to \neg \psi )} . A variant of the converse of the implied theorem here does also hold, namely ( ϕ → ψ ) → ¬ ( ϕ ∧ ¬ ψ ) {\displaystyle (\phi \to \psi )\to \neg (\phi \land \neg \psi )} In words: " ϕ {\displaystyle \phi } implying ψ {\displaystyle \psi } implies that it is not the case that ϕ {\displaystyle \phi } holds while ψ {\displaystyle \psi } fails to hold." And indeed, stronger variants of all of these still do hold - for example the antecedents may be double-negated, as noted, or all ψ {\displaystyle \psi } may be replaced by ¬ ¬ ψ {\displaystyle \neg \neg \psi } on the antecedent sides, as will be discussed. However, none of these five implications above can be reversed without immediately implying excluded middle (consider ¬ ψ {\displaystyle \neg \psi } for ϕ {\displaystyle \phi } ) resp. double-negation elimination (consider true ϕ {\displaystyle \phi } ). Hence, neither side can serve as a definition of the other. In contrast, in classical propositional logic it is possible to take one of those three connectives plus negation as primitive and define the other two in terms of it, in this way. Such is done, for example, in Łukasiewicz's three axioms of propositional logic.
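Before moving on, the three one-directional implications displayed above can also be verified constructively; a sketch in Lean 4 (names are assumptions of this sketch):

-- Sketch only: the three non-reversible implications of this section.
theorem or_imp_negneg (ϕ ψ : Prop) : (ϕ ∨ ψ) → ¬(¬ϕ ∧ ¬ψ) :=
  fun h hc => h.elim hc.left hc.right

theorem and_imp_negneg (ϕ ψ : Prop) : (ϕ ∧ ψ) → ¬(¬ϕ ∨ ¬ψ) :=
  fun hc h => h.elim (fun hn => hn hc.left) (fun hn => hn hc.right)

theorem imp_imp_neg (ϕ ψ : Prop) : (ϕ → ψ) → ¬(ϕ ∧ ¬ψ) :=
  fun h hc => hc.right (h hc.left)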
It is even possible to define all in terms of a sole sufficient operator such as the Peirce arrow (NOR) or Sheffer stroke (NAND). Similarly, in classical first-order logic, one of the quantifiers can be defined in terms of the other and negation. These are fundamentally consequences of the law of bivalence, which makes all such connectives merely Boolean functions. The law of bivalence is not required to hold in intuitionistic logic. As a result, none of the basic connectives can be dispensed with, and the above axioms are all necessary. So most of the classical identities between connectives and quantifiers are only theorems of intuitionistic logic in one direction. Some of the theorems go in both directions, i.e. are equivalences, as subsequently discussed. ==== Existential vs. universal quantification ==== Firstly, when x {\displaystyle x} is not free in the proposition φ {\displaystyle \varphi } , then ( ∃ x ( ϕ ( x ) → φ ) ) → ( ( ∀ x ϕ ( x ) ) → φ ) {\displaystyle {\big (}\exists x\,(\phi (x)\to \varphi ){\big )}\,\,\to \,\,{\Big (}{\big (}\forall x\ \phi (x){\big )}\to \varphi {\Big )}} When the domain of discourse is empty, then by the principle of explosion, an existential statement implies anything. When the domain contains at least one term, then assuming excluded middle for ∀ x ϕ ( x ) {\displaystyle \forall x\,\phi (x)} , the inverse of the above implication becomes provable too, meaning the two sides become equivalent. This inverse direction is equivalent to the drinker's paradox (DP). Moreover, an existential and dual variant of it is given by the independence of premise principle (IP). Classically, the statement above is moreover equivalent to a more disjunctive form discussed further below. Constructively, existence claims are however generally harder to come by. If the domain of discourse is not empty and ϕ {\displaystyle \phi } is moreover independent of x {\displaystyle x} , such principles are equivalent to formulas in the propositional calculus. Here, the formula then just expresses the identity ( ϕ → φ ) → ( ϕ → φ ) {\displaystyle (\phi \to \varphi )\to (\phi \to \varphi )} . This is the curried form of modus ponens ( ( ϕ → φ ) ∧ ϕ ) → φ {\displaystyle ((\phi \to \varphi )\land \phi )\to \varphi } , which in the special case with φ {\displaystyle \varphi } as a false proposition results in the non-contradiction principle ¬ ( ϕ ∧ ¬ ϕ ) {\displaystyle \neg (\phi \land \neg \phi )} . Considering a false proposition φ {\displaystyle \varphi } for the original implication results in the important ( ∃ x ¬ ϕ ( x ) ) → ¬ ( ∀ x ϕ ( x ) ) {\displaystyle (\exists x\ \neg \phi (x))\to \neg (\forall x\ \phi (x))} In words: "If there exists an entity x {\displaystyle x} that does not have the property ϕ {\displaystyle \phi } , then the following is refuted: Each entity has the property ϕ {\displaystyle \phi } ." The quantifier formula with negations also immediately follows from the non-contradiction principle derived above, each instance of which itself already follows from the more particular ¬ ( ¬ ¬ ϕ ∧ ¬ ϕ ) {\displaystyle \neg (\neg \neg \phi \land \neg \phi )} . To derive a contradiction given ¬ ϕ {\displaystyle \neg \phi } , it suffices to establish its negation ¬ ¬ ϕ {\displaystyle \neg \neg \phi } (as opposed to the stronger ϕ {\displaystyle \phi } ) and this makes proving double-negations valuable also.
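The displayed quantifier implication and the non-contradiction principle can be rendered the same way; a sketch in Lean 4 (names are assumptions of this sketch):

-- Sketch only: a counterexample refutes a universal claim, constructively.
theorem exists_not_imp_not_forall {α : Type} (ϕ : α → Prop) :
    (∃ x, ¬ϕ x) → ¬(∀ x, ϕ x) :=
  fun he h => he.elim (fun x hnx => hnx (h x))

theorem non_contradiction (ϕ : Prop) : ¬(ϕ ∧ ¬ϕ) :=
  fun hc => hc.right hc.left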
By the same token, the original quantifier formula in fact still holds with ∀ x ϕ ( x ) {\displaystyle \forall x\ \phi (x)} weakened to ∀ x ( ( ϕ ( x ) → φ ) → φ ) {\displaystyle \forall x{\big (}(\phi (x)\to \varphi )\to \varphi {\big )}} . And so, in fact, a stronger theorem holds: ( ∃ x ¬ ϕ ( x ) ) → ¬ ( ∀ x ¬ ¬ ϕ ( x ) ) {\displaystyle (\exists x\ \neg \phi (x))\to \neg (\forall x\,\neg \neg \phi (x))} In words: "If there exists an entity x {\displaystyle x} that does not have the property ϕ {\displaystyle \phi } , then the following is refuted: For each entity, one is not able to prove that it does not have the property ϕ {\displaystyle \phi } ". Secondly, ( ∀ x ( ϕ ( x ) → φ ) ) ↔ ( ( ∃ x ϕ ( x ) ) → φ ) {\displaystyle {\big (}\forall x\,(\phi (x)\to \varphi ){\big )}\,\,\leftrightarrow \,\,{\big (}(\exists x\ \phi (x))\to \varphi {\big )}} where similar considerations apply. Here the existential part is always a hypothesis and this is an equivalence. Considering the special case again, ( ∀ x ¬ ϕ ( x ) ) ↔ ¬ ( ∃ x ϕ ( x ) ) {\displaystyle (\forall x\ \neg \phi (x))\leftrightarrow \neg (\exists x\ \phi (x))} The proven conversion ( χ → ¬ ϕ ) ↔ ( ϕ → ¬ χ ) {\displaystyle (\chi \to \neg \phi )\leftrightarrow (\phi \to \neg \chi )} can be used to obtain two further implications: ( ∀ x ϕ ( x ) ) → ¬ ( ∃ x ¬ ϕ ( x ) ) {\displaystyle (\forall x\ \phi (x))\to \neg (\exists x\ \neg \phi (x))} ( ∃ x ϕ ( x ) ) → ¬ ( ∀ x ¬ ϕ ( x ) ) {\displaystyle (\exists x\ \phi (x))\to \neg (\forall x\ \neg \phi (x))} Of course, variants of such formulas can also be derived that have the double-negations in the antecedent. A special case of the first formula here is ( ∀ x ¬ ϕ ( x ) ) → ¬ ( ∃ x ¬ ¬ ϕ ( x ) ) {\displaystyle (\forall x\,\neg \phi (x))\to \neg (\exists x\,\neg \neg \phi (x))} and this is indeed stronger than the → {\displaystyle \to } -direction of the equivalence bullet point listed above. For simplicity of the discussion here and below, the formulas are generally presented in weakened forms without all possible insertions of double-negations in the antecedents. More general variants hold. Incorporating the predicate ψ {\displaystyle \psi } and currying, the following generalization also entails the relation between implication and conjunction in the predicate calculus, discussed below. ( ∀ x ϕ ( x ) → ( ψ ( x ) → φ ) ) ↔ ( ( ∃ x ϕ ( x ) ∧ ψ ( x ) ) → φ ) {\displaystyle {\big (}\forall x\ \phi (x)\to (\psi (x)\to \varphi ){\big )}\,\,\leftrightarrow \,\,{\Big (}{\big (}\exists x\ \phi (x)\land \psi (x){\big )}\to \varphi {\Big )}} If the predicate ψ {\displaystyle \psi } is decidedly false for all x {\displaystyle x} , then this equivalence is trivial. If ψ {\displaystyle \psi } is decidedly true for all x {\displaystyle x} , the schema simply reduces to the previously stated equivalence. In the language of classes, A = { x ∣ ϕ ( x ) } {\displaystyle A=\{x\mid \phi (x)\}} and B = { x ∣ ψ ( x ) } {\displaystyle B=\{x\mid \psi (x)\}} , the special case of this equivalence with false φ {\displaystyle \varphi } equates two characterizations of disjointness A ∩ B = ∅ {\displaystyle A\cap B=\emptyset } : ∀ ( x ∈ A ) . x ∉ B ↔ ¬ ∃ ( x ∈ A ) . x ∈ B {\displaystyle \forall (x\in A).x\notin B\,\,\leftrightarrow \,\,\neg \exists (x\in A).x\in B} ==== Disjunction vs. conjunction ====
There are finite variations of the quantifier formulas, with just two propositions: ( ¬ ϕ ∨ ¬ ψ ) → ¬ ( ϕ ∧ ψ ) {\displaystyle (\neg \phi \lor \neg \psi )\to \neg (\phi \land \psi )} ( ¬ ϕ ∧ ¬ ψ ) ↔ ¬ ( ϕ ∨ ψ ) {\displaystyle (\neg \phi \land \neg \psi )\leftrightarrow \neg (\phi \lor \psi )} The first principle cannot be reversed: Considering ¬ ψ {\displaystyle \neg \psi } for ϕ {\displaystyle \phi } would imply the weak excluded middle, i.e. the statement ¬ ψ ∨ ¬ ¬ ψ {\displaystyle \neg \psi \lor \neg \neg \psi } . But intuitionistic logic alone does not even prove ¬ ψ ∨ ¬ ¬ ψ ∨ ( ¬ ¬ ψ → ψ ) {\displaystyle \neg \psi \lor \neg \neg \psi \lor (\neg \neg \psi \to \psi )} . So in particular, there is no distributivity principle for negations deriving the claim ¬ ϕ ∨ ¬ ψ {\displaystyle \neg \phi \lor \neg \psi } from ¬ ( ϕ ∧ ψ ) {\displaystyle \neg (\phi \land \psi )} . For an informal example of the constructive reading, consider the following: From conclusive evidence that it is not the case that both Alice and Bob showed up to their date, one cannot derive conclusive evidence, tied to either of the two persons, that this person did not show up. Negated propositions are comparably weak, in that the classically valid De Morgan's law, granting a disjunction from a single negative hypothetical, does not automatically hold constructively. The intuitionistic propositional calculus and some of its extensions exhibit the disjunction property instead, implying that, for any derivable disjunction, one of the disjuncts is individually derivable as well. The converse variants of those two, and the equivalent variants with double-negated antecedents, had already been mentioned above. Implications towards the negation of a conjunction can often be proven directly from the non-contradiction principle. In this way one may also obtain the mixed form of the implications, e.g. ( ¬ ϕ ∨ ψ ) → ¬ ( ϕ ∧ ¬ ψ ) {\displaystyle (\neg \phi \lor \psi )\to \neg (\phi \land \neg \psi )} . Concatenating the theorems, we also find ( ¬ ¬ ϕ ∨ ¬ ¬ ψ ) → ¬ ¬ ( ϕ ∨ ψ ) {\displaystyle (\neg \neg \phi \lor \neg \neg \psi )\to \neg \neg (\phi \lor \psi )} The reverse is not provable, as it would prove the weak excluded middle. In predicate logic, the constant domain principle is not valid: ∀ x ( φ ∨ ψ ( x ) ) {\displaystyle \forall x{\big (}\varphi \lor \psi (x){\big )}} does not imply the stronger φ ∨ ∀ x ψ ( x ) {\displaystyle \varphi \lor \forall x\,\psi (x)} . The distributive property does, however, hold for any finite number of propositions. For a variant of the De Morgan law concerning two existentially closed decidable predicates, see LLPO. ==== Conjunction vs. implication ==== From the general equivalence also follows import-export, expressing incompatibility of two predicates using two different connectives: ( ϕ → ¬ ψ ) ↔ ¬ ( ϕ ∧ ψ ) {\displaystyle (\phi \to \neg \psi )\leftrightarrow \neg (\phi \land \psi )} Due to the symmetry of the conjunction connective, this again implies the already established ( ϕ → ¬ ψ ) ↔ ( ψ → ¬ ϕ ) {\displaystyle (\phi \to \neg \psi )\leftrightarrow (\psi \to \neg \phi )} . The equivalence formula for the negated conjunction may be understood as a special case of currying and uncurrying. Many more considerations regarding double-negations again apply. And both non-reversible theorems relating conjunction and implication mentioned in the introduction to non-interdefinability above follow from this equivalence.
One is a simply proven variant of a converse, while ( ϕ → ψ ) → ¬ ( ϕ ∧ ¬ ψ ) {\displaystyle (\phi \to \psi )\to \neg (\phi \land \neg \psi )} holds simply because ϕ → ψ {\displaystyle \phi \to \psi } is stronger than ϕ → ¬ ¬ ψ {\displaystyle \phi \to \neg \neg \psi } . Now when using the principle in the next section, the following variant of the latter, with more negations on the left, also holds: ¬ ( ϕ → ψ ) ↔ ( ¬ ¬ ϕ ∧ ¬ ψ ) {\displaystyle \neg (\phi \to \psi )\leftrightarrow (\neg \neg \phi \land \neg \psi )} A consequence is that ¬ ¬ ( ϕ ∧ ψ ) ↔ ( ¬ ¬ ϕ ∧ ¬ ¬ ψ ) {\displaystyle \neg \neg (\phi \land \psi )\leftrightarrow (\neg \neg \phi \land \neg \neg \psi )} ==== Disjunction vs. implication ==== Already minimal logic proves excluded middle equivalent to consequentia mirabilis, an instance of Peirce's law. Now akin to modus ponens, clearly ( ϕ ∨ ψ ) → ( ( ϕ → ψ ) → ψ ) {\displaystyle (\phi \lor \psi )\to ((\phi \to \psi )\to \psi )} already in minimal logic, which is a theorem that does not even involve negations. In classical logic, this implication is in fact an equivalence. Taking ϕ {\displaystyle \phi } to be of the form ψ → φ {\displaystyle \psi \to \varphi } , excluded middle together with explosion is seen to entail Peirce's law. In intuitionistic logic, one obtains variants of the stated theorem involving ⊥ {\displaystyle \bot } , as follows. Firstly, note that two different formulas for ¬ ( ϕ ∧ ψ ) {\displaystyle \neg (\phi \land \psi )} mentioned above can be used to imply ( ¬ ϕ ∨ ¬ ψ ) → ( ϕ → ¬ ψ ) {\displaystyle (\neg \phi \vee \neg \psi )\to (\phi \to \neg \psi )} . It also follows from direct case analysis, as do variants where the negations are moved around, such as the theorems ( ¬ ϕ ∨ ψ ) → ( ϕ → ¬ ¬ ψ ) {\displaystyle (\neg \phi \lor \psi )\to (\phi \to \neg \neg \psi )} or ( ϕ ∨ ψ ) → ( ¬ ϕ → ¬ ¬ ψ ) {\displaystyle (\phi \lor \psi )\to (\neg \phi \to \neg \neg \psi )} , the latter being mentioned in the introduction to non-interdefinability. These are forms of the disjunctive syllogism involving negated propositions ¬ ψ {\displaystyle \neg \psi } . Strengthened forms still hold in intuitionistic logic, say ( ¬ ϕ ∨ ψ ) → ( ϕ → ψ ) {\displaystyle (\neg \phi \lor \psi )\to (\phi \to \psi )} The implication cannot generally be reversed, as that would immediately imply excluded middle. So, intuitionistically, "Either P {\displaystyle P} or Q {\displaystyle Q} " is generally also a stronger propositional formula than "If not P {\displaystyle P} , then Q {\displaystyle Q} ", whereas in classical logic these are interchangeable. Non-contradiction and explosion together actually also prove the stronger variant ( ¬ ϕ ∨ ψ ) → ( ¬ ¬ ϕ → ψ ) {\displaystyle (\neg \phi \lor \psi )\to (\neg \neg \phi \to \psi )} . And this shows how excluded middle for ψ {\displaystyle \psi } implies double-negation elimination for it. For a fixed ψ {\displaystyle \psi } , this implication also cannot generally be reversed. However, as ¬ ¬ ( ψ ∨ ¬ ψ ) {\displaystyle \neg \neg (\psi \lor \neg \psi )} is always constructively valid, it follows that assuming double-negation elimination for all such disjunctions implies classical logic also. Of course the formulas established here may be combined to obtain yet more variations.
For example, the disjunctive syllogism as presented generalizes to ( ( ∃ x ¬ ϕ ( x ) ) ∨ φ ) → ( ( ∀ x ϕ ( x ) ) → φ ) {\displaystyle {\Big (}{\big (}\exists x\ \neg \phi (x){\big )}\lor \varphi {\Big )}\,\,\to \,\,{\Big (}{\big (}\forall x\ \phi (x){\big )}\to \varphi {\Big )}} If some term exists at all, the antecedent here even implies ∃ x ( ϕ ( x ) → φ ) {\displaystyle \exists x{\big (}\phi (x)\to \varphi {\big )}} , which in turn itself also implies the conclusion here (this is again the very first formula mentioned in this section). The bulk of the discussion in these sections applies just as well to just minimal logic. But as for the disjunctive syllogism with general ψ {\displaystyle \psi } and in its form as a single proposition, minimal logic can at most prove ( ¬ ϕ ∨ ψ ) → ( ¬ ¬ ϕ → ψ ′ ) {\displaystyle (\neg \phi \lor \psi )\to (\neg \neg \phi \to \psi ')} where ψ ′ {\displaystyle \psi '} denotes ¬ ¬ ψ ∧ ( ψ ∨ ¬ ψ ) {\displaystyle \neg \neg \psi \land (\psi \lor \neg \psi )} . The conclusion here can only be simplified to ψ {\displaystyle \psi } using explosion. ==== Equivalences ==== The above lists also contain equivalences. The equivalence involving a conjunction and a disjunction stems from ( P ∨ Q ) → R {\displaystyle (P\lor Q)\to R} actually being stronger than P → R {\displaystyle P\to R} . Both sides of the equivalence can be understood as conjunctions of independent implications. Above, absurdity ⊥ {\displaystyle \bot } is used for R {\displaystyle R} . In functional interpretations, it corresponds to if-clause constructions. So e.g. "Not ( P {\displaystyle P} or Q {\displaystyle Q} )" is equivalent to "Not P {\displaystyle P} , and also not Q {\displaystyle Q} ". An equivalence itself is generally defined as, and then equivalent to, a conjunction ( ∧ {\displaystyle \land } ) of implications ( → {\displaystyle \to } ), as follows: ( ϕ ↔ ψ ) ↔ ( ( ϕ → ψ ) ∧ ( ψ → ϕ ) ) {\displaystyle (\phi \leftrightarrow \psi )\leftrightarrow {\big (}(\phi \to \psi )\land (\psi \to \phi ){\big )}} With it, such connectives become in turn definable from it: ( ϕ → ψ ) ↔ ( ( ϕ ∨ ψ ) ↔ ψ ) {\displaystyle (\phi \to \psi )\leftrightarrow ((\phi \lor \psi )\leftrightarrow \psi )} ( ϕ → ψ ) ↔ ( ( ϕ ∧ ψ ) ↔ ϕ ) {\displaystyle (\phi \to \psi )\leftrightarrow ((\phi \land \psi )\leftrightarrow \phi )} ( ϕ ∧ ψ ) ↔ ( ( ϕ → ψ ) ↔ ϕ ) {\displaystyle (\phi \land \psi )\leftrightarrow ((\phi \to \psi )\leftrightarrow \phi )} ( ϕ ∧ ψ ) ↔ ( ( ( ϕ ∨ ψ ) ↔ ψ ) ↔ ϕ ) {\displaystyle (\phi \land \psi )\leftrightarrow (((\phi \lor \psi )\leftrightarrow \psi )\leftrightarrow \phi )} In turn, { ∨ , ↔ , ⊥ } {\displaystyle \{\lor ,\leftrightarrow ,\bot \}} and { ∨ , ↔ , ¬ } {\displaystyle \{\lor ,\leftrightarrow ,\neg \}} are complete bases of intuitionistic connectives, for example. ==== Functionally complete connectives ==== As shown by Alexander V. Kuznetsov, either of the following connectives – the first one ternary, the second one quinary – is by itself functionally complete: either one can serve the role of a sole sufficient operator for intuitionistic propositional logic, thus forming an analog of the Sheffer stroke from classical propositional logic: ( ( P ∨ Q ) ∧ ¬ R ) ∨ ( ¬ P ∧ ( Q ↔ R ) ) {\displaystyle {\big (}(P\lor Q)\land \neg R{\big )}\lor {\big (}\neg P\land (Q\leftrightarrow R){\big )}} P → ( Q ∧ ¬ R ∧ ( S ∨ T ) ) {\displaystyle P\to {\big (}Q\land \neg R\land (S\lor T){\big )}} == Semantics == The semantics are rather more complicated than for the classical case. 
A model theory can be given by Heyting algebras or, equivalently, by Kripke semantics. In 2014, a Tarski-like model theory was proved complete by Bob Constable, but with a different notion of completeness than the classical one. Unproved statements in intuitionistic logic are not given an intermediate or third truth value (as is sometimes mistakenly asserted). One can prove that such statements have no third truth value, a result dating back to Glivenko in 1928. Instead they remain of unknown truth value, until they are either proved or disproved. Statements are disproved by deducing a contradiction from them. A consequence of this point of view is that intuitionistic logic has no interpretation as a two-valued logic, nor even as a finite-valued logic, in the familiar sense. Although intuitionistic logic retains the trivial propositions { ⊤ , ⊥ } {\displaystyle \{\top ,\bot \}} from classical logic, each proof of a propositional formula is considered a valid propositional value, thus by Heyting's notion of propositions-as-sets, propositional formulae are (potentially non-finite) sets of their proofs. === Heyting algebra semantics === In classical logic, we often discuss the truth values that a formula can take. The values are usually chosen as the members of a Boolean algebra. The meet and join operations in the Boolean algebra are identified with the ∧ and ∨ logical connectives, so that the value of a formula of the form A ∧ B is the meet of the value of A and the value of B in the Boolean algebra. Then we have the useful theorem that a formula is a valid proposition of classical logic if and only if its value is 1 for every valuation—that is, for any assignment of values to its variables. A corresponding theorem is true for intuitionistic logic, but instead of assigning each formula a value from a Boolean algebra, one uses values from a Heyting algebra, of which Boolean algebras are a special case. A formula is valid in intuitionistic logic if and only if it receives the value of the top element for any valuation on any Heyting algebra. It can be shown that to recognize valid formulas, it is sufficient to consider a single Heyting algebra whose elements are the open subsets of the real line R. In this algebra we have: Value [ ⊥ ] = ∅ Value [ ⊤ ] = R Value [ A ∧ B ] = Value [ A ] ∩ Value [ B ] Value [ A ∨ B ] = Value [ A ] ∪ Value [ B ] Value [ A → B ] = int ( Value [ A ] ∁ ∪ Value [ B ] ) {\displaystyle {\begin{aligned}{\text{Value}}[\bot ]&=\emptyset \\{\text{Value}}[\top ]&=\mathbf {R} \\{\text{Value}}[A\land B]&={\text{Value}}[A]\cap {\text{Value}}[B]\\{\text{Value}}[A\lor B]&={\text{Value}}[A]\cup {\text{Value}}[B]\\{\text{Value}}[A\to B]&={\text{int}}\left({\text{Value}}[A]^{\complement }\cup {\text{Value}}[B]\right)\end{aligned}}} where int(X) is the interior of X and X∁ its complement. The last identity concerning A → B allows us to calculate the value of ¬A: Value [ ¬ A ] = Value [ A → ⊥ ] = int ( Value [ A ] ∁ ∪ Value [ ⊥ ] ) = int ( Value [ A ] ∁ ∪ ∅ ) = int ( Value [ A ] ∁ ) {\displaystyle {\begin{aligned}{\text{Value}}[\neg A]&={\text{Value}}[A\to \bot ]\\&={\text{int}}\left({\text{Value}}[A]^{\complement }\cup {\text{Value}}[\bot ]\right)\\&={\text{int}}\left({\text{Value}}[A]^{\complement }\cup \emptyset \right)\\&={\text{int}}\left({\text{Value}}[A]^{\complement }\right)\end{aligned}}} With these assignments, intuitionistically valid formulas are precisely those that are assigned the value of the entire line.
For example, the formula ¬(A ∧ ¬A) is valid, because no matter what set X is chosen as the value of the formula A, the value of ¬(A ∧ ¬A) can be shown to be the entire line: Value [ ¬ ( A ∧ ¬ A ) ] = int ( Value [ A ∧ ¬ A ] ∁ ) Value [ ¬ B ] = int ( Value [ B ] ∁ ) = int ( ( Value [ A ] ∩ Value [ ¬ A ] ) ∁ ) = int ( ( Value [ A ] ∩ int ( Value [ A ] ∁ ) ) ∁ ) = int ( ( X ∩ int ( X ∁ ) ) ∁ ) = int ( ∅ ∁ ) int ( X ∁ ) ⊆ X ∁ = int ( R ) = R {\displaystyle {\begin{aligned}{\text{Value}}[\neg (A\land \neg A)]&={\text{int}}\left({\text{Value}}[A\land \neg A]^{\complement }\right)&&{\text{Value}}[\neg B]={\text{int}}\left({\text{Value}}[B]^{\complement }\right)\\&={\text{int}}\left(\left({\text{Value}}[A]\cap {\text{Value}}[\neg A]\right)^{\complement }\right)\\&={\text{int}}\left(\left({\text{Value}}[A]\cap {\text{int}}\left({\text{Value}}[A]^{\complement }\right)\right)^{\complement }\right)\\&={\text{int}}\left(\left(X\cap {\text{int}}\left(X^{\complement }\right)\right)^{\complement }\right)\\&={\text{int}}\left(\emptyset ^{\complement }\right)&&{\text{int}}\left(X^{\complement }\right)\subseteq X^{\complement }\\&={\text{int}}(\mathbf {R} )\\&=\mathbf {R} \end{aligned}}} So the valuation of this formula is true, and indeed the formula is valid. But the law of the excluded middle, A ∨ ¬A, can be shown to be invalid by using a specific value of the set of positive real numbers for A: Value [ A ∨ ¬ A ] = Value [ A ] ∪ Value [ ¬ A ] = Value [ A ] ∪ int ( Value [ A ] ∁ ) Value [ ¬ B ] = int ( Value [ B ] ∁ ) = { x > 0 } ∪ int ( { x > 0 } ∁ ) = { x > 0 } ∪ int ( { x ⩽ 0 } ) = { x > 0 } ∪ { x < 0 } = { x ≠ 0 } ≠ R {\displaystyle {\begin{aligned}{\text{Value}}[A\lor \neg A]&={\text{Value}}[A]\cup {\text{Value}}[\neg A]\\&={\text{Value}}[A]\cup {\text{int}}\left({\text{Value}}[A]^{\complement }\right)&&{\text{Value}}[\neg B]={\text{int}}\left({\text{Value}}[B]^{\complement }\right)\\&=\{x>0\}\cup {\text{int}}\left(\{x>0\}^{\complement }\right)\\&=\{x>0\}\cup {\text{int}}\left(\{x\leqslant 0\}\right)\\&=\{x>0\}\cup \{x<0\}\\&=\{x\neq 0\}\\&\neq \mathbf {R} \end{aligned}}} The interpretation of any intuitionistically valid formula in the infinite Heyting algebra described above results in the top element, representing true, as the valuation of the formula, regardless of what values from the algebra are assigned to the variables of the formula. Conversely, for every invalid formula, there is an assignment of values to the variables that yields a valuation that differs from the top element. No finite Heyting algebra has the second of these two properties. === Kripke semantics === Building upon his work on semantics of modal logic, Saul Kripke created another semantics for intuitionistic logic, known as Kripke semantics or relational semantics. === Tarski-like semantics === It was discovered that Tarski-like semantics for intuitionistic logic were not possible to prove complete. However, Robert Constable has shown that a weaker notion of completeness still holds for intuitionistic logic under a Tarski-like model. In this notion of completeness we are concerned not with all of the statements that are true of every model, but with the statements that are true in the same way in every model. That is, a single proof that the model judges a formula to be true must be valid for every model. In this case, there is not only a proof of completeness, but one that is valid according to intuitionistic logic. 
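The valuation clauses above can also be computed mechanically. A minimal sketch in Python (the names and the choice of space are assumptions of this sketch): instead of the open subsets of the real line, it uses the three open sets of the Sierpiński space, which form a small Heyting algebra. This finite algebra suffices to refute excluded middle for one atom, although, by the remark above, no single finite algebra can separate all invalid formulas.

# Sketch only: open sets of the Sierpinski space X = {0, 1}.
X = frozenset({0, 1})
OPENS = [frozenset(), frozenset({1}), X]

def interior(s):
    # Largest open set contained in s: the union of all open sets inside s.
    result = frozenset()
    for u in OPENS:
        if u <= s:
            result |= u
    return result

def meet(a, b): return a & b                     # Value[A ∧ B]
def join(a, b): return a | b                     # Value[A ∨ B]
def implies(a, b): return interior((X - a) | b)  # Value[A → B]
def neg(a): return implies(a, frozenset())       # Value[¬A] = Value[A → ⊥]

A = frozenset({1})                 # an atom valued at the open set {1}
print(join(A, neg(A)) == X)        # False: A ∨ ¬A falls short of the top element
print(neg(meet(A, neg(A))) == X)   # True: ¬(A ∧ ¬A) evaluates to the top element

Here the open set {1} plays the role that the set of positive real numbers plays in the computation above: its negation is the empty set, so the join of the two is not the whole space.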
== Metalogic == === Admissible rules === In intuitionistic logic or a fixed theory using the logic, the situation can occur that an implication always holds metatheoretically, but not in the language. For example, in the pure propositional calculus, if ( ¬ A ) → ( B ∨ C ) {\displaystyle (\neg A)\to (B\lor C)} is provable, then so is ( ¬ A → B ) ∨ ( ¬ A → C ) {\displaystyle (\neg A\to B)\lor (\neg A\to C)} . Another example is that ( A → B ) → ( A ∨ C ) {\displaystyle (A\to B)\to (A\lor C)} being provable always also means that so is ( ( A → B ) → A ) ∨ ( ( A → B ) → C ) {\displaystyle {\big (}(A\to B)\to A{\big )}\lor {\big (}(A\to B)\to C{\big )}} . One says the system is closed under these implications as rules, and they may be adopted. === Theories' features === Theories over constructive logics can exhibit the disjunction property. The pure intuitionistic propositional calculus does so as well. In particular, it means the excluded middle disjunction for an un-rejectable statement A {\displaystyle A} is provable exactly when A {\displaystyle A} is provable. This also means, for example, that the excluded middle disjunction for some excluded middle disjunctions is itself not provable. === Relation to other logics === ==== Paraconsistent logic ==== Intuitionistic logic is related by duality to a paraconsistent logic known as Brazilian, anti-intuitionistic or dual-intuitionistic logic. The subsystem of intuitionistic logic with the FALSE (resp. NOT-2) axiom removed is known as minimal logic and some differences have been elaborated on above. ==== Intermediate logics ==== In 1932, Kurt Gödel defined a system of logics intermediate between classical and intuitionistic logic. Indeed, any finite Heyting algebra that is not equivalent to a Boolean algebra defines (semantically) an intermediate logic. On the other hand, validity of formulae in pure intuitionistic logic is not tied to any individual Heyting algebra but relates to any and all Heyting algebras at the same time. So for example, for a schema not involving negations, consider the classically valid ( A → B ) ∨ ( B → A ) {\displaystyle (A\to B)\lor (B\to A)} . Adopting this over intuitionistic logic gives the intermediate logic called Gödel–Dummett logic. ==== Relation to classical logic ==== The system of classical logic is obtained by adding any one of the following axioms: ϕ ∨ ¬ ϕ {\displaystyle \phi \lor \neg \phi } (Law of the excluded middle) ¬ ¬ ϕ → ϕ {\displaystyle \neg \neg \phi \to \phi } (Double negation elimination) ( ¬ ϕ → ϕ ) → ϕ {\displaystyle (\neg \phi \to \phi )\to \phi } (Consequentia mirabilis, see also Peirce's law) Various reformulations, or formulations as schemata in two variables (e.g. Peirce's law), also exist. One notable one is the (reverse) law of contraposition ( ¬ ϕ → ¬ χ ) → ( χ → ϕ ) {\displaystyle (\neg \phi \to \neg \chi )\to (\chi \to \phi )} Such are detailed in the article on intermediate logics. In general, one may take as the extra axiom any classical tautology that is not valid in the two-element Kripke frame ∘ ⟶ ∘ {\displaystyle \circ {\longrightarrow }\circ } (in other words, that is not included in Smetanich's logic). ==== Many-valued logic ==== Kurt Gödel's work involving many-valued logic showed in 1932 that intuitionistic logic is not a finite-valued logic. (See the section titled Heyting algebra semantics above for an infinite-valued logic interpretation of intuitionistic logic.)
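To connect two of the axioms listed under "Relation to classical logic" above: a sketch in Lean 4 (names are assumptions of this sketch) derives double negation elimination from excluded middle for a fixed proposition, and derives excluded middle from the consequentia mirabilis schema, both by purely constructive reasoning.

-- Sketch only: relations between the classical axiom candidates.
theorem dne_of_em (ϕ : Prop) (em : ϕ ∨ ¬ϕ) : ¬¬ϕ → ϕ :=
  fun hnn => em.elim id (fun hn => absurd hn hnn)

theorem em_of_cm (cm : ∀ ψ : Prop, (¬ψ → ψ) → ψ) (ϕ : Prop) : ϕ ∨ ¬ϕ :=
  cm (ϕ ∨ ¬ϕ) (fun h => Or.inr (fun hϕ => h (Or.inl hϕ)))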
==== Modal logic ==== Any formula of the intuitionistic propositional logic (IPC) may be translated into the language of the normal modal logic S4 as follows: ⊥ ∗ = ⊥ A ∗ = ◻ A if A is prime (a positive literal) ( A ∧ B ) ∗ = A ∗ ∧ B ∗ ( A ∨ B ) ∗ = A ∗ ∨ B ∗ ( A → B ) ∗ = ◻ ( A ∗ → B ∗ ) ( ¬ A ) ∗ = ◻ ( ¬ ( A ∗ ) ) ¬ A := A → ⊥ {\displaystyle {\begin{aligned}\bot ^{*}&=\bot \\A^{*}&=\Box A&&{\text{if }}A{\text{ is prime (a positive literal)}}\\(A\wedge B)^{*}&=A^{*}\wedge B^{*}\\(A\vee B)^{*}&=A^{*}\vee B^{*}\\(A\to B)^{*}&=\Box \left(A^{*}\to B^{*}\right)\\(\neg A)^{*}&=\Box (\neg (A^{*}))&&\neg A:=A\to \bot \end{aligned}}} and it has been demonstrated that the translated formula is valid in the propositional modal logic S4 if and only if the original formula is valid in IPC. The above set of formulae is called the Gödel–McKinsey–Tarski translation. There is also an intuitionistic version of modal logic S4 called Constructive Modal Logic CS4. === Lambda calculus === There is an extended Curry–Howard isomorphism between IPC and simply typed lambda calculus. == See also == == Notes == == References == == External links == Van Atten, Mark (4 May 2022). "The Development of Intuitionistic Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. McCarty, David Charles (2009). "Intuitionism in Mathematics". In Shapiro, Stewart (ed.). The Oxford Handbook of Philosophy of Mathematics and Logic. pp. 356–386. doi:10.1093/oxfordhb/9780195325928.003.0010. ISBN 978-0-19-532592-8. Moschovakis, Joan (16 December 2022). "Intuitionistic Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. "Tableaux' method for intuitionistic logic through S4-translation". Laboratoire d'Informatique de Grenoble. tests the intuitionistic validity of propositional formulae
Wikipedia/Intuitionistic_propositional_calculus
Destructive dilemma is the name of a valid rule of inference of propositional logic. It is the inference that, if P implies Q and R implies S and either Q is false or S is false, then either P or R must be false. In sum, if two conditionals are true, but one of their consequents is false, then one of their antecedents has to be false. Destructive dilemma is the disjunctive version of modus tollens. The disjunctive version of modus ponens is the constructive dilemma. The destructive dilemma rule can be stated: P → Q , R → S , ¬ Q ∨ ¬ S ∴ ¬ P ∨ ¬ R {\displaystyle {\frac {P\to Q,R\to S,\neg Q\lor \neg S}{\therefore \neg P\lor \neg R}}} where the rule is that wherever instances of " P → Q {\displaystyle P\to Q} ", " R → S {\displaystyle R\to S} ", and " ¬ Q ∨ ¬ S {\displaystyle \neg Q\lor \neg S} " appear on lines of a proof, " ¬ P ∨ ¬ R {\displaystyle \neg P\lor \neg R} " can be placed on a subsequent line. == Formal notation == The destructive dilemma rule may be written in sequent notation: ( P → Q ) , ( R → S ) , ( ¬ Q ∨ ¬ S ) ⊢ ( ¬ P ∨ ¬ R ) {\displaystyle (P\to Q),(R\to S),(\neg Q\lor \neg S)\vdash (\neg P\lor \neg R)} where ⊢ {\displaystyle \vdash } is a metalogical symbol meaning that ¬ P ∨ ¬ R {\displaystyle \neg P\lor \neg R} is a syntactic consequence of P → Q {\displaystyle P\to Q} , R → S {\displaystyle R\to S} , and ¬ Q ∨ ¬ S {\displaystyle \neg Q\lor \neg S} in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic: ( ( ( P → Q ) ∧ ( R → S ) ) ∧ ( ¬ Q ∨ ¬ S ) ) → ( ¬ P ∨ ¬ R ) {\displaystyle (((P\to Q)\land (R\to S))\land (\neg Q\lor \neg S))\to (\neg P\lor \neg R)} where P {\displaystyle P} , Q {\displaystyle Q} , R {\displaystyle R} and S {\displaystyle S} are propositions expressed in some formal system. == Natural language example == If it rains, we will stay inside. If it is sunny, we will go for a walk. Either we will not stay inside, or we will not go for a walk, or both. Therefore, either it will not rain, or it will not be sunny, or both. == Proof == == Example proof == The validity of this argument structure can be shown by using both conditional proof (CP) and reductio ad absurdum (RAA) in the following way: == References == == Bibliography == Howard-Snyder, Frances; Howard-Snyder, Daniel; Wasserman, Ryan. The Power of Logic (4th ed.). McGraw-Hill, 2009, ISBN 978-0-07-340737-1, p. 414. == External links == http://mathworld.wolfram.com/DestructiveDilemma.html
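The rule is valid even intuitionistically and can be checked in a proof assistant. A minimal sketch in Lean 4 (the theorem name is an assumption of this sketch, and the proof is a direct case analysis rather than the conditional-proof derivation referred to above):

-- Sketch only: destructive dilemma by case analysis on ¬Q ∨ ¬S.
theorem destructive_dilemma (P Q R S : Prop)
    (hpq : P → Q) (hrs : R → S) (h : ¬Q ∨ ¬S) : ¬P ∨ ¬R :=
  h.elim (fun hnq => Or.inl (fun hp => hnq (hpq hp)))
         (fun hns => Or.inr (fun hr => hns (hrs hr)))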
Wikipedia/Destructive_dilemma
This is a list of topics around Boolean algebra and propositional logic. == Articles with a wide scope and introductions == Algebra of sets Boolean algebra (structure) Boolean algebra Field of sets Logical connective Propositional calculus == Boolean functions and connectives == Ampheck Analysis of Boolean functions Balanced Boolean function Bent function Boolean algebras canonically defined Boolean function Boolean matrix Boolean-valued function Conditioned disjunction Evasive Boolean function Exclusive or Functional completeness Logical biconditional Logical conjunction Logical disjunction Logical equality Logical implication Logical negation Logical NOR Lupanov representation Majority function Material conditional Minimal axioms for Boolean algebra Peirce arrow Read-once function Sheffer stroke Sole sufficient operator Symmetric Boolean function Symmetric difference Zhegalkin polynomial == Examples of Boolean algebras == Boolean domain Complete Boolean algebra Interior algebra Two-element Boolean algebra == Extensions of Boolean algebras == Derivative algebra (abstract algebra) Free Boolean algebra Monadic Boolean algebra == Generalizations of Boolean algebras == De Morgan algebra First-order logic Heyting algebra Lindenbaum–Tarski algebra Skew Boolean algebra == Syntax == Algebraic normal form Boolean conjunctive query Canonical form (Boolean algebra) Conjunctive normal form Disjunctive normal form Formal system == Technical applications == And-inverter graph Logic gate Boolean analysis == Theorems and specific laws == Boolean prime ideal theorem Compactness theorem Consensus theorem De Morgan's laws Duality (order theory) Laws of classical logic Peirce's law Stone's representation theorem for Boolean algebras == People == Boole, George De Morgan, Augustus Jevons, William Stanley Peirce, Charles Sanders Stone, Marshall Harvey Venn, John Zhegalkin, Ivan Ivanovich == Philosophy == Boole's syllogistic Boolean implicant Entitative graph Existential graph Laws of Form Logical graph == Visualization == Truth table Karnaugh map Venn diagram == Unclassified == Boolean function Boolean-valued function Boolean-valued model Boolean satisfiability problem Boolean differential calculus Indicator function (also called the characteristic function, but that term is used in probability theory for a different concept) Espresso heuristic logic minimizer Logical matrix Logical value Stone duality Stone space Topological Boolean algebra
Wikipedia/Boolean_algebra_topics