This is a glossary of some terms used in various branches of mathematics that are related to the fields of order, lattice, and domain theory. Note that there is a structured list of order topics available as well. Other helpful resources might be the following overview articles:
In the following, partial orders will usually just be denoted by their carrier sets. As long as the intended meaning is clear from the context, ≤ will suffice to denote the corresponding relational symbol, even without prior introduction. Furthermore, < will denote the strict order induced by ≤.
The definitions given here are consistent with those that can be found in the following standard reference books:
Specific definitions: | https://en.wikipedia.org/wiki/Projection_(order_theory) |
In set theory , a projection is one of two closely related types of functions or operations, namely:
a set-theoretic operation typified by the jth projection map, which takes an element of a Cartesian product to its jth coordinate, and
a function that sends an element to its equivalence class under a specified equivalence relation.
| https://en.wikipedia.org/wiki/Projection_(set_theory) |
A projection plane , or plane of projection , is a type of view in which graphical projections from an object intersect. [ 1 ] Projection planes are used often in descriptive geometry and graphical representation. A picture plane in perspective drawing is a type of projection plane.
With perspective drawing, the lines of sight , or projection lines , between an object and a picture plane return to a vanishing point and are not parallel. With parallel projection the lines of sight from the object to the projection plane are parallel. | https://en.wikipedia.org/wiki/Projection_plane |
In mathematics , a projectionless C*-algebra is a C*-algebra with no nontrivial projections . For a unital C*-algebra, the projections 0 and 1 are trivial, while for a non-unital C*-algebra only 0 is considered trivial. The problem of whether simple infinite-dimensional C*-algebras with this property exist was posed in 1958 by Irving Kaplansky , [ 1 ] and the first example of one was published in 1981 by Bruce Blackadar . [ 1 ] [ 2 ] A commutative C*-algebra is projectionless if and only if its spectrum is connected ; for this reason, being projectionless can be considered a noncommutative analogue of a connected space .
Let B0 be the class consisting of the C*-algebras C0(R), C0(R^2), Dn, SDn for each n ≥ 2, and let B be the class of all C*-algebras of the form
M_{k_1}(B_1) ⊕ M_{k_2}(B_2) ⊕ ... ⊕ M_{k_r}(B_r),
where r, k_1, ..., k_r are positive integers, and where B_1, ..., B_r belong to B0.
Every C*-algebra A in B is projectionless; moreover, its only projection is 0. [ 5 ]
| https://en.wikipedia.org/wiki/Projectionless_C*-algebra |
A projective cone (or just cone ) in projective geometry is the union of all lines that intersect a projective subspace R (the apex of the cone) and an arbitrary subset A (the basis) of some other subspace S , disjoint from R .
In the special case that R is a single point, S is a plane, and A is a conic section on S , the projective cone is a conical surface ; hence the name.
Let X be a projective space over some field K, and let R, S be disjoint subspaces of X. Let A be an arbitrary subset of S. Then RA, the cone with top R and basis A, is defined as the union of all the lines joining a point of R to a point of A.
| https://en.wikipedia.org/wiki/Projective_cone |
In mathematics , projective geometry is the study of geometric properties that are invariant with respect to projective transformations . This means that, compared to elementary Euclidean geometry , projective geometry has a different setting ( projective space ) and a selective set of basic geometric concepts. The basic intuitions are that projective space has more points than Euclidean space , for a given dimension, and that geometric transformations are permitted that transform the extra points (called " points at infinity ") to Euclidean points, and vice versa.
Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations ). The first issue for geometers is what kind of geometry is adequate for a novel situation. Unlike in Euclidean geometry , the concept of an angle does not apply in projective geometry, because no measure of angles is invariant with respect to projective transformations, as is seen in perspective drawing from a changing perspective. One source for projective geometry was indeed the theory of perspective. Another difference from elementary geometry is the way in which parallel lines can be said to meet in a point at infinity , once the concept is translated into projective geometry's terms. Again this notion has an intuitive basis, such as railway tracks meeting at the horizon in a perspective drawing. See Projective plane for the basics of projective geometry in two dimensions.
While the ideas were available earlier, projective geometry was mainly a development of the 19th century. This included the theory of complex projective space , the coordinates used ( homogeneous coordinates ) being complex numbers. Several major types of more abstract mathematics (including invariant theory , the Italian school of algebraic geometry , and Felix Klein 's Erlangen programme resulting in the study of the classical groups ) were motivated by projective geometry. It was also a subject with many practitioners for its own sake, as synthetic geometry . Another topic that developed from axiomatic studies of projective geometry is finite geometry .
The topic of projective geometry is itself now divided into many research subtopics, two examples of which are projective algebraic geometry (the study of projective varieties ) and projective differential geometry (the study of differential invariants of the projective transformations).
Projective geometry is an elementary non- metrical form of geometry, meaning that it does not support any concept of distance. In two dimensions it begins with the study of configurations of points and lines . That there is indeed some geometric interest in this sparse setting was first established by Desargues and others in their exploration of the principles of perspective art . [ 1 ] In higher-dimensional spaces one considers hyperplanes (which always meet), and other linear subspaces, which exhibit the principle of duality . The simplest illustration of duality is in the projective plane, where the statements "two distinct points determine a unique line" (i.e. the line through them) and "two distinct lines determine a unique point" (i.e. their point of intersection) show the same structure as propositions. Projective geometry can also be seen as a geometry of constructions with a straight-edge alone, excluding the compass constructions common in straightedge and compass constructions . [ 2 ] As such, there are no circles, no angles, no measurements, no parallels, and no concept of intermediacy (or "betweenness"). [ 3 ] It was realised that the theorems that do apply to projective geometry are simpler statements. For example, the different conic sections are all equivalent in (complex) projective geometry, and some theorems about circles can be considered as special cases of these general theorems.
During the early 19th century the work of Jean-Victor Poncelet , Lazare Carnot and others established projective geometry as an independent field of mathematics . [ 3 ] Its rigorous foundations were addressed by Karl von Staudt and perfected by Italians Giuseppe Peano , Mario Pieri , Alessandro Padoa and Gino Fano during the late 19th century. [ 4 ] Projective geometry, like affine and Euclidean geometry , can also be developed from the Erlangen program of Felix Klein; projective geometry is characterized by invariants under transformations of the projective group .
After much work on the very large number of theorems in the subject, therefore, the basics of projective geometry became understood. The incidence structure and the cross-ratio are fundamental invariants under projective transformations. Projective geometry can be modeled by the affine plane (or affine space) plus a line (hyperplane) "at infinity" and then treating that line (or hyperplane) as "ordinary". [ 5 ] An algebraic model for doing projective geometry in the style of analytic geometry is given by homogeneous coordinates. [ 6 ] [ 7 ] On the other hand, axiomatic studies revealed the existence of non-Desarguesian planes , examples to show that the axioms of incidence can be modelled (in two dimensions only) by structures not accessible to reasoning through homogeneous coordinate systems.
In a foundational sense, projective geometry and ordered geometry are elementary since they each involve a minimal set of axioms and either can be used as the foundation for affine and Euclidean geometry . [ 8 ] [ 9 ] Projective geometry is not "ordered" [ 3 ] and so it is a distinct foundation for geometry.
Projective geometry is less restrictive than either Euclidean geometry or affine geometry . It is an intrinsically non- metrical geometry, meaning that facts are independent of any metric structure. Under the projective transformations, the incidence structure and the relation of projective harmonic conjugates are preserved. A projective range is the one-dimensional foundation. Projective geometry formalizes one of the central principles of perspective art: that parallel lines meet at infinity , and therefore are drawn that way. In essence, a projective geometry may be thought of as an extension of Euclidean geometry in which the "direction" of each line is subsumed within the line as an extra "point", and in which a "horizon" of directions corresponding to coplanar lines is regarded as a "line". Thus, two parallel lines meet on a horizon line by virtue of their incorporating the same direction.
Idealized directions are referred to as points at infinity, while idealized horizons are referred to as lines at infinity. In turn, all these lines lie in the plane at infinity. However, infinity is a metric concept, so a purely projective geometry does not single out any points, lines or planes in this regard—those at infinity are treated just like any others.
Because a Euclidean geometry is contained within a projective geometry—with projective geometry having a simpler foundation—general results in Euclidean geometry may be derived in a more transparent manner, where separate but similar theorems of Euclidean geometry may be handled collectively within the framework of projective geometry. For example, parallel and nonparallel lines need not be treated as separate cases; rather an arbitrary projective plane is singled out as the ideal plane and located "at infinity" using homogeneous coordinates .
Additional properties of fundamental importance include Desargues' Theorem and the Theorem of Pappus . In projective spaces of dimension 3 or greater there is a construction that allows one to prove Desargues' Theorem . But for dimension 2, it must be separately postulated.
Using Desargues' Theorem , combined with the other axioms, it is possible to define the basic operations of arithmetic, geometrically. The resulting operations satisfy the axioms of a field – except that the commutativity of multiplication requires Pappus's hexagon theorem . As a result, the points of each line are in one-to-one correspondence with a given field, F , supplemented by an additional element, ∞, such that r ⋅ ∞ = ∞ , −∞ = ∞ , r + ∞ = ∞ , r / 0 = ∞ , r / ∞ = 0 , ∞ − r = r − ∞ = ∞ , except that 0 / 0 , ∞ / ∞ , ∞ + ∞ , ∞ − ∞ , 0 ⋅ ∞ and ∞ ⋅ 0 remain undefined.
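These rules are straightforward to encode and test. The following is a minimal Python sketch (the function names and the choice of Fraction as the base field F are illustrative assumptions, not from the text), with the undefined combinations raising errors:

```python
from fractions import Fraction

INF = object()  # the extra element "infinity" adjoined to the field F

def add(a, b):
    # r + inf = inf + r = inf; inf + inf is undefined
    if a is INF and b is INF:
        raise ArithmeticError("inf + inf is undefined")
    return INF if (a is INF or b is INF) else a + b

def sub(a, b):
    # inf - r = r - inf = inf; inf - inf is undefined
    if a is INF and b is INF:
        raise ArithmeticError("inf - inf is undefined")
    return INF if (a is INF or b is INF) else a - b

def mul(a, b):
    # r * inf = inf for r != 0; 0 * inf and inf * 0 are undefined
    if a is INF or b is INF:
        if a == 0 or b == 0:
            raise ArithmeticError("0 * inf is undefined")
        return INF
    return a * b

def div(a, b):
    # r / 0 = inf (r != 0) and r / inf = 0; 0/0 and inf/inf are undefined
    if a is INF and b is INF:
        raise ArithmeticError("inf / inf is undefined")
    if b is INF:
        return Fraction(0)
    if b == 0:
        if a == 0:
            raise ArithmeticError("0 / 0 is undefined")
        return INF
    return INF if a is INF else Fraction(a, b)

assert add(Fraction(3), INF) is INF and div(Fraction(1), 0) is INF
```

Note that in mul the test a == 0 is False when a is the INF sentinel, so the order of the checks is safe.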
Projective geometry also includes a full theory of conic sections , a subject also extensively developed in Euclidean geometry. There are advantages to being able to think of a hyperbola and an ellipse as distinguished only by the way the hyperbola lies across the line at infinity ; and that a parabola is distinguished only by being tangent to the same line. The whole family of circles can be considered as conics passing through two given points on the line at infinity — at the cost of requiring complex coordinates. Since coordinates are not "synthetic", one replaces them by fixing a line and two points on it, and considering the linear system of all conics passing through those points as the basic object of study. This method proved very attractive to talented geometers, and the topic was studied thoroughly. An example of this method is the multi-volume treatise by H. F. Baker .
The first geometrical properties of a projective nature were discovered during the 3rd century by Pappus of Alexandria . [ 3 ] Filippo Brunelleschi (1404–1472) started investigating the geometry of perspective during 1425 [ 10 ] (see Perspective (graphical) § History for a more thorough discussion of the work in the fine arts that motivated much of the development of projective geometry). Johannes Kepler (1571–1630) and Girard Desargues (1591–1661) independently developed the concept of the "point at infinity". [ 11 ] Desargues developed an alternative way of constructing perspective drawings by generalizing the use of vanishing points to include the case when these are infinitely far away. He made Euclidean geometry , where parallel lines are truly parallel, into a special case of an all-encompassing geometric system. Desargues's study on conic sections drew the attention of 16-year-old Blaise Pascal and helped him formulate Pascal's theorem . The works of Gaspard Monge at the end of 18th and beginning of 19th century were important for the subsequent development of projective geometry. The work of Desargues was ignored until Michel Chasles chanced upon a handwritten copy during 1845. Meanwhile, Jean-Victor Poncelet had published the foundational treatise on projective geometry during 1822. Poncelet examined the projective properties of objects (those invariant under central projection) and, by basing his theory on the concrete pole and polar relation with respect to a circle, established a relationship between metric and projective properties. The non-Euclidean geometries discovered soon thereafter were eventually demonstrated to have models, such as the Klein model of hyperbolic space , relating to projective geometry.
In 1855 A. F. Möbius wrote an article about permutations, now called Möbius transformations , of generalised circles in the complex plane . These transformations represent projectivities of the complex projective line . In the study of lines in space, Julius Plücker used homogeneous coordinates in his description, and the set of lines was viewed on the Klein quadric , one of the early contributions of projective geometry to a new field called algebraic geometry , an offshoot of analytic geometry with projective ideas.
Projective geometry was instrumental in the validation of speculations of Lobachevski and Bolyai concerning hyperbolic geometry by providing models for the hyperbolic plane : [ 12 ] for example, the Poincaré disc model where generalised circles perpendicular to the unit circle correspond to "hyperbolic lines" ( geodesics ), and the "translations" of this model are described by Möbius transformations that map the unit disc to itself. The distance between points is given by a Cayley–Klein metric , known to be invariant under the translations since it depends on cross-ratio , a key projective invariant. The translations are described variously as isometries in metric space theory, as linear fractional transformations formally, and as projective linear transformations of the projective linear group , in this case SU(1, 1) .
The work of Poncelet , Jakob Steiner and others was not intended to extend analytic geometry. Techniques were supposed to be synthetic : in effect projective space as now understood was to be introduced axiomatically. As a result, reformulating early work in projective geometry so that it satisfies current standards of rigor can be somewhat difficult. Even in the case of the projective plane alone, the axiomatic approach can result in models not describable via linear algebra .
This period in geometry was overtaken by research on the general algebraic curve by Clebsch , Riemann , Max Noether and others, which stretched existing techniques, and then by invariant theory . Towards the end of the century, the Italian school of algebraic geometry ( Enriques , Segre , Severi ) broke out of the traditional subject matter into an area demanding deeper techniques.
During the later part of the 19th century, the detailed study of projective geometry became less fashionable, although the literature is voluminous. Some important work was done in enumerative geometry in particular, by Schubert, that is now considered as anticipating the theory of Chern classes , taken as representing the algebraic topology of Grassmannians .
Projective geometry later proved key to Paul Dirac 's invention of quantum mechanics . At a foundational level, the discovery that quantum measurements could fail to commute had disturbed and dissuaded Heisenberg , but past study of projective planes over noncommutative rings had likely desensitized Dirac. In more advanced work, Dirac used extensive drawings in projective geometry to understand the intuitive meaning of his equations, before writing up his work in an exclusively algebraic formalism. [ 13 ]
There are many projective geometries, which may be divided into discrete and continuous: a discrete geometry comprises a set of points, which may or may not be finite in number, while a continuous geometry has infinitely many points with no gaps in between.
The only projective geometry of dimension 0 is a single point. A projective geometry of dimension 1 consists of a single line containing at least 3 points. The geometric construction of arithmetic operations cannot be performed in either of these cases. For dimension 2, there is a rich structure in virtue of the absence of Desargues' Theorem .
The smallest 2-dimensional projective geometry (that with the fewest points) is the Fano plane , which has 3 points on every line, with 7 points and 7 lines in all, having the following collinearities:
[ABC], [ADE], [AFG], [BDG], [BEF], [CDF], [CEG],
with homogeneous coordinates A = (0,0,1) , B = (0,1,1) , C = (0,1,0) , D = (1,0,1) , E = (1,0,0) , F = (1,1,1) , G = (1,1,0) , or, in affine coordinates, A = (0,0) , B = (0,1) , C = (∞) , D = (1,0) , E = (0) , F = (1,1) and G = (1) . The affine coordinates in a Desarguesian plane for the points designated to be the points at infinity (in this example: C, E and G) can be defined in several other ways.
In standard notation, a finite projective geometry is written PG( a , b ) where:
a is the projective (or geometric) dimension, and
b is one less than the number of points on a line (called the order of the geometry).
Thus, the example having only 7 points is written PG(2, 2) .
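Since PG(2, 2) is the projective plane over GF(2), it can be enumerated directly; below is a minimal Python sketch (the representation of points as 0/1 triples and the dot-product incidence test are our own choices):

```python
from itertools import product

# Points of PG(2, 2): the 7 nonzero vectors of GF(2)^3.  Each nonzero
# vector is its own projective point, since the only nonzero scalar is 1.
points = [p for p in product((0, 1), repeat=3) if any(p)]

# Lines: for each nonzero functional (a, b, c), the points satisfying
# a*x + b*y + c*z = 0 (mod 2).  By duality the functionals range over
# the same 7 triples as the points.
lines = {frozenset(p for p in points
                   if sum(f[i] * p[i] for i in range(3)) % 2 == 0)
         for f in points}

assert len(points) == 7 and len(lines) == 7
assert all(len(line) == 3 for line in lines)

# Three points are collinear exactly when they sum to zero mod 2, which
# reproduces the collinearities [ABC], [ADE], ... listed above.
A, B, C = (0, 0, 1), (0, 1, 1), (0, 1, 0)
assert all((a + b + c) % 2 == 0 for a, b, c in zip(A, B, C))
```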
The term "projective geometry" is used sometimes to indicate the generalised underlying abstract geometry, and sometimes to indicate a particular geometry of wide interest, such as the metric geometry of flat space which we analyse through the use of homogeneous coordinates , and in which Euclidean geometry may be embedded (hence its name, Extended Euclidean plane ).
The fundamental property that singles out all projective geometries is the elliptic incidence property that any two distinct lines L and M in the projective plane intersect at exactly one point P . The special case in analytic geometry of parallel lines is subsumed in the smoother form of a line at infinity on which P lies. The line at infinity is thus a line like any other in the theory: it is in no way special or distinguished. (In the later spirit of the Erlangen programme one could point to the way the group of transformations can move any line to the line at infinity ).
The parallel properties of elliptic, Euclidean and hyperbolic geometries contrast as follows. Given a line l and a point P not on the line:
Elliptic : there exists no line through P that does not meet l .
Euclidean : there exists exactly one line through P that does not meet l .
Hyperbolic : there exists more than one line through P that does not meet l .
The parallel property of elliptic geometry is the key idea that leads to the principle of projective duality, possibly the most important property that all projective geometries have in common.
In 1825, Joseph Gergonne noted the principle of duality characterizing projective plane geometry: given any theorem or definition of that geometry, substituting point for line , lie on for pass through , collinear for concurrent , intersection for join , or vice versa, results in another theorem or valid definition, the "dual" of the first. Similarly in 3 dimensions, the duality relation holds between points and planes, allowing any theorem to be transformed by swapping point and plane , is contained by and contains . More generally, for projective spaces of dimension N, there is a duality between the subspaces of dimension R and dimension N − R − 1 . For N = 2 , this specializes to the most commonly known form of duality—that between points and lines.
The duality principle was also discovered independently by Jean-Victor Poncelet .
To establish duality only requires establishing theorems which are the dual versions of the axioms for the dimension in question. Thus, for 3-dimensional spaces, one needs to show that (1*) every point lies in 3 distinct planes, (2*) every two planes intersect in a unique line and a dual version of (3*) to the effect: if the intersection of plane P and Q is coplanar with the intersection of plane R and S, then so are the respective intersections of planes P and R, Q and S (assuming planes P and S are distinct from Q and R).
In practice, the principle of duality allows us to set up a dual correspondence between two geometric constructions. The most famous of these is the polarity or reciprocity of two figures in a conic curve (in 2 dimensions) or a quadric surface (in 3 dimensions). A commonplace example is found in the reciprocation of a symmetrical polyhedron in a concentric sphere to obtain the dual polyhedron.
Another example is Brianchon's theorem , the dual of the already mentioned Pascal's theorem , and one of whose proofs simply consists of applying the principle of duality to Pascal's. Here are comparative statements of these two theorems (in both cases within the framework of the projective plane):
Pascal: If the six vertices of a hexagon lie on a conic, then the three points of intersection of opposite sides are collinear (on a line called the Pascal line).
Brianchon: If the six sides of a hexagon are tangent to a conic, then the three diagonals joining opposite vertices are concurrent (at a point called the Brianchon point).
Any given geometry may be deduced from an appropriate set of axioms . Projective geometries are characterised by the "elliptic parallel" axiom, that any two planes always meet in just one line , or in the plane, any two lines always meet in just one point . In other words, there are no such things as parallel lines or planes in projective geometry.
Many alternative sets of axioms for projective geometry have been proposed (see for example Coxeter 2003, Hilbert & Cohn-Vossen 1999, Greenberg 1980).
These axioms are based on Whitehead , "The Axioms of Projective Geometry". There are two types, points and lines, and one "incidence" relation between points and lines. The three axioms are:
G1: Every line contains at least 3 points.
G2: Every two distinct points, A and B, lie on a unique line, AB.
G3: If lines AB and CD intersect, then so do lines AC and BD (where it is assumed that A and D are distinct from B and C).
The reason each line is assumed to contain at least 3 points is to eliminate some degenerate cases. The spaces satisfying these three axioms either have at most one line, or are projective spaces of some dimension over a division ring , or are non-Desarguesian planes .
One can add further axioms restricting the dimension or the coordinate ring. For example, Coxeter's Projective Geometry , [ 14 ] references Veblen [ 15 ] in the three axioms above, together with a further 5 axioms that make the dimension 3 and the coordinate ring a commutative field of characteristic not 2.
One can pursue axiomatization by postulating a ternary relation, [ABC], to denote when three points (not all necessarily distinct) are collinear. An axiomatization may be written down in terms of this relation as well:
C0: [ABA]
C1: If A and B are two points such that [ABC] and [ABD], then [BDC]
C2: If A and B are two points, then there is a third point C such that [ABC]
C3: If A and C are two points, B and D also, with [BCE] and [ADE] but not [ABE], then there is a point F such that [ACF] and [BDF].
For two distinct points, A and B, the line AB is defined as consisting of all points C for which [ABC]. The axioms C0 and C1 then provide a formalization of G2; C2 for G1 and C3 for G3.
The concept of line generalizes to planes and higher-dimensional subspaces. A subspace, AB...XY may thus be recursively defined in terms of the subspace AB...X as that containing all the points of all lines YZ, as Z ranges over AB...X. Collinearity then generalizes to the relation of "independence". A set {A, B, ..., Z} of points is independent, [AB...Z] if {A, B, ..., Z} is a minimal generating subset for the subspace AB...Z.
The projective axioms may be supplemented by further axioms postulating limits on the dimension of the space. The minimum dimension is determined by the existence of an independent set of the required size. For the lowest dimensions, the relevant conditions may be stated in equivalent form as follows. A projective space is of:
(L1) at least dimension 0 if it has at least 1 point,
(L2) at least dimension 1 if it has at least 2 distinct points (and therefore a line),
(L3) at least dimension 2 if it has at least 3 non-collinear points (or two lines, or a line and a point not on the line),
(L4) at least dimension 3 if it has at least 4 non-coplanar points.
The maximum dimension may also be determined in a similar fashion. For the lowest dimensions, they take on the following forms. A projective space is of:
(M1) at most dimension 0 if it has no more than 1 point,
(M2) at most dimension 1 if it has no more than 1 line,
(M3) at most dimension 2 if it has no more than 1 plane,
and so on. It is a general theorem (a consequence of axiom (3)) that all coplanar lines intersect—the very principle that projective geometry was originally intended to embody. Therefore, property (M3) may be equivalently stated that all lines intersect one another.
It is generally assumed that projective spaces are of at least dimension 2. In some cases, if the focus is on projective planes, a variant of M3 may be postulated. The axioms of (Eves 1997: 111), for instance, include (1), (2), (L3) and (M3). Axiom (3) becomes vacuously true under (M3) and is therefore not needed in this context.
In incidence geometry , most authors [ 16 ] give a treatment that embraces the Fano plane PG(2, 2) as the smallest finite projective plane. An axiom system that achieves this is as follows:
(P1) Any two distinct points lie on exactly one line.
(P2) Any two distinct lines meet in exactly one point.
(P3) There exist at least four points of which no three are collinear.
Coxeter's Introduction to Geometry [ 17 ] gives a list of five axioms for a more restrictive concept of a projective plane that is attributed to Bachmann, adding Pappus's theorem to the list of axioms above (which eliminates non-Desarguesian planes ) and excluding projective planes over fields of characteristic 2 (those that do not satisfy Fano's axiom ). The restricted planes given in this manner more closely resemble the real projective plane .
Given three non- collinear points, there are three lines connecting them, but with four points, no three collinear, there are six connecting lines and three additional "diagonal points" determined by their intersections. The science of projective geometry captures this surplus determined by four points through a quaternary relation and the projectivities which preserve the complete quadrangle configuration.
An harmonic quadruple of points on a line occurs when there is a complete quadrangle two of whose diagonal points are in the first and third position of the quadruple, and the other two positions are points on the lines joining two quadrangle points through the third diagonal point. [ 18 ]
A spatial perspectivity of a projective configuration in one plane yields such a configuration in another, and this applies to the configuration of the complete quadrangle. Thus harmonic quadruples are preserved by perspectivity. If one perspectivity follows another the configurations follow along. The composition of two perspectivities is no longer a perspectivity, but a projectivity .
While corresponding points of a perspectivity all converge at a point, this convergence is not true for a projectivity that is not a perspectivity. In projective geometry the intersection of lines formed by corresponding points of a projectivity in a plane are of particular interest. The set of such intersections is called a projective conic , and in acknowledgement of the work of Jakob Steiner , it is referred to as a Steiner conic .
Suppose a projectivity is formed by two perspectivities centered on points A and B , relating x to X by an intermediary p . The projectivity is then x ⊼ X . Given the projectivity ⊼, the induced conic is the set of intersections of corresponding lines, as described above.
Given a conic C and a point P not on it, two distinct secant lines through P intersect C in four points. These four points determine a quadrangle of which P is a diagonal point. The line through the other two diagonal points is called the polar of P and P is the pole of this line. [ 19 ] Alternatively, the polar line of P is the set of projective harmonic conjugates of P on a variable secant line passing through P and C . | https://en.wikipedia.org/wiki/Projective_geometry |
In mathematics , a projective plane is a geometric structure that extends the concept of a plane . In the ordinary Euclidean plane, two lines typically intersect at a single point, but there are some pairs of lines (namely, parallel lines) that do not intersect. A projective plane can be thought of as an ordinary plane equipped with additional "points at infinity" where parallel lines intersect. Thus any two distinct lines in a projective plane intersect at exactly one point.
Renaissance artists, in developing the techniques of drawing in perspective , laid the groundwork for this mathematical topic. The archetypical example is the real projective plane , also known as the extended Euclidean plane . [ 1 ] This example, in slightly different guises, is important in algebraic geometry , topology and projective geometry where it may be denoted variously by PG(2, R ) , RP 2 , or P 2 ( R ), among other notations. There are many other projective planes, both infinite, such as the complex projective plane , and finite, such as the Fano plane .
A projective plane is a 2-dimensional projective space . Not all projective planes can be embedded in 3-dimensional projective spaces; such embeddability is a consequence of a property known as Desargues' theorem , not shared by all projective planes.
A projective plane is a rank 2 incidence structure ( P , L , I ) consisting of a set of points P , a set of lines L , and a symmetric relation I on the set P ∪ L called incidence , having the following properties: [ 2 ]
1. Given any two distinct points, there is exactly one line incident with both of them.
2. Given any two distinct lines, there is exactly one point incident with both of them.
3. There are four points such that no line is incident with more than two of them.
The second condition means that there are no parallel lines . The last condition excludes the so-called degenerate cases (see below ). The term "incidence" is used to emphasize the symmetric nature of the relationship between points and lines. Thus the expression "point P is incident with line ℓ " is used instead of either " P is on ℓ " or " ℓ passes through P ".
It follows from the definition that the number of points s + 1 incident with any given line in a projective plane is the same as the number of lines incident with any given point. The (possibly infinite) cardinal number s is called the order of the plane.
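For a finite incidence structure given explicitly, the three defining properties can be checked by brute force. Below is a minimal Python sketch (the function name and data layout are assumptions; lines are represented simply as sets of points):

```python
from itertools import combinations

def is_projective_plane(points, lines):
    # 1. Any two distinct points lie on exactly one common line.
    for p, q in combinations(points, 2):
        if sum(1 for ln in lines if p in ln and q in ln) != 1:
            return False
    # 2. Any two distinct lines meet in exactly one common point.
    for l1, l2 in combinations(lines, 2):
        if len(set(l1) & set(l2)) != 1:
            return False
    # 3. Some four points contain no three collinear points.
    for quad in combinations(points, 4):
        if all(len(set(trip) & set(ln)) < 3
               for trip in combinations(quad, 3)
               for ln in lines):
            return True
    return False
```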
To turn the ordinary Euclidean plane into a projective plane, proceed as follows:
1. To each parallel class of lines (a maximal set of mutually parallel lines) associate a single new point, distinct for each class; these new points are called points at infinity . Each line of a class is considered incident with its class's point at infinity, so formerly parallel lines now meet.
2. Add a new line, the line at infinity , considered incident with all the points at infinity and with no other points.
The extended structure is a projective plane and is called the extended Euclidean plane or the real projective plane . The process outlined above, used to obtain it, is called "projective completion" or projectivization . This plane can also be constructed by starting from R 3 viewed as a vector space, see § Vector space construction below.
The points of the Moulton plane are the points of the Euclidean plane, with coordinates in the usual way. To create the Moulton plane from the Euclidean plane some of the lines are redefined. That is, some of their point sets will be changed, but other lines will remain unchanged. Redefine all the lines with negative slopes so that they look like "bent" lines, meaning that these lines keep their points with negative x -coordinates, but the rest of their points are replaced with the points of the line with the same y -intercept but twice the slope wherever their x -coordinate is positive.
The Moulton plane has parallel classes of lines and is an affine plane . It can be projectivized, as in the previous example, to obtain the projective Moulton plane . Desargues' theorem is not a valid theorem in either the Moulton plane or the projective Moulton plane.
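The redefinition of negatively sloped lines can be captured in a few lines of Python; this is an illustrative sketch (the function name and point representation are our own):

```python
def on_moulton_line(point, slope, intercept):
    # Lines of negative slope are "bent": they keep slope m where
    # x <= 0 but run with slope 2*m (same y-intercept) where x > 0.
    x, y = point
    if slope < 0 and x > 0:
        return y == 2 * slope * x + intercept
    return y == slope * x + intercept

# The bent line y = -x (for x <= 0) passes through (2, -4), not (2, -2):
assert on_moulton_line((2, -4), -1, 0) and not on_moulton_line((2, -2), -1, 0)
```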
This example has just thirteen points and thirteen lines. We label the points P 1 , ..., P 13 and the lines m 1 , ..., m 13 . The incidence relation (which points are on which lines) can be given by the following incidence matrix . The rows are labelled by the points and the columns are labelled by the lines. A 1 in row i and column j means that the point P i is on the line m j , while a 0 (which we represent here by a blank cell for ease of reading) means that they are not incident. The matrix is in Paige–Wexler normal form.
To verify the conditions that make this a projective plane, observe that every two rows have exactly one common column in which 1s appear (every pair of distinct points are on exactly one common line) and that every two columns have exactly one common row in which 1s appear (every pair of distinct lines meet at exactly one point). Among many possibilities, the points P 1 , P 4 , P 5 , and P 8 , for example, will satisfy the third condition. This example is known as the projective plane of order three .
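The same verification can be run computationally by rebuilding the order-3 plane from GF(3), using the vector-space construction described later in the article; a minimal Python sketch (the naming and normalization convention are ours):

```python
from itertools import combinations, product

q = 3  # the projective plane of order three

def normalize(v):
    # Scale a nonzero vector over GF(q) so its first nonzero entry is 1,
    # giving one canonical representative per projective point.
    lead = next(x for x in v if x != 0)
    inv = pow(lead, q - 2, q)          # inverse of lead modulo the prime q
    return tuple((inv * x) % q for x in v)

vectors = [v for v in product(range(q), repeat=3) if any(v)]
points = sorted({normalize(v) for v in vectors})             # 13 points
lines = list({                                               # 13 lines
    frozenset(p for p in points
              if sum(a * b for a, b in zip(f, p)) % q == 0)
    for f in points})                  # functionals = points, by duality

assert len(points) == len(lines) == q * q + q + 1
# Every two points lie on exactly one common line ...
assert all(sum(1 for ln in lines if p1 in ln and p2 in ln) == 1
           for p1, p2 in combinations(points, 2))
# ... and every two lines meet in exactly one point.
assert all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))
```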
Though the line at infinity of the extended real plane may appear to have a different nature than the other lines of that projective plane, this is not the case. Another construction of the same projective plane shows that no line can be distinguished (on geometrical grounds) from any other. In this construction, each "point" of the real projective plane is the one-dimensional subspace (a geometric line) through the origin in a 3-dimensional vector space, and a "line" in the projective plane arises from a ( geometric ) plane through the origin in the 3-space. This idea can be generalized and made more precise as follows. [ 3 ]
Let K be any division ring (skewfield). Let K 3 denote the set of all triples x = ( x 0 , x 1 , x 2 ) of elements of K (a Cartesian product viewed as a vector space ). For any nonzero x in K 3 , the minimal subspace of K 3 containing x (which may be visualized as all the vectors in a line through the origin) is the subset
{ k x : k ∈ K }
of K 3 . Similarly, let x and y be linearly independent elements of K 3 , meaning that kx + my = 0 implies that k = m = 0 . The minimal subspace of K 3 containing x and y (which may be visualized as all the vectors in a plane through the origin) is the subset
{ k x + m y : k , m ∈ K }
of K 3 . This 2-dimensional subspace contains various 1-dimensional subspaces through the origin that may be obtained by fixing k and m and taking the multiples of the resulting vector. Different choices of k and m that are in the same ratio will give the same line.
The projective plane over K , denoted PG(2, K ) or K P 2 , has a set of points consisting of all the 1-dimensional subspaces in K 3 . A subset L of the points of PG(2, K ) is a line in PG(2, K ) if there exists a 2-dimensional subspace of K 3 whose set of 1-dimensional subspaces is exactly L .
Verifying that this construction produces a projective plane is usually left as a linear algebra exercise.
An alternate (algebraic) view of this construction is as follows. The points of this projective plane are the equivalence classes of the set K 3 \ {(0, 0, 0)} modulo the equivalence relation
x ~ k x , for all nonzero k in K .
Lines in the projective plane are defined exactly as above.
The coordinates ( x 0 , x 1 , x 2 ) of a point in PG(2, K ) are called homogeneous coordinates . Each triple ( x 0 , x 1 , x 2 ) represents a well-defined point in PG(2, K ), except for the triple (0, 0, 0) , which represents no point. Each point in PG(2, K ), however, is represented by many triples.
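Testing whether two coordinate triples name the same point is then a matter of finding a common nonzero scalar; a small Python sketch over a prime field (the names are assumptions):

```python
def same_point(x, y, p):
    # Two nonzero triples over GF(p) represent the same point of
    # PG(2, GF(p)) iff one is a nonzero scalar multiple of the other.
    return any(all((k * xi - yi) % p == 0 for xi, yi in zip(x, y))
               for k in range(1, p))

assert same_point((1, 2, 0), (2, 4, 0), 5)       # scalar k = 2 works mod 5
assert not same_point((1, 0, 0), (0, 1, 0), 5)   # distinct points
```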
If K is a topological space , then K P 2 inherits a topology via the product , subspace , and quotient topologies.
The real projective plane RP 2 arises when K is taken to be the real numbers , R . As a closed, non-orientable real 2- manifold , it serves as a fundamental example in topology. [ 4 ]
In this construction, consider the unit sphere centered at the origin in R 3 . Each of the R 3 lines in this construction intersects the sphere at two antipodal points. Since the R 3 line represents a point of RP 2 , we will obtain the same model of RP 2 by identifying the antipodal points of the sphere. The lines of RP 2 will be the great circles of the sphere after this identification of antipodal points. This description gives the standard model of elliptic geometry .
The complex projective plane CP 2 arises when K is taken to be the complex numbers , C . It is a closed complex 2-manifold, and hence a closed, orientable real 4-manifold. It and projective planes over other fields (known as pappian planes ) serve as fundamental examples in algebraic geometry . [ 5 ]
The quaternionic projective plane HP 2 is also of independent interest. [ 6 ]
By Wedderburn's Theorem , a finite division ring must be commutative and so be a field. Thus, the finite examples of this construction are known as "field planes". Taking K to be the finite field of q = p^n elements with prime p produces a projective plane of q^2 + q + 1 points. The field planes are usually denoted by PG(2, q ) where PG stands for projective geometry, the "2" is the dimension and q is called the order of the plane (it is one less than the number of points on any line). The Fano plane, discussed below, is denoted by PG(2, 2). The third example above is the projective plane PG(2, 3).
The Fano plane is the projective plane arising from the field of two elements. It is the smallest projective plane, with only seven points and seven lines. In the figure at right, the seven points are shown as small balls, and the seven lines are shown as six line segments and a circle. However, one could equivalently consider the balls to be the "lines" and the line segments and circle to be the "points" – this is an example of duality in the projective plane: if the lines and points are interchanged, the result is still a projective plane (see below ). A permutation of the seven points that carries collinear points (points on the same line) to collinear points is called a collineation or symmetry of the plane. The collineations of a geometry form a group under composition, and for the Fano plane this group ( PΓL(3, 2) = PGL(3, 2) ) has 168 elements.
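The 168 collineations can be counted by brute force over the 7! = 5040 permutations of the points; a minimal Python sketch (the labelling of the seven lines is one standard choice, not taken from the figure):

```python
from itertools import permutations

# One standard labelling of the Fano plane on points 0..6.
LINES = [frozenset(s) for s in
         ({0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5})]
LINE_SET = set(LINES)

def is_collineation(perm):
    # A permutation of the points is a collineation iff it maps every
    # line onto a line.
    return all(frozenset(perm[p] for p in ln) in LINE_SET for ln in LINES)

count = sum(1 for perm in permutations(range(7)) if is_collineation(perm))
assert count == 168   # the order of PGL(3, 2)
```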
The theorem of Desargues is universally valid in a projective plane if and only if the plane can be constructed from a three-dimensional vector space over a skewfield as above . [ 7 ] These planes are called Desarguesian planes , named after Girard Desargues . The real (or complex) projective plane and the projective plane of order 3 given above are examples of Desarguesian projective planes. The projective planes that can not be constructed in this manner are called non-Desarguesian planes , and the Moulton plane given above is an example of one. The PG(2, K ) notation is reserved for the Desarguesian planes. When K is a field , a very common case, they are also known as field planes and if the field is a finite field they can be called Galois planes .
A subplane of a projective plane ( P , L , I ) is a pair of subsets ( P ′, L ′) where P ′ ⊆ P , L ′ ⊆ L , and ( P ′, L ′, I ′) is itself a projective plane with respect to the restriction I ′ of the incidence relation I to ( P ′ ∪ L ′) × ( P ′ ∪ L ′).
( Bruck 1955 ) proves the following theorem. Let Π be a finite projective plane of order N with a proper subplane Π 0 of order M . Then either N = M 2 or N ≥ M 2 + M .
A subplane ( P ′, L ′) of ( P , L , I ) is a Baer subplane if every line in L \ L ′ is incident with exactly one point in P ′ and every point in P \ P ′ is incident with exactly one line of L ′.
A finite Desarguesian projective plane of order q admits Baer subplanes (all necessarily Desarguesian) if and only if q is a square; in this case the order of the Baer subplanes is √ q .
In the finite Desarguesian planes PG(2, p^n ), the subplanes have orders which are the orders of the subfields of the finite field GF( p^n ), that is, p^i where i is a divisor of n . In non-Desarguesian planes however, Bruck's theorem gives the only information about subplane orders. The case of equality in the inequality of this theorem is not known to occur. Whether or not there exists a subplane of order M in a plane of order N with M^2 + M = N is an open question. If such subplanes existed there would be projective planes of composite (non-prime power) order.
A Fano subplane is a subplane isomorphic to PG(2, 2), the unique projective plane of order 2.
If you consider a quadrangle (a set of 4 points no three collinear) in this plane, the points determine six of the lines of the plane. The remaining three points (called the diagonal points of the quadrangle) are the points where the lines that do not intersect at a point of the quadrangle meet. The seventh line consists of all the diagonal points (usually drawn as a circle or semicircle).
In finite desarguesian planes, PG(2, q ), Fano subplanes exist if and only if q is even (that is, a power of 2). The situation in non-desarguesian planes is unsettled. They could exist in any non-desarguesian plane of order greater than 6, and indeed, they have been found in all non-desarguesian planes in which they have been looked for (in both odd and even orders).
An open question, apparently due to Hanna Neumann though not published by her, is: Does every non-desarguesian plane contain a Fano subplane?
A theorem concerning Fano subplanes due to ( Gleason 1956 ) is: if every quadrangle in a finite projective plane has collinear diagonal points, then the plane is Desarguesian (of even order).
Projectivization of the Euclidean plane produced the real projective plane. The inverse operation—starting with a projective plane, remove one line and all the points incident with that line—produces an affine plane .
More formally an affine plane consists of a set of lines and a set of points , and a relation between points and lines called incidence , having the following properties:
1. Given any two distinct points, there is exactly one line incident with both of them.
2. Given any line l and any point P not incident with l , there is exactly one line incident with P that does not meet l .
3. There are four points such that no line is incident with more than two of them.
The second condition means that there are parallel lines and is known as Playfair's axiom. The expression "does not meet" in this condition is shorthand for "there does not exist a point incident with both lines".
The Euclidean plane and the Moulton plane are examples of infinite affine planes. A finite projective plane will produce a finite affine plane when one of its lines and the points on it are removed. The order of a finite affine plane is the number of points on any of its lines (this will be the same number as the order of the projective plane from which it comes). The affine planes which arise from the projective planes PG(2, q ) are denoted by AG(2, q ).
There is a projective plane of order N if and only if there is an affine plane of order N . When there is only one affine plane of order N there is only one projective plane of order N , but the converse is not true. The affine planes formed by the removal of different lines of the projective plane will be isomorphic if and only if the removed lines are in the same orbit of the collineation group of the projective plane. These statements hold for infinite projective planes as well.
The affine plane K 2 over K embeds into K P 2 via the map which sends affine (non-homogeneous) coordinates to homogeneous coordinates ,
( x 1 , x 2 ) ↦ (1, x 1 , x 2 ).
The complement of the image is the set of points of the form (0, x 1 , x 2 ) . From the point of view of the embedding just given, these points are the points at infinity . They constitute a line in K P 2 , namely the line arising from the plane
{ (0, x 1 , x 2 ) : x 1 , x 2 ∈ K }
in K 3 , called the line at infinity . The points at infinity are the "extra" points where parallel lines intersect in the construction of the extended real plane; the point (0, x 1 , x 2 ) is where all lines of slope x 2 / x 1 intersect. Consider for example the two lines
u = { ( x , 0) : x ∈ K } and y = { ( x , 1) : x ∈ K }
in the affine plane K 2 . These lines have slope 0 and do not intersect. They can be regarded as subsets of K P 2 via the embedding above, but these subsets are not lines in K P 2 . Add the point (0, 1, 0) to each subset; that is, let
ū = { (1, x , 0) : x ∈ K } ∪ { (0, 1, 0) } and ȳ = { (1, x , 1) : x ∈ K } ∪ { (0, 1, 0) }
These are lines in K P 2 ; ū arises from the plane
{ ( x 0 , x 1 , 0) : x 0 , x 1 ∈ K }
in K 3 , while ȳ arises from the plane
{ ( x 0 , x 1 , x 0 ) : x 0 , x 1 ∈ K }.
The projective lines ū and ȳ intersect at (0, 1, 0) . In fact, all lines in K 2 of slope 0, when projectivized in this manner, intersect at (0, 1, 0) in K P 2 .
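This can be made concrete with a short computation; a Python sketch (the representation, using exact rational coordinates, is our own) projectivizing two parallel affine lines and intersecting them:

```python
from fractions import Fraction

def projectivize(slope, intercept, xs):
    # Homogeneous coordinates (1, x, y) of sample affine points on
    # y = slope*x + intercept, plus the ideal point (0, 1, slope)
    # where every line of this slope meets the line at infinity.
    pts = [(Fraction(1), Fraction(x), slope * x + intercept) for x in xs]
    return pts + [(Fraction(0), Fraction(1), Fraction(slope))]

# Two parallel lines of slope 0 share exactly the ideal point (0, 1, 0).
u_bar = projectivize(Fraction(0), Fraction(0), range(3))
y_bar = projectivize(Fraction(0), Fraction(1), range(3))
assert set(u_bar) & set(y_bar) == {(Fraction(0), Fraction(1), Fraction(0))}
```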
The embedding of K 2 into K P 2 given above is not unique. Each embedding produces its own notion of points at infinity. For example, the embedding
( x 0 , x 2 ) ↦ ( x 0 , 1, x 2 )
has as its complement those points of the form ( x 0 , 0, x 2 ) , which are then regarded as points at infinity.
When an affine plane does not have the form of K 2 with K a division ring, it can still be embedded in a projective plane, but the construction used above does not work. A commonly used method for carrying out the embedding in this case involves expanding the set of affine coordinates and working in a more general "algebra".
One can construct a coordinate "ring"—a so-called planar ternary ring (not a genuine ring)—corresponding to any projective plane. A planar ternary ring need not be a field or division ring, and there are many projective planes that are not constructed from a division ring. They are called non-Desarguesian projective planes and are an active area of research. The Cayley plane ( OP 2 ), a projective plane over the octonions , is one of these because the octonions do not form a division ring. [ 8 ]
Conversely, given a planar ternary ring ( R , T ), a projective plane can be constructed (see below). The relationship is not one to one. A projective plane may be associated with several non-isomorphic planar ternary rings. The ternary operator T can be used to produce two binary operators on the set R , by:
a + b = T ( a , 1, b ), and
a ⋅ b = T ( a , b , 0).
The ternary operator is linear if T ( x , m , k ) = x ⋅ m + k . When the set of coordinates of a projective plane actually form a ring, a linear ternary operator may be defined in this way, using the ring operations on the right, to produce a planar ternary ring.
Algebraic properties of this planar ternary coordinate ring turn out to correspond to geometric incidence properties of the plane. For example, Desargues' theorem corresponds to the coordinate ring being obtained from a division ring , while Pappus's theorem corresponds to this ring being obtained from a commutative field. A projective plane satisfying Pappus's theorem universally is called a Pappian plane . Alternative , not necessarily associative , division algebras like the octonions correspond to Moufang planes .
There is no known purely geometric proof of the purely geometric statement that Desargues' theorem implies Pappus' theorem in a finite projective plane (finite Desarguesian planes are Pappian). (The converse is true in any projective plane and is provable geometrically, but finiteness is essential in this statement as there are infinite Desarguesian planes which are not Pappian.) The most common proof uses coordinates in a division ring and Wedderburn's theorem that finite division rings must be commutative; Bamberg & Penttila (2015) give a proof that uses only more "elementary" algebraic facts about division rings.
To describe a finite projective plane of order N (≥ 2) using non-homogeneous coordinates and a planar ternary ring:
Let one point be labelled (∞).
Label N points ( r ), where r = 0, ..., ( N − 1).
Label N^2 points ( r , c ), where r , c = 0, ..., ( N − 1).
On these points, construct the following lines:
One line [∞] = { (∞), (0), ..., ( N − 1) }
N lines [ c ] = { (∞), ( c , 0), ..., ( c , N − 1) }, where c = 0, ..., ( N − 1)
N^2 lines [ r , c ] = { ( r ) } ∪ { ( x , T ( x , r , c )) : x = 0, ..., ( N − 1) }, where r , c = 0, ..., ( N − 1)
For example, for N = 2 we can use the symbols {0, 1} associated with the finite field of order 2. The ternary operation defined by T ( x , m , k ) = xm + k , with the operations on the right being the multiplication and addition in the field, yields the seven points and seven lines of the Fano plane, as the sketch below illustrates:
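A minimal Python sketch of this construction (the point and line labels follow the scheme just described; the helper names are ours), which indeed reproduces a 7-point, 7-line plane for the GF(2) data:

```python
from itertools import product

def ternary_plane(R, T):
    # Points: (inf), one (slope r) per r in R, and N^2 affine points (r, c).
    inf = ("inf",)
    slopes = [("slope", r) for r in R]
    points = [inf] + slopes + [("aff", r, c) for r, c in product(R, R)]
    lines = [frozenset([inf] + slopes)]                       # the line [inf]
    for c in R:                                               # lines [c]
        lines.append(frozenset([inf] + [("aff", c, y) for y in R]))
    for m, k in product(R, R):                                # lines [m, k]
        lines.append(frozenset([("slope", m)]
                               + [("aff", x, T(x, m, k)) for x in R]))
    return points, lines

# Linear ternary operation over GF(2): T(x, m, k) = x*m + k.
pts, lns = ternary_plane([0, 1], lambda x, m, k: (x * m + k) % 2)
assert len(pts) == 7 and len(lns) == 7 and all(len(ln) == 3 for ln in lns)
```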
Degenerate planes do not fulfill the third condition in the definition of a projective plane. They are not structurally complex enough to be interesting in their own right, but from time to time they arise as special cases in general arguments. There are seven kinds of degenerate plane according to ( Albert & Sandler 1968 ). They are:
These seven cases are not independent, the fourth and fifth can be considered as special cases of the sixth, while the second and third are special cases of the fourth and fifth respectively. The special case of the seventh plane with no additional lines can be seen as an eighth plane. All the cases can therefore be organized into two families of degenerate planes as follows (this representation is for finite degenerate planes, but may be extended to infinite ones in a natural way):
1) For any number of points P 1 , ..., P n , and lines L 1 , ..., L m ,
2) For any number of points P 1 , ..., P n , and lines L 1 , ..., L n , (same number of points as lines)
A collineation of a projective plane is a bijective map of the plane to itself which maps points to points and lines to lines and preserves incidence, meaning that if σ is a bijection and point P is on line m , then P σ is on m σ . [ 9 ]
If σ is a collineation of a projective plane, a point P with P = P σ is called a fixed point of σ , and a line m with m = m σ is called a fixed line of σ . The points on a fixed line need not be fixed points, their images under σ are just constrained to lie on this line. The collection of fixed points and fixed lines of a collineation form a closed configuration , which is a system of points and lines that satisfy the first two but not necessarily the third condition in the definition of a projective plane. Thus, the fixed point and fixed line structure for any collineation either form a projective plane by themselves, or a degenerate plane . Collineations whose fixed structure forms a plane are called planar collineations .
A homography (or projective transformation ) of PG(2, K ) is a collineation of this type of projective plane which is a linear transformation of the underlying vector space. Using homogeneous coordinates they can be represented by invertible 3 × 3 matrices over K which act on the points of PG(2, K ) by y = M x^T , where x and y are points in K 3 (vectors) and M is an invertible 3 × 3 matrix over K . [ 10 ] Two matrices represent the same projective transformation if one is a constant multiple of the other. Thus the group of projective transformations is the quotient of the general linear group by the scalar matrices, called the projective linear group .
Another type of collineation of PG(2, K ) is induced by any automorphism of K ; these are called automorphic collineations . If α is an automorphism of K , then the collineation given by ( x 0 , x 1 , x 2 ) → ( x 0 ^α, x 1 ^α, x 2 ^α) is an automorphic collineation. The fundamental theorem of projective geometry says that all the collineations of PG(2, K ) are compositions of homographies and automorphic collineations. Automorphic collineations are planar collineations.
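The matrix action and its invariance under scalar rescaling are easy to check; a Python sketch over GF(5) (the matrix and point are chosen arbitrarily for illustration):

```python
def apply_homography(M, x, p):
    # Image of the homogeneous point x under the 3x3 matrix M over GF(p).
    return tuple(sum(M[i][j] * x[j] for j in range(3)) % p for i in range(3))

M = [[1, 1, 0],
     [0, 1, 0],
     [0, 0, 1]]                      # an invertible "shear" matrix over GF(5)
x = (1, 2, 3)
kx = tuple((2 * v) % 5 for v in x)   # 2*x: the same projective point as x

# Scalar multiples map to scalar multiples, so both triples land on the
# same projective point, as required for a well-defined action.
assert apply_homography(M, kx, 5) == tuple((2 * v) % 5
                                           for v in apply_homography(M, x, 5))
```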
A projective plane is defined axiomatically as an incidence structure , in terms of a set P of points, a set L of lines, and an incidence relation I that determines which points lie on which lines. As P and L are only sets one can interchange their roles and define a plane dual structure .
By interchanging the role of "points" and "lines" in
we obtain the dual structure
where I * is the converse relation of I .
In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line." is "Two lines meet at a unique point." Forming the plane dual of a statement is known as dualizing the statement.
If a statement is true in a projective plane C , then the plane dual of that statement must be true in the dual plane C *. This follows since dualizing each statement in the proof "in C " gives a statement of the proof "in C *."
In the projective plane C , it can be shown that there exist four lines, no three of which are concurrent. Dualizing this theorem and the first two axioms in the definition of a projective plane shows that the plane dual structure C * is also a projective plane, called the dual plane of C .
If C and C * are isomorphic, then C is called self-dual . The projective planes PG(2, K ) for any division ring K are self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes and some that are, such as the Hughes planes .
The Principle of plane duality says that dualizing any theorem in a self-dual projective plane C produces another theorem valid in C .
A duality is a map from a projective plane C = ( P , L , I ) to its dual plane C * = ( L , P , I *) (see above ) which preserves incidence. That is, a duality σ will map points to lines and lines to points ( P σ = L and L σ = P ) in such a way that if a point Q is on a line m (denoted by Q I m ) then Q σ I * m σ ⇔ m σ I Q σ . A duality which is an isomorphism is called a correlation . [ 11 ] If a correlation exists then the projective plane C is self-dual.
In the special case that the projective plane is of the PG(2, K ) type, with K a division ring, a duality is called a reciprocity . [ 12 ] These planes are always self-dual. By the fundamental theorem of projective geometry a reciprocity is the composition of an automorphism of K and a homography . If the automorphism involved is the identity, then the reciprocity is called a projective correlation .
A correlation of order two (an involution ) is called a polarity . If a correlation φ is not a polarity then φ 2 is a nontrivial collineation.
It can be shown that a projective plane has the same number of lines as it has points (infinite or finite). Thus, for every finite projective plane there is an integer N ≥ 2 such that the plane has
N^2 + N + 1 points,
N^2 + N + 1 lines,
N + 1 points on each line, and
N + 1 lines through each point.
The number N is called the order of the projective plane.
The projective plane of order 2 is called the Fano plane . See also the article on finite geometry .
Using the vector space construction with finite fields there exists a projective plane of order N = p^n , for each prime power p^n . In fact, for all known finite projective planes, the order N is a prime power. [ citation needed ]
The existence of finite projective planes of other orders is an open question. The only general restriction known on the order is the Bruck–Ryser–Chowla theorem that if the order N is congruent to 1 or 2 mod 4, it must be the sum of two squares. This rules out N = 6 . The next case N = 10 has been ruled out by massive computer calculations. [ 13 ] Nothing more is known; in particular, the question of whether there exists a finite projective plane of order N = 12 is still open. [ citation needed ]
Another longstanding open problem is whether there exist finite projective planes of prime order which are not finite field planes (equivalently, whether there exists a non-Desarguesian projective plane of prime order). [ citation needed ]
A projective plane of order N is a Steiner S(2, N + 1, N^2 + N + 1) system (see Steiner system ). Conversely, one can prove that all Steiner systems of this form ( λ = 2) are projective planes.
The number of automorphisms (collineations) of PG( n , k ), with k = p^m and p prime, is m ( k^{n+1} − 1)( k^{n+1} − k )( k^{n+1} − k^2 )...( k^{n+1} − k^n )/( k − 1).
The number of mutually orthogonal Latin squares of order N is at most N − 1, and N − 1 of them exist if and only if there is a projective plane of order N .
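For N a prime, the full set of N − 1 squares comes from the field itself; a minimal Python sketch (the construction L_a[i][j] = a·i + j is the classical one, while the function names are ours):

```python
def mols_from_prime(p):
    # For each nonzero a in GF(p), the square L_a[i][j] = (a*i + j) mod p
    # is Latin, and distinct choices of a give orthogonal squares.
    return [[[(a * i + j) % p for j in range(p)] for i in range(p)]
            for a in range(1, p)]

def orthogonal(L1, L2):
    # Two squares are orthogonal iff superimposing them yields every
    # ordered pair of symbols exactly once.
    n = len(L1)
    pairs = {(L1[i][j], L2[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

squares = mols_from_prime(5)
assert len(squares) == 4
assert all(orthogonal(a, b) for a in squares for b in squares if a is not b)
```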
While the classification of all projective planes is far from complete, results are known for small orders:
2 : all isomorphic to PG(2, 2)
3 : all isomorphic to PG(2, 3)
4 : all isomorphic to PG(2, 4)
5 : all isomorphic to PG(2, 5)
6 : impossible as the order of a projective plane, as shown by Tarry's proof that Euler's thirty-six officers problem has no solution
7 : all isomorphic to PG(2, 7)
8 : all isomorphic to PG(2, 8)
9 : four non-isomorphic planes: PG(2, 9), the Hughes plane, the Hall plane, and the dual of the Hall plane
10 : impossible as the order of a projective plane, as shown by extensive computer calculation
11 : at least PG(2, 11); others are not known, but possible
12 : conjectured to be impossible, but this remains open
Projective planes may be thought of as projective geometries of dimension two. [ 15 ] Higher-dimensional projective geometries can be defined in terms of incidence relations in a manner analogous to the definition of a projective plane.
The smallest projective space of dimension 3 is PG(3,2) .
These turn out to be "tamer" than the projective planes since the extra degrees of freedom permit Desargues' theorem to be proved geometrically in the higher-dimensional geometry. This means that the coordinate "ring" associated to the geometry must be a division ring (skewfield) K , and the projective geometry is isomorphic to the one constructed from the vector space K d +1 , i.e. PG( d , K ). As in the construction given earlier, the points of the d -dimensional projective space PG( d , K ) are the lines through the origin in K d +1 and a line in PG( d , K ) corresponds to a plane through the origin in K d +1 . In fact, each i -dimensional object in PG( d , K ), with i < d , is an ( i + 1) -dimensional (algebraic) vector subspace of K d +1 ("goes through the origin"). The projective spaces in turn generalize to the Grassmannian spaces .
It can be shown that if Desargues' theorem holds in a projective space of dimension greater than two, then it must also hold in all planes that are contained in that space. Since there are projective planes in which Desargues' theorem fails ( non-Desarguesian planes ), these planes can not be embedded in a higher-dimensional projective space. Only the planes from the vector space construction PG(2, K ) can appear in projective spaces of higher dimension. Some disciplines in mathematics restrict the meaning of projective plane to only this type of projective plane since otherwise general statements about projective spaces would always have to mention the exceptions when the geometric dimension is two. [ 16 ] | https://en.wikipedia.org/wiki/Projective_plane |
In mathematics , the concept of a projective space originated from the visual effect of perspective , where parallel lines seem to meet at infinity . A projective space may thus be viewed as the extension of a Euclidean space , or, more generally, an affine space with points at infinity , in such a way that there is one point at infinity of each direction of parallel lines .
This definition of a projective space has the disadvantage of not being isotropic , having two different sorts of points, which must be considered separately in proofs. Therefore, other definitions are generally preferred. There are two classes of definitions. In synthetic geometry , point and line are primitive entities that are related by the incidence relation "a point is on a line" or "a line passes through a point", which is subject to the axioms of projective geometry . For some such set of axioms, the projective spaces that are defined have been shown to be equivalent to those resulting from the following definition, which is more often encountered in modern textbooks.
Using linear algebra , a projective space of dimension n is defined as the set of the vector lines (that is, vector subspaces of dimension one) in a vector space V of dimension n + 1 . Equivalently, it is the quotient set of V \ {0} by the equivalence relation "being on the same vector line". As a vector line intersects the unit sphere of V in two antipodal points , projective spaces can be equivalently defined as spheres in which antipodal points are identified. A projective space of dimension 1 is a projective line , and a projective space of dimension 2 is a projective plane .
Projective spaces are widely used in geometry , allowing for simpler statements and simpler proofs. For example, in affine geometry , two distinct lines in a plane intersect in at most one point, while, in projective geometry , they intersect in exactly one point. Also, there is only one class of conic sections , which can be distinguished only by their intersections with the line at infinity: two intersection points for hyperbolas ; one for the parabola , which is tangent to the line at infinity; and no real intersection point of ellipses .
In topology , and more specifically in manifold theory , projective spaces play a fundamental role, being typical examples of non-orientable manifolds .
As outlined above, projective spaces were introduced for formalizing statements like "two coplanar lines intersect in exactly one point, and this point is at infinity if the lines are parallel ". Such statements are suggested by the study of perspective , which may be considered as a central projection of the three dimensional space onto a plane (see Pinhole camera model ). More precisely, the entrance pupil of a camera or of the eye of an observer is the center of projection , and the image is formed on the projection plane .
Mathematically, the center of projection is a point O of the space (the intersection of the axes in the figure); the projection plane ( P 2 , in blue on the figure) is a plane not passing through O , which is often chosen to be the plane of equation z = 1 , when Cartesian coordinates are considered. Then, the central projection maps a point P to the intersection of the line OP with the projection plane. Such an intersection exists if and only if the point P does not belong to the plane ( P 1 , in green on the figure) that passes through O and is parallel to P 2 .
It follows that the lines passing through O split into two disjoint subsets: the lines that are not contained in P 1 , which are in one to one correspondence with the points of P 2 , and those contained in P 1 , which are in one to one correspondence with the directions of parallel lines in P 2 . This suggests defining the points (called here projective points for clarity) of the projective plane as the lines passing through O . A projective line in this plane consists of all projective points (which are lines) contained in a plane passing through O . As the intersection of two planes passing through O is a line passing through O , the intersection of two distinct projective lines consists of a single projective point. The plane P 1 defines a projective line which is called the line at infinity of P 2 . By identifying each point of P 2 with the corresponding projective point, one can thus say that the projective plane is the disjoint union of P 2 and the (projective) line at infinity.
As an affine space with a distinguished point O may be identified with its associated vector space (see Affine space § Vector spaces as affine spaces ), the preceding construction is generally done by starting from a vector space and is called projectivization . Also, the construction can be done by starting with a vector space of any positive dimension.
So, a projective space of dimension n can be defined as the set of vector lines (vector subspaces of dimension one) in a vector space of dimension n + 1 . A projective space can also be defined as the elements of any set that is in natural correspondence with this set of vector lines.
This set can be the set of equivalence classes under the equivalence relation between vectors defined by "one vector is the product of the other by a nonzero scalar". In other words, this amounts to defining a projective space as the set of vector lines in which the zero vector has been removed.
A third equivalent definition is to define a projective space of dimension n as the set of pairs of antipodal points in a sphere of dimension n (in a space of dimension n + 1 ).
Given a vector space V over a field K , the projective space P ( V ) is the set of equivalence classes of V \ {0} under the equivalence relation ~ defined by x ~ y if there is a nonzero element λ of K such that x = λy . If V is a topological vector space , the quotient space P ( V ) is a topological space , endowed with the quotient topology of the subspace topology of V \ {0} . This is the case when K is the field R of the real numbers or the field C of the complex numbers . If V is finite dimensional, the dimension of P ( V ) is the dimension of V minus one.
In the common case where V = K n +1 , the projective space P ( V ) is denoted Pₙ( K ) (as well as K Pⁿ or Pⁿ( K ) , although this notation may be confused with exponentiation). The space Pₙ( K ) is often called the projective space of dimension n over K , or the projective n -space , since all projective spaces of dimension n are isomorphic to it (because every K vector space of dimension n + 1 is isomorphic to K n +1 ).
The elements of a projective space P ( V ) are commonly called points . If a basis of V has been chosen, and, in particular if V = K n +1 , the projective coordinates of a point P are the coordinates on the basis of any element of the corresponding equivalence class. These coordinates are commonly denoted [ x 0 : ... : x n ] , the colons and the brackets being used for distinguishing from usual coordinates, and emphasizing that this is an equivalence class, which is defined up to multiplication by a nonzero constant. That is, if [ x 0 : ... : x n ] are projective coordinates of a point, then [ λx 0 : ... : λx n ] are also projective coordinates of the same point, for any nonzero λ in K . Also, the above definition implies that [ x 0 : ... : x n ] are projective coordinates of a point if and only if at least one of the coordinates is nonzero.
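As a concrete check of this scale-invariance (a hand-rolled sketch; the function name and tolerance handling are mine, not the article's), two real coordinate tuples represent the same projective point exactly when one is a nonzero scalar multiple of the other:

```python
def same_projective_point(x, y, tol=1e-9):
    """True iff y = lam * x for some nonzero scalar lam.
    Inputs are assumed to be valid projective coordinates (not all zero)."""
    i = max(range(len(x)), key=lambda k: abs(x[k]))   # a nonzero slot of x
    if abs(y[i]) <= tol:
        return False                                  # y vanishes where x does not
    lam = y[i] / x[i]
    return all(abs(lam * a - b) <= tol for a, b in zip(x, y))

assert same_projective_point([1.0, 2.0, 3.0], [-2.0, -4.0, -6.0])   # lam = -2
assert not same_projective_point([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```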
If K is the field of real or complex numbers, a projective space is called a real projective space or a complex projective space , respectively. If n is one or two, a projective space of dimension n is called a projective line or a projective plane , respectively. The complex projective line is also called the Riemann sphere .
All these definitions extend naturally to the case where K is a division ring ; see, for example, Quaternionic projective space . The notation PG( n , K ) is sometimes used for P n ( K ) . [ 1 ] If K is a finite field with q elements, P n ( K ) is often denoted PG( n , q ) (see PG(3,2) ). [ a ]
Let P ( V ) be a projective space, where V is a vector space over a field K , and p : V → P ( V ) {\displaystyle p:V\to \mathbf {P} (V)} be the canonical map that maps a nonzero vector v to its equivalence class, which is the vector line containing v with the zero vector removed.
Every linear subspace W of V is a union of lines. It follows that p ( W ) is a projective space, which can be identified with P ( W ) .
A projective subspace is thus a projective space that is obtained by restricting to a linear subspace the equivalence relation that defines P ( V ) .
If p ( v ) and p ( w ) are two different points of P ( V ) , the vectors v and w are linearly independent . It follows that: there is exactly one projective line passing through two different points of P ( V ) , and a subset of P ( V ) is a projective subspace if and only if, given any two different points in it, it contains the whole projective line passing through these points.
In synthetic geometry , where projective lines are primitive objects, the first property is an axiom, and the second one is the definition of a projective subspace.
Every intersection of projective subspaces is a projective subspace. It follows that for every subset S of a projective space, there is a smallest projective subspace containing S , the intersection of all projective subspaces containing S . This projective subspace is called the projective span of S , and S is a spanning set for it.
A set S of points is projectively independent if its span is not the span of any proper subset of S . If S is a spanning set of a projective space P , then there is a subset of S that spans P and is projectively independent (this results from the similar theorem for vector spaces). If the dimension of P is n , such an independent spanning set has n + 1 elements.
Contrary to the case of vector spaces and affine spaces , an independent spanning set does not suffice for defining coordinates. One more point is needed; see the next section.
A projective frame or projective basis is an ordered set of points in a projective space that allows defining coordinates. [ 2 ] More precisely, in an n -dimensional projective space, a projective frame is a tuple of n + 2 points such that any n + 1 of them are independent; that is, they are not contained in a hyperplane .
If V is an ( n + 1) -dimensional vector space, and p is the canonical projection from V to P ( V ) , then ( p ( e 0 ), ..., p ( e n +1 )) is a projective frame if and only if ( e 0 , ..., e n ) is a basis of V and the coefficients of e n +1 on this basis are all nonzero. By rescaling the first n + 1 vectors, any frame can be rewritten as ( p ( e ′ 0 ), ..., p ( e ′ n +1 )) such that e ′ n +1 = e ′ 0 + ... + e ′ n ; this representation is unique up to the multiplication of all e ′ i with a common nonzero factor.
The projective coordinates or homogeneous coordinates of a point p ( v ) on a frame ( p ( e 0 ), ..., p ( e n +1 )) with e n +1 = e 0 + ... + e n are the coordinates of v on the basis ( e 0 , ..., e n ) . They are only defined up to scaling with a common nonzero factor.
The canonical frame of the projective space P n ( K ) consists of images by p of the elements of the canonical basis of K n +1 (that is, the tuples with only one nonzero entry, equal to 1), and the image by p of their sum.
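The rescaling argument above can be carried out numerically. Below is a small numpy sketch with made-up data (not anything from the article): it recovers the scales as the coordinates of the last frame vector on the basis, and checks that the rescaled basis vectors sum to it.

```python
import numpy as np

# Frame normalization sketch: columns of E are representatives e_0, e_1, e_2
# of a projective frame in P^2(R); e_last is e_3 with all coefficients nonzero.
rng = np.random.default_rng(0)
E = rng.standard_normal((3, 3))              # a (generically invertible) basis of R^3
e_last = E @ np.array([2.0, -1.0, 3.0])      # e_3, all coefficients nonzero by construction

coeffs = np.linalg.solve(E, e_last)          # coordinates of e_3 on the basis
E_prime = E * coeffs                         # rescale column i by coeffs[i]
assert np.allclose(E_prime.sum(axis=1), e_last)   # e'_0 + e'_1 + e'_2 = e_3
```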
In mathematics , projective geometry is the study of geometric properties that are invariant with respect to projective transformations . This means that, compared to elementary Euclidean geometry , projective geometry has a different setting ( projective space ) and a selective set of basic geometric concepts. The basic intuitions are that projective space has more points than Euclidean space , for a given dimension, and that geometric transformations are permitted that transform the extra points (called " points at infinity ") to Euclidean points, and vice versa.
Properties meaningful for projective geometry are respected by this new idea of transformation, which is more radical in its effects than can be expressed by a transformation matrix and translations (the affine transformations ). The first issue for geometers is what kind of geometry is adequate for a novel situation. Unlike in Euclidean geometry , the concept of an angle does not apply in projective geometry, because no measure of angles is invariant with respect to projective transformations, as is seen in perspective drawing from a changing perspective. One source for projective geometry was indeed the theory of perspective. Another difference from elementary geometry is the way in which parallel lines can be said to meet in a point at infinity , once the concept is translated into projective geometry's terms. Again this notion has an intuitive basis, such as railway tracks meeting at the horizon in a perspective drawing. See Projective plane for the basics of projective geometry in two dimensions.
While the ideas were available earlier, projective geometry was mainly a development of the 19th century. This included the theory of complex projective space , the coordinates used ( homogeneous coordinates ) being complex numbers. Several major types of more abstract mathematics (including invariant theory , the Italian school of algebraic geometry , and Felix Klein 's Erlangen programme resulting in the study of the classical groups ) were motivated by projective geometry. It was also a subject with many practitioners for its own sake, as synthetic geometry . Another topic that developed from axiomatic studies of projective geometry is finite geometry .
In projective geometry , a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. [ 3 ] It is a bijection that maps lines to lines, and thus a collineation . In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that this is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation.
Historically, homographies (and projective spaces) have been introduced to study perspective and projections in Euclidean geometry , and the term homography , which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity . The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field ( incidence geometry , see also synthetic geometry ); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations".
A projective space is a topological space , as endowed with the quotient topology of the topology of a finite dimensional real vector space.
Let S be the unit sphere in a normed vector space V , and consider the function π : S → P ( V ) {\displaystyle \pi :S\to \mathbf {P} (V)} that maps a point of S to the vector line passing through it. This function is continuous and surjective. The inverse image of every point of P ( V ) consists of two antipodal points . As spheres are compact spaces , it follows that the projective space, being the continuous image of a compact space, is compact.
For every point P of S , the restriction of π to a neighborhood of P is a homeomorphism onto its image, provided that the neighborhood is small enough to contain no pair of antipodal points. This shows that a projective space is a manifold. A simple atlas can be provided, as follows.
As soon as a basis has been chosen for V , any vector can be identified with its coordinates on the basis, and any point of P ( V ) may be identified with its homogeneous coordinates . For i = 0, ..., n , the set U i = { [ x 0 : ⋯ : x n ] , x i ≠ 0 } {\displaystyle U_{i}=\{[x_{0}:\cdots :x_{n}],x_{i}\neq 0\}} is an open subset of P ( V ) , and P ( V ) = ⋃ i = 0 n U i {\displaystyle \mathbf {P} (V)=\bigcup _{i=0}^{n}U_{i}} since every point of P ( V ) has at least one nonzero coordinate.
To each U i is associated a chart , which is the homeomorphism φ i : R n → U i ( y 0 , … , y i ^ , … , y n ) ↦ [ y 0 : ⋯ : y i − 1 : 1 : y i + 1 : ⋯ : y n ] , {\displaystyle {\begin{aligned}\varphi _{i}:\mathbb {R} ^{n}&\to U_{i}\\(y_{0},\dots ,{\widehat {y_{i}}},\dots ,y_{n})&\mapsto [y_{0}:\cdots :y_{i-1}:1:y_{i+1}:\cdots :y_{n}],\end{aligned}}} such that φ i − 1 ( [ x 0 : ⋯ : x n ] ) = ( x 0 x i , … , x i x i ^ , … , x n x i ) , {\displaystyle \varphi _{i}^{-1}\left([x_{0}:\cdots :x_{n}]\right)=\left({\frac {x_{0}}{x_{i}}},\dots ,{\widehat {\frac {x_{i}}{x_{i}}}},\dots ,{\frac {x_{n}}{x_{i}}}\right),} where the hat means that the corresponding term is missing.
These charts form an atlas , and, as the transition maps are analytic functions , it follows that projective spaces are analytic manifolds .
For example, in the case of n = 1 , that is of a projective line, there are only two U i , which can each be identified with a copy of the real line . On both charts, the intersection of the two charts is the set of nonzero real numbers, and the transition map is x ↦ 1 x {\displaystyle x\mapsto {\frac {1}{x}}} in both directions. The image represents the projective line as a circle where antipodal points are identified, and shows the two homeomorphisms of a real line to the projective line; as antipodal points are identified, the image of each line is represented as an open half circle, which can be identified with the projective line with a single point removed.
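A minimal sketch of these charts (the function names are mine): φ i embeds Rⁿ by inserting a 1 in slot i , and its inverse divides by the i -th homogeneous coordinate and drops it. For n = 1 the composition of one chart with the inverse of the other is exactly the transition map x ↦ 1/ x .

```python
def chart(y, i):
    """phi_i: affine coordinates in R^n -> homogeneous coordinates in U_i."""
    return y[:i] + [1.0] + y[i:]

def chart_inverse(x, i):
    """phi_i^{-1}: homogeneous coordinates with x[i] != 0 -> R^n."""
    assert x[i] != 0
    return [xj / x[i] for j, xj in enumerate(x) if j != i]

pt = [2.0, 4.0, -6.0]                                       # a point of P^2(R)
assert chart(chart_inverse(pt, 0), 0) == [1.0, 2.0, -3.0]   # same point, rescaled

# n = 1: the transition between the two charts of the projective line is x -> 1/x
x = 4.0
assert chart_inverse(chart([x], 0), 1) == [1.0 / x]
```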
Real projective spaces have a simple CW complex structure, as P n ( R ) can be obtained from P n −1 ( R ) by attaching an n -cell with the quotient projection S n −1 → P n −1 ( R ) as the attaching map.
Originally, algebraic geometry was the study of common zeros of sets of multivariate polynomials . These common zeros, called algebraic varieties , belong to an affine space . It soon appeared that, in the case of real coefficients, one must consider all the complex zeros to obtain accurate results. For example, the fundamental theorem of algebra asserts that a univariate square-free polynomial of degree n has exactly n complex roots. In the multivariate case, the consideration of complex zeros is also needed, but not sufficient: one must also consider zeros at infinity . For example, Bézout's theorem asserts that the intersection of two plane algebraic curves of respective degrees d and e consists of exactly de points if one considers complex points in the projective plane, and if one counts the points with their multiplicity. [ b ] Another example is the genus–degree formula that allows computing the genus of a plane algebraic curve from its singularities in the complex projective plane .
So a projective variety is the set of points in a projective space, whose homogeneous coordinates are common zeros of a set of homogeneous polynomials . [ c ]
Any affine variety can be completed , in a unique way, into a projective variety by adding its points at infinity , which consists of homogenizing the defining polynomials, and removing the components that are contained in the hyperplane at infinity, by saturating with respect to the homogenizing variable.
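As a small illustration of the homogenization step only (the representation and names below are mine, and the saturation step is not shown): each monomial is padded with a new variable z so that all monomials reach the maximal total degree.

```python
def homogenize(poly):
    """poly: dict mapping exponent tuples (over x, y, ...) to coefficients.
    Returns the homogenization with one extra variable z appended."""
    d = max(sum(e) for e in poly)            # total degree of the polynomial
    return {e + (d - sum(e),): c for e, c in poly.items()}

# x^2 + y - 1 (the parabola y = 1 - x^2)  ->  x^2 + y*z - z^2
affine = {(2, 0): 1, (0, 1): 1, (0, 0): -1}
assert homogenize(affine) == {(2, 0, 0): 1, (0, 1, 1): 1, (0, 0, 2): -1}
```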
An important property of projective spaces and projective varieties is that the image of a projective variety under a morphism of algebraic varieties is closed in the Zariski topology (that is, it is an algebraic set ). This is a generalization to every ground field of the compactness of the real and complex projective spaces.
A projective space is itself a projective variety, being the set of zeros of the zero polynomial.
Scheme theory, introduced by Alexander Grothendieck during the second half of the 20th century, allows defining a generalization of algebraic varieties, called schemes , by gluing together smaller pieces called affine schemes , much as manifolds can be built by gluing together open sets of R n . The Proj construction is the construction of the scheme of a projective space, and, more generally, of any projective variety, by gluing together affine schemes. In the case of projective spaces, one can take for these affine schemes the affine schemes associated to the charts (affine spaces) of the above description of a projective space as a manifold.
In synthetic geometry , a projective space S can be defined axiomatically as a set P (the set of points), together with a set L of subsets of P (the set of lines), satisfying these axioms: [ 4 ] each two distinct points p and q are in exactly one line; ( Veblen's axiom ) if a , b , c , d are distinct points and the lines through ab and cd meet, then so do the lines through ac and bd ; and any line has at least 3 points on it.
The last axiom eliminates reducible cases that can be written as a disjoint union of projective spaces together with 2-point lines joining any two points in distinct projective spaces. More abstractly, it can be defined as an incidence structure ( P , L , I ) consisting of a set P of points, a set L of lines, and an incidence relation I that states which points lie on which lines.
The structures defined by these axioms are more general than those obtained from the vector space construction given above. If the (projective) dimension is at least three then, by the Veblen–Young theorem , there is no difference. However, for dimension two, there are examples that satisfy these axioms that cannot be constructed from vector spaces (or even modules over division rings). These examples do not satisfy the theorem of Desargues and are known as non-Desarguesian planes . In dimension one, any set with at least three elements satisfies the axioms, so it is usual to assume additional structure for projective lines defined axiomatically. [ 5 ]
It is possible to avoid the troublesome cases in low dimensions by adding or modifying axioms that define a projective space. Coxeter (1969 , p. 231) gives such an extension due to Bachmann. [ 6 ] To ensure that the dimension is at least two, replace the three point per line axiom above by:
To avoid the non-Desarguesian planes, include Pappus's theorem as an axiom; [ e ]
And, to ensure that the vector space is defined over a field that does not have even characteristic include Fano's axiom ; [ f ]
A subspace of the projective space is a subset X , such that any line containing two points of X is a subset of X (that is, completely contained in X ). The full space and the empty space are always subspaces.
The geometric dimension of the space is said to be n if that is the largest number for which there is a strictly ascending chain of subspaces of this form: ∅ = X −1 ⊂ X 0 ⊂ ⋯ ⊂ X n = P . {\displaystyle \varnothing =X_{-1}\subset X_{0}\subset \cdots \subset X_{n}=P.}
A subspace X i in such a chain is said to have (geometric) dimension i . Subspaces of dimension 0 are called points , those of dimension 1 are called lines and so on. If the full space has dimension n then any subspace of dimension n − 1 is called a hyperplane .
Projective spaces admit an equivalent formulation in terms of lattice theory. There is a bijective correspondence between projective spaces and geomodular lattices, namely, subdirectly irreducible , compactly generated , complemented , modular lattices . [ 7 ]
A finite projective space is a projective space where P is a finite set of points. In any finite projective space, each line contains the same number of points and the order of the space is defined as one less than this common number. For finite projective spaces of dimension at least three, Wedderburn's theorem implies that the division ring over which the projective space is defined must be a finite field , GF( q ) , whose order (that is, number of elements) is q (a prime power). A finite projective space defined over such a finite field has q + 1 points on a line, so the two concepts of order coincide. Notationally, PG( n , GF( q )) is usually written as PG( n , q ) .
All finite fields of the same order are isomorphic, so, up to isomorphism, there is only one finite projective space for each dimension greater than or equal to three, over a given finite field. However, in dimension two there are non-Desarguesian planes. Up to isomorphism there are 1, 1, 1, 1, 0, 1, 1, 4, 0 finite projective planes of orders 2, 3, 4, ..., 10, respectively. The numbers beyond this are very difficult to calculate and are not determined except for some zero values due to the Bruck–Ryser theorem .
The smallest projective plane is the Fano plane , PG(2, 2) with 7 points and 7 lines. The smallest 3-dimensional projective space is PG(3, 2) , with 15 points, 35 lines and 15 planes.
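These counts follow from the Gaussian binomial coefficient, which counts the k -dimensional subspaces of an n -dimensional space over GF( q ); the sketch below (a standard formula, not taken from the article) reproduces the 15/35/15 figures for PG(3, 2) .

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of GF(q)^n (always an exact integer)."""
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1
        den *= q**(k - i) - 1
    return num // den

n, q = 4, 2                                    # PG(3, 2) comes from GF(2)^4
print(gaussian_binomial(n, 1, q))              # 15 points
print(gaussian_binomial(n, 2, q))              # 35 lines
print(gaussian_binomial(n, 3, q))              # 15 planes
```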
Injective linear maps T ∈ L ( V , W ) between two vector spaces V and W over the same field K induce mappings of the corresponding projective spaces P ( V ) → P ( W ) via: [ v ] ↦ [ T ( v )] , {\displaystyle [v]\mapsto [T(v)],}
where v is a non-zero element of V and [...] denotes the equivalence classes of a vector under the defining identification of the respective projective spaces. Since members of the equivalence class differ by a scalar factor, and linear maps preserve scalar factors, this induced map is well-defined . (If T is not injective, it has a null space larger than {0} ; in this case the meaning of the class of T ( v ) is problematic if v is non-zero and in the null space. In this case one obtains a so-called rational map , see also Birational geometry .)
Two linear maps S and T in L ( V , W ) induce the same map between P ( V ) and P ( W ) if and only if they differ by a scalar multiple, that is if T = λS for some λ ≠ 0 . Thus if one identifies the scalar multiples of the identity map with the underlying field K , the set of K -linear morphisms from P ( V ) to P ( W ) is simply P ( L ( V , W )) .
The automorphisms P ( V ) → P ( V ) can be described more concretely. (We deal only with automorphisms preserving the base field K ). Using the notion of sheaves generated by global sections , it can be shown that any algebraic (not necessarily linear) automorphism must be linear, i.e., coming from a (linear) automorphism of the vector space V . The latter form the group GL( V ) . By identifying maps that differ by a scalar, one concludes that
Aut( P ( V )) = GL( V )/ K × =: PGL( V ) , {\displaystyle \mathrm {Aut} (\mathbf {P} (V))=\mathrm {GL} (V)/K^{\times }=:\mathrm {PGL} (V),} the quotient group of GL( V ) modulo the matrices that are scalar multiples of the identity. (These matrices form the center of Aut( V ) .) The groups PGL are called projective linear groups . The automorphisms of the complex projective line P 1 ( C ) are called Möbius transformations .
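A quick numerical check of the identification above (numpy with random data; a sketch, not anything from the article): a matrix T and any scalar multiple λT induce the same map on projective points, because homogeneous coordinates are defined only up to a nonzero factor.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))              # a (generically invertible) linear map on R^3
v = rng.standard_normal(3)                   # a representative of a point of P^2(R)

def normalize(w):
    return w / w[np.argmax(np.abs(w))]       # fix one scale for the representative

# T and 2.5*T send [v] to the same projective point
assert np.allclose(normalize(T @ v), normalize((2.5 * T) @ v))
```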
When the construction above is applied to the dual space V ∗ rather than V , one obtains the dual projective space, which can be canonically identified with the space of hyperplanes through the origin of V . That is, if V is n -dimensional, then P ( V ∗ ) is the Grassmannian of n − 1 planes in V .
In algebraic geometry, this construction allows for greater flexibility in the construction of projective bundles. One would like to be able to associate a projective space to every quasi-coherent sheaf E over a scheme Y , not just the locally free ones. [ clarification needed ] See EGA II , Chap. II, par. 4 for more details.
Severi–Brauer varieties are algebraic varieties over a field K , which become isomorphic to projective spaces after an extension of the base field K .
Another generalization of projective spaces are weighted projective spaces ; these are themselves special cases of toric varieties . [ 8 ] | https://en.wikipedia.org/wiki/Projective_space |
In real analysis , the projectively extended real line (also called the one-point compactification of the real line ) is the extension of the set of the real numbers , R {\displaystyle \mathbb {R} } , by a point denoted ∞ . [ 1 ] It is thus the set R ∪ { ∞ } {\displaystyle \mathbb {R} \cup \{\infty \}} with the standard arithmetic operations extended where possible, [ 1 ] and is sometimes denoted by R ∗ {\displaystyle \mathbb {R} ^{*}} [ 2 ] or R ^ . {\displaystyle {\widehat {\mathbb {R} }}.} The added point is called the point at infinity , because it is considered as a neighbour of both ends of the real line. More precisely, the point at infinity is the limit of every sequence of real numbers whose absolute values are increasing and unbounded .
The projectively extended real line may be identified with a real projective line in which three points have been assigned the specific values 0 , 1 and ∞ . The projectively extended real number line is distinct from the affinely extended real number line , in which +∞ and −∞ are distinct.
Unlike most mathematical models of numbers, this structure allows division by zero :
a / 0 = ∞ {\displaystyle a/0=\infty } for nonzero a . In particular, 1 / 0 = ∞ and 1 / ∞ = 0 , making the reciprocal function 1 / x a total function in this structure. [ 1 ] The structure, however, is not a field , and none of the binary arithmetic operations are total – for example, 0 ⋅ ∞ is undefined, even though the reciprocal is total. [ 1 ] It has usable interpretations, however – for example, in geometry, the slope of a vertical line is ∞ . [ 1 ]
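A minimal sketch of this arithmetic (the names and the use of None for "undefined" are my conventions, not the article's), using Python's float('inf') as a stand-in for the single unsigned point ∞:

```python
INF = float('inf')          # represents the one unsigned infinity of the projective line

def proj_div(a, b):
    """Division on R-hat; returns None where the quotient is undefined."""
    if a == INF and b == INF:
        return None                           # ∞/∞ is undefined
    if b == 0:
        return None if a == 0 else INF        # a/0 = ∞ for nonzero a; 0/0 undefined
    if b == INF:
        return 0.0                            # a/∞ = 0 for a != ∞
    return a / b

assert proj_div(1.0, 0.0) == INF              # 1/0 = ∞
assert proj_div(-1.0, 0.0) == INF             # no sign: a single point at infinity
assert proj_div(1.0, INF) == 0.0              # 1/∞ = 0
assert proj_div(0.0, 0.0) is None             # undefined
```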
The projectively extended real line extends the field of real numbers in the same way that the Riemann sphere extends the field of complex numbers , by adding a single point called conventionally ∞ .
In contrast, the affinely extended real number line (also called the two-point compactification of the real line) distinguishes between +∞ and −∞ .
The order relation cannot be extended to R ^ {\displaystyle {\widehat {\mathbb {R} }}} in a meaningful way. Given a number a ≠ ∞ , there is no convincing argument to define either a > ∞ or a < ∞ . Since ∞ cannot be compared with any of the other elements, there is no point in retaining this relation on R ^ {\displaystyle {\widehat {\mathbb {R} }}} . [ 2 ] However, the order on R {\displaystyle \mathbb {R} } is used in definitions in R ^ {\displaystyle {\widehat {\mathbb {R} }}} .
Fundamental to the idea that ∞ is a point no different from any other is the way the real projective line is a homogeneous space , in fact homeomorphic to a circle. For example, the general linear group of 2 × 2 real invertible matrices has a transitive action on it. The group action may be expressed by Möbius transformations (also called linear fractional transformations), with the understanding that when the denominator of the linear fractional transformation is 0 , the image is ∞ .
The detailed analysis of the action shows that for any three distinct points P , Q and R , there is a linear fractional transformation taking P to 0, Q to 1, and R to ∞ ; that is, the group of linear fractional transformations is triply transitive on the real projective line. This cannot be extended to 4-tuples of points, because the cross-ratio is invariant.
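The triple transitivity can be made concrete with the standard cross-ratio construction (assumed here; the article does not spell it out): the map below sends any three distinct reals P , Q , R to 0, 1, ∞ respectively.

```python
def lft_to_0_1_inf(P, Q, R):
    """The unique linear fractional transformation with P -> 0, Q -> 1, R -> ∞."""
    def f(x):
        num = (x - P) * (Q - R)
        den = (x - R) * (Q - P)
        return float('inf') if den == 0 else num / den
    return f

f = lft_to_0_1_inf(2.0, 5.0, 7.0)
assert (f(2.0), f(5.0), f(7.0)) == (0.0, 1.0, float('inf'))
```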
The terminology projective line is appropriate, because the points are in 1-to-1 correspondence with one- dimensional linear subspaces of R 2 {\displaystyle \mathbb {R} ^{2}} .
The arithmetic operations on this space are an extension of the same operations on reals. A motivation for the new definitions is the limits of functions of real numbers.
In addition to the standard operations on the subset R {\displaystyle \mathbb {R} } of R ^ {\displaystyle {\widehat {\mathbb {R} }}} , the following operations are defined for a ∈ R ^ {\displaystyle a\in {\widehat {\mathbb {R} }}} , with exceptions as indicated: [ 3 ] [ 2 ]
The following expressions cannot be motivated by considering limits of real functions, and no definition of them allows the statement of the standard algebraic properties to be retained unchanged in form for all defined cases. [ a ] Consequently, they are left undefined: ∞ + ∞ , ∞ − ∞ , ∞ ⋅ 0 , 0 ⋅ ∞ , 0 / 0 , and ∞ / ∞ .
The exponential function e x {\displaystyle e^{x}} cannot be extended to R ^ {\displaystyle {\widehat {\mathbb {R} }}} . [ 2 ]
The following equalities mean: Either both sides are undefined, or both sides are defined and equal. This is true for any a , b , c ∈ R ^ . {\displaystyle a,b,c\in {\widehat {\mathbb {R} }}.}
The following is true whenever expressions involved are defined, for any a , b , c ∈ R ^ . {\displaystyle a,b,c\in {\widehat {\mathbb {R} }}.}
In general, all laws of arithmetic that are valid for R {\displaystyle \mathbb {R} } are also valid for R ^ {\displaystyle {\widehat {\mathbb {R} }}} whenever all the occurring expressions are defined.
The concept of an interval can be extended to R ^ {\displaystyle {\widehat {\mathbb {R} }}} . However, since it is not an ordered set, the interval has a slightly different meaning. The definitions for closed intervals are as follows (it is assumed that a , b ∈ R , a < b {\displaystyle a,b\in \mathbb {R} ,a<b} ): [ 2 ] [ additional citation(s) needed ]
With the exception of when the end-points are equal, the corresponding open and half-open intervals are defined by removing the respective endpoints. This redefinition is useful in interval arithmetic when dividing by an interval containing 0. [ 2 ]
R ^ {\displaystyle {\widehat {\mathbb {R} }}} and the empty set are also intervals, as is R ^ {\displaystyle {\widehat {\mathbb {R} }}} excluding any single point. [ b ]
The open intervals as a base define a topology on R ^ {\displaystyle {\widehat {\mathbb {R} }}} . Sufficient for a base are the bounded open intervals in R {\displaystyle \mathbb {R} } and the intervals ( b , a ) = { x ∣ x ∈ R , b < x } ∪ { ∞ } ∪ { x ∣ x ∈ R , x < a } {\displaystyle (b,a)=\{x\mid x\in \mathbb {R} ,b<x\}\cup \{\infty \}\cup \{x\mid x\in \mathbb {R} ,x<a\}} for all a , b ∈ R {\displaystyle a,b\in \mathbb {R} } such that a < b . {\displaystyle a<b.}
As noted above, this topology makes the space homeomorphic to a circle. It is thus metrizable , corresponding (for a given homeomorphism) to the ordinary metric on this circle (either measured straight or along the circle). There is no metric which is an extension of the ordinary metric on R . {\displaystyle \mathbb {R} .}
Interval arithmetic extends to R ^ {\displaystyle {\widehat {\mathbb {R} }}} from R {\displaystyle \mathbb {R} } . The result of an arithmetic operation on intervals is always an interval, except when the intervals with a binary operation contain incompatible values leading to an undefined result. [ c ] In particular, we have, for every a , b ∈ R ^ {\displaystyle a,b\in {\widehat {\mathbb {R} }}} :
irrespective of whether either interval includes 0 and ∞ .
The tools of calculus can be used to analyze functions of R ^ {\displaystyle {\widehat {\mathbb {R} }}} . The definitions are motivated by the topology of this space.
Let x ∈ R ^ {\displaystyle x\in {\widehat {\mathbb {R} }}} and A ⊆ R ^ {\displaystyle A\subseteq {\widehat {\mathbb {R} }}} .
Let f : R ^ → R ^ , {\displaystyle f:{\widehat {\mathbb {R} }}\to {\widehat {\mathbb {R} }},} p ∈ R ^ , {\displaystyle p\in {\widehat {\mathbb {R} }},} and L ∈ R ^ {\displaystyle L\in {\widehat {\mathbb {R} }}} .
The limit of f ( x ) as x approaches p is L , denoted lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}{f(x)}=L,}
if and only if for every neighbourhood A of L , there is a punctured neighbourhood B of p , such that x ∈ B {\displaystyle x\in B} implies f ( x ) ∈ A {\displaystyle f(x)\in A} .
The one-sided limit of f ( x ) as x approaches p from the right (left) is L , denoted lim x → p + f ( x ) = L {\displaystyle \lim _{x\to p^{+}}{f(x)}=L} ( lim x → p − f ( x ) = L {\displaystyle \lim _{x\to p^{-}}{f(x)}=L} ),
if and only if for every neighbourhood A of L , there is a right-sided (left-sided) punctured neighbourhood B of p , such that x ∈ B {\displaystyle x\in B} implies f ( x ) ∈ A . {\displaystyle f(x)\in A.}
It can be shown that lim x → p f ( x ) = L {\displaystyle \lim _{x\to p}{f(x)}=L} if and only if both lim x → p + f ( x ) = L {\displaystyle \lim _{x\to p^{+}}{f(x)}=L} and lim x → p − f ( x ) = L {\displaystyle \lim _{x\to p^{-}}{f(x)}=L} .
The definitions given above can be compared with the usual definitions of limits of real functions. In the following statements, p , L ∈ R , {\displaystyle p,L\in \mathbb {R} ,} the first limit is as defined above, and the second limit is in the usual sense:
Let A ⊆ R ^ {\displaystyle A\subseteq {\widehat {\mathbb {R} }}} . Then p is a limit point of A if and only if every neighbourhood of p includes a point y ∈ A {\displaystyle y\in A} such that y ≠ p . {\displaystyle y\neq p.}
Let f : R ^ → R ^ , A ⊆ R ^ , L ∈ R ^ , p ∈ R ^ {\displaystyle f:{\widehat {\mathbb {R} }}\to {\widehat {\mathbb {R} }},A\subseteq {\widehat {\mathbb {R} }},L\in {\widehat {\mathbb {R} }},p\in {\widehat {\mathbb {R} }}} , p a limit point of A . The limit of f ( x ) as x approaches p through A is L , if and only if for every neighbourhood B of L , there is a punctured neighbourhood C of p , such that x ∈ A ∩ C {\displaystyle x\in A\cap C} implies f ( x ) ∈ B . {\displaystyle f(x)\in B.}
This corresponds to the regular topological definition of continuity , applied to the subspace topology on A ∪ { p } , {\displaystyle A\cup \lbrace p\rbrace ,} and the restriction of f to A ∪ { p } . {\displaystyle A\cup \lbrace p\rbrace .}
The function f : R ^ → R ^ {\displaystyle f:{\widehat {\mathbb {R} }}\to {\widehat {\mathbb {R} }}} is continuous at p if and only if f is defined at p and lim x → p f ( x ) = f ( p ) . {\displaystyle \lim _{x\to p}{f(x)}=f(p).}
If A ⊆ R ^ , {\displaystyle A\subseteq {\widehat {\mathbb {R} }},} the function f : A → R ^ {\displaystyle f:A\to {\widehat {\mathbb {R} }}} is continuous in A if and only if, for every p ∈ A {\displaystyle p\in A} , f is defined at p and the limit of f ( x ) {\displaystyle f(x)} as x tends to p through A is f ( p ) . {\displaystyle f(p).}
Every rational function P ( x )/ Q ( x ) , where P and Q are polynomials , can be extended, in a unique way, to a function from R ^ {\displaystyle {\widehat {\mathbb {R} }}} to R ^ {\displaystyle {\widehat {\mathbb {R} }}} that is continuous in R ^ . {\displaystyle {\widehat {\mathbb {R} }}.} In particular, this is the case of polynomial functions , which take the value ∞ {\displaystyle \infty } at ∞ , {\displaystyle \infty ,} if they are not constant .
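The value at ∞ of such an extension can be read off from degrees and leading coefficients; here is a small sketch under that assumption (representing polynomials as coefficient lists, highest degree first — my convention).

```python
def value_at_infinity(P, Q):
    """Value of the continuous extension of P/Q at the point at infinity."""
    dp, dq = len(P) - 1, len(Q) - 1
    if dp > dq:
        return float('inf')       # numerator dominates: P/Q -> ∞
    if dp < dq:
        return 0.0                # denominator dominates: P/Q -> 0
    return P[0] / Q[0]            # equal degrees: ratio of leading coefficients

assert value_at_infinity([1, 0, -1], [2, 1]) == float('inf')   # (x^2 - 1)/(2x + 1)
assert value_at_infinity([3, 0], [1, 0, 0]) == 0.0             # 3x / x^2
assert value_at_infinity([4, 1], [2, -5]) == 2.0               # (4x + 1)/(2x - 5)
```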
Also, if the tangent function tan {\displaystyle \tan } is extended so that tan ( π 2 + n π ) = ∞ for n ∈ Z , {\displaystyle \tan \left({\tfrac {\pi }{2}}+n\pi \right)=\infty {\text{ for }}n\in \mathbb {Z} ,}
then tan {\displaystyle \tan } is continuous in R , {\displaystyle \mathbb {R} ,} but cannot be extended further to a function that is continuous in R ^ . {\displaystyle {\widehat {\mathbb {R} }}.}
Many elementary functions that are continuous in R {\displaystyle \mathbb {R} } cannot be extended to functions that are continuous in R ^ . {\displaystyle {\widehat {\mathbb {R} }}.} This is the case, for example, of the exponential function and all trigonometric functions . For example, the sine function is continuous in R , {\displaystyle \mathbb {R} ,} but it cannot be made continuous at ∞ . {\displaystyle \infty .} As seen above, the tangent function can be extended to a function that is continuous in R , {\displaystyle \mathbb {R} ,} but this function cannot be made continuous at ∞ . {\displaystyle \infty .}
Many discontinuous functions that become continuous when the codomain is extended to R ^ {\displaystyle {\widehat {\mathbb {R} }}} remain discontinuous if the codomain is extended to the affinely extended real number system R ¯ . {\displaystyle {\overline {\mathbb {R} }}.} This is the case of the function x ↦ 1 x . {\displaystyle x\mapsto {\frac {1}{x}}.} On the other hand, some functions that are continuous in R {\displaystyle \mathbb {R} } and discontinuous at ∞ ∈ R ^ {\displaystyle \infty \in {\widehat {\mathbb {R} }}} become continuous if the domain is extended to R ¯ . {\displaystyle {\overline {\mathbb {R} }}.} This is the case for the arctangent .
When the real projective line is considered in the context of the real projective plane , then the consequences of Desargues' theorem are implicit. In particular, the construction of the projective harmonic conjugate relation between points is part of the structure of the real projective line. For instance, given any pair of points, the point at infinity is the projective harmonic conjugate of their midpoint .
As projectivities preserve the harmonic relation, they form the automorphisms of the real projective line. The projectivities are described algebraically as homographies , since the real numbers form a ring , according to the general construction of a projective line over a ring . Collectively they form the group PGL(2, R ) .
The projectivities which are their own inverses are called involutions . A hyperbolic involution has two fixed points . Two of these involutions correspond to elementary arithmetic operations on the real projective line: negation and reciprocation . Indeed, 0 and ∞ are fixed under negation, while 1 and −1 are fixed under reciprocation.
In mathematics , projectivization is a procedure which associates with a non-zero vector space V a projective space P ( V ) , whose elements are one-dimensional subspaces of V . More generally, any subset S of V closed under scalar multiplication defines a subset of P ( V ) formed by the lines contained in S and is called the projectivization of S . [ 1 ] [ 2 ]
A related procedure embeds a vector space V over a field K into the projective space P ( V ⊕ K ) of the same dimension. To every vector v of V , it associates the line spanned by the vector ( v , 1) of V ⊕ K .
In algebraic geometry , there is a procedure that associates a projective variety Proj S with a graded commutative algebra S (under some technical restrictions on S ). If S is the algebra of polynomials on a vector space V then Proj S is P ( V ) . This Proj construction gives rise to a contravariant functor from the category of graded commutative rings and surjective graded maps to the category of projective schemes . | https://en.wikipedia.org/wiki/Projectivization |
The projector augmented wave method (PAW) is a technique used in ab initio electronic structure calculations. It is a generalization of the pseudopotential and linear augmented-plane-wave methods , and allows for density functional theory calculations to be performed with greater computational efficiency. [ 1 ]
Valence wavefunctions tend to have rapid oscillations near ion cores due to the requirement that they be orthogonal to core states; this situation is problematic because it requires many Fourier components (or in the case of grid-based methods, a very fine mesh) to describe the wavefunctions accurately. The PAW approach addresses this issue by transforming these rapidly oscillating wavefunctions into smooth wavefunctions which are more computationally convenient, and provides a way to calculate all-electron properties from these smooth wavefunctions. This approach is somewhat reminiscent of a change from the Schrödinger picture to the Heisenberg picture .
The linear transformation T {\displaystyle {\mathcal {T}}} transforms the fictitious pseudo wavefunction | Ψ ~ ⟩ {\displaystyle |{\tilde {\Psi }}\rangle } to the all-electron wavefunction | Ψ ⟩ {\displaystyle |\Psi \rangle } : | Ψ ⟩ = T | Ψ ~ ⟩ . {\displaystyle |\Psi \rangle ={\mathcal {T}}|{\tilde {\Psi }}\rangle .}
Note that the "all-electron" wavefunction is a Kohn–Sham single particle wavefunction, and should not be confused with the many-body wavefunction. In order to have | Ψ ~ ⟩ {\displaystyle |{\tilde {\Psi }}\rangle } and | Ψ ⟩ {\displaystyle |\Psi \rangle } differ only in the regions near the ion cores, we write T = 1 + ∑ R T ^ R , {\displaystyle {\mathcal {T}}=1+\sum _{R}{\hat {\mathcal {T}}}_{R},}
where T ^ R {\displaystyle {\hat {\mathcal {T}}}_{R}} is non-zero only within some spherical augmentation region Ω R {\displaystyle \Omega _{R}} enclosing atom R {\displaystyle R} .
Around each atom, it is useful to expand the pseudo wavefunction into pseudo partial waves: | Ψ ~ ⟩ = ∑ i c i | ϕ ~ i ⟩ . {\displaystyle |{\tilde {\Psi }}\rangle =\sum _{i}c_{i}|{\tilde {\phi }}_{i}\rangle .}
Because the operator T {\displaystyle {\mathcal {T}}} is linear, the coefficients c i {\displaystyle c_{i}} can be written as an inner product with a set of so-called projector functions, | p i ⟩ {\displaystyle |p_{i}\rangle } : c i = ⟨ p i | Ψ ~ ⟩ , {\displaystyle c_{i}=\langle p_{i}|{\tilde {\Psi }}\rangle ,}
where ⟨ p i | ϕ ~ j ⟩ = δ i j {\displaystyle \langle p_{i}|{\tilde {\phi }}_{j}\rangle =\delta _{ij}} . The all-electron partial waves, | ϕ i ⟩ = T | ϕ ~ i ⟩ {\displaystyle |\phi _{i}\rangle ={\mathcal {T}}|{\tilde {\phi }}_{i}\rangle } , are typically chosen to be solutions to the Kohn–Sham Schrödinger equation for an isolated atom. The transformation T {\displaystyle {\mathcal {T}}} is thus specified by three quantities: the all-electron partial waves | ϕ i ⟩ {\displaystyle |\phi _{i}\rangle } , the pseudo partial waves | ϕ ~ i ⟩ {\displaystyle |{\tilde {\phi }}_{i}\rangle } , and the projector functions | p i ⟩ {\displaystyle |p_{i}\rangle } ,
and we can explicitly write it down as T = 1 + ∑ i ( | ϕ i ⟩ − | ϕ ~ i ⟩ ) ⟨ p i | . {\displaystyle {\mathcal {T}}=1+\sum _{i}\left(|\phi _{i}\rangle -|{\tilde {\phi }}_{i}\rangle \right)\langle p_{i}|.}
Outside the augmentation regions, the pseudo partial waves are equal to the all-electron partial waves. Inside the spheres, they can be any smooth continuation, such as a linear combination of polynomials or Bessel functions .
The PAW method is typically combined with the frozen core approximation, in which the core states are assumed to be unaffected by the ion's environment. There are several online repositories of pre-computed atomic PAW data. [ 2 ] [ 3 ] [ 4 ]
The PAW transformation allows all-electron observables to be calculated using the pseudo-wavefunction from a pseudopotential calculation, conveniently avoiding having to ever represent the all-electron wavefunction explicitly in memory. This is particularly important for the calculation of properties such as NMR , [ 5 ] which strongly depend on the form of the wavefunction near the nucleus. Starting with the definition of the expectation value of an operator: ⟨ A ⟩ = ⟨ Ψ | A ^ | Ψ ⟩ , {\displaystyle \langle A\rangle =\langle \Psi |{\hat {A}}|\Psi \rangle ,}
into which one can substitute the pseudo wavefunction, since | Ψ ⟩ = T | Ψ ~ ⟩ {\displaystyle |\Psi \rangle ={\mathcal {T}}|{\tilde {\Psi }}\rangle } : ⟨ A ⟩ = ⟨ Ψ ~ | T † A ^ T | Ψ ~ ⟩ , {\displaystyle \langle A\rangle =\langle {\tilde {\Psi }}|{\mathcal {T}}^{\dagger }{\hat {A}}{\mathcal {T}}|{\tilde {\Psi }}\rangle ,}
from which one can define the pseudo operator , indicated by a tilde: A ~ = T † A ^ T . {\displaystyle {\tilde {A}}={\mathcal {T}}^{\dagger }{\hat {A}}{\mathcal {T}}.}
If the operator A ^ {\displaystyle {\hat {A}}} is local and well-behaved, we can expand this using the definition of T {\displaystyle {\mathcal {T}}} to give the PAW operator transform A ~ = A ^ + ∑ i j | p i ⟩ ( ⟨ ϕ i | A ^ | ϕ j ⟩ − ⟨ ϕ ~ i | A ^ | ϕ ~ j ⟩ ) ⟨ p j | , {\displaystyle {\tilde {A}}={\hat {A}}+\sum _{ij}|p_{i}\rangle \left(\langle \phi _{i}|{\hat {A}}|\phi _{j}\rangle -\langle {\tilde {\phi }}_{i}|{\hat {A}}|{\tilde {\phi }}_{j}\rangle \right)\langle p_{j}|,}
where the indices i , j {\displaystyle i,j} run over all projectors on all atoms. Usually only indices on the same atom are summed over, i.e. off-site contributions are ignored, and this is called the "on-site approximation".
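Schematically, the on-site transform turns an expectation value into a smooth part plus a small matrix correction. The toy numpy sketch below (random matrices standing in for the partial-wave matrix elements and projector overlaps; all names are mine) shows only this bookkeeping, not a real PAW calculation.

```python
import numpy as np

# Toy on-site PAW correction:
#   <Psi|A|Psi> ~= <Psi~|A|Psi~> + sum_ij c_i* (D_ae - D_ps)_ij c_j
# with c_i = <p_i|Psi~>, D_ae_ij = <phi_i|A|phi_j>, D_ps_ij = <phi~_i|A|phi~_j>.
rng = np.random.default_rng(42)
n_proj = 4
D_ae = rng.standard_normal((n_proj, n_proj))   # all-electron partial-wave elements (made up)
D_ps = rng.standard_normal((n_proj, n_proj))   # pseudo partial-wave elements (made up)
c = rng.standard_normal(n_proj)                # projector overlaps with the pseudo wavefunction
smooth = 0.7                                   # stands in for <Psi~|A|Psi~>

expectation = smooth + c @ (D_ae - D_ps) @ c   # smooth part plus on-site correction
print(f"corrected expectation value: {expectation:.4f}")
```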
In the original paper, Blöchl notes that there is a degree of freedom in this equation for an arbitrary operator B ^ {\displaystyle {\hat {B}}} , that is localised inside the spherical augmentation region, to add a term of the form: B ^ − ∑ i j | p i ⟩ ⟨ ϕ ~ i | B ^ | ϕ ~ j ⟩ ⟨ p j | , {\displaystyle {\hat {B}}-\sum _{ij}|p_{i}\rangle \langle {\tilde {\phi }}_{i}|{\hat {B}}|{\tilde {\phi }}_{j}\rangle \langle p_{j}|,}
which can be seen as the basis for implementation of pseudopotentials within PAW, as the nuclear coulomb potential can now be substituted with a smoother one. | https://en.wikipedia.org/wiki/Projector_augmented_wave_method |
In epigenetics , proline isomerization is the effect that cis-trans isomerization of the amino acid proline has on the regulation of gene expression . Similar to aspartic acid , the amino acid proline has the rare property of being able to occupy both cis and trans isomers of its prolyl peptide bonds with ease. Peptidyl-prolyl isomerase , or PPIase, is an enzyme very commonly associated with proline isomerization due to its ability to catalyze the isomerization of prolines. PPIases are present in three types: cyclophilins, FK506-binding proteins, and the parvulins. [ 1 ] PPIase enzymes catalyze the transition of proline between cis and trans isomers and are essential to the numerous biological functions controlled and affected by prolyl isomerization (i.e. cell signalling , protein folding , and epigenetic modifications). [ 2 ] Without PPIases, prolyl peptide bonds will slowly switch between cis and trans isomers, a process that can lock proteins in a nonnative structure and render the protein temporarily ineffective. Although this switch can occur on its own, PPIases are responsible for most isomerization of prolyl peptide bonds. The specific amino acid that precedes the prolyl peptide bond can also have an effect on which conformation the bond assumes. For instance, when an aromatic amino acid is bonded to a proline, the bond is more favorable to the cis conformation. Cyclophilin A uses an "electrostatic handle" to pull proline into cis and trans formations. [ 3 ] Most of these biological functions are affected by the isomerization of proline when one isomer interacts differently than the other, commonly causing an activation/deactivation relationship. As an amino acid, proline is present in many proteins . This aids in the multitude of effects that isomerization of proline can have in different biological mechanisms and functions.
Cell signaling involves many different processes and proteins. One of the most studied cell signaling phenomena involving proline is the interactions with p53 and prolyl isomerases, specifically Pin1 . The protein p53, along with p63 and p73 , is responsible for ensuring that alterations to the genome are corrected and for preventing the formation and growth of tumors . Proline residues are found throughout the p53 proteins and without the phosphorylation and isomerization of specific Serine/Threonine-Proline motifs within p53, they cannot exhibit control over their target genes. The main signalling processes that are affected by p53 are apoptosis and cell cycle arrest, both of which are controlled by specific isomerization of the prolines in p53. [ 4 ]
Although isomerization of proteins has been known about since 1968 when it was discovered by C. Tanford, proline isomerization and its use as a noncovalent histone tail modification was not discovered until 2006 by Nelson and his colleagues. [ 1 ] [ 5 ]
One of the most well known epigenetic mechanisms that proline isomerization plays a role in is the modification of histone tails, specifically those of histone H3. Fpr4 is a PPIase, in the FK506BP group, that exhibits catalytic activity at the proline positions 16, 30, and 38 (also written P16, P30, and P38 respectively) on the N-terminal region of histone H3 in Saccharomyces cerevisiae . [ 1 ] [ 5 ] [ 6 ] Fpr4's binding affinity is strongest at the P38 site, followed by P30 and then P16. However the catalytic efficiency, or the increase in isomerization rates, is highest at P16 and P30 equally, followed by P38 which exhibits a very small change in isomerization rates with the binding of Fpr4. [ 6 ] Histone H3 has an important lysine residue at the 36 position (also written K36) on the N-terminal tail which can be methylated by Set2, a methyltransferase . Methylation of K36 is key to normal transcription elongation . [ 5 ] Due to P38's proximity to K36, cross-talk between P38 isomerization and K36 methylation can occur. [ 1 ] [ 5 ] [ 7 ] This means that isomer changes at the P38 position can affect methylation at the K36 position. In the cis position, P38 shifts the histone tail closer to the DNA, crowding the area around the tail. This can cause a decrease in the ability of proteins to bind to the DNA and to the histone tail, including preventing Set2 from methylating K36. Also, this tail movement can increase the number of interactions between the histone tail and the DNA, increasing the likelihood of nucleosome formation and potentially leading to the creation of higher-order chromatin structure . In trans , P38 leads to the opposite effects: allowing Set2 to methylate K36. Set2 is only affected by isomerization of P38 when creating a trimethylated K36 (commonly written as K36me3), however, and not K36me2. [ 1 ] [ 8 ] Fpr4 also binds to P32 in H4, though its effects are minimal. [ 9 ]
In mammalian cells, the isomerization of H3P30 interacts with the phosphorylation of H3S28 (serine in the 28 position of histone H3) and the methylation of H3K27. [ 1 ] [ 7 ] hFKBP25 is a PPIase that is a homolog for Fpr4 in mammalian cells and is found to commonly be associated with the presence of HDACs . Cyp33 is a cyclophilin that has the ability to isomerize H3 proline residues at P16 and P30 positions. [ 9 ] [ 10 ] Histones H2A and H2B also have multiple proline residues near amino acids that when modified affect the activity surrounding the histone.
The isomerization of the peptide bond between histone H3's alanine 15 and proline 16 is affected by the acetylation at K14 and can control the methylation states of K4. [ 3 ] [ 11 ] K4me3 represses gene transcription and depends upon the Set1 methyltransferase complex subunit Spp1 being balanced with the Jhd2 demethylases for proper function. Acetylation of K14 allows for a state change in P16 and primarily promotes the trans state of P16. This trans isomer of P16 reduces K4 methylation, which results in transcription repression. [ 5 ] [ 11 ] Isomerization of P16 has downstream effects of controlling protein binding to acetylated K18. [ 9 ] When P16 is in the trans conformation, Spt7 is allowed to bind to K18ac, increasing transcription.
Proline isomerization of certain prolines in RNA polymerase II is key in the process of recruiting and placing processing factors during transcription. [ 12 ] PPIases target RNA polymerase II by interacting with the Rpb1 carboxy terminal domain , or CTD. [ 9 ] [ 12 ] Proline isomerization is then used as part of the mechanism of the CTD to recruit co-factors required for co-transcriptional RNA processing , regulating RNA polymerase II activity. Nrd1 is a protein that is responsible for many of the transcriptional activities of RNAP II, specifically through the Nrd1 - dependent termination pathway. [ 12 ] This pathway requires the parvulin Ess1, or Pin1 depending on the organism, to isomerize the pSer5-Pro6 bond in the CTD. Without the cis conformation of the pSer5-Pro6 bond, created by Ess1/Pin1, Nrd1 cannot bind to RNAP II. Any variation from this process leads to a decrease in Nrd1 binding affinity, lowering the ability of RNAP II to process and degrade noncoding RNAs .
Cyp33 in mammals causes isomerization in MLL1 . [ 13 ] MLL1 is a multiprotein complex that regulates gene expression and chromosomal translocations involving this gene often lead to leukemia. [ 10 ] MLL's target genes include HOXC8, HOXA9, CDKN1B, and C-MYC. MLL also has two binding domains: a Cyp33 RNA-recognition motif domain (RRM), and a PHD3 domain that binds to H3K4me3 or Cyp33 RRM. Cyp33 has the ability to downregulate the expression of these genes through proline isomerization at the peptide bond between His1628 and Pro1629 within MLL. [ 13 ] This bond lies in a sequence between the PHD3 finger of MLL1 and the bromodomain of MLL1, and its isomerization mediates the bonding of the PHD3 domain and the Cyp33 RRM domain. When these two domains are bonded, transcription is repressed through recruitment of histone deacetylases to MLL1 and inhibition of H3K4me3. [ 9 ] [ 13 ]
Phosphorylated amino acids are crucial for the modulation of the binding of transcription factors and other gene regulatory proteins . Pin1's effect on isomerization of proline residues leads to an increase or decrease in the recruitment of phosphatases, namely Scp1 and Ssu72 , to the RNAP II CTD. The cis - Pro formation is associated with an increase in Ssu72. Scp1, on the other hand, recognizes trans -Pro formations, and is not affected by such isomerization. Pin1 also triggers the activation of the DSIF complex and NELF , which are responsible for pausing RNAP II in mammalian cells, and their conversion into positive elongation factors, facilitating elongation. [ 9 ] This potentially could be an isomerization-dependent process.
Pin1 , a parvulin, regulates mRNA stability and expression in certain eukaryotic mRNAs. [ 13 ] These mRNAs are GM-CSF, P th , and TGFβ and each of them has AREs, or AU-rich cis-elements . The ARE binding protein KSRP has a Pin1 binding site. Pin1 binds to this site, dephosphorylates the serine, and isomerizes the peptide bond between Ser181 and Pro182. This isomerization causes the decay of P th mRNA. KSRP, and other ARE binding proteins like AUF1, are thought to affect the other mRNAs through mechanisms similar to P th , with the requirement of a phosphorylated serine bonded to a proline in a specific conformation. Pin1 also triggers proline isomerization of Stem-Loop Binding Protein (SLBP) , allowing it to control the dissociation of SLBP from histone mRNA. This leads to Pin1 being able to affect histone mRNA decay. Pin1 affects many other genes in the form of gene silencing through the disruption of cell pathways, making it important in mRNA turnover by modulating RNA binding protein activity.
Currently there are no existing compounds that can mimic the peptide bond of proline to other amino acids while maintaining only a cis or trans configuration because most mimics found will eventually change from one isomer to another. This makes research on the direct effect of each of the isomers on biological mechanisms more difficult. [ 14 ] Also, the actual isomerization of proline is a slow process, meaning that any studying of the effects of the different isomers of proline takes a large amount of time to complete. [ 15 ] | https://en.wikipedia.org/wiki/Proline_isomerization_in_epigenetics |
Proline organocatalysis is the use of proline as an organocatalyst in organic chemistry . This field is often considered the starting point for the area of organocatalysis, even though early discoveries went unappreciated. [ 1 ] Modifications, such as MacMillan’s catalyst and Jorgensen's catalysts, proceed with excellent stereocontrol. [ 2 ] : 5574 [ 3 ]
Proline catalysis was initially reported by groups at Schering AG and Hoffmann-La Roche . [ 1 ] [ 4 ] [ 5 ] [ 6 ] Proline's chiral structure enables enantioselective synthesis , favoring a particular enantiomer or diastereomer . [ 2 ] : 5574 [ 1 ] [ 7 ] [ 8 ] [ 9 ] : 47
The Hajos–Parrish–Eder–Sauer–Wiechert reaction , reported in 1971 by several research teams, is an early example of an enantioselective catalytic reaction in organic chemistry. [ 10 ] Its scope has been modified and expanded through the development of related reactions including the Michael addition , asymmetric aldol reaction, and the Mannich reaction . This reaction has likewise been used to perform asymmetric Robinson annulations . The general scheme of this reaction follows:
This example illustrates a proline-catalyzed asymmetric 6-enolendo aldolization . The zwitterionic character and the H-bonding of proline in the transition state determine the reaction outcome. [ 11 ] [ 12 ] [ 13 ] [ 14 ] An enamine is formed during the reaction and only one proline molecule is involved in forming the transition state. [ 15 ]
Asymmetric synthesis of the Wieland-Miescher ketone is also based on proline. [ 16 ] Additional reactions include aldol reactions , [ 17 ] [ 18 ] [ 19 ] [ 20 ] Mannich reaction , [ 21 ] [ 22 ] [ 23 ] Michael reaction , [ 24 ] [ 25 ] amination, [ 22 ] α-oxyamination, [ 26 ] [ 27 ] and α-halogenation. [ 28 ] [ 29 ]
Modifications on the basic proline structure improved the enantioselectivity and regioselectivity of the catalysis. [ 28 ] [ 29 ] These proline-derived auxiliaries and catalysts, [ 30 ] including the Enders hydrazone reaction and Corey–Itsuno reduction , have been reviewed, [ 31 ] [ 32 ] as have MacMillan’s iminium catalysts, [ 33 ] Miller catalysts, [ 33 ] and CBS-oxazaborolidines . [ 34 ]
Illustrating an enolexo intramolecular aldolization, dicarbonyl compounds (dialdehydes, diketones) can be converted to anti-aldol products with a 10% L-proline catalyst loading. [ 35 ] [ 36 ]
A prominent example of proline catalysis is the addition of acetone or hydroxyacetone to a diverse set of aldehydes catalyzed by 20-30% proline catalyst loading with high (>99%) enantioselectivity yielding diol products. [ 37 ] As refined by List and Notz, the aforementioned reaction produces diol products as follows: [ 38 ]
Proline-catalyzed aldol additions proceed via a six-membered enamine transition state, in accordance with the Zimmerman-Traxler model. [ 39 ] [ 40 ] [ 41 ] Proline and proline derivatives have been implemented as organocatalysts to promote asymmetric condensation reactions. An example of such a reaction proceeding through a six-membered transition state is modelled as follows.
Intramolecular aldolization reactions that are catalyzed by proline likewise go through six-membered transition states. These transition states can enable the formation of either the enolexo or the enolendo product. [ 42 ] | https://en.wikipedia.org/wiki/Proline_organocatalysis |
Promate is a Taiwanese electronics manufacturing company that develops mainly computing , communications and consumer electronics, and also engages in the development, design and manufacturing of computer and mobile accessories. Promate produces personal computer peripherals, MFi -certified products, mobile and smartphone accessories, USB products, power banks, universal power adaptors, audio devices, digital gadgets, LED products, wired microphones, solar lights and photography accessories. [ 1 ] Founded in 2004, the company is now headquartered in Shenzhen , China. [ citation needed ]
Promate Technologies was founded in early 2004 in Taipei, Taiwan, by a group of industry veterans who had worked at ASUS , Foxconn , and Pegatron . The company later moved its headquarters to Shenzhen, China, and set up large-scale manufacturing facilities there.
In 2006, Promate established its first overseas branch office in Dubai, UAE, to serve rapidly growing demand in the Middle East and Africa (MEA) region. The MEA branch office is located in the Jebel Ali Free Zone, with a 5,000-square-meter warehousing facility catering to more than 45 markets in MEA, and is staffed by more than 130 personnel.
In 2008, Promate became an Apple MFi-certified brand with a dedicated R&D and design center in Shenzhen, China.
In 2010, Promate established a fully fledged branch office in Manila, Philippines, to serve the Southeast Asia region. [ 2 ]
In 2013, Promate established a fully fledged branch office in London, United Kingdom. [ 3 ]
Promate produces a variety of products. It mostly produces lifestyle technology accessories and peripherals, ranging from smartphone and laptop accessories to mobile car accessories. [ 4 ] [ 5 ] [ 6 ] Its products also include power banks, chargers, speakers, earphones, headphones, universal power adaptors, Bluetooth adaptors, card readers, mini projectors, cables, tripods, monopods, backpacks, wearable tech, selfie lights, LED lights, screen protectors, [ 7 ] [ 8 ] car holders, bike holders, iPhone cases, iPad cases, MacBook cases, and travel kits. [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Promate operates in over 150 countries worldwide through offline distribution, retail and e-commerce channels. [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Promate has received a number of design and product awards, including: | https://en.wikipedia.org/wiki/Promate |
The Prometheus rocket engine is an ongoing European Space Agency (ESA) development effort begun in 2017 to create a reusable methane -fueled rocket engine for use on the Themis reusable rocket demonstrator and Ariane Next , the successor to Ariane 6 , and possibly a version of Ariane 6 itself. [ 1 ] [ 2 ]
Prometheus is a backronym from the original French project designation PROMETHEE, standing for "Precursor Reusable Oxygen METHane cost Effective propUlsion System", and for the Titan Prometheus from Greek mythology, creator of humanity and god of fire, known for giving fire to humanity in defiance of the gods.
By 2020, the program was funded and under development by ArianeGroup . [ 3 ]
The engine is intended to be reusable, with substantially lower costs than traditional engines manufactured in Europe. The cost goal is to manufacture the Prometheus engine at one-tenth the cost of the Ariane 5 's first-stage engine. [ 4 ] [ 3 ]
The engine is planned to have the following features:
The European Space Agency (ESA) began funding Prometheus engine development in June 2017 with €85 million provided through the Future Launchers Preparatory Programme , 63% of which came from France . [ 1 ]
By June 2017, Patrick Bonguet, lead of the Ariane 6 launch vehicle program at Arianespace , indicated that it was possible the Prometheus engine could find a use on a future version of the expendable Ariane 6 launcher. In this scenario, a "streamlined version of Vulcain rocket engine called Vulcain 2.1 would have the same performance as Vulcain 2". The expendable Ariane 6 was then expected to make an initial launch in 2020. [ 4 ]
By June 2020, the ESA was on board with this plan and had agreed to completely fund the development of the Prometheus precursor engine to bring the "engine design to a technical maturity suitable for industry". The objective of the overall program as stated in June 2020 was to utilize Prometheus technology to eventually "lower the cost of production by a factor of ten of the current main stage Ariane 5 Vulcain 2 engine". [ 3 ]
In 2021, ESA invested an additional €135 million in the project, [ 7 ] including €30 million from DLR. [ 8 ]
A Prometheus engine was started up in November 2022.
The engine had a successful 12-second test firing in June 2023 at the THEMIS test stand in Vernon , France. [ 9 ]
An additional successful hot fire test was reported at the end of 2024. [ 10 ]
This rocketry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prometheus_(rocket_engine) |
Promicromonosporaceae is an Actinomycete family. [ 1 ] [ 2 ]
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature [ 2 ] and the phylogeny is based on whole-genome sequences. [ 3 ] [ a ]
Genera (as placed in the whole-genome phylogeny): Luteimicrobium , Promicromonospora , Xylanimonas , Krasilnikoviella , Isoptericola , Paraoerskovia , Oerskovia , and Cellulosimicrobium , with the family Jonesiaceae as the outgroup in the cladogram.
This Actinomycetota -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Promicromonosporaceae |
Promiscuous gene expression (PGE), formerly referred to as ectopic expression, is a process specific to the thymus that plays a pivotal role in the establishment of central tolerance . This phenomenon enables the generation of self- antigens , so-called tissue-restricted antigens (TRAs), which are expressed in the body by only one or a few specific tissues (antigens rank among TRAs if they are expressed by fewer than five of the sixty tissues tested [ 1 ] ). These antigens are represented, for example, by insulin from the pancreas or defensins from the gastrointestinal tract . [ 2 ] Antigen-presenting cells (APCs) of the thymus, namely medullary thymic epithelial cells (mTECs), dendritic cells (DCs) and B cells , are capable of presenting peptides derived from TRAs to developing T cells (the thymus is the major site of T cell development [ 3 ] ) and thereby test whether their T cell receptors (TCRs) engage self entities, in which case their presence in the body could potentially lead to the development of autoimmune disease . If so, thymic APCs either induce apoptosis in these autoreactive T cells ( negative selection ) or deviate them to become T regulatory cells ( Treg selection ), which suppress self-reactive T cells in the body that escaped negative selection in the thymus. [ 4 ] Thus, PGE is crucial for protecting tissues against autoimmunity .
The usual level of gene expression in peripheral tissues (e.g. spleen , kidney , liver etc.) covers about 60% of the mouse coding genome . Some peripheral tissues, including the lungs , brain and testis , express a repertoire of genes about 10% broader. Importantly, PGE in the thymus, which is mediated by a unique subset of epithelial cells called mTECs, triggers expression of the vast majority of genes in the whole genome (~85%). Such a broad repertoire of expressed genes has not been observed in any other tissue of the body. [ 5 ]
The process of PGE in the thymus was discovered in the late 1980s; [ 6 ] however, it took a decade to find that the cell subset that mediates PGE, and therefore provides a "library" of TRAs, is the mTECs. [ 7 ] These cells were shown to uniquely express a protein called autoimmune regulator (Aire), which drives the expression of approximately 40% of TRAs, referred to as Aire-dependent, and is so far the only well-characterized driver of PGE. [ 8 ] Defects in the expression of Aire lead to multiorgan autoimmunity in mice and cause a severe autoimmune syndrome called APECED in humans. [ 9 ] [ 10 ] Because Aire is not the exclusive regulator of PGE, more than half of TRAs are Aire-independent, and it is still not known how their PGE is orchestrated. [ 11 ]
mTECs are a very heterogeneous population and should be subdivided at least into an MHCII-low-expressing subset ( mTECsLO ) and an MHCII-high-expressing subset ( mTECsHI ), the latter of which is considered mature. [ 12 ] Aire is expressed by only 30% of the latter. [ 5 ]
PGE was found to act in a stochastic manner, meaning that each mTEC expresses a distinct set of Aire-dependent and Aire-independent TRAs. [ 13 ] Despite this stochasticity, TRAs are co-expressed in clusters, which, however, mirror their co-localization on chromosomes rather than co-expression patterns of particular tissues. Even though the TRAs involved in each cluster were found to be consistent, the PGE of a whole cluster is transient and changes during mTEC development. [ 14 ] Moreover, these clusters are highly variable between individuals. [ 15 ] PGE also differs from the expression of TRAs in peripheral tissues in its monoallelic or biallelic course. [ 16 ] On the other hand, the level of TRA expression and the number of alternatively spliced protein variants in the thymus correspond to those of the peripheral tissues. [ 17 ] [ 5 ]
PGE is highly conserved between mice and humans. [ 18 ]
Although thymic B cells have been shown to induce either negative selection or Treg selection, their importance for the establishment of central tolerance remains elusive. [ 19 ] [ 20 ] It is assumed, however, that B cells in the thymus are licensed via CD40 - CD40L interaction with autoreactive T cells to activate the expression of Aire and upregulate their levels of MHCII and CD80 . Moreover, Aire drives the PGE of Aire-dependent TRAs in B cells, and because their repertoire does not overlap with that of mTECs, it should broaden the scope of peripheral antigens displayed in the thymus. [ 21 ]
Beyond the thymus, Aire is also expressed in the periphery, namely in the secondary lymphoid organs . However, the search for the particular Aire-expressing cell type continues due to conflicting results. [ 22 ] [ 23 ] What seems clear is that these cells express Aire-dependent TRAs that are distinct from those in mTECs. [ 22 ] In line with their high expression of MHCII and very limited expression of costimulatory molecules , these cells were shown to establish tolerance by inactivating autoreactive T cells rather than inducing apoptosis in them. [ 23 ]
Aire is not a classical transcription factor : instead of recognizing specific consensus sequences , Aire seeks out genes marked by specific histone marks, such as the absence of H3K4me3 and the presence of H3K27me3 , which indicate transcriptionally inactive chromatin . [ 24 ] [ 25 ] [ 26 ] This mode of gene recognition logically explains the high number of genes whose expression is affected by Aire. An alternative explanation is that Aire recognizes silenced chromatin through interaction with the molecular complex ATF7ip - MBD1 , which binds methylated CpG di-nucleotides . [ 27 ]
After recognizing Aire-dependent genes, Aire recruits topoisomerase II to introduce double-strand DNA breaks at their transcriptional start sites (TSSs). [ 28 ] These breaks attract DNA PK and other DNA damage response proteins, which relax the surrounding chromatin and repair the breaks. [ 29 ] [ 30 ] Subsequently, Aire recruits the elongation complex p-TEFb to the TSSs, [ 31 ] which releases stalled RNA polymerase II and thereby activates transcription (PGE) of Aire-dependent genes. [ 32 ] The interaction between Aire and p-TEFb is enabled by another partner molecule, Brd4 , which stabilizes this molecular complex. [ 33 ]
Altogether, Aire requires around fifty partner molecules to properly activate PGE. [ 30 ] These molecules further include the acetylase Creb-binding protein (CBP), which enhances the stability of Aire but dampens its transactivation properties, and the deacetylase Sirtuin 1 (Sirt1), which is essential for activating the PGE of Aire-dependent TRAs. [ 34 ] [ 35 ] Also worth mentioning is Hipk2 , which phosphorylates Aire and CBP; however, its absence affects mostly the PGE of Aire-independent genes, suggesting that this kinase might cooperate with another, as yet unknown, transcriptional regulator. [ 36 ]
Recently, molecular complexes of Aire and its partners were shown to localize to specific parts of chromatin called super-enhancers . [ 37 ]
By contrast, little is known about the transcription of Aire itself. Nevertheless, several studies suggest that the NF-κB signaling pathway plays a major role in triggering Aire expression, [ 38 ] [ 39 ] as it does in the development of mTECs. [ 40 ] Aire expression and the PGE of Aire-dependent TRAs are also affected by sex hormones: androgens enhance these processes, whereas the impact of estrogens is the opposite and results in less efficient PGE. [ 41 ] [ 42 ]
Fezf2 ( forebrain embryonic zinc-finger-like protein 2) was recently discovered as a second regulator of PGE. [ 43 ] Even though little is known about its operation in the thymus, Fezf2, in marked contrast with Aire, plays a role in physiological processes other than central tolerance, e.g. development of the brain, and acts as a classical transcription factor. In the thymus, however, Fezf2 is expressed by nearly 80% of mTECs and by no other cells. The repertoire of TRAs involved in Fezf2-driven PGE does not overlap with that of Aire and comprises genes previously defined as Aire-independent, e.g. Fabp9 (a TRA of the testis ). This is also supported by the different manifestations of autoimmunity in the Fezf2 knockout mouse compared with the Aire KO mouse. [ 44 ]
The expression of Fezf2 was found to be independent of Aire; however, it is also triggered by a receptor of the NF-κB signaling pathway, namely LtβR . [ 43 ]
The expression of Aire and Fezf2 was found to be upregulated after mTEC adhesion to developing T cells, which suggests that functional PGE requires direct contact with T cells. [ 45 ] | https://en.wikipedia.org/wiki/Promiscuous_gene_expression |
In genetics , a promoter is a sequence of DNA to which proteins bind to initiate transcription of a single RNA transcript from the DNA downstream of the promoter. The RNA transcript may encode a protein ( mRNA ), or can have a function in and of itself, such as tRNA or rRNA . Promoters are located near the transcription start sites of genes, upstream on the DNA (towards the 5' region of the sense strand ).
Promoters can be about 100–1000 base pairs long, the sequence of which is highly dependent on the gene and product of transcription, type or class of RNA polymerase recruited to the site, and species of organism. [ 1 ] [ 2 ]
For transcription to take place, the enzyme that synthesizes RNA, known as RNA polymerase , must attach to the DNA near a gene. Promoters contain specific DNA sequences such as response elements that provide a secure initial binding site for RNA polymerase and for proteins called transcription factors that recruit RNA polymerase. These transcription factors have specific activator or repressor sequences of corresponding nucleotides that attach to specific promoters and regulate gene expression. [ citation needed ]
Promoters represent critical elements that can work in concert with other regulatory regions ( enhancers , silencers , boundary elements/ insulators ) to direct the level of transcription of a given gene.
A promoter is induced in response to changes in abundance or conformation of regulatory proteins in a cell, which enable activating transcription factors to recruit RNA polymerase. [ 4 ] [ 5 ]
Given the short sequences of most promoter elements, promoters can rapidly evolve from random sequences. For instance, in E. coli , ~60% of random sequences can evolve expression levels comparable to the wild-type lac promoter with only one mutation, and ~10% of random sequences can serve as active promoters even without evolution. [ 6 ]
As promoters are typically immediately adjacent to the gene in question, positions in the promoter are designated relative to the transcriptional start site , where transcription of DNA begins for a particular gene (i.e., positions upstream are negative numbers counting back from -1, for example -100 is a position 100 base pairs upstream). [ citation needed ]
In bacteria , the promoter contains two short sequence elements approximately 10 ( Pribnow Box ) and 35 nucleotides upstream from the transcription start site . [ 2 ]
The above promoter sequences are recognized only by RNA polymerase holoenzyme containing sigma-70 . RNA polymerase holoenzymes containing other sigma factors recognize different core promoter sequences.
Promoters can be very closely located in the DNA. Such "closely spaced promoters" have been observed in the DNAs of all life forms, from humans [ 10 ] to prokaryotes [ 11 ] and are highly conserved. [ 12 ] Therefore, they may provide some (presently unknown) advantages.
These pairs of promoters can be positioned in divergent, tandem, and convergent directions. They can also be regulated by transcription factors and differ in various features, such as the nucleotide distance between them, the two promoter strengths, etc.
The most important aspect of two closely spaced promoters is that they will, most likely, interfere with each other. Several studies have explored this using both analytical and stochastic models. [ 13 ] [ 14 ] [ 15 ] There are also studies that measured gene expression in synthetic genes or from one to a few genes controlled by bidirectional promoters. [ 16 ]
More recently, one study measured most genes controlled by tandem promoters in E. coli . [ 17 ] In that study, two main forms of interference were measured. One is when an RNAP is on the downstream promoter, blocking the movement of RNAPs elongating from the upstream promoter. The other is when the two promoters are so close that when an RNAP sits on one of the promoters, it blocks any other RNAP from reaching the other promoter. These events are possible because the RNAP occupies several nucleotides when bound to the DNA, including in transcription start sites.
Similar events occur when the promoters are in divergent and convergent formations. The possible events also depend on the distance between them.
Gene promoters are typically located upstream of the gene and can have regulatory elements several kilobases away from the transcriptional start site (enhancers). In eukaryotes, the transcriptional complex can cause the DNA to bend back on itself, which allows for placement of regulatory sequences far from the actual site of transcription. Eukaryotic RNA-polymerase-II-dependent promoters can contain a TATA box ( consensus sequence TATAAA), which is recognized by the general transcription factor TATA-binding protein (TBP); and a B recognition element (BRE), which is recognized by the general transcription factor TFIIB . [ 18 ] [ 19 ] [ 20 ] The TATA element and BRE typically are located close to the transcriptional start site (typically within 30 to 40 base pairs).
Eukaryotic promoter regulatory sequences typically bind proteins called transcription factors that are involved in the formation of the transcriptional complex. An example is the E-box (sequence CACGTG), which binds transcription factors in the basic helix-loop-helix (bHLH) family (e.g. BMAL1-Clock , cMyc ). [ 21 ] Some promoters that are targeted by multiple transcription factors might achieve a hyperactive state, leading to increased transcriptional activity. [ 22 ]
Up-regulated expression of genes in mammals is initiated when signals are transmitted to the promoters associated with the genes. Promoter DNA sequences may include different elements such as CpG islands (present in about 70% of promoters), a TATA box (present in about 24% of promoters), initiator (Inr) (present in about 49% of promoters), upstream and downstream TFIIB recognition elements (BREu and BREd) (present in about 22% of promoters), and downstream core promoter element (DPE) (present in about 12% of promoters). [ 24 ] The presence of multiple methylated CpG sites in CpG islands of promoters causes stable silencing of genes. [ 25 ] However, the presence or absence of the other elements have relatively small effects on gene expression in experiments. [ 26 ] Two sequences, the TATA box and Inr, caused small but significant increases in expression (45% and 28% increases, respectively). The BREu and the BREd elements significantly decreased expression by 35% and 20%, respectively, and the DPE element had no detected effect on expression. [ 26 ]
Cis-regulatory modules that are localized in DNA regions distant from the promoters of genes can have very large effects on gene expression, with some genes undergoing up to 100-fold increased expression due to such a cis-regulatory module. [ 27 ] These cis-regulatory modules include enhancers , silencers , insulators and tethering elements. [ 28 ] Among this constellation of elements, enhancers and their associated transcription factors have a leading role in the regulation of gene expression. [ 29 ]
Enhancers are regions of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. [ 30 ] In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to promoters. [ 27 ] Multiple enhancers, each often at tens or hundred of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control expression of their common target gene. [ 30 ]
The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1 ), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). [ 31 ] Several cell function specific transcription factors (there are about 1,600 transcription factors in a human cell [ 32 ] ) generally bind to specific motifs on an enhancer [ 33 ] and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern the level of transcription of the target gene. Mediator (coactivator) (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter. [ 34 ]
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the Figure. [ 35 ] An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). [ 36 ] An activated enhancer begins transcription of its RNA before activating a promoter to initiate transcription of messenger RNA from its target gene. [ 37 ]
Bidirectional promoters are short (<1 kbp) intergenic regions of DNA between the 5' ends of the genes in a bidirectional gene pair. [ 38 ] A "bidirectional gene pair" refers to two adjacent genes coded on opposite strands, with their 5' ends oriented toward one another. [ 39 ] The two genes are often functionally related, and modification of their shared promoter region allows them to be co-regulated and thus co-expressed. [ 40 ] Bidirectional promoters are a common feature of mammalian genomes . [ 41 ] About 11% of human genes are bidirectionally paired. [ 38 ]
Bidirectionally paired genes in the Gene Ontology database shared at least one database-assigned functional category with their partners 47% of the time. [ 42 ] Microarray analysis has shown bidirectionally paired genes to be co-expressed to a higher degree than random genes or neighboring unidirectional genes. [ 38 ] Although co-expression does not necessarily indicate co-regulation, methylation of bidirectional promoter regions has been shown to downregulate both genes, and demethylation to upregulate both genes. [ 43 ] There are exceptions to this, however. In some cases (about 11%), only one gene of a bidirectional pair is expressed. [ 38 ] In these cases, the promoter is implicated in suppression of the non-expressed gene. The mechanism behind this could be competition for the same polymerases, or chromatin modification. Divergent transcription could shift nucleosomes to upregulate transcription of one gene, or remove bound transcription factors to downregulate transcription of one gene. [ 44 ]
Some functional classes of genes are more likely to be bidirectionally paired than others. Genes implicated in DNA repair are five times more likely to be regulated by bidirectional promoters than by unidirectional promoters. Chaperone proteins are three times more likely, and mitochondrial genes are more than twice as likely. Many basic housekeeping and cellular metabolic genes are regulated by bidirectional promoters. [ 38 ] The overrepresentation of bidirectionally paired DNA repair genes associates these promoters with cancer . Forty-five percent of human somatic oncogenes seem to be regulated by bidirectional promoters – significantly more than non-cancer causing genes. Hypermethylation of the promoters between gene pairs WNT9A /CD558500, CTDSPL /BC040563, and KCNK15 /BF195580 has been associated with tumors. [ 43 ]
Certain sequence characteristics have been observed in bidirectional promoters, including a lack of TATA boxes , an abundance of CpG islands , and a symmetry around the midpoint of dominant Cs and As on one side and Gs and Ts on the other. A motif with the consensus sequence TCTCGCGAGA, also called the CGCG element , was recently shown to drive PolII-driven bidirectional transcription in CpG islands. [ 45 ] CCAAT boxes are common, as they are in many promoters that lack TATA boxes. In addition, the motifs NRF-1, GABPA , YY1 , and ACTACAnnTCCC are represented in bidirectional promoters at significantly higher rates than in unidirectional promoters. The absence of TATA boxes in bidirectional promoters suggests that TATA boxes play a role in determining the directionality of promoters, but counterexamples, namely bidirectional promoters that do possess TATA boxes and unidirectional promoters without them, indicate that they cannot be the only factor. [ 46 ]
Although the term "bidirectional promoter" refers specifically to promoter regions of mRNA -encoding genes, luciferase assays have shown that over half of human genes do not have a strong directional bias. Research suggests that non-coding RNAs are frequently associated with the promoter regions of mRNA-encoding genes. It has been hypothesized that the recruitment and initiation of RNA polymerase II usually begins bidirectionally, but divergent transcription is halted at a checkpoint later during elongation. Possible mechanisms behind this regulation include sequences in the promoter region, chromatin modification, and the spatial orientation of the DNA. [ 44 ]
The archaeal promoter resembles a eukaryotic one: a TATA box (at -26/-27) and an upstream BRE (at -33/-34) are commonly found, binding TBP and TFB (a homolog of TFIIB). [ 3 ] There is also occasionally an initiator element (INR) near the transcription start site (TSS), and a promoter proximal element (PPE) between the BRE-TATA region and the TSS. These two are not necessary, but they enhance the strength of a promoter. [ 47 ] TFE (a homolog of TFIIE ) promotes initiation at suboptimal promoter sequences. [ 47 ] It binds between -10 and +1, near the Inr. [ 3 ]
Strict conservation of these motifs is not necessary, and many archaea with high GC% show "degenerated" TATA boxes. Rather, it is the energetic (duplex enthalpy, duplex stability) and structural (intrinsic curvature, bendability) features of the promoter that mainly matter. [ 47 ]
A subgenomic promoter is a promoter added to a virus for a specific heterologous gene, resulting in the formation of mRNA for that gene alone. Many positive-sense RNA viruses produce these subgenomic mRNAs (sgRNA) as one of the common infection techniques used by these viruses and generally transcribe late viral genes. Subgenomic promoters range from 24 nucleotide ( Sindbis virus ) to over 100 nucleotides ( Beet necrotic yellow vein virus ) and are usually found upstream of the transcription start. [ 48 ]
A wide variety of algorithms and tools have been developed to facilitate detection of promoters in genomic sequence, and promoter prediction is a common element of many gene prediction methods. [ 49 ] A bacterial promoter region contains the -35 and -10 consensus sequences. The more closely a promoter region matches the consensus sequences, the more often transcription of that gene will take place. There is no single set pattern for whole promoter regions as there is for the consensus sequences.
One approach is to use a biophysical theory of why promoters work. For archaea, a combination of calculated energetic and structural features can detect promoters. [ 47 ] For bacteria, a biophysical model that estimates the RNAP-sigma70 binding probability can detect promoters and estimate their strengths. [ 7 ]
Another approach is to use a pattern-matching program based on known promoters, ranging from simple hand-crafted regular expressions to advanced machine learning methods such as decision trees, hidden Markov models (HMMs), and neural networks . YAPP, a 2000s eukaryotic core-promoter prediction program, uses an HMM. [ 50 ] A 2017 publication predicts bacterial and eukaryotic promoters using a convolutional neural network . [ 51 ]
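As a concrete illustration of the simplest pattern-matching idea, the sketch below scores candidate bacterial promoters by counting matches to the sigma-70 −35 (TTGACA) and −10 (TATAAT) consensus hexamers over a typical 15–19 bp spacer range. This is a minimal sketch for illustration only; the scoring scheme, threshold, and demo sequence are simplified assumptions, not any published prediction tool.

```python
# Minimal consensus-based scan for sigma-70 bacterial promoters.
# Each candidate -35/-10 pair is scored by the number of positions
# matching the consensus hexamers; closer matches suggest stronger
# promoters, mirroring the text above.

MINUS35 = "TTGACA"  # -35 consensus
MINUS10 = "TATAAT"  # -10 consensus (Pribnow box)

def motif_score(site: str, consensus: str) -> int:
    """Count positions identical to the consensus hexamer."""
    return sum(a == b for a, b in zip(site, consensus))

def scan_promoters(seq: str, min_spacer=15, max_spacer=19, min_score=8):
    """Return (pos35, pos10, spacer, score) for pairings above a crude threshold."""
    seq, hits = seq.upper(), []
    for i in range(len(seq) - 5):
        s35 = motif_score(seq[i:i + 6], MINUS35)
        for spacer in range(min_spacer, max_spacer + 1):
            j = i + 6 + spacer
            if j + 6 > len(seq):
                break
            score = s35 + motif_score(seq[j:j + 6], MINUS10)
            if score >= min_score:
                hits.append((i, j, spacer, score))
    return hits

# Demo sequence with near-consensus -35 and -10 elements 18 bp apart.
demo = "GCGC" + "TTGACA" + "T" * 17 + "C" + "TATAAT" + "GCGC"
for hit in scan_promoters(demo):
    print("-35 at %d, -10 at %d, spacer %d bp, score %d/12" % hit)
```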
The initiation of the transcription is a multistep sequential process that involves several mechanisms: promoter location, initial reversible binding of RNA polymerase, conformational changes in RNA polymerase, conformational changes in DNA, binding of nucleoside triphosphate (NTP) to the functional RNA polymerase-promoter complex, and nonproductive and productive initiation of RNA synthesis. [ 52 ] [ 2 ]
The promoter binding process is crucial in the understanding of the process of gene expression. Tuning synthetic genetic systems relies on precisely engineered synthetic promoters with known levels of transcription rates. [ 2 ]
Although the RNA polymerase holoenzyme shows high affinity for non-specific sites on the DNA, this characteristic alone does not clarify the process of promoter location. [ 53 ] Promoter location has instead been attributed to the structure of holoenzyme-DNA and sigma 4-DNA complexes. [ 54 ]
Most diseases are heterogeneous in cause, meaning that one "disease" is often many different diseases at the molecular level, though symptoms exhibited and response to treatment may be identical. How diseases of different molecular origin respond to treatments is partially addressed in the discipline of pharmacogenomics .
Not listed here are the many kinds of cancers involving aberrant transcriptional regulation owing to creation of chimeric genes through pathological chromosomal translocation . Importantly, intervention in the number or structure of promoter-bound proteins is one key to treating a disease without affecting expression of unrelated genes sharing elements with the target gene. [ 55 ] Some genes whose change is not desirable are capable of influencing the potential of a cell to become cancerous. [ 56 ]
In humans, about 70% of promoters located near the transcription start site of a gene (proximal promoters) contain a CpG island . [ 57 ] [ 58 ] CpG islands are generally 200 to 2000 base pairs long, have a C:G base pair content >50%, and have regions of DNA where a cytosine nucleotide is followed by a guanine nucleotide and this occurs frequently in the linear sequence of bases along its 5' → 3' direction .
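To illustrate how such criteria translate into a computation, the sketch below flags sliding windows whose GC content exceeds 50% and whose observed/expected CpG ratio passes the commonly used 0.6 cutoff. The window size, step, and toy sequence are illustrative assumptions rather than a fixed standard.

```python
# Sliding-window CpG-island screen (illustrative thresholds).

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in the window."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def obs_exp_cpg(seq: str) -> float:
    """Observed CpG dinucleotides over the count expected from base composition."""
    expected = seq.count("C") * seq.count("G") / len(seq)
    return seq.count("CG") / expected if expected else 0.0

def cpg_islands(seq: str, window=200, step=50):
    """Yield (start, end) of windows satisfying both criteria."""
    seq = seq.upper()
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window]
        if gc_content(w) > 0.5 and obs_exp_cpg(w) >= 0.6:
            yield (start, start + window)

# Toy sequence: a CpG-dense half followed by a CpG-free half.
toy = "CG" * 150 + "AT" * 150
print(list(cpg_islands(toy))[:3])  # windows flagged in the CpG-rich region
```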
Distal promoters also frequently contain CpG islands, such as the promoter of the DNA repair gene ERCC1 , where the CpG island-containing promoter is located about 5,400 nucleotides upstream of the coding region of the ERCC1 gene. [ 59 ] CpG islands also occur frequently in promoters for functional noncoding RNAs such as microRNAs .
In humans, DNA methylation occurs at the 5' position of the pyrimidine ring of the cytosine residues within CpG sites to form 5-methylcytosines . The presence of multiple methylated CpG sites in CpG islands of promoters causes stable silencing of genes. [ 25 ] Silencing of a gene may be initiated by other mechanisms, but this is often followed by methylation of CpG sites in the promoter CpG island to cause the stable silencing of the gene. [ 25 ]
Generally, in progression to cancer, hundreds of genes are silenced or activated . Although silencing of some genes in cancers occurs by mutation, a large proportion of carcinogenic gene silencing is a result of altered DNA methylation (see DNA methylation in cancer ). DNA methylation causing silencing in cancer typically occurs at multiple CpG sites in the CpG islands that are present in the promoters of protein coding genes.
Altered expressions of microRNAs also silence or activate many genes in progression to cancer (see microRNAs in cancer ). Altered microRNA expression occurs through hyper/hypo-methylation of CpG sites in CpG islands in promoters controlling transcription of the microRNAs .
Silencing of DNA repair genes through methylation of CpG islands in their promoters appears to be especially important in progression to cancer (see methylation of DNA repair genes in cancer ).
The usage of the term canonical sequence to refer to a promoter is often problematic, and can lead to misunderstandings about promoter sequences. Canonical implies perfect, in some sense.
In the case of a transcription factor binding site, there may be a single sequence that binds the protein most strongly under specified cellular conditions. This might be called canonical.
However, natural selection may favor less energetic binding as a way of regulating transcriptional output. In this case, we may call the most common sequence in a population the wild-type sequence. It may not even be the most advantageous sequence to have under prevailing conditions.
Recent evidence also indicates that several genes (including the proto-oncogene c-myc ) have G-quadruplex motifs as potential regulatory signals.
Promoters are important gene regulatory elements used in tuning synthetically designed genetic circuits and metabolic networks . For example, to overexpress an important gene in a network and yield higher production of a target protein, synthetic biologists design promoters that upregulate its expression . Automated algorithms can be used to design neutral DNA or insulators that do not trigger gene expression of downstream sequences. [ 60 ] [ 2 ]
Some cases of many genetic diseases are associated with variations in promoters or transcription factors.
Examples include:
Some promoters are called constitutive as they are active in all circumstances in the cell, while others are regulated , becoming active in the cell only in response to specific stimuli.
A tissue-specific promoter is a promoter that has activity in only certain cell types.
When referring to a promoter, some authors actually mean promoter + operator ; i.e., the lac promoter is IPTG -inducible, meaning that besides the lac promoter, the lac operator is also present. If the lac operator were not present, IPTG would not have an inducible effect. [ citation needed ] Another example is the Tac-Promoter system (Ptac). Note that tac is written as a "tac promoter", while in fact tac is both a promoter and an operator. [ 65 ] | https://en.wikipedia.org/wiki/Promoter_(genetics) |
Promoter activity is a term that encompasses several meanings around the process of gene expression from regulatory sequences — promoters [ 2 ] and enhancers . [ 3 ] Gene expression has been commonly characterized as a measure of how much, how fast, when and where this process happens. [ 4 ] Promoters and enhancers are required for controlling where and when a specific gene is transcribed. [ 3 ]
Traditionally, measuring gene products (i.e. mRNA, proteins, etc.) has been the major approach to measuring promoter activity. However, this method confronts two issues: the stochastic nature of gene expression [ 5 ] and the lack of mechanistic interpretation of the thermodynamic process involved in promoter activation. [ 4 ]
Recent developments in metabolomics, a product of advances in next-generation sequencing technologies and molecular structural analysis, have enabled the development of more accurate models of the process of promoter activation (e.g. the sigma structure of the polymerase holoenzyme domains [ 6 ] ) and a better understanding of the complexities of the regulatory factors involved.
The process of binding is central in determining the "strength" of a promoter, that is, the relative estimate of how "well" a promoter performs the expression of a gene under specific circumstances. Brewster et al., [ 7 ] using a simple thermodynamic model based on the postulate that transcriptional activity is proportional to the probability of finding the RNA polymerase bound at the promoter, obtained predictions of the scaling of the RNA polymerase binding energy. This model supports the relationship between the probability of binding and the output of gene expression. [ 7 ]
The problem of gene regulation can be represented mathematically as the probability that n molecules (RNAP, activators, repressors and inducers) are bound to a target region. [ 4 ] [ 2 ]
To compute the probability of binding, one sums the Boltzmann weights over all possible states of P {\displaystyle P} polymerase molecules on DNA. [ 8 ] In this derivation, P {\displaystyle P} is the effective number of RNAP molecules available for binding to the promoter.
This approach is based on the statistical thermodynamics of two possible microscopic outcomes: the promoter unoccupied, or the promoter occupied by an RNA polymerase. [ 4 ]
The statistical weight of the unoccupied promoter, Z ( P ) {\displaystyle Z(P)} , is defined as:
Z(P) = [N_NS! / (P! (N_NS − P)!)] · e^(−P·E_NS / k_B·T) {\displaystyle Z(P)={\frac {N_{NS}!}{P!\,(N_{NS}-P)!}}\,e^{-PE_{NS}/k_{B}T}}
where the first term is the combinatorial number of ways of placing the P {\displaystyle P} polymerases on the N N S {\displaystyle N_{NS}} available non-specific sites, and the exponential is the Boltzmann weight, in which E N S {\displaystyle E_{NS}} is the average binding energy of an RNA polymerase to the genomic background (non-specific sites), so that the P {\displaystyle P} non-specifically bound polymerases contribute a total energy of P E N S {\displaystyle PE_{NS}} .
Then the total statistical weight, Z ( P t o t a l ) {\displaystyle Z(P_{total})} , can be written as the sum of the unoccupied state Z ( P ) {\displaystyle Z(P)} and the state with one RNA polymerase on the promoter:
Z(P_total) = Z(P) + Z(P − 1) · e^(−E_S / k_B·T) {\displaystyle Z(P_{total})=Z(P)+Z(P-1)\,e^{-E_{S}/k_{B}T}}
where E S {\displaystyle E_{S}} in the Z ( P − 1 ) {\displaystyle Z(P-1)} term is the binding energy of RNA polymerase on the promoter (the subscript S stands for the specific site).
Finally, to find the probability that an RNA polymerase is bound ( P r o b b o u n d {\displaystyle Prob_{bound}} ) to a specific promoter, the statistical weight of the bound state, Z ( P − 1 ) e − E S / k B T {\displaystyle Z(P-1)\,e^{-E_{S}/k_{B}T}} , is divided by Z ( P t o t a l ) {\displaystyle Z(P_{total})} , which produces:
Prob_bound = 1 / [1 + (N_NS / P) · e^(ΔE / k_B·T)] {\displaystyle Prob_{bound}={\frac {1}{1+{\frac {N_{NS}}{P}}\,e^{\Delta E/k_{B}T}}}}
where Δ E = E S − E N S {\displaystyle \Delta E=E_{S}-E_{NS}} is the difference between the specific and non-specific binding energies.
An important result of this model is that any transcription factor, regulator or perturbation can be introduced as a term multiplying P {\displaystyle P} in the binding probability equation. For any transcription factor (here called a regulation factor), this term modifies the probability of binding to:
Prob_bound = 1 / [1 + (N_NS / (P·F_R)) · e^(ΔE / k_B·T)] {\displaystyle Prob_{bound}={\frac {1}{1+{\frac {N_{NS}}{P\,F_{R}}}\,e^{\Delta E/k_{B}T}}}}
where F R {\displaystyle F_{R}} is the term for transcription factors; it takes a value F R > 1 {\displaystyle F_{R}>1} for an increase, or F R < 1 {\displaystyle F_{R}<1} for a decrease, in the effective number of RNA polymerases available to bind.
This result is significant because it allows all possible configurations of transcription factors to be represented mathematically, by deriving different models to estimate F R {\displaystyle F_{R}} (for further developments, see also [ 4 ] ).
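A minimal numerical sketch of this model follows, implementing the binding-probability expression above. The parameter values (a genome-sized number of non-specific sites, a free-RNAP count of roughly a thousand, and ΔE values of a few k_BT) are illustrative assumptions in the spirit of the E. coli estimates used in this literature, not measured quantities.

```python
import math

def p_bound(P, N_NS, dE_kT, F_R=1.0):
    """Probability that RNAP occupies the promoter.

    P      : effective number of RNAP molecules available for binding
    N_NS   : number of non-specific background sites (~genome length)
    dE_kT  : Delta E = E_S - E_NS in units of k_B*T (more negative = stronger)
    F_R    : regulation factor (>1 activation, <1 repression)
    """
    return 1.0 / (1.0 + (N_NS / (P * F_R)) * math.exp(dE_kT))

# Illustrative E. coli-like numbers: ~5e6 non-specific sites, ~1000 free RNAP.
for dE in (-2.0, -5.0, -8.0):
    print(f"dE = {dE:5.1f} kT  ->  p_bound = {p_bound(1000, 5e6, dE):.4f}")

# A 10-fold activating regulation factor raises occupancy of a weak promoter:
print(f"with F_R = 10: p_bound = {p_bound(1000, 5e6, -2.0, F_R=10.0):.4f}")
```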
The process of activation and binding in eukaryotes differs from that in bacteria in the way that specific DNA elements bind the factors needed for a functional pre-initiation complex. In bacteria there is a single RNA polymerase, which contains catalytic subunits and a single regulatory subunit known as sigma , and which transcribes all types of genes. [ 9 ]
In eukaryotes, transcription is performed by three different RNA polymerases: RNA polymerase I for ribosomal RNAs (rRNAs), RNA polymerase II for messenger RNAs (mRNAs) and some small regulatory RNAs, and RNA polymerase III for small RNAs such as transfer RNAs (tRNAs). Positioning of RNA polymerase II and the transcriptional machinery requires recognition of a region known as the "core promoter". [ 9 ] The elements that can be found in the core promoter include the TATA element, the TFIIB recognition element (BRE), the initiator (Inr), and the downstream core promoter element (DPE). [ 10 ] Promoters in eukaryotes contain one or more of these core promoter elements (but none of them is absolutely essential for promoter function); [ 9 ] these elements are binding sites for subunits of the transcriptional machinery and are involved in the initiation of transcription, but they also have some specific enhancer functions. [ 10 ] In addition, promoter activity in eukaryotes involves additional complexity in how signals from distal factors are integrated with the core promoter. [ 11 ]
Unlike in protein coding regions, where the assumption of sequence conservation of functionally homologous genes has frequently been confirmed, there is no clear relationship between the sequences of regulatory regions and their functions. [ 12 ] Transcriptional promoter regions are under less stringent selection and thus have higher substitution rates, allowing transcription factor binding sites to be replaced easily by new ones arising from random mutations. [ 12 ] Notwithstanding the sequence changes, the functions of regulatory sequences mainly remain conserved. [ 12 ]
In recent years, with the increasing availability of genome sequences, phylogenetic footprinting has opened the possibility of identifying cis-elements and then studying their evolutionary processes. In this vein, Raijman et al. [ 13 ] and Dermitzakis et al. [ 14 ] have developed techniques for analyzing evolutionary processes in transcription factor binding regions, in promoters of Saccharomyces species and in mammalian regulatory networks, respectively.
The basis for many of these evolutionary changes in nature is probably related to events within the cis-regulatory regions involved in gene expression. [ 15 ] Variation in regulatory regions is important for disease risk [ 14 ] due to its impact on gene expression levels. Furthermore, perturbations in the binding properties of proteins encoded by regulatory genes have been linked with phenotypic effects such as duplicated structures, homeotic transformations and novel morphologies. [ 15 ]
Measuring promoter activity has a broad meaning: promoter activity can be measured for different situations or research questions, [ 4 ] such as:
Methods to study promoter activity are commonly based on the expression of a reporter gene from the promoter of the gene of interest. [ 16 ] [ 2 ] [ 17 ] Mutations and deletions are made in the promoter region, and the resulting changes in expression of the coupled reporter gene are measured. [ 18 ]
The most important reporter genes are fluorescent proteins such as GFP . These reporters allow promoter activation to be measured as an increase in the fluorescence signal, and deactivation as a decrease in the rate of fluorescence. [ 19 ]
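One common way to turn such fluorescence time series into a promoter-activity estimate is to correct the rate of change of the reporter signal for reporter loss (degradation and dilution) and normalize by cell density. The sketch below implements that simple estimator; the loss rate gamma and the toy data are illustrative assumptions, not values from any particular study.

```python
import numpy as np

def promoter_activity(t, F, OD, gamma=0.01):
    """Per-cell reporter synthesis rate: (dF/dt + gamma * F) / OD.

    t     : time points (min)
    F     : total fluorescence of the culture
    OD    : optical density (proxy for cell number)
    gamma : assumed reporter degradation/dilution rate (1/min)
    """
    dFdt = np.gradient(F, t)
    return (dFdt + gamma * F) / OD

# Toy data: exponentially growing culture with a constant-activity promoter.
t = np.linspace(0, 300, 61)        # minutes
OD = 0.05 * np.exp(0.01 * t)       # growth rate 0.01 per minute
F = 50.0 * OD                      # fluorescence tracks biomass
print(promoter_activity(t, F, OD)[:3])  # approximately constant activity
```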
The RNA world hypothesis assumes that very early in evolution, prior to the emergence of DNA as the genetic material and prior to the emergence of protein enzymes , RNA was the key player in the emergence of life. [ 20 ] A central idea in this hypothesis is an RNA replicase ( ribozyme ) that is capable of copying its own genome . [ 21 ] A holopolymerase ribozyme has been engineered that uses a sigma factor -like specificity primer to recognize an RNA promoter sequence. [ 22 ] This ribozyme can then, in a second step, rearrange into a processive form that can polymerize from certain RNA promoters and not others. [ 22 ] | https://en.wikipedia.org/wiki/Promoter_activity |
Promoter bashing is a technique used in molecular biology to identify how certain regions of a DNA strand , commonly promoters , affect the transcription of downstream genes. Under normal circumstances, proteins bind to the promoter and activate or repress transcription. In a promoter bashing assay , specific point mutations or deletions are made in specific regions of the promoter, and the transcription of the gene is then measured. The contribution of a region of the promoter can be observed from the resulting level of transcription: if a mutation or deletion changes the level of transcription, then that region of the promoter may be a binding site or other regulatory element. [ 1 ] [ 2 ] [ 3 ]
Promoter bashing is often done with deletions from either the 5' or 3' end of the DNA strand; this version of the assay is easier to perform because it relies on repeated restriction digestion and gel-purification of fragments of specific sizes. It is often easiest to ligate the promoter into the reporter, generate a large amount of the reporter construct using PCR or growth in bacteria, and then perform serial restriction digests on this sample. The contribution of upstream elements can be easily assayed by removing segments from the 5' end, and likewise for downstream elements by removing segments from the 3' end of the strand. [ 4 ]
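As a toy illustration of designing such a nested deletion series in silico, the sketch below enumerates progressively shorter 5' truncations of a promoter region, each of which would then be fused to a reporter and assayed. The step size and placeholder sequence are arbitrary assumptions.

```python
# Generate a nested 5' deletion series of a promoter region; each
# fragment keeps its 3' end (the end adjacent to the reporter) intact.

def five_prime_deletion_series(promoter: str, step: int = 50):
    """Yield (bases_deleted, fragment) for progressively shorter fragments."""
    for start in range(0, len(promoter), step):
        yield start, promoter[start:]

promoter = "N" * 300  # placeholder 300-bp promoter sequence
for deleted, frag in five_prime_deletion_series(promoter):
    # Each fragment would be ligated upstream of a reporter (e.g. GFP,
    # luciferase) and its output compared with the full-length promoter.
    print(f"5' deletion of {deleted:3d} bp -> fragment of {len(frag):3d} bp")
```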
As the promoter commonly contains binding sequences for proteins affecting transcription, those proteins are also necessary when testing the effects of the promoter. Proteins which associate with the promoter can be identified using an electrophoretic mobility shift assay (EMSA), and the effects of inclusion or exclusion of the proteins with the mutagenized promoters can be assessed in the assay. This allows the use of promoter bashing to not only discover the location on the DNA strand which affects transcription, but also the proteins which affect that strand. The effects of protein interactions with each other as well as the binding sites can also be assayed in this way; candidate proteins must instead be identified by protein/protein interaction assays instead of an EMSA. [ 5 ]
This is an example procedure for a promoter bashing assay, adapted from Boulin et al. : [ 6 ]
From the data obtained by assaying the different promoter constructs, the effects of the various parts of the promoter can be ascertained. However, it is possible that there is not enough data, in which case the assay must be re-run with a different promoter region and/or different mutations. | https://en.wikipedia.org/wiki/Promoter_bashing |
The Promoting Women in Entrepreneurship Act ( Pub. L. 115–6 (text) (PDF) , H.R. 255 ) is a public law amendment to the Science and Engineering Equal Opportunities Act ( Pub. L. 96–516 ) to authorize the National Science Foundation to encourage its entrepreneurial programs to recruit and support women to extend their focus beyond the laboratory and into the commercial world.
The Promoting Women in Entrepreneurship Act was introduced in the United States House of Representatives on January 4, 2017, by Representative Elizabeth Esty of Connecticut and signed into law by President Donald Trump on February 28, 2017.
According to the Bureau of Labor Statistics, women account for 47 percent of the workforce but make up only 25.6 percent of computer and mathematical occupations [ citation needed ] . In addition, only 15.4 percent of architecture and engineering jobs are filled by women. Congress also found that only 26 percent of women who earned STEM degrees actually worked in STEM-related jobs. The president stated, “( Pub. L. 96–516 ) enables the National Science Foundation to support women inventors – of which there are many – researchers and scientists in bringing their discoveries to the business world, championing science and entrepreneurship and creating new ways to improve people’s lives.” Trump signed the bill in a room full of women, including Representative Barbara Comstock , who introduced the Inspire Women Act , Senator Heidi Heitkamp , and First Lady Melania Trump . The bill was supported by both parties, with 36 Democrats and 8 Republicans signing as co-sponsors.
The bill was designed primarily to improve the programs in place at the National Science Foundation in order to encourage more women to enter STEM fields. The Science and Engineering Equal Opportunities Act allocates funding for educational programs and for research in STEM fields, and this bill adds the ability for the Foundation to allocate new funding toward incentivizing women to join its educational and entrepreneurial programs. There has been little recent news regarding this act and its effects, and the expected results have yet to come to fruition. However, the act still represents a trend within the Trump administration with regard to technology and women. The president said that this issue was "going to be addressed by my administration over the years with more and more of these bills coming out and address the barriers faced by female entrepreneurs and by those in STEM fields." Despite this, since the day the law was signed, the Trump administration has not given a statement regarding future legislation that would further improve the numbers of women in science and technology. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
This article relating to law in the United States or its constituent jurisdictions is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Promoting_Women_in_Entrepreneurship_Act |
In nuclear engineering , prompt criticality describes a nuclear fission event in which criticality (the threshold for an exponentially growing nuclear fission chain reaction) is achieved with prompt neutrons alone and does not rely on delayed neutrons . As a result, prompt supercriticality causes a much more rapid growth in the rate of energy release than other forms of criticality. Nuclear weapons are based on prompt criticality, while nuclear reactors rely on delayed neutrons or external neutrons to achieve criticality.
An assembly is critical if each fission event causes, on average, exactly one additional such event in a continual chain. Such a chain is a self-sustaining fission chain reaction . When a uranium -235 (U-235) atom undergoes nuclear fission , it typically releases between one and seven neutrons (with an average of 2.4). In this situation, an assembly is critical if every released neutron has a 1 / 2.4 = 0.42 = 42 % probability of causing another fission event as opposed to either being absorbed by a non-fission capture event or escaping from the fissile core.
The average number of neutrons that cause new fission events is called the effective neutron multiplication factor , usually denoted by the symbols k-effective , k-eff or k . When k-effective is equal to 1, the assembly is called critical, if k-effective is less than 1 the assembly is said to be subcritical, and if k-effective is greater than 1 the assembly is called supercritical.
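A crude Monte Carlo sketch of this bookkeeping follows: each fission emits a small random number of neutrons with a mean of about 2.4, and each neutron independently causes a further fission with probability p, so k-effective is approximately 2.4·p. The emission distribution and parameter values are deliberate simplifications for illustration, not a neutron-transport calculation.

```python
import random

def simulate_chain(p, n0=100, generations=10, seed=1):
    """Track fissions per generation; k_eff is roughly 2.4 * p."""
    rng = random.Random(seed)
    fissions, history = n0, [n0]
    for _ in range(generations):
        new = 0
        for _ in range(fissions):
            # Crude neutron-multiplicity stand-in with mean 2.4.
            neutrons = rng.choices((1, 2, 3, 4), weights=(15, 40, 35, 10))[0]
            new += sum(rng.random() < p for _ in range(neutrons))
        fissions = new
        history.append(fissions)
        if fissions == 0:
            break
    return history

# Subcritical (k ~ 0.96), near-critical (k ~ 1.00), supercritical (k ~ 1.20):
for p in (0.40, 0.417, 0.50):
    print(f"p = {p:.3f}: {simulate_chain(p)}")
```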
In a supercritical assembly, the number of fissions per unit time, N , along with the power production, increases exponentially with time. How fast it grows depends on the average time it takes, T , for the neutrons released in a fission event to cause another fission. The growth of the reaction is then given by: N(t) = N(0) · e^((k − 1)·t / T) {\displaystyle N(t)=N(0)\,e^{(k-1)t/T}}
Most of the neutrons released by a fission event are released in the fission itself. These are called prompt neutrons; they strike other nuclei and cause additional fissions within nanoseconds (an average time interval used by scientists in the Manhattan Project was one shake , or 10 ns). A small additional source of neutrons is the fission products : some of the nuclei resulting from fission are radioactive isotopes with short half-lives , and their decay releases additional neutrons after a delay of up to several minutes after the initial fission event. These neutrons, which on average account for less than one percent of the total neutrons released by fission, are called delayed neutrons. The relatively slow timescale on which delayed neutrons appear is an important aspect of the design of nuclear reactors, as it allows the reactor power level to be controlled via the gradual, mechanical movement of control rods. Typically, control rods contain neutron poisons (substances, for example boron or hafnium , that easily capture neutrons without producing any additional ones) as a means of altering k-effective . With the exception of experimental pulsed reactors, nuclear reactors are designed to operate in a delayed-critical mode and are provided with safety systems to prevent them from ever achieving prompt criticality.
In a delayed-critical assembly, the delayed neutrons are needed to make k-effective greater than one. Thus the time between successive generations of the reaction, T , is dominated by the time it takes for the delayed neutrons to be released, of the order of seconds or minutes. Therefore, the reaction will increase slowly, with a long time constant. This is slow enough to allow the reaction to be controlled with electromechanical control systems such as control rods , and accordingly all nuclear reactors are designed to operate in the delayed-criticality regime.
In contrast, a critical assembly is said to be prompt-critical if it is critical ( k = 1 ) without any contribution from delayed neutrons and prompt-supercritical if it is supercritical (the fission rate growing exponentially, k > 1 ) without any contribution from delayed neutrons. In this case the time between successive generations of the reaction, T , is limited only by the fission rate from the prompt neutrons, and the increase in the reaction will be extremely rapid, causing a rapid release of energy within a few milliseconds. Prompt-critical assemblies are created by design in nuclear weapons and some specially designed research experiments.
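To see the scale of the difference between the delayed-critical and prompt-critical regimes, the short sketch below evaluates the growth law above for illustrative values; the chosen k and generation times are assumptions for exposition, not data for any particular assembly.

```python
import math

def growth_factor(k: float, T: float, t: float) -> float:
    """Return N(t)/N(0) = exp((k - 1) * t / T) for generation time T in seconds."""
    return math.exp((k - 1) * t / T)

k = 1.001  # slightly supercritical (illustrative value)

# Prompt neutrons dominate: generation time of about one shake (1e-8 s).
print(growth_factor(k, T=1e-8, t=1e-3))  # ~2.7e43 after one millisecond

# Delayed neutrons dominate: effective generation time of ~0.1 s (assumed).
print(growth_factor(k, T=0.1, t=1.0))    # ~1.01 after a full second
```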
The difference between a prompt neutron and a delayed neutron has to do with the source from which the neutron has been released into the reactor. The neutrons, once released, have no difference except for the energy or speed that has been imparted to them. A nuclear weapon relies heavily on prompt-supercriticality (to produce a high peak power in a fraction of a second), whereas nuclear power reactors use delayed-criticality to produce controllable power levels for months or years.
In order to start up a controllable fission reaction, the assembly must be delayed-critical. In other words, k must be greater than 1 (supercritical) without crossing the prompt-critical threshold. In nuclear reactors this is possible due to delayed neutrons. Because it takes some time before these neutrons are emitted following a fission event, it is possible to control the nuclear reaction using control rods.
A steady-state (constant power) reactor is operated so that it is critical due to the delayed neutrons, but would not be so without their contribution. During a gradual and deliberate increase in reactor power level, the reactor is delayed-supercritical. The exponential increase of reactor activity is slow enough to make it possible to control the criticality factor, k , by inserting or withdrawing rods of neutron absorbing material. Using careful control rod movements, it is thus possible to achieve a supercritical reactor core without reaching an unsafe prompt-critical state.
Once a reactor plant is operating at its target or design power level, it can be operated to maintain its critical condition for long periods of time.
Nuclear reactors can be susceptible to prompt-criticality accidents if a large increase in reactivity (or k-effective ) occurs, e.g., following failure of their control and safety systems. The rapid uncontrollable increase in reactor power in prompt-critical conditions is likely to irreparably damage the reactor and in extreme cases, may breach the containment of the reactor. Nuclear reactors' safety systems are designed to prevent prompt criticality and, for defense in depth , reactor structures also provide multiple layers of containment as a precaution against any accidental releases of radioactive fission products .
With the exception of research and experimental reactors, only a small number of reactor accidents are thought to have achieved prompt criticality, for example Chernobyl #4 , the U.S. Army's SL-1 , and Soviet submarine K-431 . In all these examples the uncontrolled surge in power was sufficient to cause an explosion that destroyed each reactor and released radioactive fission products into the atmosphere.
At Chernobyl in 1986, a poorly understood positive scram effect resulted in an overheated reactor core. This led to the rupturing of the fuel elements and water pipes, vaporization of water, a steam explosion , and a graphite fire. Estimates of the power level just before the explosion suggest that the reactor operated in excess of 30 GW, ten times its 3 GW maximum thermal output. The reactor chamber's 2000-ton lid was lifted by the steam explosion. Since the reactor was not designed with a containment building capable of containing this catastrophic explosion, the accident released large amounts of radioactive material into the environment.
In the other two incidents, the reactor plants failed due to errors during a maintenance shutdown: the rapid and uncontrolled removal of at least one control rod. The SL-1 was a prototype reactor intended for use by the US Army in remote polar locations. At the SL-1 plant in 1961, the reactor was brought from shutdown to prompt critical state by manually extracting the central control rod too far. As the water in the core quickly converted to steam and expanded (in just a few milliseconds), the 26,000-pound (12,000 kg) reactor vessel jumped 9 feet 1 inch (2.77 m), leaving impressions in the ceiling above. [ 1 ] [ 2 ] All three men performing the maintenance procedure died from injuries. 1,100 curies of fission products were released as parts of the core were expelled. It took 2 years to investigate the accident and clean up the site. The excess prompt reactivity of the SL-1 core was calculated in a 1962 report: [ 3 ]
The delayed neutron fraction of the SL-1 is 0.70%... Conclusive evidence revealed that the SL-1 excursion was caused by the partial withdrawal of the central control rod. The reactivity associated with the 20-inch withdrawal of this one rod has been estimated to be 2.4% δk/k, which was sufficient to induce prompt criticality and place the reactor on a 4 millisecond period.
In the K-431 reactor accident, 10 people were killed during a refueling operation. The K-431 explosion destroyed the adjacent machinery rooms and ruptured the submarine's hull. In these two catastrophes, the reactor plants went from complete shutdown to extremely high power levels in a fraction of a second, damaging the reactor plants beyond repair.
A number of research reactors and tests have purposely examined the operation of a prompt critical reactor plant. CRAC , KEWB , SPERT-I , Godiva device , and BORAX experiments contributed to this research. Many accidents have also occurred, however, primarily during research and processing of nuclear fuel. SL-1 is the notable exception.
The following list of prompt critical power excursions is adapted from a report submitted in 2000 by a team of American and Russian nuclear scientists who studied criticality accidents , published by the Los Alamos Scientific Laboratory, the location of many of the excursions. [ 4 ] A typical power excursion is about 1 × 10 17 {\displaystyle 1\times 10^{17}} fissions.
In the design of nuclear weapons , in contrast, achieving prompt criticality is essential. Indeed, one of the design problems to overcome in constructing a bomb is to compress the fissile materials enough to achieve prompt criticality before the chain reaction has a chance to produce enough energy to cause the core to expand too much. A good bomb design must therefore win the race to a dense, prompt critical core before a less-powerful chain reaction disassembles the core without allowing a significant amount of fuel to fission (known as a fizzle ). This generally means that nuclear bombs need special attention paid to the way the core is assembled, such as the implosion method invented by Richard C. Tolman , Robert Serber , and other scientists at the University of California, Berkeley in 1942. | https://en.wikipedia.org/wiki/Prompt_criticality |
Prompt-gamma neutron activation analysis ( PGAA ) is a very widely applicable technique for determining the presence and amount of many elements simultaneously in samples ranging in size from micrograms to many grams. It is a non-destructive method, and the chemical form and shape of the sample are relatively unimportant. Typical measurements take from a few minutes to several hours per sample.
The technique can be described as follows. The sample is continuously irradiated with a beam of neutrons . The constituent elements of the sample absorb some of these neutrons and emit prompt gamma rays which are measured with a gamma ray spectrometer . The energies of these gamma rays identify the neutron-capturing elements, while the intensities of the peaks at these energies reveal their concentrations. The amount of analyte element is given by the ratio of count rate of the characteristic peak in the sample to the rate in a known mass of the appropriate elemental standard irradiated under the same conditions.
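Written as a formula, the comparator relation just described takes the following shape; the symbols are chosen here only for illustration. If R x {\displaystyle R_{x}} and R s {\displaystyle R_{s}} are the count rates of the characteristic peak for the sample and for the standard, and m s {\displaystyle m_{s}} is the known mass of the element in the standard, then the mass in the sample is
m x = m s ⋅ R x R s {\displaystyle m_{x}=m_{s}\cdot {\frac {R_{x}}{R_{s}}}} .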
Typically, the sample will not acquire significant long-lived radioactivity, and the sample may be removed from the facility and used for other purposes. One of the typical applications of PGAA is an online belt elemental analyzer or bulk material analyzer used in cement , coal and mineral industries. [ 1 ]
This atomic, molecular, and optical physics –related article is a stub . You can help Wikipedia by expanding it .
This article about analytical chemistry is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Prompt_gamma_neutron_activation_analysis |
Prone position ( / p r oʊ n / ) is a body position in which the person lies flat with the chest down and the back up. In anatomical terms of location , the dorsal side is up, and the ventral side is down. The supine position is the 180° contrast.
The word prone , meaning "naturally inclined to something, apt, liable," has been recorded in English since 1382; the meaning "lying face-down" was first recorded in 1578. The position is also referred to as "lying down" or "going prone."
Prone derives from the Latin pronus , meaning "bent forward, inclined to," from the adverbial form of the prefix pro- "forward." Both the original, literal, and the derived figurative sense were used in Latin, but the figurative is older in English.
In anatomy , the prone position is a position of the human body lying face down. It is opposed to the supine position which is face up. Using the terms defined in the anatomical position , the ventral side is down, and the dorsal side is up.
Concerning the forearm, prone refers to that configuration where the palm of the hand is directed posteriorly, and the radius and ulna are crossed.
Researchers observed that the expiratory reserve volume measured at relaxation volume increased from supine to prone by a factor of 0.15. [ 1 ]
In competitive shooting , the prone position is the position of a shooter lying face down on the ground. It is considered the easiest and most accurate position as the ground provides extra stability. It is one of the positions in three positions events. For many years (1932–2016), the only purely prone Olympic event was the 50 meter rifle prone ; however, this has since been dropped from the Olympic program. Both men and women still have the 50 meter rifle three positions as an Olympic shooting event .
Prone position is often used in military combat because, as in competitive shooting, the prone position provides the best accuracy and stability (as well as making it more difficult for enemies to see the shooter or to hit the shooter with their own fire).
Many first-person shooter video games also allow the player character to go into the prone position, again with similar benefits. In other types of video games where this is not a factor, such as platformers , the prone position may be used to dodge attacks or crawl under obstacles.
In the "50 meter rifle prone" International Shooting Sport Federation standard event, as used in the Olympics and other shooting competitions, contestants shoot a .22 LR calibre ("smallbore") rifle over a course of fire of 60 shots to count in 50 minutes (when using electronic targets ). [ 2 ] These are shot after an unlimited number of sighting shots, which must be shot during the 15-minute preparation and sighting period. If necessary, an 'elimination' course of fire may be undertaken to reduce the number of shooters to the number that may fire simultaneously in a 'qualification' round. Up until 2013, each shot could score from 0 to 10 points, with no decimal points (e.g. 0,1,2,3,4,5,6,7,8,9,10, but not 3.2 or 9.8, etc.) making the maximum score for elimination or qualification round 600 points. After 2013, shots are scored as decimal values (e.g. 9.8 rather than what would have been a 9 under integer scoring), so the maximum score from a 60 shot match is 654.0.
Up until 2018, the top eight shooters in the qualification round were selected to shoot 'shot-for-shot' in an 'Olympic' final. Prior to 2013, this consisted of ten additional shots scored to one decimal place, so the maximum possible score was 109.0. This score was then added to the score for the qualification round; this summed score was used to determine final rankings and thus medallists. Starting in the 2013 season and continuing to the beginning of the 2018 season, a new finals format was introduced, where again the top eight shooters in the qualification round shot against each other, only this time with the qualification score being discarded and the number of shots being raised to 24. These shots were still scored decimally, so the maximum possible score under this new format was 261.6. From January 2018, the final for this event was discarded entirely; competition rankings were determined by the score obtained in the 60-shot match only.
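All of these maxima follow from the value of a perfect decimally scored shot, 10.9 points:
60 × 10.9 = 654.0 , 10 × 10.9 = 109.0 , 24 × 10.9 = 261.6 {\displaystyle 60\times 10.9=654.0,\qquad 10\times 10.9=109.0,\qquad 24\times 10.9=261.6} .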
The non-ISSF fullbore disciplines governed by the International Confederation of Fullbore Rifle Associations (ICFRA) are exclusively shot from the prone position over distances of 300–1,200 yards. [ 3 ] These disciplines are popular in Commonwealth countries, and are heavily influenced by the British National Rifle Association .
In Biathlon, prone is one of two positions that athletes shoot from, along with standing. Shooting takes place at "knock down" targets which indicate a simple hit or miss with no scoring rings. [ 4 ]
In the UK, the National Smallbore Rifle Association (NSRA) governs "smallbore" shooting with .22LR calibre rifles. "Short-range" is defined as distances between 15 yards and 25 metres 'indoors'. Targets are generally outward gauging (touching a ring on the target scores the lower of the two adjacent scores), except on some of the Schools and older targets (e.g. 5 bull targets). Being indoors, no allowance is necessary for wind, light or other changes. Shots are scored as integer values from 0 to 10, with no decimal places. "Long-range" smallbore shooting is generally over either 50 yards, 50 metres or 100 yards distance outdoors. Targets vary, but generally, the ISSF 50M (scaled) is used for 50 yards or 50 metres, and a proportionally sized target is used for 100 yards. A 50-yard, 50-metre or 100-yard target is generally constructed to allow 20 shots to count, to be executed during one 'detail' of 20 minutes duration. Sighting shots are included in that time period.
Outdoors, variables such as light, wind, temperature, humidity and mirage affect the target image and bullet trajectory. To help shooters, most ranges have wind flags placed at useful positions around the range to display the wind conditions.
The prone position is also used to describe the way pilots and other crew may be positioned in an aircraft; lying on their stomachs rather than seated in a normal upright position. [ 5 ] During World War II , the bomb aimer in some bombers would be positioned this way to be better able to view the ground through a transparent panel or bubble in the nose of a bomber. Later, it was suggested that a pilot in the prone position might be more effective in some kinds of high-speed aircraft, because it would permit the pilot to withstand a greater g -force in the upward and downward direction with respect to the plane, and many speculative designs of the 1950s featured this arrangement. [ 6 ] [ 7 ] However, it never became mainstream, as testing revealed that the increased difficulty of operating aircraft controls in the prone position outweighed the advantages. Three examples of this approach are seen in the Savoia-Marchetti SM.93 , the Gloster Meteor F8 "Prone Pilot" , and the Northrop XP-79 . Modern hang gliders are typically piloted in the prone position. | https://en.wikipedia.org/wiki/Prone_position |
Pronova BioPharma is a Norwegian company and a bulk manufacturer of omega-3 products, with a manufacturing plant in Kalundborg , Denmark. It was acquired by BASF in 2013. [ 1 ]
Pronova BioPharma ASA had its roots in Norway's codfish liver oil industry. The company was founded in 1991 as a spinout from the JC Martens company, which in turn was founded in 1838 in Bergen, Norway. [ 2 ] Pronova developed the concentrated omega-3-acid ethyl esters formulation that is the active pharmaceutical ingredient of Lovaza. [ 3 ]
Pronova won approvals to market the drug, called Omacor in Europe (and initially in the US), in several European countries in 2001 after conducting a three-and-a-half-year trial in 11,000 subjects. [ 4 ] The company partnered with other companies like Pierre Fabre in France. [ 5 ] In 2004, Pronova licensed the US and Puerto Rican rights to Reliant Therapeutics, whose business model was in-licensing of cardiovascular drugs. [ 6 ] In that same year, Reliant and Pronova won FDA approval for the drug, [ 7 ] and it was launched in the US and Europe in 2005. Global sales in 2005 were $144M, and by 2008, they were $778M. [ 8 ] By 17 November 2010, the constituent companies of the OSEAX included Pronova BioPharma. [ 9 ]
In 2009, generic companies Teva Pharmaceuticals and Par Pharmaceutical made clear their intentions to file Abbreviated New Drug Applications (“ANDAs”) to bring generics to market, and in April 2009, Pronova sued them for infringing the key US patents covering Lovaza, US 5,656,667 (due to expire in April 2017) and US 5,502,077 (due to expire in March 2013). Subsequently, in May 2012, a district court ruled in Pronova's favor, saying that the patents were valid. [ 10 ] [ 11 ] [ 12 ] The generic companies appealed, and in September 2013, the Federal Circuit reversed: more than one year before Pronova's predecessor company applied for a patent, it had sent samples of the fish oil used in Lovaza to a researcher for testing, and this event constituted "public use" that invalidated the patent in question. [ 13 ] [ 14 ] Generic versions of Lovaza were introduced in America in April 2014. [ 15 ]
Pronova has continued to manufacture the ingredients in Lovaza, and in 2012, BASF announced it would acquire Pronova for $844 million. [ 16 ] The deal closed in 2013. [ 17 ]
Pronova BioPharma is a commercial partner in MabCent-SFI .
| https://en.wikipedia.org/wiki/Pronova_BioPharma
A pronucleus ( pl. : pronuclei ) denotes the nucleus found in either a sperm or egg cell during the process of fertilization . The sperm cell undergoes a transformation into a pronucleus after entering the egg cell but prior to the fusion of the genetic material of both the sperm and egg. In contrast, the egg cell possesses a pronucleus once it becomes haploid , not upon the arrival of the sperm cell. Haploid cells, such as sperm and egg cells in humans, carry half the number of chromosomes present in somatic cells , with 23 chromosomes compared to the 46 found in somatic cells. It is noteworthy that the male and female pronuclei do not physically merge, although their genetic material does. Instead, their membranes dissolve, eliminating any barriers between the male and female chromosomes, facilitating the combination of their chromosomes into a single diploid nucleus in the resulting embryo , which contains a complete set of 46 chromosomes.
The presence of two pronuclei serves as the initial indication of successful fertilization, often observed around 18 hours after insemination , or intracytoplasmic sperm injection (ICSI) during in vitro fertilization . At this stage, the zygote is termed a two-pronuclear zygote (2PN). Two-pronuclear zygotes transitioning through 1PN or 3PN states tend to yield poorer-quality embryos compared to those maintaining 2PN status throughout development, [ 1 ] and this distinction may hold significance in the selection of embryos during in vitro fertilization (IVF) procedures.
The pronucleus was discovered in the 1870s, in microscopic studies made possible by improved staining techniques and higher-magnification microscopes. The pronucleus was originally found during the first studies on meiosis . Edouard Van Beneden published a paper in 1875 in which he first mentions the pronucleus, based on studies of the eggs of rabbits and bats. He stated that the two pronuclei come together in the center of the cell to form the embryonic nucleus. Van Beneden also found that the sperm enters the cell through the membrane in order to form the male pronucleus. In 1876, Oscar Hertwig did a study on sea urchin eggs; because the eggs of sea urchins are transparent, they allowed much better magnification of the egg. Hertwig confirmed Van Beneden's finding of the pronucleus, and also found that the formation of the female pronucleus involves the formation of polar bodies . [ 2 ]
The female pronucleus is the female egg cell once it has become a haploid cell , and the male pronucleus forms when the sperm enters the female egg. Although the sperm develops inside the male testes , it does not become a pronucleus until it rapidly decondenses inside the female egg. [ 3 ] When the sperm reaches the female egg, it loses its outer membrane as well as its tail, since neither is needed any longer: the purpose of the cell membrane was to protect the DNA from the acidic vaginal fluid , and the purpose of the tail was to help move the sperm cell to the egg cell. The formation of the female egg is asymmetrical, while the formation of the male sperm is symmetrical. Typically in a female mammal, meiosis starts with one diploid cell and produces one haploid ovum and typically two polar bodies ; however, one polar body may later divide to form a third. [ 4 ] In a male, meiosis starts with one diploid cell and ends with four sperm. [ 5 ] In mammals, the female pronucleus starts in the center of the egg before fertilization. When the male pronucleus is formed, after the sperm cell reaches the egg, the two pronuclei migrate towards each other. However, in the brown alga Pelvetia , the egg pronucleus starts in the center of the egg before fertilization and remains in the center after fertilization. This is because in the egg cells of Pelvetia the egg pronucleus is anchored in place by microtubules, so only the male pronucleus migrates towards the female pronucleus. [ 6 ]
The calcium concentration within the egg cell cytoplasm has a very important role in the formation of an activated female egg. If there is no calcium influx, the female diploid cell will produce three pronuclei, rather than only one. This is due to the failure of release of the second polar body. [ 7 ]
In sea urchins, the formation of the zygote starts with the fusion of the inner and outer nuclear membranes of the male and female pronuclei. It is unknown whether one of the pronuclei initiates the combination of the two, or whether the microtubules that help dissolve the membranes commence the action. [ 8 ] The microtubules that bring the two pronuclei together come from the sperm's centrosome . One study strongly supports the view that microtubules are an important part of the fusion of the pronuclei: vinblastine is a chemotherapy drug that affects both the plus and minus ends of microtubules, [ 9 ] and when vinblastine is added to the ovum, there is a high rate of pronuclear fusion failure. This high rate of failure strongly suggests that microtubules play a major role in the fusion of the pronuclei. [ 10 ] In mammals, the pronuclei last in the cell for only about twelve hours, because the genetic material of the two pronuclei fuses within the egg cell. Many studies of pronuclei have used the egg cells of sea urchins, where the pronuclei persist in the egg cell for less than an hour. The main difference between the fusion of genetic material in mammals and in sea urchins is that in sea urchins the pronuclei go directly to forming a zygote nucleus, whereas in mammalian egg cells the chromatin from the pronuclei forms chromosomes that merge onto the same mitotic spindle . The diploid nucleus in mammals is first seen at the two-cell stage, whereas in sea urchins it is first found at the zygote stage. [ 3 ]
Proof-carrying code ( PCC ) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i.e. preventing issues like buffer overflows ).
Proof-carrying code was originally described in 1996 by George Necula and Peter Lee .
The original publication on proof-carrying code in 1996 [ 1 ] used packet filters as an example: a user-mode application hands the kernel a function, written in machine code, that determines whether the application is interested in processing a particular network packet. Because the packet filter runs in kernel mode , it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access ( software fault isolation ), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches have performance disadvantages for code as frequently run as a packet filter, except for the in-kernel compilation approach, which only compiles the code when it is loaded, not every time it is executed.
With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, the packet filter will not access memory outside of the packet and its scratch memory area. A theorem prover is used to show that the machine code satisfies this policy. The steps of this proof are recorded and attached to the machine code which is given to the kernel program loader. The program loader can then rapidly validate the proof, allowing it to thereafter run the machine code without any additional checks. If a malicious party modifies either the machine code or the proof, the resulting proof-carrying code is either invalid or harmless (still satisfies the security policy). | https://en.wikipedia.org/wiki/Proof-carrying_code |
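The division of labor can be sketched in a few lines of Python; everything here is a toy stand-in under stated assumptions (real PCC attaches a formal logical derivation to native machine code, and the kernel's checker validates that derivation, not a simple range claim as below).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bundle:
    code: str              # untrusted code, standing in for machine code
    claimed_access: range  # stand-in "proof": the memory range the code touches

def check_proof(policy: range, bundle: Bundle) -> bool:
    """Checking is cheap for the host: it validates the supplied proof against
    its own policy rather than re-deriving safety from scratch."""
    return (bundle.claimed_access.start >= policy.start
            and bundle.claimed_access.stop <= policy.stop)

def load_filter(policy: range, bundle: Bundle) -> None:
    # The kernel trusts only its own checker and policy, never the code producer.
    if not check_proof(policy, bundle):
        raise PermissionError("proof rejected: filter not installed")
    print("installed packet filter:", bundle.code)

# Policy: the filter may touch only the packet plus scratch area, bytes 0-2047.
load_filter(range(0, 2048), Bundle("inspect_ip_header", range(0, 64)))
```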
Proof-theoretic semantics is an approach to the semantics of logic that attempts to locate the meaning of propositions and logical connectives not in terms of interpretations , as in Tarskian approaches to semantics, but in the role that the proposition or logical connective plays within a system of inference .
Gerhard Gentzen is the founder of proof-theoretic semantics, providing the formal basis for it in his account of cut-elimination for the sequent calculus , and some provocative philosophical remarks about locating the meaning of logical connectives in their introduction rules within natural deduction . The history of proof-theoretic semantics since then has been devoted to exploring the consequences of these ideas. [ citation needed ]
Dag Prawitz extended Gentzen's notion of analytic proof to natural deduction , and suggested that the value of a proof in natural deduction may be understood as its normal form. [ citation needed ] This idea lies at the basis of the Curry–Howard isomorphism , and of intuitionistic type theory . His inversion principle lies at the heart of most modern accounts of proof-theoretic semantics.
Michael Dummett introduced the very fundamental idea of logical harmony , building on a suggestion of Nuel Belnap . In brief, a language , which is understood to be associated with certain patterns of inference, has logical harmony if it is always possible to recover analytic proofs from arbitrary demonstrations, as can be shown for the sequent calculus by means of cut-elimination theorems and for natural deduction by means of normalisation theorems. A language that lacks logical harmony will suffer from the existence of incoherent forms of inference: it will likely be inconsistent.
This logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Proof-theoretic_semantics |
A proof is sufficient evidence or a sufficient argument for the truth of a proposition . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The concept applies in a variety of disciplines, [ 5 ] with both the nature of the evidence or justification and the criteria for sufficiency being area-dependent. In the area of oral and written communication such as conversation , dialog , rhetoric , etc., a proof is a persuasive perlocutionary speech act , which demonstrates the truth of a proposition. [ 6 ] In any area of mathematics defined by its assumptions or axioms , a proof is an argument establishing a theorem of that area via accepted rules of inference starting from those axioms and from other previously established theorems. [ 7 ] The subject of logic , in particular proof theory , formalizes and studies the notion of formal proof . [ 8 ] In some areas of epistemology and theology , the notion of justification plays approximately the role of proof, [ 9 ] while in jurisprudence the corresponding term is evidence , [ 10 ] with "burden of proof" as a concept common to both philosophy and law .
In most disciplines, evidence is required to prove something. Evidence is drawn from the experience of the world around us, with science obtaining its evidence from nature , [ 11 ] law obtaining its evidence from witnesses and forensic investigation , [ 12 ] and so on. A notable exception is mathematics, whose proofs are drawn from a mathematical world begun with axioms and further developed and enriched by theorems proved earlier.
Exactly what evidence is sufficient to prove something is also strongly area-dependent, usually with no absolute threshold of sufficiency at which evidence becomes proof. [ 13 ] [ 14 ] In law, the same evidence that may convince one jury may not persuade another. Formal proof provides the main exception, where the criteria for proofhood are ironclad and it is impermissible to defend any step in the reasoning as "obvious" (except for the necessary ability of all parties to correctly identify any symbol used in the proof); [ 15 ] for a well-formed formula to qualify as part of a formal proof, it must be the result of applying a rule of the deductive apparatus of some formal system to the previous well-formed formulae in the proof sequence. [ 16 ]
Proofs have been presented since antiquity. Aristotle used the observation that patterns of nature never display the machine-like uniformity of determinism as proof that chance is an inherent part of nature. [ 17 ] On the other hand, Thomas Aquinas used the observation of the existence of rich patterns in nature as proof that nature is not ruled by chance. [ 18 ]
Proofs need not be verbal. Before Copernicus , people took the apparent motion of the Sun across the sky as proof that the Sun went round the Earth . [ 19 ] Suitably incriminating evidence left at the scene of a crime may serve as proof of the identity of the perpetrator. Conversely, a verbal entity need not assert a proposition to constitute a proof of that proposition. For example, a signature constitutes direct proof of authorship ; less directly, handwriting analysis may be submitted as proof of authorship of a document. [ 20 ] Privileged information in a document can serve as proof that the document's author had access to that information; such access might in turn establish the location of the author at certain time, which might then provide the author with an alibi .
18th-century Scottish philosopher David Hume built on Aristotle 's separation of belief from knowledge , [ 21 ] recognizing that one can be said to "know" something only if one has firsthand experience with it, in a strict sense proof, while one can infer that something is true and therefore "believe" it without knowing, via evidence or supposition. This speaks to one way of separating proof from evidence:
If one cannot find their chocolate bar, and sees chocolate on their napping roommate's face, this evidence can cause one to believe their roommate ate the chocolate bar. But they do not know their roommate ate it. It may turn out that the roommate put the candy away when straightening up, but was thus inspired to go eat their own chocolate. Only if one directly experiences proof of the roommate eating it, perhaps by walking in on them doing so, would one have certain knowledge , in Hume's sense, that the roommate did it.
In a more strict sense of sure knowledge, one may be unable to prove anything to a rational certainty beyond that of the existence of one's immediate sensory awareness. Descartes famously raised a similarly strict standard with his first principle Cogito, ergo sum (I think, therefore I am). While Descartes' larger project in Meditations on First Philosophy has knowledge of God and the external world—founded on the certainty of the cogito—as its aim, his legacy in doing so is to have shown that one cannot have such proof, because all perceptions could be false (such as under the evil demon or simulated reality hypotheses). One nevertheless can still have clear proof of the existence of one's thought, even if belief in the external world lacks the certainty of demonstration beyond that of one's own firsthand experience. | https://en.wikipedia.org/wiki/Proof_(truth) |
In computer science and mathematical logic , a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human–machine collaboration. This involves some sort of interactive proof editor, or other interface , with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer .
A recent effort within this field is making these tools use artificial intelligence to automate the formalization of ordinary mathematics. [ 1 ]
A popular front-end for proof assistants is the Emacs -based Proof General, developed at the University of Edinburgh .
Coq includes CoqIDE, which is based on OCaml/ Gtk . Isabelle includes Isabelle/jEdit, which is based on jEdit and the Isabelle/ Scala infrastructure for document-oriented proof processing. More recently, Visual Studio Code extensions have been developed for Coq, [ 9 ] Isabelle by Makarius Wenzel, [ 10 ] and for Lean 4 by the leanprover developers. [ 11 ]
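As a small illustration of the interaction style such systems support, here are two elementary proofs in Lean 4; they show only the flavor of term-style and tactic-style proving, not the scale of real formalizations.

```lean
-- Term-style: the kernel certifies the proof term; `rfl` closes the goal
-- by computation.
example : 2 + 2 = 4 := rfl

-- Tactic-style: the human guides the search; the machine checks each step.
example (p q : Prop) (hp : p) (hq : q) : q ∧ p := by
  constructor
  · exact hq
  · exact hp
```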
Freek Wiedijk has been keeping a ranking of proof assistants by the amount of formalized theorems out of a list of 100 well-known theorems. As of September 2023, only five systems have formalized proofs of more than 70% of the theorems, namely Isabelle, HOL Light, Rocq, Lean, and Metamath. [ 12 ] [ 13 ]
The following is a list of notable proofs that have been formalized within proof assistants. | https://en.wikipedia.org/wiki/Proof_assistant |
In logic , proof by contradiction is a form of proof that establishes the truth or the validity of a proposition by showing that assuming the proposition to be false leads to a contradiction .
Although it is quite freely used in mathematical proofs, not every school of mathematical thought accepts this kind of nonconstructive proof as universally valid. [ 1 ]
More broadly, proof by contradiction is any form of argument that establishes a statement by arriving at a contradiction, even when the initial assumption is not the negation of the statement to be proved. In this general sense, proof by contradiction is also known as indirect proof , proof by assuming the opposite , [ 2 ] and reductio ad impossibile . [ 3 ]
A mathematical proof employing proof by contradiction usually proceeds as follows: the proposition to be proved, P , is assumed to be false; from this assumption, together with previously established results, a contradiction is deduced; and since assuming P to be false leads to a contradiction, it is concluded that P is in fact true.
An important special case is the existence proof by contradiction: in order to demonstrate that an object with a given property exists, we derive a contradiction from the assumption that all objects satisfy the negation of the property.
The principle may be formally expressed as the propositional formula ¬¬ P ⇒ P , equivalently (¬ P ⇒ ⊥) ⇒ P , which reads: "If assuming P to be false implies falsehood, then P is true."
In natural deduction the principle takes the form of the rule of inference
¬ ¬ P P {\displaystyle {\frac {\lnot \lnot P}{P}}}
which reads: "If ¬ ¬ P {\displaystyle \lnot \lnot P} is proved, then P {\displaystyle P} may be concluded."
In sequent calculus the principle is expressed by the sequent
Γ , ¬ ¬ P ⊢ P , Δ {\displaystyle \Gamma ,\lnot \lnot P\vdash P,\Delta }
which reads: "Hypotheses Γ {\displaystyle \Gamma } and ¬ ¬ P {\displaystyle \lnot \lnot P} entail the conclusion P {\displaystyle P} or Δ {\displaystyle \Delta } ."
In classical logic the principle may be justified by the examination of the truth table of the proposition ¬¬P ⇒ P , which demonstrates it to be a tautology :
P | ¬P | ¬¬P | ¬¬P ⇒ P
T | F | T | T
F | T | F | T
Another way to justify the principle is to derive it from the law of the excluded middle , as follows. We assume ¬¬P and seek to prove P . By the law of excluded middle P either holds or it does not: if P holds, then of course P holds; if ¬P holds, then together with the assumption ¬¬P it produces a contradiction, and the principle of explosion allows us to conclude P .
In either case, we established P . It turns out that, conversely, proof by contradiction can be used to derive the law of excluded middle.
In classical sequent calculus LK proof by contradiction is derivable from the inference rules for negation: starting from the axiom P ⊢ P {\displaystyle P\vdash P} , the right negation rule gives ⊢ P , ¬ P {\displaystyle \vdash P,\lnot P} , and the left negation rule applied to ¬ P {\displaystyle \lnot P} then yields ¬ ¬ P ⊢ P {\displaystyle \lnot \lnot P\vdash P} .
Proof by contradiction is similar to refutation by contradiction , [ 4 ] [ 5 ] also known as proof of negation , which states that ¬P is proved as follows: in order to prove ¬P , assume P and derive falsehood.
In contrast, proof by contradiction proceeds as follows: in order to prove P , assume ¬P and derive falsehood.
Formally these are not the same, as refutation by contradiction applies only when the proposition to be proved is negated, whereas proof by contradiction may be applied to any proposition whatsoever. [ 6 ] In classical logic, where P {\displaystyle P} and ¬ ¬ P {\displaystyle \neg \neg P} may be freely interchanged, the distinction is largely obscured. Thus in mathematical practice, both principles are referred to as "proof by contradiction".
Proof by contradiction is equivalent to the law of the excluded middle , first formulated by Aristotle , which states that either an assertion or its negation is true, P ∨ ¬P .
The law of noncontradiction was first stated as a metaphysical principle by Aristotle . It posits that a proposition and its negation cannot both be true, or equivalently, that a proposition cannot be both true and false. Formally the law of non-contradiction is written as ¬(P ∧ ¬P) and read as "it is not the case that a proposition is both true and false". The law of non-contradiction neither follows from nor implies the principle of proof by contradiction.
The laws of excluded middle and non-contradiction together mean that exactly one of P and ¬P is true.
In intuitionistic logic proof by contradiction is not generally valid, although some particular instances can be derived. In contrast, proof of negation and principle of noncontradiction are both intuitionistically valid. [ 7 ]
Brouwer–Heyting–Kolmogorov interpretation of proof by contradiction gives the following intuitionistic validity condition: if there is no method for establishing that a proposition is false, then there is a method for establishing that the proposition is true. [ clarify ]
If we take "method" to mean algorithm , then the condition is not acceptable, as it would allow us to solve the Halting problem . To see how, consider the statement H(M) stating " Turing machine M halts or does not halt". Its negation ¬H(M) states that " M neither halts nor does not halt", which is false by the law of noncontradiction (which is intuitionistically valid). If proof by contradiction were intuitionistically valid, we would obtain an algorithm for deciding whether an arbitrary Turing machine M halts, thereby violating the (intuitionistically valid) proof of non-solvability of the Halting problem .
A proposition P which satisfies ¬ ¬ P ⇒ P {\displaystyle \lnot \lnot P\Rightarrow P} is known as a ¬¬-stable proposition . Thus in intuitionistic logic proof by contradiction is not universally valid, but can only be applied to the ¬¬-stable propositions. An instance of such a proposition is a decidable one, i.e., satisfying P ∨ ¬ P {\displaystyle P\lor \lnot P} . Indeed, the above proof that the law of excluded middle implies proof by contradiction can be repurposed to show that a decidable proposition is ¬¬-stable. A typical example of a decidable proposition is a statement that can be checked by direct computation, such as " n {\displaystyle n} is prime" or " a {\displaystyle a} divides b {\displaystyle b} ".
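The contrast is easy to exhibit in a modern proof assistant. In Lean 4, for example, refutation by contradiction is immediate from the definition of negation, while proof by contradiction goes through the classical axioms; the following is a minimal sketch.

```lean
-- Intuitionistically valid: to prove ¬P, assume P and derive falsehood.
-- In Lean, ¬P is definitionally P → False, so the hypothesis itself suffices.
example (P : Prop) (h : P → False) : ¬P := h

-- Valid only classically: proof by contradiction, i.e. ¬¬-elimination.
example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h
```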
An early occurrence of proof by contradiction can be found in Euclid's Elements , Book 1, Proposition 6: [ 8 ] if in a triangle two angles equal one another, then the sides opposite the equal angles also equal one another.
The proof proceeds by assuming that the opposite sides are not equal, and derives a contradiction.
An influential proof by contradiction was given by David Hilbert . His Nullstellensatz states:
Hilbert proved the statement by assuming that there are no such polynomials g 1 , … , g k {\displaystyle g_{1},\ldots ,g_{k}} and derived a contradiction. [ 9 ]
Euclid's theorem states that there are infinitely many primes. In Euclid's Elements the theorem is stated in Book IX, Proposition 20: [ 10 ] prime numbers are more than any assigned multitude of prime numbers.
Depending on how we formally write the above statement, the usual proof takes either the form of a proof by contradiction or a refutation by contradiction. We present here the former; see below for how the proof is done as refutation by contradiction.
If we formally express Euclid's theorem as saying that for every natural number n {\displaystyle n} there is a prime bigger than it, then we employ proof by contradiction, as follows.
Given any number n {\displaystyle n} , we seek to prove that there is a prime larger than n {\displaystyle n} . Suppose to the contrary that no such prime exists (an application of proof by contradiction). Then all primes are smaller than or equal to n {\displaystyle n} , and we may form the list p 1 , … , p k {\displaystyle p_{1},\ldots ,p_{k}} of them all. Let P = p 1 ⋅ … ⋅ p k {\displaystyle P=p_{1}\cdot \ldots \cdot p_{k}} be the product of all primes and Q = P + 1 {\displaystyle Q=P+1} . Because Q {\displaystyle Q} is larger than all prime numbers it is not prime, hence it must be divisible by one of them, say p i {\displaystyle p_{i}} . Now both P {\displaystyle P} and Q {\displaystyle Q} are divisible by p i {\displaystyle p_{i}} , hence so is their difference Q − P = 1 {\displaystyle Q-P=1} , but this cannot be because 1 is not divisible by any primes. Hence we have a contradiction and so there is a prime number bigger than n {\displaystyle n} .
The following examples are commonly referred to as proofs by contradiction, but formally employ refutation by contradiction (and therefore are intuitionistically valid). [ 11 ]
Let us take a second look at Euclid's theorem – Book IX, Proposition 20: [ 10 ] prime numbers are more than any assigned multitude of prime numbers.
We may read the statement as saying that for every finite list of primes, there is another prime not on that list,
which is arguably closer to and in the same spirit as Euclid's original formulation. In this case Euclid's proof applies refutation by contradiction at one step, as follows.
Given any finite list of prime numbers p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} , it will be shown that at least one additional prime number not in this list exists. Let P = p 1 ⋅ p 2 ⋯ p n {\displaystyle P=p_{1}\cdot p_{2}\cdots p_{n}} be the product of all the listed primes and p {\displaystyle p} a prime factor of P + 1 {\displaystyle P+1} , possibly P + 1 {\displaystyle P+1} itself. We claim that p {\displaystyle p} is not in the given list of primes. Suppose to the contrary that it were (an application of refutation by contradiction). Then p {\displaystyle p} would divide both P {\displaystyle P} and P + 1 {\displaystyle P+1} , therefore also their difference, which is 1 {\displaystyle 1} . This gives a contradiction, since no prime number divides 1.
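The construction in this argument is effective, and a few lines of Python (purely illustrative, using naive trial division) show it in action:

```python
def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n >= 2 by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def prime_not_in(primes: list[int]) -> int:
    """Euclid's construction: a prime factor of (product + 1) divides none of
    the listed primes, so it cannot appear in the list."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)

# 2*3*5*7*11*13 + 1 = 30031 = 59 * 509, so this prints 59.
print(prime_not_in([2, 3, 5, 7, 11, 13]))
```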
The classic proof that the square root of 2 is irrational is a refutation by contradiction. [ 12 ] Indeed, we set out to prove the negation ¬ ∃ a, b ∈ N {\displaystyle \mathbb {N} } . a/b = √ 2 by assuming that there exist natural numbers a and b whose ratio is the square root of two, and derive a contradiction.
Proof by infinite descent is a method of proof whereby a smallest object with desired property is shown not to exist as follows: one assumes that a smallest object with the desired property exists, and then shows that an even smaller object with that property must also exist, contradicting the minimality of the first.
Such a proof is again a refutation by contradiction. A typical example is the proof of the proposition "there is no smallest positive rational number": assume there is a smallest positive rational number q and derive a contradiction by observing that q / 2 is even smaller than q and still positive.
Russell's paradox , stated set-theoretically as "there is no set whose elements are precisely those sets that do not contain themselves", is a negated statement whose usual proof is a refutation by contradiction.
Proofs by contradiction sometimes end with the word "Contradiction!". Isaac Barrow and Baermann used the notation Q.E.A., for " quod est absurdum " ("which is absurd"), along the lines of Q.E.D. , but this notation is rarely used today. [ 13 ] A graphical symbol sometimes used for contradictions is a downwards zigzag arrow "lightning" symbol (U+21AF: ↯), for example in Davey and Priestley. [ 14 ] Others sometimes used include a pair of opposing arrows (as → ← {\displaystyle \rightarrow \!\leftarrow } [ citation needed ] or ⇒ ⇐ {\displaystyle \Rightarrow \!\Leftarrow } ), [ citation needed ] struck-out arrows ( ↮ {\displaystyle \nleftrightarrow } ), [ citation needed ] a stylized form of hash (such as U+2A33: ⨳), [ citation needed ] or the "reference mark" (U+203B: ※), [ citation needed ] or × × {\displaystyle \times \!\!\!\!\times } . [ 15 ] [ 16 ]
G. H. Hardy described proof by contradiction as "one of a mathematician's finest weapons", saying "It is a far finer gambit than any chess gambit : a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game." [ 17 ]
In automated theorem proving the method of resolution is based on proof by contradiction. That is, in order to show that a given statement is entailed by given hypotheses, the automated prover assumes the hypotheses and the negation of the statement, and attempts to derive a contradiction. [ 18 ] | https://en.wikipedia.org/wiki/Proof_by_contradiction |
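As a minimal worked example of the method (the clauses are chosen here only for illustration): to show that Q {\displaystyle Q} is entailed by the hypotheses P {\displaystyle P} and P ⇒ Q {\displaystyle P\Rightarrow Q} , a resolution prover forms the clause set { ¬ P ∨ Q , P , ¬ Q } {\displaystyle \{\lnot P\lor Q,\,P,\,\lnot Q\}} , consisting of the hypotheses together with the negated goal. Resolving ¬ P ∨ Q {\displaystyle \lnot P\lor Q} with P {\displaystyle P} yields Q {\displaystyle Q} , which then resolves with ¬ Q {\displaystyle \lnot Q} to produce the empty clause, the sought contradiction; hence the entailment holds.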
In mathematics , a proof by infinite descent , also known as Fermat's method of descent, is a particular kind of proof by contradiction [ 1 ] used to show that a statement cannot possibly hold for any number, by showing that if the statement were to hold for a number, then the same would be true for a smaller number, leading to an infinite descent and ultimately a contradiction. [ 2 ] It is a method which relies on the well-ordering principle , and is often used to show that a given equation, such as a Diophantine equation , has no solutions. [ 3 ] [ 4 ]
Typically, one shows that if a solution to a problem existed, which in some sense was related to one or more natural numbers , it would necessarily imply that a second solution existed, which was related to one or more 'smaller' natural numbers. This in turn would imply a third solution related to smaller natural numbers, implying a fourth solution, therefore a fifth solution, and so on. However, there cannot be an infinity of ever-smaller natural numbers, and therefore by mathematical induction , the original premise—that any solution exists—is incorrect: its correctness produces a contradiction .
An alternative way to express this is to assume one or more solutions or examples exists, from which a smallest solution or example—a minimal counterexample —can then be inferred. Once there, one would try to prove that if a smallest solution exists, then it must imply the existence of a smaller solution (in some sense), which again proves that the existence of any solution would lead to a contradiction.
The earliest uses of the method of infinite descent appear in Euclid's Elements . [ 3 ] A typical example is Proposition 31 of Book 7, in which Euclid proves that every composite integer is divided (in Euclid's terminology "measured") by some prime number. [ 2 ]
The method was much later developed by Fermat , who coined the term and often used it for Diophantine equations . [ 4 ] [ 5 ] Two typical examples are showing the non-solvability of the Diophantine equation r 2 + s 4 = t 4 {\displaystyle r^{2}+s^{4}=t^{4}} and proving Fermat's theorem on sums of two squares , which states that an odd prime p can be expressed as a sum of two squares when p ≡ 1 ( mod 4 ) {\displaystyle p\equiv 1{\pmod {4}}} (see Modular arithmetic and proof by infinite descent ). In this way Fermat was able to show the non-existence of solutions in many cases of Diophantine equations of classical interest (for example, the problem of four perfect squares in arithmetic progression ).
In some cases, to the modern eye, his "method of infinite descent" is an exploitation of the inversion of the doubling function for rational points on an elliptic curve E . The context is of a hypothetical non-trivial rational point on E . Doubling a point on E roughly doubles the length of the numbers required to write it (as number of digits), so that "halving" a point gives a rational with smaller terms. Since the terms are positive, they cannot decrease forever.
In the number theory of the twentieth century, the infinite descent method was taken up again, and pushed to a point where it connected with the main thrust of algebraic number theory and the study of L-functions . The structural result of Mordell , that the rational points on an elliptic curve E form a finitely-generated abelian group , used an infinite descent argument based on E /2 E in Fermat's style.
To extend this to the case of an abelian variety A , André Weil had to make more explicit the way of quantifying the size of a solution, by means of a height function – a concept that became foundational. To show that A ( Q )/2 A ( Q ) is finite, which is certainly a necessary condition for the finite generation of the group A ( Q ) of rational points of A , one must do calculations in what later was recognised as Galois cohomology . In this way, abstractly-defined cohomology groups in the theory become identified with descents in the tradition of Fermat. The Mordell–Weil theorem was at the start of what later became a very extensive theory.
The proof that the square root of 2 ( √ 2 ) is irrational (i.e. cannot be expressed as a fraction of two whole numbers) was discovered by the ancient Greeks , and is perhaps the earliest known example of a proof by infinite descent. Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational . Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it. [ 6 ] [ 7 ] [ 8 ] The square root of two is occasionally called "Pythagoras' number" or "Pythagoras' Constant", for example Conway & Guy (1996) . [ 9 ]
The ancient Greeks , not having algebra , worked out a geometric proof by infinite descent ( John Horton Conway presented another geometric proof by infinite descent that may be more accessible [ 10 ] ). The following is an algebraic proof along similar lines:
Suppose that √ 2 were rational . Then it could be written as
√ 2 = p q {\displaystyle {\sqrt {2}}={\frac {p}{q}}}
for two natural numbers, p and q . Then squaring would give
2 = p 2 q 2 {\displaystyle 2={\frac {p^{2}}{q^{2}}}} , that is, 2 q 2 = p 2 {\displaystyle 2q^{2}=p^{2}} ,
so 2 must divide p 2 . Because 2 is a prime number , it must also divide p , by Euclid's lemma . So p = 2 r , for some integer r .
But then,
2 q 2 = p 2 = ( 2 r ) 2 = 4 r 2 {\displaystyle 2q^{2}=p^{2}=(2r)^{2}=4r^{2}} , so q 2 = 2 r 2 {\displaystyle q^{2}=2r^{2}} ,
which shows that 2 must divide q as well. So q = 2 s for some integer s .
This gives
√ 2 = p q = 2 r 2 s = r s {\displaystyle {\sqrt {2}}={\frac {p}{q}}={\frac {2r}{2s}}={\frac {r}{s}}} .
Therefore, if √ 2 could be written as a rational number, then it could always be written as a rational number with smaller parts, which itself could be written with yet-smaller parts, ad infinitum . But this is impossible in the set of natural numbers . Since √ 2 is a real number , which can be either rational or irrational, the only option left is for √ 2 to be irrational. [ 11 ]
(Alternatively, this proves that if √ 2 were rational, no "smallest" representation as a fraction could exist, as any attempt to find a "smallest" representation p / q would imply that a smaller one existed, which is a similar contradiction.)
For positive integer k , suppose that √ k is not an integer, but is rational and can be expressed as m / n for natural numbers m and n , and let q be the largest integer less than √ k (that is, q is the floor of √ k ). Then
√ k = m n = m ( √ k − q ) n ( √ k − q ) = k n − m q m − n q {\displaystyle {\sqrt {k}}={\frac {m}{n}}={\frac {m({\sqrt {k}}-q)}{n({\sqrt {k}}-q)}}={\frac {kn-mq}{m-nq}}}
where the last step uses m √ k = k n {\displaystyle m{\sqrt {k}}=kn} and n √ k = m {\displaystyle n{\sqrt {k}}=m} , both consequences of m = n √ k {\displaystyle m=n{\sqrt {k}}} .
The numerator and denominator were each multiplied by the expression ( √ k − q )—which is positive but less than 1—and then simplified independently. So, the resulting products, say m′ and n′ , are themselves integers, and are less than m and n respectively. Therefore, no matter what natural numbers m and n are used to express √ k , there exist smaller natural numbers m′ < m and n′ < n that have the same ratio. But infinite descent on the natural numbers is impossible, so this disproves the original assumption that √ k could be expressed as a ratio of natural numbers. [ 12 ]
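The descent step can also be checked numerically. The sketch below applies one step of the map described above for k = 5 {\displaystyle k=5} ; the starting fraction is an arbitrary illustrative approximation of √ 5 {\displaystyle {\sqrt {5}}} .

```python
from math import isqrt

def descend(k: int, m: int, n: int) -> tuple[int, int]:
    """One descent step: if sqrt(k) = m/n exactly, then also
    sqrt(k) = (k*n - q*m) / (m - q*n), with smaller positive terms."""
    q = isqrt(k)  # floor of sqrt(k); assumes k is not a perfect square
    return k * n - q * m, m - q * n

m, n = 2889, 1292       # 2889/1292 = 2.2360681..., close to sqrt(5)
m2, n2 = descend(5, m, n)
print(m2, n2, m2 / n2)  # 682 305 2.2360655...: smaller terms, same target
```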
The non-solvability of r 2 + s 4 = t 4 {\displaystyle r^{2}+s^{4}=t^{4}} in integers is sufficient to show the non-solvability of q 4 + s 4 = t 4 {\displaystyle q^{4}+s^{4}=t^{4}} in integers, which is a special case of Fermat's Last Theorem , and the historical proofs of the latter proceeded by more broadly proving the former using infinite descent. The following more recent proof demonstrates both of these impossibilities by proving still more broadly that a Pythagorean triangle cannot have any two of its sides each either a square or twice a square, since there is no smallest such triangle: [ 13 ]
Suppose there exists such a Pythagorean triangle. Then it can be scaled down to give a primitive (i.e., with no common factors other than 1) Pythagorean triangle with the same property. Primitive Pythagorean triangles' sides can be written as x = 2 a b , {\displaystyle x=2ab,} y = a 2 − b 2 , {\displaystyle y=a^{2}-b^{2},} z = a 2 + b 2 {\displaystyle z=a^{2}+b^{2}} , with a and b relatively prime and with a+b odd and hence y and z both odd. The property that y and z are each odd means that neither y nor z can be twice a square. Furthermore, if x is a square or twice a square, then each of a and b is a square or twice a square. There are three cases, depending on which two sides are postulated to each be a square or twice a square: the two sides may be y and z , x and z , or x and y .
In any of these cases, one Pythagorean triangle with two sides each of which is a square or twice a square has led to a smaller one, which in turn would lead to a smaller one, etc.; since such a sequence cannot go on infinitely, the original premise that such a triangle exists must be wrong.
This implies that the equations
r 2 + s 4 = t 4 {\displaystyle r^{2}+s^{4}=t^{4}} and q 4 + s 4 = t 4 {\displaystyle q^{4}+s^{4}=t^{4}}
cannot have non-trivial solutions, since non-trivial solutions would give Pythagorean triangles with two sides being squares.
For other similar proofs by infinite descent for the n = 4 case of Fermat's Theorem, see the articles by Grant and Perella [ 14 ] and Barbara. [ 15 ] | https://en.wikipedia.org/wiki/Proof_by_infinite_descent |
Proof by intimidation (or argumentum verbosum ) is a humorous phrase used mainly in mathematics to refer to a specific form of hand-waving whereby one attempts to advance an argument by giving an argument loaded with jargon and obscure results or by marking it as obvious or trivial . [ 1 ] It attempts to intimidate the audience into simply accepting the result without evidence by appealing to their ignorance or lack of understanding. [ 2 ]
The phrase is often used when the author is an authority in their field, presenting their proof to people who respect a priori the author's insistence of the validity of the proof, while in other cases, the author might simply claim that their statement is true because it is trivial or because they say so. Usage of this phrase is for the most part in good humour, though it can also appear in serious criticism. [ 3 ] A proof by intimidation is often associated with phrases such as: "clearly...", "it is obvious that...", "it is trivial to show that...", or "the proof is left as an exercise for the reader".
Outside mathematics, "proof by intimidation" is also cited by critics of junk science , to describe cases in which scientific evidence is thrown aside in favour of dubious arguments—such as those presented to the public by articulate advocates who pose as experts in their field. [ 4 ]
Proof by intimidation may also back valid assertions. Ronald A. Fisher claimed in the book credited with the new evolutionary synthesis, "...by the analogy of compound interest the present value of the future offspring of persons aged x is easily seen to be...", thence presenting a novel integral-laden definition of reproductive value . [ 5 ] At this, Hal Caswell remarked, "With all due respect to Fisher, I have yet to meet anyone who finds this equation 'easily seen.'" [ 6 ] Valid proofs were provided by subsequent researchers such as Leo A. Goodman (1968). [ 7 ]
Whenever I meet in La Place with the words "Thus it plainly appears," I am sure that hours, and perhaps days, of hard study will alone enable me to discover how it plainly appears.
In a memoir, Gian-Carlo Rota claimed that the expression "proof by intimidation" was coined by Mark Kac , to describe a technique used by William Feller in his lectures:
He took umbrage when someone interrupted his lecturing by pointing out some glaring mistake. He became red in the face and raised his voice, often to full shouting range. It was reported that on occasion he had asked the objector to leave the classroom. The expression "proof by intimidation" was coined after Feller's lectures (by Mark Kac). During a Feller lecture, the hearer was made to feel privy to some wondrous secret, one that often vanished by magic as he walked out of the classroom at the end of the period. Like many great teachers, Feller was a bit of a con man.
While Newton’s astonishing grasp of the entire problem of planetary perturbations and the power of his insight are clearly apparent, this part of the Principia is also among the most difficult to grasp because of the paucity of any real explanation and an apparent attempt to conceal details by recourse, too often, to phrases like “hence it comes to pass”, “by like reasoning”, and “it is manifest that” at crucial points of the argument. This “secretive style” is nowhere present, to the same extent, elsewhere in the Principia.
In mathematical logic , a proof calculus or a proof system is built to prove statements .
A proof system includes the following components: [ 1 ] [ 2 ]
A formal proof of a well-formed formula in a proof system is a derivation of that formula from the system's axioms by its rules of inference; the existence of such a derivation establishes that the well-formed formula is a theorem of the proof system. [ 2 ]
Usually a given proof calculus encompasses more than a single particular formal system, since many proof calculi are under-determined and can be used for radically different logics. For example, a paradigmatic case is the sequent calculus , which can be used to express the consequence relations of both intuitionistic logic and relevance logic . Thus, loosely speaking, a proof calculus is a template or design pattern , characterized by a certain style of formal inference, that may be specialized to produce specific formal systems, namely by specifying the actual inference rules for such a system. There is no consensus among logicians on how best to define the term.
The most widely known proof calculi are those classical calculi that are still in widespread use:
Many other proof calculi were, or might have been, seminal, but are not widely used today.
Modern research in logic teems with rival proof calculi: | https://en.wikipedia.org/wiki/Proof_calculus |
In proof theory , an area of mathematical logic , proof compression is the problem of algorithmically compressing formal proofs . The developed algorithms can be used to improve the proofs generated by automated theorem proving tools such as SAT solvers , SMT-solvers , first-order theorem provers and proof assistants .
In propositional logic a resolution proof of a clause κ {\displaystyle \kappa } from a set of clauses C is a directed acyclic graph (DAG): the input nodes are axiom inferences (without premises) whose conclusions are elements of C , the resolvent nodes are resolution inferences, and the proof has a node with conclusion κ {\displaystyle \kappa } . [ 1 ]
The DAG contains an edge from a node η 1 {\displaystyle \eta _{1}} to a node η 2 {\displaystyle \eta _{2}} if and only if a premise of η 1 {\displaystyle \eta _{1}} is the conclusion of η 2 {\displaystyle \eta _{2}} . In this case, η 1 {\displaystyle \eta _{1}} is a child of η 2 {\displaystyle \eta _{2}} , and η 2 {\displaystyle \eta _{2}} is a parent of η 1 {\displaystyle \eta _{1}} . A node with no children is a root.
A proof compression algorithm will try to create a new DAG with fewer nodes that represents a valid proof of κ {\displaystyle \kappa } or, in some cases, a valid proof of a subset of κ {\displaystyle \kappa } .
Let's take a resolution proof for the clause { a , b , c } {\displaystyle \left\{a,b,c\right\}} from the set of clauses
{\displaystyle C=\left\{\left\{a,b,p\right\},\left\{c,\neg p\right\}\right\}.}
Here we can see: the input nodes { a , b , p } {\displaystyle \left\{a,b,p\right\}} and { c , ¬ p } {\displaystyle \left\{c,\neg p\right\}} are axiom inferences whose conclusions are elements of C ; the resolvent node { a , b , c } {\displaystyle \left\{a,b,c\right\}} is obtained by resolving them on the pivot p {\displaystyle p} ; and this resolvent node is the root of the proof.
A (resolution) refutation of C is a resolution proof of ⊥ {\displaystyle \bot } from C . It is common, given a node η {\displaystyle \eta } , to refer to the clause η {\displaystyle \eta } or η {\displaystyle \eta } ’s clause, meaning the conclusion clause of η {\displaystyle \eta } , and to the (sub)proof η {\displaystyle \eta } , meaning the (sub)proof having η {\displaystyle \eta } as its only root.
An algebraic representation of resolution inferences can be found in some works. The resolvent of κ 1 {\displaystyle \kappa _{1}} and κ 2 {\displaystyle \kappa _{2}} with pivot p {\displaystyle p} can be denoted as κ 1 ⊙ p κ 2 {\displaystyle \kappa _{1}\odot _{p}\kappa _{2}} . When the pivot is uniquely defined or irrelevant, we omit it and write simply κ 1 ⊙ κ 2 {\displaystyle \kappa _{1}\odot \kappa _{2}} . In this way, the set of clauses can be seen as an algebra with a commutative operator; and terms in the corresponding term algebra denote resolution proofs in a notation style that is more compact and more convenient for describing resolution proofs than the usual graph notation.
In our last example the notation of the DAG would be { a , b , p } ⊙ p { c , ¬ p } {\displaystyle \left\{a,b,p\right\}\odot _{p}\left\{c,\neg p\right\}} or simply { a , b , p } ⊙ { c , ¬ p } . {\displaystyle \left\{a,b,p\right\}\odot \left\{c,\neg p\right\}.}
We can identify { a , b , p } ⏞ η 1 ⊙ { c , ¬ p } ⏞ η 2 ⏟ η 3 {\displaystyle \underbrace {\overbrace {\left\{a,b,p\right\}} ^{\eta _{1}}\odot \overbrace {\left\{c,\neg p\right\}} ^{\eta _{2}}} _{\eta _{3}}} .
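The resolution operator can be made concrete in a few lines of code. In the following Python sketch (the data representation is illustrative, not taken from the literature), a clause is a frozenset of signed literals and the example above is reproduced:

```python
def resolve(c1, c2, pivot):
    """Resolvent of clauses c1 and c2 on the pivot variable, which must
    occur positively in c1 and negatively in c2."""
    assert (pivot, True) in c1 and (pivot, False) in c2
    return (c1 - {(pivot, True)}) | (c2 - {(pivot, False)})

# The article's example: {a, b, p} resolved with {c, not-p} on pivot p
eta1 = frozenset({("a", True), ("b", True), ("p", True)})
eta2 = frozenset({("c", True), ("p", False)})
eta3 = resolve(eta1, eta2, "p")
print(sorted(eta3))  # [('a', True), ('b', True), ('c', True)], i.e. {a, b, c}
```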
Algorithms for compression of sequent calculus proofs include cut introduction and cut elimination .
Algorithms for compression of propositional resolution proofs include RecycleUnits , [ 2 ] RecyclePivots , [ 2 ] RecyclePivotsWithIntersection , [ 1 ] LowerUnits , [ 1 ] LowerUnivalents , [ 3 ] Split , [ 4 ] Reduce&Reconstruct , [ 5 ] and Subsumption . | https://en.wikipedia.org/wiki/Proof_compression |
In proof theory , a branch of mathematical logic , proof mining (or proof unwinding ) is a research program that studies or analyzes formalized proofs, especially in analysis , to obtain explicit bounds, ranges or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive . [ 1 ] This research has led to improved results in analysis obtained from the analysis of classical proofs.
This mathematical logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Proof_mining |
In proof theory , proof nets are a geometrical method of representing proofs that eliminates two forms of bureaucracy that differentiate proofs: (A) irrelevant syntactical features of regular proof calculi, and (B) the order of rules applied in a derivation. In this way, the formal properties of proof identity correspond more closely to the intuitively desirable properties. This distinguishes proof nets from regular proof calculi such as the natural deduction calculus and the sequent calculus , where these phenomena are present. Proof nets were introduced by Jean-Yves Girard .
As an illustration, these two linear logic proofs are identical:
And their corresponding nets will be the same.
Several correctness criteria are known to check if a sequential proof structure (i.e. something that seems to be a proof net) is actually a concrete proof structure (i.e. something that encodes a valid derivation in linear logic). The first such criterion is the long-trip criterion , [ 1 ] which was described by Jean-Yves Girard .
This logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Proof_net |
In mathematics , Bertrand's postulate (now a theorem ) states that, for each n ≥ 2 {\displaystyle n\geq 2} , there is a prime p {\displaystyle p} such that n < p < 2 n {\displaystyle n<p<2n} . First conjectured in 1845 by Joseph Bertrand , [ 1 ] it was first proven by Chebyshev , and a shorter but also advanced proof was given by Ramanujan . [ 2 ]
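The statement itself is easy to test by machine; the following Python sketch (with a naive trial-division primality test and an arbitrary bound) confirms the postulate for small n:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# For each n, some prime p must satisfy n < p < 2n.
for n in range(2, 5000):
    assert any(is_prime(p) for p in range(n + 1, 2 * n)), n
print("Bertrand's postulate verified for 2 <= n < 5000")
```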
The following elementary proof was published by Paul Erdős in 1932, as one of his earliest mathematical publications. [ 3 ] The basic idea is to show that the central binomial coefficients must have a prime factor within the interval ( n , 2 n ) {\displaystyle (n,2n)} in order to be large enough. This is achieved through analysis of their factorizations.
The main steps of the proof are as follows. First, one shows that the contribution of every prime power factor p r {\displaystyle p^{r}} in the prime decomposition of the central binomial coefficient ( 2 n n ) = ( 2 n ) ! / ( n ! ) 2 {\displaystyle \textstyle {\binom {2n}{n}}=(2n)!/(n!)^{2}} is at most 2 n {\displaystyle 2n} ; then, one shows that every prime larger than 2 n {\displaystyle {\sqrt {2n}}} appears at most once.
The next step is to prove that ( 2 n n ) {\displaystyle {\tbinom {2n}{n}}} has no prime factors in the interval ( 2 n 3 , n ) {\displaystyle ({\tfrac {2n}{3}},n)} . As a consequence of these bounds, the contribution to the size of ( 2 n n ) {\displaystyle {\tbinom {2n}{n}}} coming from the prime factors that are at most n {\displaystyle n} grows asymptotically as θ n {\displaystyle \theta ^{\!\;n}} for some θ < 4 {\displaystyle \theta <4} . Since the asymptotic growth of the central binomial coefficient is at least 4 n / 2 n {\displaystyle 4^{n}\!/2n} , the conclusion is that, by contradiction and for large enough n {\displaystyle n} , the binomial coefficient must have another prime factor, which can only lie between n {\displaystyle n} and 2 n {\displaystyle 2n} .
The argument given is valid for all n ≥ 427 {\displaystyle n\geq 427} . The remaining values of n {\displaystyle n} are verified by direct inspection, which completes the proof.
The proof uses the following four lemmas to establish facts about the primes present in the central binomial coefficients.
For any integer n > 0 {\displaystyle n>0} , we have
{\displaystyle {\frac {4^{n}}{2n}}\leq {\binom {2n}{n}}.}
Proof: Applying the binomial theorem ,
{\displaystyle 4^{n}=(1+1)^{2n}=\sum _{k=0}^{2n}{\binom {2n}{k}}=2+\sum _{k=1}^{2n-1}{\binom {2n}{k}}\leq 2n{\binom {2n}{n}},}
since ( 2 n n ) {\displaystyle {\tbinom {2n}{n}}} is the largest term in the sum in the right-hand side, and the sum has 2 n {\displaystyle 2n} terms (including the initial 2 {\displaystyle 2} outside the summation).
For a fixed prime p {\displaystyle p} , define R = R ( n , p ) {\displaystyle R=R(n,p)} to be the p -adic order of ( 2 n n ) {\displaystyle {\tbinom {2n}{n}}} , that is, the largest natural number r {\displaystyle r} such that p r {\displaystyle p^{r}} divides ( 2 n n ) {\displaystyle {\tbinom {2n}{n}}} .
For any prime p {\displaystyle p} , p R ≤ 2 n {\displaystyle p^{R}\leq 2n} .
Proof: The exponent of p {\displaystyle p} in n ! {\displaystyle n!} is given by Legendre's formula
{\displaystyle \sum _{j=1}^{\infty }\left\lfloor {\frac {n}{p^{j}}}\right\rfloor ,}
so
{\displaystyle R=\sum _{j=1}^{\infty }\left\lfloor {\frac {2n}{p^{j}}}\right\rfloor -2\sum _{j=1}^{\infty }\left\lfloor {\frac {n}{p^{j}}}\right\rfloor =\sum _{j=1}^{\infty }\left(\left\lfloor {\frac {2n}{p^{j}}}\right\rfloor -2\left\lfloor {\frac {n}{p^{j}}}\right\rfloor \right).}
But each term of the last summation must be either zero (if n / p j mod 1 < 1 / 2 {\displaystyle n/p^{j}{\bmod {1}}<1/2} ) or one (if n / p j mod 1 ≥ 1 / 2 {\displaystyle n/p^{j}{\bmod {1}}\geq 1/2} ), and all terms with j > log p ( 2 n ) {\displaystyle j>\log _{p}(2n)} are zero. Therefore,
{\displaystyle R\leq \log _{p}(2n),}
and
{\displaystyle p^{R}\leq p^{\log _{p}(2n)}=2n.}
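Both Legendre's formula and the bound of Lemma 2 can be checked numerically. In the following Python sketch (the bounds are arbitrary), the formula's value R(n, p) is compared against the true p-adic order of the central binomial coefficient, and p^R ≤ 2n is asserted:

```python
from math import comb

def padic_order(x, p):
    """Largest r such that p**r divides x."""
    r = 0
    while x % p == 0:
        x //= p
        r += 1
    return r

def legendre_R(n, p):
    """R(n, p) via Legendre's formula: sum over j of floor(2n/p^j) - 2*floor(n/p^j)."""
    R, pj = 0, p
    while pj <= 2 * n:
        R += (2 * n) // pj - 2 * (n // pj)
        pj *= p
    return R

for n in range(1, 300):
    c = comb(2 * n, n)
    for p in range(2, 2 * n + 1):
        if any(p % d == 0 for d in range(2, int(p**0.5) + 1)):
            continue  # skip composites
        R = legendre_R(n, p)
        assert R == padic_order(c, p)  # formula agrees with direct factorization
        assert p**R <= 2 * n           # Lemma 2
print("Lemma 2 verified for all n < 300")
```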
If p {\displaystyle p} is an odd prime and 2 n 3 < p ≤ n {\displaystyle {\frac {2n}{3}}<p\leq n} , then R ( n , p ) = 0. {\displaystyle R(n,p)=0.}
Proof: There are exactly two factors of p {\displaystyle p} in the numerator of the expression ( 2 n n ) = ( 2 n ) ! / ( n ! ) 2 {\displaystyle {\tbinom {2n}{n}}=(2n)!/(n!)^{2}} , coming from the two terms p {\displaystyle p} and 2 p {\displaystyle 2p} in ( 2 n ) ! {\displaystyle (2n)!} , and also two factors of p {\displaystyle p} in the denominator from one copy of the term p {\displaystyle p} in each of the two factors of n ! {\displaystyle n!} . These factors all cancel, leaving no factors of p {\displaystyle p} in ( 2 n n ) {\displaystyle {\tbinom {2n}{n}}} . (The bound on p {\displaystyle p} in the preconditions of the lemma ensures that 3 p {\displaystyle 3p} is too large to be a term of the numerator, and the assumption that p {\displaystyle p} is odd is needed to ensure that 2 p {\displaystyle 2p} contributes only one factor of p {\displaystyle p} to the numerator.)
An upper bound is supplied for the primorial function,
where the product is taken over all prime numbers p {\displaystyle p} less than or equal to n {\displaystyle n} .
For all n ≥ 1 {\displaystyle n\geq 1} , n # < 4 n {\displaystyle n\#<4^{n}} .
Proof: We use complete induction .
For n = 1 , 2 {\displaystyle n=1,2} we have 1 # = 1 < 4 {\displaystyle 1\#=1<4} and 2 # = 2 < 4 2 = 16 {\displaystyle 2\#=2<4^{2}=16} .
Let us assume that the inequality holds for all 1 ≤ n ≤ 2 k − 1 {\displaystyle 1\leq n\leq 2k-1} . Since n = 2 k > 2 {\displaystyle n=2k>2} is composite, we have
{\displaystyle (2k)\#=(2k-1)\#<4^{2k-1}<4^{2k}.}
Now let us assume that the inequality holds for all 1 ≤ n ≤ 2 k {\displaystyle 1\leq n\leq 2k} . Since ( 2 k + 1 k ) = ( 2 k + 1 ) ! k ! ( k + 1 ) ! {\displaystyle {\binom {2k+1}{k}}={\frac {(2k+1)!}{k!(k+1)!}}} is an integer and all the primes k + 2 ≤ p ≤ 2 k + 1 {\displaystyle k+2\leq p\leq 2k+1} appear only in the numerator, we have
{\displaystyle \prod _{k+2\leq p\leq 2k+1}p\leq {\binom {2k+1}{k}}\leq {\frac {(1+1)^{2k+1}}{2}}=4^{k}.}
Therefore,
{\displaystyle (2k+1)\#=(k+1)\#\prod _{k+2\leq p\leq 2k+1}p\leq 4^{k+1}\cdot 4^{k}=4^{2k+1}.}
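Lemma 4 is likewise easy to verify numerically for small n; this Python sketch maintains a running primorial alongside a running power of 4:

```python
def sieve_primes(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return {p for p, f in enumerate(flags) if f}

N = 10_000
primes = sieve_primes(N)
primorial, power_of_4 = 1, 1
for n in range(1, N + 1):
    power_of_4 *= 4
    if n in primes:
        primorial *= n
    assert primorial < power_of_4, n  # n# < 4^n
print("n# < 4^n verified for all n <=", N)
```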
Assume that there is a counterexample : an integer n ≥ 2 such that there is no prime p with n < p < 2 n .
If 2 ≤ n < 630, then p can be chosen from among the prime numbers 3, 5, 7, 13, 23, 43, 83, 163, 317, 631 (each being the largest prime less than twice its predecessor) such that n < p < 2 n . Therefore, n ≥ 630.
There are no prime factors p of ( 2 n n ) {\displaystyle \textstyle {\binom {2n}{n}}} such that:
Therefore, every prime factor p satisfies p ≤ 2 n /3.
When p > 2 n , {\displaystyle p>{\sqrt {2n}},} the number ( 2 n n ) {\displaystyle \textstyle {2n \choose n}} has at most one factor of p . By Lemma 2 , for any prime p we have p R ( n , p ) ≤ 2 n , and π ( x ) ≤ x − 1 {\displaystyle \pi (x)\leq x-1} since 1 is neither prime nor composite. Then, starting with Lemma 1 and decomposing the right-hand side into its prime factorization, and finally using Lemma 4 , these bounds give:
Therefore
Taking the base-2 logarithm of both sides yields
By concavity of the right-hand side as a function of n , the last inequality is necessarily verified on an interval. Since it holds true for n = 426 {\displaystyle n=426} and it does not for n = 427 {\displaystyle n=427} , we obtain
{\displaystyle n<427.}
But these cases have already been settled, and we conclude that no counterexample to the postulate is possible.
It is possible to reduce the bound to n = 50 {\displaystyle n=50} .
For n ≥ 17 , {\displaystyle n\geq 17,} we get π ( n ) < n 2 − 1 {\displaystyle \pi (n)<{\frac {n}{2}}-1} , so we can say that the product p R {\displaystyle p^{R}} is at most ( 2 n ) 0.5 2 n − 1 {\displaystyle (2n)^{0.5{\sqrt {2n}}-1}} , which gives
which is true for n = 49 {\displaystyle n=49} and false for n = 50 {\displaystyle n=50} . | https://en.wikipedia.org/wiki/Proof_of_Bertrand's_postulate |
Fermat's Last Theorem is a theorem in number theory , originally stated by Pierre de Fermat in 1637 and proven by Andrew Wiles in 1995. The statement of the theorem involves an integer exponent n larger than 2. In the centuries following the initial statement of the result and before its general proof, various proofs were devised for particular values of the exponent n . Several of these proofs are described below, including Fermat's proof in the case n = 4 , which is an early example of the method of infinite descent .
Fermat's Last Theorem states that no three positive integers ( a , b , c ) can satisfy the equation a n + b n = c n for any integer value of n greater than 2. (For n equal to 1, the equation is a linear equation and has a solution for every possible a and b . For n equal to 2, the equation has infinitely many solutions, the Pythagorean triples .)
A solution ( a , b , c ) for a given n leads to a solution for all the factors of n : if h is a factor of n then there is an integer g such that n = gh . Then ( a g , b g , c g ) is a solution for the exponent h :
{\displaystyle (a^{g})^{h}+(b^{g})^{h}=a^{n}+b^{n}=c^{n}=(c^{g})^{h}.}
Therefore, to prove that Fermat's equation has no solutions for n > 2 , it suffices to prove that it has no solutions for n = 4 and for all odd primes p .
For any such odd exponent p , every positive-integer solution of the equation a p + b p = c p corresponds to a general integer solution to the equation a p + b p + c p = 0 . For example, if (3, 5, 8) solves the first equation, then (3, 5, −8) solves the second. Conversely, any solution of the second equation corresponds to a solution to the first. The second equation is sometimes useful because it makes the symmetry between the three variables a , b and c more apparent.
If two of the three numbers ( a , b , c ) can be divided by a fourth number d , then all three numbers are divisible by d . For example, if a and c are divisible by d = 13 , then b is also divisible by 13. This follows from the equation
{\displaystyle b^{n}=c^{n}-a^{n}.}
If the right-hand side of the equation is divisible by 13, then the left-hand side is also divisible by 13. Let g represent the greatest common divisor of a , b , and c . Then ( a , b , c ) may be written as a = gx , b = gy , and c = gz where the three numbers ( x , y , z ) are pairwise coprime . In other words, the greatest common divisor ( GCD ) of each pair equals one:
{\displaystyle \gcd(x,y)=\gcd(x,z)=\gcd(y,z)=1.}
If ( a , b , c ) is a solution of Fermat's equation, then so is ( x , y , z ) , since the equation
{\displaystyle a^{n}+b^{n}=c^{n}}
implies the equation
{\displaystyle x^{n}+y^{n}=z^{n}.}
A pairwise coprime solution ( x , y , z ) is called a primitive solution . Since every solution to Fermat's equation can be reduced to a primitive solution by dividing by their greatest common divisor g , Fermat's Last Theorem can be proven by demonstrating that no primitive solutions exist.
Integers can be divided into even and odd, those that are evenly divisible by two and those that are not. The even integers are ...−4, −2, 0, 2, 4,... whereas the odd integers are ...−3, −1, 1, 3,... . The property of whether an integer is even (or not) is known as its parity . If two numbers are both even or both odd, they have the same parity. By contrast, if one is even and the other odd, they have different parity.
The addition, subtraction and multiplication of even and odd integers obey simple rules. The addition or subtraction of two even numbers or of two odd numbers always produces an even number, e.g., 4 + 6 = 10 and 3 + 5 = 8 . By contrast, the addition or subtraction of an odd and an even number is always odd, e.g., 3 + 8 = 11 . The multiplication of two odd numbers is always odd, but the multiplication of an even number with any number is always even. An odd number raised to a power is always odd and an even number raised to a power is always even, so for example x n has the same parity as x .
Consider any primitive solution ( x , y , z ) to the equation x n + y n = z n . The terms in ( x , y , z ) cannot all be even, for then they would not be coprime; they could all be divided by two. If x n and y n were both even, z n would be even too, so all three would be even; hence at least one of x n and y n is odd. The remaining addend is either even or odd; thus, the parities of the values in the sum are either (odd + even = odd) or (odd + odd = even).
The fundamental theorem of arithmetic states that any natural number can be written in only one way (uniquely) as the product of prime numbers. For example, 42 equals the product of prime numbers 2 × 3 × 7 , and no other product of prime numbers equals 42, aside from trivial rearrangements such as 7 × 3 × 2 . This unique factorization property is the basis on which much of number theory is built.
One consequence of this unique factorization property is that if a p th power of a number equals a product such as
{\displaystyle x^{p}=uv}
and if u and v are coprime (share no prime factors), then u and v are themselves the p th power of two other numbers, u = r p and v = s p .
As described below, however, some number systems do not have unique factorization. This fact led to the failure of Lamé's 1847 general proof of Fermat's Last Theorem.
Since the time of Sophie Germain , Fermat's Last Theorem has been separated into two cases that are proven separately. The first case (case I) is to show that there are no primitive solutions ( x , y , z ) to the equation x p + y p = z p under the condition that p does not divide the product xyz . The second case (case II) corresponds to the condition that p does divide the product xyz . Since x , y , and z are pairwise coprime, p divides only one of the three numbers.
Only one mathematical proof by Fermat has survived, in which Fermat uses the technique of infinite descent to show that the area of a right triangle with integer sides can never equal the square of an integer. [ 1 ] This result is known as Fermat's right triangle theorem . As shown below, his proof is equivalent to demonstrating that the equation
has no primitive solutions in integers (no pairwise coprime solutions). In turn, this is sufficient to prove Fermat's Last Theorem for the case n = 4 , since the equation a 4 + b 4 = c 4 can be written as c 4 − b 4 = ( a 2 ) 2 . Alternative proofs of the case n = 4 were developed later [ 2 ] by Frénicle de Bessy, [ 3 ] Euler, [ 4 ] Kausler, [ 5 ] Barlow, [ 6 ] Legendre, [ 7 ] Schopis, [ 8 ] Terquem, [ 9 ] Bertrand, [ 10 ] Lebesgue, [ 11 ] Pepin, [ 12 ] Tafelmacher, [ 13 ] Hilbert, [ 14 ] Bendz, [ 15 ] Gambioli, [ 16 ] Kronecker, [ 17 ] Bang, [ 18 ] Sommer, [ 19 ] Bottari, [ 20 ] Rychlik, [ 21 ] Nutzhorn, [ 22 ] Carmichael, [ 23 ] Hancock, [ 24 ] Vrǎnceanu, [ 25 ] Grant and Perella, [ 26 ] Barbara, [ 27 ] and Dolan. [ 28 ] For one proof by infinite descent, see Infinite descent#Non-solvability of r 2 + s 4 = t 4 .
Fermat's proof demonstrates that no right triangle with integer sides can have an area that is a square. [ 29 ] Let the right triangle have sides ( u , v , w ) , where the area equals uv / 2 and, by the Pythagorean theorem , u 2 + v 2 = w 2 . If the area were equal to the square of an integer s
{\displaystyle {\tfrac {1}{2}}uv=s^{2},}
then by algebraic manipulations it would also be the case that
{\displaystyle 2uv=4s^{2}\qquad {\text{and}}\qquad -2uv=-4s^{2}.}
Adding u 2 + v 2 = w 2 to these equations gives
{\displaystyle u^{2}+2uv+v^{2}=w^{2}+4s^{2}\qquad {\text{and}}\qquad u^{2}-2uv+v^{2}=w^{2}-4s^{2},}
which can be expressed as
{\displaystyle (u+v)^{2}=w^{2}+4s^{2}\qquad {\text{and}}\qquad (u-v)^{2}=w^{2}-4s^{2}.}
Multiplying these equations together yields
{\displaystyle (u^{2}-v^{2})^{2}=w^{4}-\left(2s\right)^{4}.}
But as Fermat proved, there can be no integer solution to the equation x 4 − y 4 = z 2 , of which this is a special case with z = u 2 − v 2 , x = w and y = 2 s .
The first step of Fermat's proof is to factor the left-hand side [ 30 ]
{\displaystyle x^{4}-y^{4}=(x^{2}+y^{2})(x^{2}-y^{2}).}
Since x and y are coprime (this can be assumed because otherwise the factors could be cancelled), the greatest common divisor of x 2 + y 2 and x 2 − y 2 is either 2 (case A) or 1 (case B). The theorem is proven separately for these two cases.
In this case, both x and y are odd and z is even. Since ( y 2 , z , x 2 ) form a primitive Pythagorean triple, they can be written
{\displaystyle y^{2}=d^{2}-e^{2},\qquad z=2de,\qquad x^{2}=d^{2}+e^{2},}
where d and e are coprime and d > e > 0 . Thus,
{\displaystyle (xy)^{2}=(d^{2}+e^{2})(d^{2}-e^{2})=d^{4}-e^{4},}
which produces another solution ( d , e , xy ) that is smaller ( 0 < d < x ). As before, there must be a lower bound on the size of solutions, while this argument always produces a smaller solution than any given one, and thus the original solution is impossible.
In this case, the two factors are coprime. Since their product is a square z 2 , they must each be a square
{\displaystyle x^{2}+y^{2}=s^{2},\qquad x^{2}-y^{2}=t^{2}.}
The numbers s and t are both odd, since s 2 + t 2 = 2 x 2 , an even number, and since x and y cannot both be even. Therefore, the sum and difference of s and t are likewise even numbers, so we define integers u and v as
{\displaystyle u={\frac {s+t}{2}},\qquad v={\frac {s-t}{2}}.}
Since s and t are coprime, so are u and v ; only one of them can be even. Since y 2 = 2 uv , exactly one of them is even. For illustration, let u be even; then the numbers may be written as u = 2 m 2 and v = k 2 . Since ( u , v , x ) form a primitive Pythagorean triple
they can be expressed in terms of smaller integers d and e using Euclid's formula
{\displaystyle u=2de,\qquad v=d^{2}-e^{2},\qquad x=d^{2}+e^{2}.}
Since u = 2 m 2 = 2 de , and since d and e are coprime, they must be squares themselves, d = g 2 and e = h 2 . This gives the equation
{\displaystyle v=d^{2}-e^{2}=g^{4}-h^{4}=k^{2}.}
The solution ( g , h , k ) is another solution to the original equation, but smaller ( 0 < g < d < x ). Applying the same procedure to ( g , h , k ) would produce another solution, still smaller, and so on. But this is impossible, since natural numbers cannot be shrunk indefinitely. Therefore, the original solution ( x , y , z ) was impossible.
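As an empirical complement to the descent (covering only a finite box, whereas the proof covers all sizes), the following Python sketch searches for solutions of x⁴ − y⁴ = z² and finds none:

```python
import math

LIMIT = 300  # arbitrary search bound
for x in range(1, LIMIT):
    for y in range(1, x):
        d = x**4 - y**4
        z = math.isqrt(d)
        assert z * z != d, (x, y)  # no perfect-square difference found
print("no solutions to x^4 - y^4 = z^2 with 0 < y < x <", LIMIT)
```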
Fermat sent the letters in which he mentioned the case in which n = 3 in 1636, 1640 and 1657. [ 31 ] Euler sent a letter to Goldbach on 4 August 1753 in which he claimed to have a proof of the case in which n = 3 . [ 32 ] Euler had a complete and purely elementary proof in 1760, but the result was not published. [ 33 ] Later, Euler's proof for n = 3 was published in 1770. [ 34 ] [ 35 ] [ 36 ] [ 37 ] Independent proofs were published by several other mathematicians, [ 38 ] including Kausler, [ 5 ] Legendre , [ 7 ] [ 39 ] Calzolari, [ 40 ] Lamé , [ 41 ] Tait , [ 42 ] Günther, [ 43 ] Gambioli, [ 16 ] Krey, [ 44 ] Rychlik , [ 21 ] Stockhaus, [ 45 ] Carmichael , [ 46 ] van der Corput , [ 47 ] Thue , [ 48 ] and Duarte. [ 49 ]
As Fermat did for the case n = 4 , Euler used the technique of infinite descent . [ 50 ] The proof assumes a solution ( x , y , z ) to the equation x 3 + y 3 + z 3 = 0 , where the three non-zero integers x , y , and z are pairwise coprime and not all positive. One of the three must be even, whereas the other two are odd. Without loss of generality , z may be assumed to be even.
Since x and y are both odd, they cannot be equal. If x = y , then 2 x 3 = − z 3 , which implies that x is even, a contradiction.
Since x and y are both odd, their sum and difference are both even numbers
{\displaystyle x+y=2u,\qquad x-y=2v,}
where the non-zero integers u and v are coprime and have different parity (one is even, the other odd). Since x = u + v and y = u − v , it follows that
{\displaystyle -z^{3}=x^{3}+y^{3}=(x+y)\left(x^{2}-xy+y^{2}\right)=2u\left(u^{2}+3v^{2}\right).}
Since u and v have opposite parity, u 2 + 3 v 2 is always an odd number. Therefore, since z is even, u is even and v is odd. Since u and v are coprime, the greatest common divisor of 2 u and u 2 + 3 v 2 is either 1 (case A) or 3 (case B).
In this case, the two factors of − z 3 are coprime. This implies that three does not divide u and that the two factors are cubes of two smaller numbers, r and s
{\displaystyle 2u=r^{3},\qquad u^{2}+3v^{2}=s^{3}.}
Since u 2 + 3 v 2 is odd, so is s . A crucial lemma shows that if s is odd and if it satisfies an equation s 3 = u 2 + 3 v 2 , then it can be written in terms of two integers e and f
{\displaystyle s=e^{2}+3f^{2},}
so that
{\displaystyle u=e\left(e^{2}-9f^{2}\right),\qquad v=3f\left(e^{2}-f^{2}\right).}
u and v are coprime, so e and f must be coprime, too. Since u is even and v odd, e is even and f is odd. Since
{\displaystyle r^{3}=2u=2e(e-3f)(e+3f),}
the factors 2 e , ( e − 3 f ) , and ( e + 3 f ) are coprime since 3 cannot divide e : if e were divisible by 3, then 3 would divide u , violating the designation of u and v as coprime. Since the three factors on the right-hand side are coprime, they must individually equal cubes of smaller integers
{\displaystyle 2e=-k^{3},\qquad e-3f=l^{3},\qquad e+3f=m^{3},}
which yields a smaller solution k 3 + l 3 + m 3 = 0 . Therefore, by the argument of infinite descent , the original solution ( x , y , z ) was impossible.
In this case, the greatest common divisor of 2 u and u 2 + 3 v 2 is 3. That implies that 3 divides u , and one may express u = 3 w in terms of a smaller integer, w . Since u is divisible by 4, so is w ; hence, w is also even. Since u and v are coprime, so are v and w . Therefore, neither 3 nor 4 divide v .
Substituting 3 w for u in the equation for z 3 yields
{\displaystyle -z^{3}=2u\left(u^{2}+3v^{2}\right)=6w\left(9w^{2}+3v^{2}\right)=18w\left(3w^{2}+v^{2}\right).}
Because v and w are coprime, and because 3 does not divide v , then 18 w and 3 w 2 + v 2 are also coprime. Therefore, since their product is a cube, they are each the cube of smaller integers, r and s
{\displaystyle 18w=r^{3},\qquad 3w^{2}+v^{2}=s^{3}.}
By the lemma above, since s is odd and its cube is equal to a number of the form 3 w 2 + v 2 , it too can be expressed in terms of smaller coprime numbers, e and f :
{\displaystyle s=e^{2}+3f^{2}.}
A short calculation shows that
{\displaystyle v=e\left(e^{2}-9f^{2}\right),\qquad w=3f\left(e^{2}-f^{2}\right).}
Thus, e is odd and f is even, because v is odd. The expression for 18 w then becomes
{\displaystyle r^{3}=18w=54f\left(e^{2}-f^{2}\right)=54f(e+f)(e-f)=3^{3}\times 2f(e+f)(e-f).}
Since 3 3 divides r 3 we have that 3 divides r , so ( r / 3 ) 3 is an integer that equals 2 f ( e + f )( e − f ) . Since e and f are coprime, so are the three factors 2 f , e + f , and e − f ; therefore, they are each the cube of smaller integers, k , l , and m :
{\displaystyle 2f=k^{3},\qquad -(e+f)=l^{3},\qquad e-f=m^{3},}
which yields a smaller solution k 3 + l 3 + m 3 = 0 . Therefore, by the argument of infinite descent , the original solution ( x , y , z ) was impossible.
Fermat's Last Theorem for n = 5 states that no three coprime integers x , y and z can satisfy the equation
{\displaystyle x^{5}+y^{5}+z^{5}=0.}
This was proven [ 51 ] around 1825 by Dirichlet and Legendre , whose proofs were neither fully independent of each other nor a direct collaboration. [ 32 ] [ 52 ] Alternative proofs were developed [ 53 ] by Gauss , [ 54 ] Lebesgue , [ 55 ] Lamé , [ 56 ] Gambioli, [ 16 ] [ 57 ] Werebrusow, [ 58 ] Rychlik , [ 59 ] van der Corput , [ 47 ] and Terjanian . [ 60 ]
Dirichlet's proof for n = 5 is divided into the two cases (cases I and II) defined by Sophie Germain . In case I, the exponent 5 does not divide the product xyz . In case II, 5 does divide xyz .
Case I for n = 5 can be proven immediately by Sophie Germain's theorem with the auxiliary prime θ = 11 . A more methodical proof is as follows. By Fermat's little theorem ,
{\displaystyle x^{5}\equiv x{\pmod {5}},\qquad y^{5}\equiv y{\pmod {5}},\qquad z^{5}\equiv z{\pmod {5}},}
and therefore
{\displaystyle x+y+z\equiv x^{5}+y^{5}+z^{5}\equiv 0{\pmod {5}}.}
This equation forces two of the three numbers x , y , and z to be equivalent modulo 5, which can be seen as follows: Since they are indivisible by 5, x , y and z cannot equal 0 modulo 5, and must equal one of four possibilities: 1, −1, 2, or −2. If they were all different, two would be opposites and their sum modulo 5 would be zero (implying contrary to the assumption of this case that the other one would be 0 modulo 5).
Without loss of generality, x and y can be designated as the two equivalent numbers modulo 5. That equivalence implies that
{\displaystyle x^{5}\equiv y^{5}{\pmod {25}}\qquad {\text{and hence}}\qquad z^{5}=-x^{5}-y^{5}\equiv -2x^{5}{\pmod {25}}.}
However, the equation x ≡ y (mod 5) also implies that
{\displaystyle z\equiv -x-y\equiv -2x{\pmod {5}}\qquad {\text{and hence}}\qquad z^{5}\equiv (-2x)^{5}\equiv -32x^{5}{\pmod {25}}.}
Combining the two results and dividing both sides by x 5 yields a contradiction
{\displaystyle 2\equiv 32{\pmod {25}},}
which is false.
Thus, case I for n = 5 has been proven.
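The residue computation behind case I can be confirmed by brute force; the following Python sketch checks every residue class and shows that x⁵ + y⁵ + z⁵ is never divisible by 25 (and so never zero) when 5 divides none of x, y, z:

```python
# Residues mod 25 that are not divisible by 5.
units = [r for r in range(1, 25) if r % 5 != 0]

for x in units:
    for y in units:
        for z in units:
            assert (x**5 + y**5 + z**5) % 25 != 0
print("x^5 + y^5 + z^5 is never 0 mod 25 when 5 divides none of x, y, z")
```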
The case n = 7 was proven [ 61 ] by Gabriel Lamé in 1839. [ 62 ] His rather complicated proof was simplified in 1840 by Victor-Amédée Lebesgue , [ 63 ] and still simpler proofs [ 64 ] were published by Angelo Genocchi in 1864, 1874 and 1876. [ 65 ] Alternative proofs were developed by Théophile Pépin [ 66 ] and Edmond Maillet. [ 67 ]
Fermat's Last Theorem has also been proven for the exponents n = 6 , n = 10 , and n = 14 . Proofs for n = 6 have been published by Kausler, [ 5 ] Thue , [ 68 ] Tafelmacher, [ 69 ] Lind, [ 70 ] Kapferer, [ 71 ] Swift, [ 72 ] and Breusch. [ 73 ] Similarly, Dirichlet [ 74 ] and Terjanian [ 75 ] each proved the case n = 14 , while Kapferer [ 71 ] and Breusch [ 73 ] each proved the case n = 10 . Strictly speaking, these proofs are unnecessary, since these cases follow from the proofs for n = 3 , n = 5 , n = 7 , respectively. Nevertheless, the reasoning of these even-exponent proofs differs from their odd-exponent counterparts. Dirichlet's proof for n = 14 was published in 1832, before Lamé's 1839 proof for n = 7 . | https://en.wikipedia.org/wiki/Proof_of_Fermat's_Last_Theorem_for_specific_exponents |
In mathematics , an impossibility theorem is a theorem that demonstrates a problem or general set of problems cannot be solved. These are also known as proofs of impossibility , negative proofs , or negative results . Impossibility theorems often resolve decades or centuries of work spent looking for a solution by proving there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. [ 1 ] Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.
The irrationality of the square root of 2 is one of the oldest proofs of impossibility. It shows that it is impossible to express the square root of 2 as a ratio of two integers . Another consequential proof of impossibility was Ferdinand von Lindemann's proof in 1882, which showed that the problem of squaring the circle cannot be solved [ 2 ] because the number π is transcendental (i.e., non-algebraic), and that only a subset of the algebraic numbers can be constructed by compass and straightedge . Two other classical problems— trisecting the general angle and doubling the cube —were also proved impossible in the 19th century, and all of these problems gave rise to research into more complicated mathematical structures.
Some of the most important proofs of impossibility found in the 20th century were those related to undecidability , which showed that there are problems that cannot be solved in general by any algorithm , with one of the more prominent ones being the halting problem . Gödel's incompleteness theorems were other examples that uncovered fundamental limitations in the provability of formal systems. [ 3 ]
In computational complexity theory , techniques like relativization (the addition of an oracle ) allow for "weak" proofs of impossibility, in that proof techniques that are not affected by relativization cannot resolve the P versus NP problem . [ 4 ] Another technique is the proof of completeness for a complexity class , which provides evidence for the difficulty of problems by showing them to be just as hard to solve as any other problem in the class. In particular, a complete problem is intractable if one of the problems in its class is.
One of the widely used types of impossibility proof is proof by contradiction . In this type of proof, it is shown that if a proposition, such as a solution to a particular class of equations, is assumed to hold, then via deduction two mutually contradictory things can be shown to hold, such as a number being both even and odd or both negative and positive. Since the contradiction stems from the original assumption, this means that the assumed premise must be impossible.
In contrast, a non-constructive proof of an impossibility claim would proceed by showing it is logically contradictory for all possible counterexamples to be invalid: at least one of the items on a list of possible counterexamples must actually be a valid counterexample to the impossibility conjecture. For example, a conjecture that it is impossible for an irrational power raised to an irrational power to be rational was disproved , by showing that one of two possible counterexamples must be a valid counterexample, without showing which one it is: either {\displaystyle {\sqrt {2}}^{\sqrt {2}}} is rational, or it is irrational and {\displaystyle ({\sqrt {2}}^{\sqrt {2}})^{\sqrt {2}}=2} is rational.
Another type of proof by contradiction is proof by descent, which proceeds first by assuming that something is possible, such as a positive integer [ 5 ] solution to a class of equations, and that therefore there must be a smallest solution (by the Well-ordering principle ). From the alleged smallest solution, it is then shown that a smaller solution can be found, contradicting the premise that the former solution was the smallest one possible—thereby showing that the original premise that a solution exists must be false.
The obvious way to disprove an impossibility conjecture is by providing a single counterexample . For example, Euler proposed that at least n different n th powers were necessary to sum to yet another n th power. The conjecture was disproved in 1966, with a counterexample involving a count of only four different 5th powers summing to another fifth power:
{\displaystyle 27^{5}+84^{5}+110^{5}+133^{5}=144^{5}.}
Proof by counterexample is a form of constructive proof , in that an object disproving the claim is exhibited.
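The Lander–Parkin counterexample from 1966 is easy to verify directly:

```python
# Both sides equal 61917364224.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5
print(27**5 + 84**5 + 110**5 + 133**5, "=", 144**5)
```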
In social choice theory , Arrow's impossibility theorem shows that it is impossible to devise a ranked-choice voting system that is both non-dictatorial and satisfies a basic requirement for rational behavior called independence of irrelevant alternatives .
Gibbard's theorem shows that any strategyproof game form (i.e. one with a dominant strategy ) with more than two outcomes is dictatorial .
The Gibbard–Satterthwaite theorem is a special case showing that no deterministic voting system can be fully invulnerable to strategic voting in all circumstances, regardless of how others vote.
The revelation principle can be seen as an impossibility theorem showing the "opposite" of Gibbard's theorem, in a colloquial sense: any game or voting system can be made resistant to strategy by incorporating the strategy into the mechanism . Thus, it is impossible to design a mechanism with a solution that is better than can be obtained by a truthful mechanism .
The proof by Pythagoras about 500 BCE has had a profound effect on mathematics. It shows that the square root of 2 cannot be expressed as the ratio of two integers. The proof bifurcated "the numbers" into two non-overlapping collections—the rational numbers and the irrational numbers .
There is a famous passage in Plato 's Theaetetus in which it is stated that Theodorus (Plato's teacher) proved the irrationality of
{\displaystyle {\sqrt {3}},{\sqrt {5}},\dots ,}
taking all the separate cases up to the root of 17 square feet ... . [ 6 ]
A more general proof shows that the m th root of an integer N is irrational, unless N is the m th power of an integer n . [ 7 ] That is, it is impossible to express the m th root of an integer N as the ratio a ⁄ b of two integers a and b , that share no common prime factor , except in cases in which b = 1.
Greek geometry was based on the use of the compass and a straightedge (though the straightedge is not strictly necessary). The compass allows a geometer to construct points equidistant from each other, which in Euclidean space amounts implicitly to the calculation of square roots . Four famous questions asked how to construct:
For more than 2,000 years unsuccessful attempts were made to solve these problems; at last, in the 19th century it was proved that the desired constructions are mathematically impossible without admitting tools beyond the compass and straightedge. [ 8 ]
All of these are problems in Euclidean construction , and Euclidean constructions can be done only if they involve only Euclidean numbers (by definition of the latter). [ 9 ] Irrational numbers can be Euclidean. A good example is the square root of 2 (an irrational number). It is simply the length of the hypotenuse of a right triangle with legs both one unit in length, and it can be constructed with a straightedge and a compass. But it was proved centuries after Euclid that Euclidean numbers cannot involve any operations other than addition, subtraction, multiplication, division, and the extraction of square roots.
Both trisecting the general angle and doubling the cube require taking cube roots , which are not constructible numbers .
π {\displaystyle \pi } is not a Euclidean number ... and therefore it is impossible to construct, by Euclidean methods a length equal to the circumference of a circle of unit diameter
Because π {\displaystyle \pi } was proved in 1882 to be a transcendental number , it is not a Euclidean number; hence the construction of a length π {\displaystyle \pi } from a unit circle is impossible. [ 10 ] [ 11 ]
The Gauss–Wantzel theorem showed in 1837 that constructing a regular n -gon is impossible for most values of n .
The parallel postulate from Euclid's Elements is equivalent to the statement that given a straight line and a point not on that line, only one parallel to the line may be drawn through that point. Unlike the other postulates, it was seen as less self-evident. Nagel and Newman argue that this may be because the postulate concerns "infinitely remote" regions of space; in particular, parallel lines are defined as not meeting even "at infinity", in contrast to asymptotes . [ 12 ] This perceived lack of self-evidence led to the question of whether it might be proven from the other Euclidean axioms and postulates. It was only in the nineteenth century that the impossibility of deducing the parallel postulate from the others was demonstrated in the works of Gauss , Bolyai , Lobachevsky , and Riemann . These works showed that the parallel postulate can moreover be replaced by alternatives, leading to non-Euclidean geometries .
Nagel and Newman consider the question raised by the parallel postulate to be "...perhaps the most significant development in its long-range effects upon subsequent mathematical history". [ 12 ] In particular, they consider its outcome to be "of the greatest intellectual importance," as it showed that "a proof can be given of the impossibility of proving certain propositions [in this case, the parallel postulate] within a given system [in this case, Euclid's first four postulates]." [ 13 ]
Fermat's Last Theorem , conjectured by Pierre de Fermat in the 1600s, states the impossibility of finding solutions in positive integers for the equation x n + y n = z n {\displaystyle x^{n}+y^{n}=z^{n}} with n > 2 {\displaystyle n>2} . Fermat himself gave a proof for the n = 4 case using his technique of infinite descent , and other special cases were subsequently proved, but the general case was not proven until 1994 by Andrew Wiles .
The question "Does any arbitrary Diophantine equation have an integer solution?" is undecidable . That is, it is impossible to answer the question for all cases.
Franzén introduces Hilbert's tenth problem and the MRDP theorem (Matiyasevich-Robinson-Davis-Putnam theorem) which states that "no algorithm exists which can decide whether or not a Diophantine equation has any solution at all". MRDP uses the undecidability proof of Turing: "... the set of solvable Diophantine equations is an example of a computably enumerable but not decidable set, and the set of unsolvable Diophantine equations is not computably enumerable". [ 14 ]
This profound paradox presented by Jules Richard in 1905 informed the work of Kurt Gödel [ 15 ] and Alan Turing. A succinct definition is found in Principia Mathematica : [ 16 ]
Richard's paradox ... is as follows. Consider all decimals that can be defined by means of a finite number of words [“words” are symbols; boldface added for emphasis] ; let E be the class of such decimals. Then E has ℵ 0 {\displaystyle \aleph _{0}} [an infinite number of] terms; hence its members can be ordered as the 1st, 2nd, 3rd, ... Let X be a number defined as follows [Whitehead & Russell now employ the Cantor diagonal method] . If the n -th figure in the n -th decimal is p , let the n -th figure in X be p + 1 (or 0, if p = 9). Then X is different from all the members of E , since, whatever finite value n may have, the n -th figure in X is different from the n -th figure in the n -th of the decimals composing E , and therefore X is different from the n -th decimal. Nevertheless we have defined X in a finite number of words [i.e. this very definition of “word” above.] and therefore X ought to be a member of E . Thus X both is and is not a member of E.
Kurt Gödel considered his proof to be “an analogy” of Richard's paradox, which he called " Richard's antinomy " [ 17 ] ( see below ).
Alan Turing constructed this paradox with a machine and proved that this machine could not answer a simple question: will this machine be able to determine if any machine (including itself) will become trapped in an unproductive ‘ infinite loop ’ (i.e. it fails to continue its computation of the diagonal number).
To quote Nagel and Newman (p. 68), "Gödel's paper is difficult. Forty-six preliminary definitions, together with several important preliminary theorems, must be mastered before the main results are reached". In fact, Nagel and Newman required a 67-page introduction to their exposition of the proof. But if the reader feels strong enough to tackle the paper, Martin Davis observes that "This remarkable paper is not only an intellectual landmark but is written with a clarity and vigor that makes it a pleasure to read" (Davis in Undecidable, p. 4).
Gödel proved, in his own words:
Gödel compared his proof to "Richard's antinomy" (an " antinomy " is a contradiction or a paradox; for more see Richard's paradox ):
A number of similar undecidability proofs appeared soon before and after Turing's proof:
For an exposition suitable for non-specialists, see Beltrami p. 108ff. Also see Franzen Chapter 8 pp. 137–148, and Davis pp. 263–266. Franzén's discussion is significantly more complicated than Beltrami's and delves into Ω— Gregory Chaitin 's so-called "halting probability". Davis's older treatment approaches the question from a Turing machine viewpoint. Chaitin has written a number of books about his endeavors and the subsequent philosophic and mathematical fallout from them.
A string is called (algorithmically) random if it cannot be produced from any shorter computer program. While most strings are random , no particular one can be proved so, except for finitely many short ones:
Beltrami observes that "Chaitin's proof is related to a paradox posed by Oxford librarian G. Berry early in the twentieth century that asks for 'the smallest positive integer that cannot be defined by an English sentence with fewer than 1000 characters.' Evidently, the shortest definition of this number must have at least 1000 characters. However, the sentence within quotation marks, which is itself a definition of the alleged number, is less than 1000 characters in length!" [ 22 ]
In natural science , impossibility theorems are derived as mathematical results proven within well-established scientific theories . The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible.
Two examples of widely accepted impossibilities in physics are perpetual motion machines , which violate the law of conservation of energy , and exceeding the speed of light , which violates the implications of special relativity . Another is the uncertainty principle of quantum mechanics , which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle. There is also Bell's theorem : no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample . Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined. | https://en.wikipedia.org/wiki/Proof_of_impossibility |
Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas ( Various Observations about Infinite Series ), published by St Petersburg Academy in 1737. [ 1 ] [ 2 ]
The Euler product formula for the Riemann zeta function reads
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\prod _{p}{\frac {1}{1-p^{-s}}},}
where the left hand side equals the Riemann zeta function:
{\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=1+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{4^{s}}}+\ldots }
and the product on the right hand side extends over all prime numbers p :
{\displaystyle \prod _{p}{\frac {1}{1-p^{-s}}}={\frac {1}{1-2^{-s}}}\cdot {\frac {1}{1-3^{-s}}}\cdot {\frac {1}{1-5^{-s}}}\cdot {\frac {1}{1-7^{-s}}}\cdots }
This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage:
{\displaystyle \zeta (s)=1+{\frac {1}{2^{s}}}+{\frac {1}{3^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{5^{s}}}+\ldots }
{\displaystyle {\frac {1}{2^{s}}}\zeta (s)={\frac {1}{2^{s}}}+{\frac {1}{4^{s}}}+{\frac {1}{6^{s}}}+{\frac {1}{8^{s}}}+{\frac {1}{10^{s}}}+\ldots }
Subtracting the second equation from the first we remove all elements that have a factor of 2:
{\displaystyle \left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1+{\frac {1}{3^{s}}}+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{9^{s}}}+\ldots }
Repeating for the next term:
{\displaystyle {\frac {1}{3^{s}}}\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)={\frac {1}{3^{s}}}+{\frac {1}{9^{s}}}+{\frac {1}{15^{s}}}+{\frac {1}{21^{s}}}+{\frac {1}{27^{s}}}+\ldots }
Subtracting again we get:
{\displaystyle \left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1+{\frac {1}{5^{s}}}+{\frac {1}{7^{s}}}+{\frac {1}{11^{s}}}+{\frac {1}{13^{s}}}+\ldots }
where all elements having a factor of 3 or 2 (or both) are removed.
It can be seen that the right side is being sieved. Repeating infinitely for 1 p s {\displaystyle {\frac {1}{p^{s}}}} where p {\displaystyle p} is prime, we get:
{\displaystyle \ldots \left(1-{\frac {1}{11^{s}}}\right)\left(1-{\frac {1}{7^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{2^{s}}}\right)\zeta (s)=1}
Dividing both sides by everything but the ζ( s ) we obtain:
{\displaystyle \zeta (s)={\frac {1}{\left(1-{\frac {1}{2^{s}}}\right)\left(1-{\frac {1}{3^{s}}}\right)\left(1-{\frac {1}{5^{s}}}\right)\left(1-{\frac {1}{7^{s}}}\right)\left(1-{\frac {1}{11^{s}}}\right)\cdots }}}
This can be written more concisely as an infinite product over all primes p :
{\displaystyle \zeta (s)=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-s}}}.}
To make this proof rigorous, we need only to observe that when ℜ ( s ) > 1 {\displaystyle \Re (s)>1} , the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for ζ ( s ) {\displaystyle \zeta (s)} .
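The identity can also be observed numerically. The following Python sketch (the truncation limits are arbitrary) compares a partial sum for ζ(2) with a partial Euler product; both approach π²/6 ≈ 1.644934:

```python
def primes_up_to(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [p for p, f in enumerate(flags) if f]

s = 2.0
partial_sum = sum(1.0 / n**s for n in range(1, 2_000_000))
partial_product = 1.0
for p in primes_up_to(10_000):
    partial_product /= 1.0 - p ** (-s)
print(partial_sum, partial_product)  # both print 1.64493...
```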
An interesting result can be found for ζ(1), the harmonic series :
{\displaystyle \zeta (1)=\prod _{p{\text{ prime}}}{\frac {1}{1-p^{-1}}},}
which can also be written as,
{\displaystyle \zeta (1)=\prod _{p{\text{ prime}}}{\frac {p}{p-1}},}
which is,
{\displaystyle \zeta (1)={\frac {2}{1}}\cdot {\frac {3}{2}}\cdot {\frac {5}{4}}\cdot {\frac {7}{6}}\cdot {\frac {11}{10}}\cdots }
as, ζ ( 1 ) = 1 + 1 2 + 1 3 + 1 4 + 1 5 + … {\displaystyle \zeta (1)=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\ldots }
thus,
{\displaystyle \prod _{p{\text{ prime}}}{\frac {p}{p-1}}={\frac {2}{1}}\cdot {\frac {3}{2}}\cdot {\frac {5}{4}}\cdot {\frac {7}{6}}\cdot {\frac {11}{10}}\cdots =\infty .}
While the series ratio test is inconclusive for the left-hand side, it may be shown divergent by bounding logarithms. Similarly, for the right-hand side, an infinite product of reals greater than one does not guarantee divergence; e.g.,
{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}=e.}
Instead, the partial products (whose numerators are primorials ) may be bounded, using ln(1+ x ) ≤ x , as
{\displaystyle \ln \prod _{p\leq n}{\frac {p}{p-1}}=\sum _{p\leq n}-\ln \left(1-{\frac {1}{p}}\right)\geq \sum _{p\leq n}{\frac {1}{p}},}
so that divergence is clear given the double-logarithmic divergence of the inverse prime series .
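The double-logarithmic growth of the inverse prime series can be seen numerically; this Python sketch (illustrative only) compares its partial sums with ln ln n, whose difference tends to the Mertens constant ≈ 0.2615:

```python
import math

def primes_up_to(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(flags[i * i :: i])
    return [p for p, f in enumerate(flags) if f]

for n in (10, 10**3, 10**6):
    s = sum(1.0 / p for p in primes_up_to(n))
    print(n, round(s, 4), round(math.log(math.log(n)), 4))
```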
(Note that Euler's original proof for the inverse prime series ran in the opposite direction: he deduced the divergence of the inverse prime series from the divergence of the Euler product and the harmonic series.)
Each factor (for a given prime p ) in the product above can be expanded to a geometric series consisting of the reciprocal of p raised to multiples of s , as follows
{\displaystyle {\frac {1}{1-p^{-s}}}=1+{\frac {1}{p^{s}}}+{\frac {1}{p^{2s}}}+{\frac {1}{p^{3s}}}+\cdots +{\frac {1}{p^{ks}}}+\cdots }
When ℜ ( s ) > 1 {\displaystyle \Re (s)>1} , this series converges absolutely . Hence we may take a finite number of factors, multiply them together, and rearrange terms. Taking all the primes p up to some prime number limit q , we have
{\displaystyle \left|\zeta (s)-\prod _{p\leq q}{\frac {1}{1-p^{-s}}}\right|<\sum _{n=q+1}^{\infty }{\frac {1}{n^{\sigma }}},}
where σ is the real part of s . By the fundamental theorem of arithmetic , the partial product, when expanded out, gives a sum consisting of those terms n − s where n is a product of primes less than or equal to q . The inequality results from the fact that only integers larger than q can fail to appear in this expanded-out partial product. Since the difference between the partial product and ζ( s ) goes to zero when σ > 1, we have convergence in this region.
In logic , and in particular proof theory , a proof procedure for a given logic is a systematic method for producing proofs in some proof calculus of (provable) statements.
There are several types of proof calculi. The most popular are natural deduction , sequent calculi (i.e., Gentzen -type systems), Hilbert systems , and semantic tableaux or trees. A given proof procedure will target a specific proof calculus, but can often be reformulated so as to produce proofs in other proof styles.
A proof procedure for a logic is complete if it produces a proof for each provable statement. The theorems of logical systems are typically recursively enumerable , which implies the existence of a complete but usually extremely inefficient proof procedure; however, a proof procedure is only of interest if it is reasonably efficient.
Faced with an unprovable statement, a complete proof procedure may sometimes succeed in detecting and signalling its unprovability. In the general case, where provability is only a semidecidable property, this is not possible, and instead the procedure will diverge (not terminate). | https://en.wikipedia.org/wiki/Proof_procedure |
This article gives a sketch of a proof of Gödel's first incompleteness theorem . This theorem applies to any formal theory that satisfies certain technical hypotheses, which are discussed as needed during the sketch. We will assume for the remainder of the article that a fixed theory satisfying these hypotheses has been selected.
Throughout this article the word "number" refers to a natural number (including 0). The key property these numbers possess is that any natural number can be obtained by starting with the number 0 and adding 1 a finite number of times.
Gödel's theorem applies to any formal theory that satisfies certain properties. Each formal theory has a signature that specifies the nonlogical symbols in the language of the theory. For simplicity, we will assume that the language of the theory is composed from the following collection of 15 (and only 15) symbols:
This is the language of Peano arithmetic . A well-formed formula is a sequence of these symbols that is formed so as to have a well-defined reading as a mathematical formula. Thus x = SS 0 is well formed while x = ∀+ is not well formed. A theory is a set of well-formed formulas with no free variables .
A theory is consistent if there is no formula F such that both F and its negation are provable. ω-consistency is a stronger property than consistency. Suppose that F ( x ) is a formula with one free variable x . In order to be ω-consistent, the theory cannot prove ∃ m F ( m ) while also proving ¬ F ( n ) for each natural number n .
The theory is assumed to be effective, which means that the set of axioms must be recursively enumerable . This means that it is theoretically possible to write a finite-length computer program that, if allowed to run forever, would output the axioms of the theory (necessarily including every well-formed instance of the axiom schema of induction ) one at a time and not output anything else. This requirement is necessary; there are theories that are complete , consistent, and include elementary arithmetic, but no such theory can be effective.
The sketch here is broken into three parts. In the first part, each formula of the theory is assigned a number, known as a Gödel number, in a manner that allows the formula to be effectively recovered from the number. This numbering is extended to cover finite sequences of formulas. In the second part, a specific formula Proof ( x , y ) is constructed such that for any two numbers n and m , Proof ( n , m ) holds if and only if n represents a sequence of formulas that constitutes a proof of the formula that m represents. In the third part of the proof, we construct a self-referential formula that, informally, says "I am not provable", and prove that this sentence is neither provable nor disprovable within the theory.
Importantly, all the formulas in the proof can be defined by primitive recursive functions , which themselves can be defined in first-order Peano arithmetic .
The first step of the proof is to represent (well-formed) formulas of the theory, and finite lists of these formulas, as natural numbers. These numbers are called the Gödel numbers of the formulas.
Begin by assigning a natural number to each symbol of the language of arithmetic, similar to the manner in which the ASCII code assigns a unique binary number to each letter and certain other characters. This article will employ the following assignment, very similar to the one Douglas Hofstadter used in his Gödel, Escher, Bach : [ 1 ]
The Gödel number of a formula is obtained by concatenating the Gödel numbers of each symbol making up the formula. The Gödel numbers for each symbol are separated by a zero because by design, no Gödel number of a symbol includes a 0 . Hence any formula may be correctly recovered from its Gödel number. Let G ( F ) denote the Gödel number of the formula F .
Given the above Gödel numbering, the sentence asserting that addition commutes , ∀ x ∀ x * ( x + x * = x * + x ) translates as the number:
(Spaces have been inserted on each side of every 0 only for readability; Gödel numbers are strict concatenations of decimal digits.) Not all natural numbers represent a sentence. For example, the number
translates to " = ∀ + x ", which is not well-formed.
Because each natural number can be obtained by applying the successor operation S to 0 a finite number of times, every natural number has its own Gödel number. For example, the Gödel number corresponding to 4, SSSS 0 , is:
The assignment of Gödel numbers can be extended to finite lists of formulas. To obtain the Gödel number of a list of formulas, write the Gödel numbers of the formulas in order, separating them by two consecutive zeros. Since the Gödel number of a formula never contains two consecutive zeros, each formula in a list of formulas can be effectively recovered from the Gödel number for the list.
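The numbering scheme is short to implement. In the Python sketch below the symbol table is hypothetical (the article's actual assignment is not reproduced here); the code relies only on the properties stated above: no symbol's code contains the digit 0, single zeros separate symbols, and double zeros separate the formulas of a list:

```python
# Hypothetical symbol codes (none contains the digit 0, all are distinct).
SYMBOLS = {"0": "666", "S": "123", "=": "111", "+": "112", "*": "236",
           "(": "362", ")": "323", "x": "262", "∀": "626", "∃": "333",
           "¬": "223", "∨": "733", "∧": "133", "→": "611", "'": "163"}
CODES = {code: sym for sym, code in SYMBOLS.items()}

def godel_number(formula):
    """Concatenate the symbol codes, separated by single zeros."""
    return int("0".join(SYMBOLS[ch] for ch in formula))

def decode(g):
    """Recover a formula from its Gödel number by splitting on zeros."""
    return "".join(CODES[c] for c in str(g).split("0"))

def godel_number_of_list(formulas):
    """Encode a list of formulas, separated by double zeros."""
    return int("00".join("0".join(SYMBOLS[ch] for ch in f) for f in formulas))

f = "x=SS0"
g = godel_number(f)
assert decode(g) == f  # the formula is effectively recoverable from its number
print(f, "->", g)      # x=SS0 -> 2620111012301230666
```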
It is crucial that the formal arithmetic be capable of proving a minimum set of facts. In particular, it must be able to prove that every number m has a Gödel number G ( m ) . A second fact that the theory must prove is that given any Gödel number G ( F ( x )) of a formula F ( x ) with one free variable x and any number m , there is a Gödel number of the formula F ( m ) obtained by replacing all occurrences of G ( x ) in G ( F ( x )) with G ( m ) , and that this second Gödel number can be effectively obtained from the Gödel number G ( F ( x )) of F ( x ) and from m . To see that this is in fact possible, note that given the Gödel number of F ( x ) , one can recreate the original formula F ( x ) , make the substitution of x with m , and then find the Gödel number G ( F ( m )) of the resulting formula F ( m ) . This is a uniform procedure.
Deduction rules can then be represented by binary relations on Gödel numbers of lists of formulas. In other words, suppose that there is a deduction rule D 1 , by which one can move from the formulas S 1 , S 2 to a new formula S . Then the relation R 1 corresponding to this deduction rule says that n is related to m (in other words, n R 1 m holds) if n is the Gödel number of the list of formulas containing S 1 and S 2 and m is the Gödel number of the list of formulas containing S 1 , S 2 and S . Because each deduction rule is concrete, it is possible to effectively determine for any natural numbers n and m whether they are related by the relation.
The second stage in the proof is to use the Gödel numbering, described above, to show that the notion of provability can be expressed within the formal language of the theory. Suppose the theory has deduction rules: D 1 , D 2 , D 3 , ... . Let R 1 , R 2 , R 3 , ... be their corresponding relations, as described above.
Every provable statement is either an axiom itself, or it can be deduced from the axioms by a finite number of applications of the deduction rules.
A proof of a formula S is itself a string of mathematical statements related by particular relations (each is either an axiom or related to former statements by deduction rules), where the last statement is S . Thus one can define the Gödel number of a proof. Moreover, one may define a statement form Proof ( x , y ) , which for every two numbers x and y is provable if and only if x is the Gödel number of a proof of the formula S whose Gödel number is y , that is, y = G ( S ) .
Proof ( x , y ) is in fact an arithmetical relation, just as " x + y = 6 " is, though a much more complicated one. Given such a relation R ( x , y ) , for any two specific numbers n and m , either the formula R ( m , n ) , or its negation ¬ R ( m , n ) , but not both, is provable. This is because the relation between these two numbers can be simply "checked". Formally this can be proven by induction, where all these possible relations (whose number is infinite) are constructed one by one.
The detailed construction of the formula Proof makes essential use of the assumption that the theory is effective; it would not be possible to construct this formula without such an assumption.
For every number n and every formula F ( y ) , where y is a free variable, we define q ( n , G ( F )) , a relation between two numbers n and G ( F ) , such that it corresponds to the statement " n is not the Gödel number of a proof of F ( G ( F )) ". Here, F ( G ( F )) can be understood as F with its own Gödel number as its argument.
Note that q takes as an argument G ( F ) , the Gödel number of F . In order to prove either q ( n , G ( F )) , or ¬ q ( n , G ( F )) , it is necessary to perform number-theoretic operations on G ( F ) that mirror the following steps: decode the number G ( F ) into the formula F , replace all occurrences of y in F with the number G ( F ) , and then compute the Gödel number of the resulting formula F ( G ( F )) .
Note that for every specific number n and formula F ( y ), q ( n , G ( F )) is a straightforward (though complicated) arithmetical relation between two numbers n and G ( F ) , building on the relation Proof defined earlier. Further, q ( n , G ( F )) is provable if the finite list of formulas encoded by n is not a proof of F ( G ( F )) , and ¬ q ( n , G ( F )) is provable if the finite list of formulas encoded by n is a proof of F ( G ( F )) . Given any numbers n and G ( F ) , either q ( n , G ( F )) or ¬ q ( n , G ( F )) (but not both) is provable.
Any proof of F ( G ( F )) can be encoded by a Gödel number n , such that q ( n , G ( F )) does not hold. If q ( n , G ( F )) holds for all natural numbers n , then there is no proof of F ( G ( F )) . In other words, ∀ y q ( y , G ( F )) , a formula about natural numbers, corresponds to "there is no proof of F ( G ( F )) ".
We now define the formula P ( x ) = ∀ y q ( y , x ) , where x is a free variable. The formula P itself has a Gödel number G ( P ) as does every formula.
This formula has a free variable x . Suppose we replace it with G ( F ) ,
the Gödel number of a formula F ( z ) , where z is a free variable. Then, P ( G ( F )) = ∀ y q ( y , G ( F )) corresponds to "there is no proof of F ( G ( F )) ", as we have seen.
Consider the formula P ( G ( P )) = ∀ y q ( y , G ( P )) . This formula concerning the number G ( P ) corresponds to "there is no proof of P ( G ( P )) ". We have here the self-referential feature that is crucial to the proof: A formula of the formal theory that somehow relates to its own provability within that formal theory. Very informally, P ( G ( P )) says: "I am not provable".
We will now show that neither the formula P ( G ( P )) , nor its negation ¬ P ( G ( P )) , is provable.
Suppose P ( G ( P )) = ∀ y q ( y , G ( P )) is provable. Let n be the Gödel number of a proof of P ( G ( P )) . Then, as seen earlier, the formula ¬ q ( n , G ( P )) is provable. Proving both ¬ q ( n , G ( P )) and ∀ y q ( y , G ( P )) violates the consistency of the formal theory. We therefore conclude that P ( G ( P )) is not provable.
Consider any number n . Suppose ¬ q ( n , G ( P )) is provable.
Then, n must be the Gödel number of a proof of P ( G ( P )) . But we have just proved that P ( G ( P )) is not provable. Since either q ( n , G ( P )) or ¬ q ( n , G ( P )) must be provable, we conclude that, for all natural numbers n , q ( n , G ( P )) is provable.
Suppose the negation of P ( G ( P )) , ¬ P ( G ( P )) = ∃ x ¬ q ( x , G ( P )) , is provable. Proving both ∃ x ¬ q ( x , G ( P )) , and q ( n , G ( P )) , for all natural numbers n , violates ω-consistency of the formal theory. Thus if the theory is ω-consistent , ¬ P ( G ( P )) is not provable.
We have sketched a proof showing that, for any formal, recursively enumerable (i.e. effectively generated) theory of Peano Arithmetic, if it is consistent then P ( G ( P )) is not provable, and if it is ω-consistent then ¬ P ( G ( P )) is not provable either; such a theory therefore contains a sentence that it can neither prove nor refute, and so it is incomplete.
The proof of Gödel's incompleteness theorem just sketched is proof-theoretic (also called syntactic ) in that it shows that if certain proofs exist (a proof of P ( G ( P )) or its negation) then they can be manipulated to produce a proof of a contradiction. This makes no appeal to whether P ( G ( P )) is "true", only to whether it is provable. Truth is a model-theoretic , or semantic , concept, and is not equivalent to provability except in special cases.
By analyzing the situation of the above proof in more detail, it is possible to obtain a conclusion about the truth of P ( G ( P )) in the standard model N {\displaystyle \mathbb {N} } of natural numbers. As just seen, q ( n , G ( P )) is provable for each natural number n , and is thus true in the model N {\displaystyle \mathbb {N} } . Therefore, within this model, the sentence P ( G ( P )) = ∀ y q ( y , G ( P )) {\displaystyle \forall y\,q(y,G(P))}
holds. This is what the statement " P ( G ( P )) is true" usually refers to—the sentence is true in the intended model. It is not true in every model, however: If it were, then by Gödel's completeness theorem it would be provable, which we have just seen is not the case.
George Boolos (1989) vastly simplified the proof of the First Theorem, if one agrees that the theorem is equivalent to:
"There is no algorithm M whose output contains all true sentences of arithmetic and no false ones."
"Arithmetic" refers to Peano or Robinson arithmetic , but the proof invokes no specifics of either, tacitly assuming that these systems allow '<' and '×' to have their usual meanings. Boolos proves the theorem in about two pages. His proof employs the language of first-order logic , but invokes no facts about the connectives or quantifiers . The domain of discourse is the natural numbers . The Gödel sentence builds on Berry's paradox .
Let [ n ] abbreviate n successive applications of the successor function , starting from 0 . Boolos then asserts (the details are only sketched) that there exists a defined predicate Cxz that comes out true iff an arithmetic formula containing z symbols names the number x . This proof sketch contains the only mention of Gödel numbering ; Boolos merely assumes that every formula can be so numbered. Here, a formula F names the number n if and only if the following is provable:
Boolos then defines the related predicates:
Fx formalizes Berry's paradox. The balance of the proof, requiring but 12 lines of text, shows that the sentence ∀ x ( Fx ↔( x = [ n ])) is true for some number n , but no algorithm M will identify it as true. Hence in arithmetic, truth outruns proof. QED.
The above predicates contain the only existential quantifiers appearing in the entire proof. The '<' and '×' appearing in these predicates are the only defined arithmetical notions the proof requires. The proof nowhere mentions recursive functions or any facts from number theory , and Boolos claims that his proof dispenses with diagonalization . For more on this proof, see Berry's paradox . | https://en.wikipedia.org/wiki/Proof_sketch_for_Gödel's_first_incompleteness_theorem |
Proofs of the mathematical result that the rational number 22 / 7 is greater than π (pi) date back to antiquity. One of these proofs, more recently developed but requiring only elementary techniques from calculus, has attracted attention in modern mathematics due to its mathematical elegance and its connections to the theory of Diophantine approximations . Stephen Lucas calls this proof "one of the more beautiful results related to approximating π ". [ 1 ] Julian Havil ends a discussion of continued fraction approximations of π with the result, describing it as "impossible to resist mentioning" in that context. [ 2 ]
The purpose of the proof is not primarily to convince its readers that 22 / 7 (or 3 + 1 / 7 ) is indeed bigger than π . Systematic methods of computing the value of π exist. If one knows that π is approximately 3.14159, then it trivially follows that π < 22 / 7 , which is approximately 3.142857. But it takes much less work to show that π < 22 / 7 by the method used in this proof than to show that π is approximately 3.14159.
22 / 7 is a widely used Diophantine approximation of π . It is a convergent in the simple continued fraction expansion of π . It is greater than π , as can be readily seen in the decimal expansions of these values:
The approximation has been known since antiquity. Archimedes wrote the first known proof that 22 / 7 is an overestimate in the 3rd century BCE, although he may not have been the first to use that approximation. His proof proceeds by showing that 22 / 7 is greater than the ratio of the perimeter of a regular polygon with 96 sides to the diameter of a circle it circumscribes. [ note 1 ]
The proof first devised by British electrical engineer Donald Percy Dalzell (1898–1988) in 1944 [ 4 ] can be expressed very succinctly:
0 < ∫ 0 1 x 4 ( 1 − x ) 4 1 + x 2 d x = 22 7 − π . {\displaystyle 0<\int _{0}^{1}{\frac {x^{4}(1-x)^{4}}{1+x^{2}}}\,dx={\frac {22}{7}}-\pi .}
Therefore, 22 / 7 > π .
The evaluation of this integral was the first problem in the 1968 Putnam Competition . [ 5 ] It is easier than most Putnam Competition problems, but the competition often features seemingly obscure problems that turn out to refer to something very familiar. This integral has also been used in the entrance examinations for the Indian Institutes of Technology . [ 6 ]
That the integral is positive follows from the fact that the integrand is non-negative ; the denominator is positive and the numerator is a product of nonnegative numbers. One can also easily check that the integrand is strictly positive for at least one point in the range of integration, say at 1 / 2 . Since the integrand is continuous at that point and nonnegative elsewhere, the integral from 0 to 1 must be strictly positive.
It remains to show that the integral in fact evaluates to the desired quantity:
(See polynomial long division .)
In Dalzell (1944) , it is pointed out that if 1 is substituted for x in the denominator, one gets a lower bound on the integral, and if 0 is substituted for x in the denominator, one gets an upper bound: [ 7 ]
Thus we have
hence 3.1412 < π < 3.1421 in decimal expansion. The bounds deviate by less than 0.015% from π . See also Dalzell (1971) . [ 8 ]
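Both the exact evaluation and the resulting bounds can be confirmed mechanically; the following is a small sympy sketch, an illustration rather than part of the historical proofs:

```python
# Verify Dalzell's integral exactly and recover the bounds quoted above.
from sympy import symbols, integrate, simplify, pi, Rational

x = symbols('x')
f = x**4 * (1 - x)**4 / (1 + x**2)

val = integrate(f, (x, 0, 1))
assert simplify(val - (Rational(22, 7) - pi)) == 0   # equals 22/7 - pi

# On [0, 1] the denominator 1 + x**2 lies between 1 and 2, so the
# integral is squeezed between half and all of the polynomial integral:
assert integrate(x**4 * (1 - x)**4, (x, 0, 1)) == Rational(1, 630)
# hence 1/1260 < 22/7 - pi < 1/630, i.e. 3.1412... < pi < 3.1420...
```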
As discussed in Lucas (2005) , the well-known Diophantine approximation and far better upper estimate 355 / 113 for π follows from the relation
0 < ∫ 0 1 x 8 ( 1 − x ) 8 ( 25 + 816 x 2 ) 3164 ( 1 + x 2 ) d x = 355 113 − π , {\displaystyle 0<\int _{0}^{1}{\frac {x^{8}(1-x)^{8}(25+816x^{2})}{3164(1+x^{2})}}\,dx={\frac {355}{113}}-\pi ,}
where the first six digits after the decimal point agree with those of π . Substituting 1 for x in the denominator, we get the lower bound
substituting 0 for x in the denominator, we get twice this value as an upper bound, hence
In decimal expansion, this means 3.141 592 57 < π < 3.141 592 74 , where the bold digits of the lower and upper bound are those of π .
The above ideas can be generalized to get better approximations of π ; see also Backhouse (1995) [ 9 ] and Lucas (2005) (in both references, however, no calculations are given). For explicit calculations, consider, for every integer n ≥ 1 ,
where the middle integral evaluates to
involving π . The last sum also appears in Leibniz' formula for π . The correction term and error bound is given by
where the approximation (the tilde means that the quotient of both sides tends to one for large n ) of the central binomial coefficient follows from Stirling's formula and shows the fast convergence of the integrals to π .
Calculation of these integrals: For all integers k ≥ 0 and ℓ ≥ 2 we have
Applying this formula recursively 2 n times yields
Furthermore,
where the first equality holds, because the terms for 1 ≤ j ≤ 3 n – 1 cancel, and the second equality arises from the index shift j → j + 1 in the first sum.
Application of these two results gives
For integers k , ℓ ≥ 0 , using integration by parts ℓ times, we obtain
Setting k = ℓ = 4 n , we obtain
Integrating equation (1) from 0 to 1 using equation (2) and arctan(1) = π / 4 , we get the claimed equation involving π .
The results for n = 1 are given above. For n = 2 we get
and
hence 3.141 592 31 < π < 3.141 592 89 , where the bold digits of the lower and upper bound are those of π . Similarly for n = 3 ,
with correction term and error bound
hence 3.141 592 653 40 < π < 3.141 592 653 87 . The next step for n = 4 is
with
which gives 3.141 592 653 589 55 < π < 3.141 592 653 589 96 . | https://en.wikipedia.org/wiki/Proof_that_22/7_exceeds_π |
The number e was introduced by Jacob Bernoulli in 1683. More than half a century later, Euler , who had been a student of Jacob's younger brother Johann , proved that e is irrational ; that is, that it cannot be expressed as the quotient of two integers.
Euler wrote the first proof of the fact that e is irrational in 1737 (but the text was only published seven years later). [ 1 ] [ 2 ] [ 3 ] He computed the representation of e as a simple continued fraction , which is
Since this continued fraction is infinite and every rational number has a terminating continued fraction, e is irrational. A short proof of the previous equality is known. [ 4 ] [ 5 ] Since the simple continued fraction of e is not periodic , this also proves that e is not a root of a quadratic polynomial with rational coefficients; in particular, e 2 is irrational.
The most well-known proof is Joseph Fourier 's proof by contradiction , [ 6 ] which is based upon the equality
Initially e is assumed to be a rational number of the form a / b . The idea is to then analyze the scaled-up difference (here denoted x ) between the series representation of e and its strictly smaller b -th partial sum, which approximates the limiting value e . By choosing the scale factor to be the factorial of b , the fraction a / b and the b -th partial sum are turned into integers , hence x must be a positive integer. However, the fast convergence of the series representation implies that x is still strictly smaller than 1. From this contradiction we deduce that e is irrational.
Now for the details. If e is a rational number , there exist positive integers a and b such that e = a / b . Define the number
Use the assumption that e = a / b to obtain
The first term is an integer, and every fraction in the sum is actually an integer because n ≤ b for each term. Therefore, under the assumption that e is rational, x is an integer.
We now prove that 0 < x < 1 . First, to prove that x is strictly positive, we insert the above series representation of e into the definition of x and obtain
because all the terms are strictly positive.
We now prove that x < 1 . For all terms with n ≥ b + 1 we have the upper estimate
This inequality is strict for every n ≥ b + 2 . Changing the index of summation to k = n – b and using the formula for the infinite geometric series , we obtain
And therefore x < 1. {\displaystyle x<1.}
Since there is no integer strictly between 0 and 1, we have reached a contradiction, and so e is irrational, Q.E.D.
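The quantity x in Fourier's argument equals the scaled tail b !·Σ n > b 1/ n !. A short exact-arithmetic sketch confirms the bound 0 < x < 1 (indeed the sharper geometric-series estimate x < 1/ b used above); the tail is truncated after finitely many terms, which only lowers it:

```python
# x = b!(e - sum_{n<=b} 1/n!) = b! * sum_{n>b} 1/n!; the proof needs
# 0 < x < 1.  We check the geometric-series bound x < 1/b numerically.
from fractions import Fraction
from math import factorial

def scaled_tail(b, terms=40):
    # b! * sum_{n=b+1}^{b+terms} 1/n!  -- a lower bound converging to x
    return sum(Fraction(factorial(b), factorial(n))
               for n in range(b + 1, b + terms + 1))

for b in range(1, 10):
    x_lower = scaled_tail(b)
    assert 0 < x_lower < Fraction(1, b)
```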
Another proof [ 7 ] can be obtained from the previous one by noting that
and this inequality is equivalent to the assertion that bx < 1. This is impossible, of course, since b and x are positive integers.
Still another proof [ 8 ] [ 9 ] can be obtained from the fact that
Define s n {\displaystyle s_{n}} as follows:
Then
which implies
for any positive integer n {\displaystyle n} .
Note that ( 2 n − 1 ) ! s 2 n − 1 {\displaystyle (2n-1)!s_{2n-1}} is always an integer. Assume that e − 1 {\displaystyle e^{-1}} is rational, so e − 1 = p / q , {\displaystyle e^{-1}=p/q,} where p , q {\displaystyle p,q} are co-prime, and q ≠ 0. {\displaystyle q\neq 0.} It is possible to appropriately choose n {\displaystyle n} so that ( 2 n − 1 ) ! e − 1 {\displaystyle (2n-1)!e^{-1}} is an integer, i.e. n ≥ ( q + 1 ) / 2. {\displaystyle n\geq (q+1)/2.} Hence, for this choice, the difference between ( 2 n − 1 ) ! e − 1 {\displaystyle (2n-1)!e^{-1}} and ( 2 n − 1 ) ! s 2 n − 1 {\displaystyle (2n-1)!s_{2n-1}} would be an integer. But from the above inequality, that is not possible. So, e − 1 {\displaystyle e^{-1}} is irrational. This means that e {\displaystyle e} is irrational.
In 1840, Liouville published a proof of the fact that e 2 is irrational [ 10 ] followed by a proof that e 2 is not a root of a second-degree polynomial with rational coefficients. [ 11 ] This last fact implies that e 4 is irrational. His proofs are similar to Fourier's proof of the irrationality of e . In 1891, Hurwitz explained how it is possible to prove along the same line of ideas that e is not a root of a third-degree polynomial with rational coefficients, which implies that e 3 is irrational. [ 12 ] More generally, e q is irrational for any non-zero rational q . [ 13 ]
Charles Hermite further proved that e is a transcendental number , in 1873, which means that is not a root of any polynomial with rational coefficients, as is e α for any non-zero algebraic α . [ 14 ] | https://en.wikipedia.org/wiki/Proof_that_e_is_irrational |
In the 1760s, Johann Heinrich Lambert was the first to prove that the number π is irrational , meaning it cannot be expressed as a fraction a / b {\displaystyle a/b} , where a {\displaystyle a} and b {\displaystyle b} are both integers . In the 19th century, Charles Hermite found a proof that requires no prerequisite knowledge beyond basic calculus . Three simplifications of Hermite's proof are due to Mary Cartwright , Ivan Niven , and Nicolas Bourbaki . Another proof, which is a simplification of Lambert's proof, is due to Miklós Laczkovich . Many of these are proofs by contradiction .
In 1882, Ferdinand von Lindemann proved that π {\displaystyle \pi } is not just irrational, but transcendental as well. [ 1 ]
In 1761, Johann Heinrich Lambert proved that π {\displaystyle \pi } is irrational by first showing that this continued fraction expansion holds:
Then Lambert proved that if x {\displaystyle x} is non-zero and rational, then this expression must be irrational. Since tan π 4 = 1 {\displaystyle \tan {\tfrac {\pi }{4}}=1} , it follows that π 4 {\displaystyle {\tfrac {\pi }{4}}} is irrational, and thus π {\displaystyle \pi } is also irrational. [ 2 ] A simplification of Lambert's proof is given below .
Written in 1873, this proof uses the characterization of π {\displaystyle \pi } as the smallest positive number whose half is a zero of the cosine function and it actually proves that π 2 {\displaystyle \pi ^{2}} is irrational. [ 3 ] [ 4 ] As in many proofs of irrationality, it is a proof by contradiction .
Consider the sequences of real functions A n {\displaystyle A_{n}} and U n {\displaystyle U_{n}} for n ∈ N 0 {\displaystyle n\in \mathbb {N} _{0}} defined by:
Using induction we can prove that
and therefore we have:
So
which is equivalent to
Using the definition of the sequence and employing induction we can show that
where P n {\displaystyle P_{n}} and Q n {\displaystyle Q_{n}} are polynomial functions with integer coefficients and the degree of P n {\displaystyle P_{n}} is smaller than or equal to ⌊ 1 2 n ⌋ . {\displaystyle {\bigl \lfloor }{\tfrac {1}{2}}n{\bigr \rfloor }.} In particular, A n ( 1 2 π ) = P n ( 1 4 π 2 ) . {\displaystyle A_{n}{\bigl (}{\tfrac {1}{2}}\pi {\bigr )}=P_{n}{\bigl (}{\tfrac {1}{4}}\pi ^{2}{\bigr )}.}
Hermite also gave a closed expression for the function A n , {\displaystyle A_{n},} namely
He did not justify this assertion, but it can be proved easily. First of all, this assertion is equivalent to
Proceeding by induction, take n = 0. {\displaystyle n=0.}
and, for the inductive step, consider any natural number n . {\displaystyle n.} If
then, using integration by parts and Leibniz's rule , one gets
If 1 4 π 2 = p / q , {\displaystyle {\tfrac {1}{4}}\pi ^{2}=p/q,} with p {\displaystyle p} and q {\displaystyle q} in N {\displaystyle \mathbb {N} } , then, since the coefficients of P n {\displaystyle P_{n}} are integers and its degree is smaller than or equal to ⌊ 1 2 n ⌋ , {\displaystyle {\bigl \lfloor }{\tfrac {1}{2}}n{\bigr \rfloor },} q ⌊ n / 2 ⌋ P n ( 1 4 π 2 ) {\displaystyle q^{\lfloor n/2\rfloor }P_{n}{\bigl (}{\tfrac {1}{4}}\pi ^{2}{\bigr )}} is some integer N . {\displaystyle N.} In other words,
But this number is clearly greater than 0. {\displaystyle 0.} On the other hand, the limit of this quantity as n {\displaystyle n} goes to infinity is zero, and so, if n {\displaystyle n} is large enough, N < 1. {\displaystyle N<1.} Thereby, a contradiction is reached.
Hermite did not present his proof as an end in itself but as an afterthought within his search for a proof of the transcendence of π . {\displaystyle \pi .} He discussed the recurrence relations to motivate and to obtain a convenient integral representation. Once this integral representation is obtained, there are various ways to present a succinct and self-contained proof starting from the integral (as in Cartwright's, Bourbaki's or Niven's presentations), which Hermite could easily see (as he did in his proof of the transcendence of e {\displaystyle e} [ 5 ] ).
Moreover, Hermite's proof is closer to Lambert's proof than it seems. In fact, A n ( x ) {\displaystyle A_{n}(x)} is the "residue" (or "remainder") of Lambert's continued fraction for tan x . {\displaystyle \tan x.} [ 6 ]
Harold Jeffreys wrote that this proof was set as an example in an exam at Cambridge University in 1945 by Mary Cartwright , but that she had not traced its origin. [ 7 ] It still remains on the 4th problem sheet today for the Analysis IA course at Cambridge University. [ 8 ]
Consider the integrals
I n ( x ) = ∫ − 1 1 ( 1 − z 2 ) n cos ( x z ) d z {\displaystyle I_{n}(x)=\int _{-1}^{1}(1-z^{2})^{n}\cos(xz)\,dz}
where n {\displaystyle n} is a non-negative integer.
Two integrations by parts give the recurrence relation
If
then this becomes
Furthermore, J 0 ( x ) = 2 sin x {\displaystyle J_{0}(x)=2\sin x} and J 1 ( x ) = − 4 x cos x + 4 sin x . {\displaystyle J_{1}(x)=-4x\cos x+4\sin x.} Hence for all n ∈ Z + , {\displaystyle n\in \mathbb {Z} _{+},}
where P n ( x ) {\displaystyle P_{n}(x)} and Q n ( x ) {\displaystyle Q_{n}(x)} are polynomials of degree ≤ n , {\displaystyle \leq n,} and with integer coefficients (depending on n {\displaystyle n} ).
Take x = 1 2 π , {\displaystyle x={\tfrac {1}{2}}\pi ,} and suppose if possible that 1 2 π = a / b {\displaystyle {\tfrac {1}{2}}\pi =a/b} where a {\displaystyle a} and b {\displaystyle b} are natural numbers (i.e., assume that π {\displaystyle \pi } is rational). Then
The right side is an integer. But 0 < I n ( 1 2 π ) < 2 {\displaystyle 0<I_{n}{\bigl (}{\tfrac {1}{2}}\pi {\bigr )}<2} since the interval [ − 1 , 1 ] {\displaystyle [-1,1]} has length 2 {\displaystyle 2} and the function being integrated takes only values between 0 {\displaystyle 0} and 1. {\displaystyle 1.} On the other hand,
Hence, for sufficiently large n {\displaystyle n}
that is, we could find an integer between 0 {\displaystyle 0} and 1. {\displaystyle 1.} That is the contradiction that follows from the assumption that π {\displaystyle \pi } is rational.
This proof is similar to Hermite's proof. Indeed,
However, it is clearly simpler. This is achieved by omitting the inductive definition of the functions A n {\displaystyle A_{n}} and taking as a starting point their expression as an integral.
This proof uses the characterization of π {\displaystyle \pi } as the smallest positive zero of the sine function. [ 9 ]
Suppose that π {\displaystyle \pi } is rational, i.e. π = a / b {\displaystyle \pi =a/b} for some integers a {\displaystyle a} and b {\displaystyle b} which may be taken without loss of generality to both be positive. Given any positive integer n , {\displaystyle n,} we define the polynomial function:
f ( x ) = x n ( a − b x ) n n ! {\displaystyle f(x)={\frac {x^{n}(a-bx)^{n}}{n!}}}
and, for each x ∈ R {\displaystyle x\in \mathbb {R} } let
F ( x ) = f ( x ) − f ″ ( x ) + f ( 4 ) ( x ) − ⋯ + ( − 1 ) n f ( 2 n ) ( x ) . {\displaystyle F(x)=f(x)-f''(x)+f^{(4)}(x)-\cdots +(-1)^{n}f^{(2n)}(x).}
Claim 1: F ( 0 ) + F ( π ) {\displaystyle F(0)+F(\pi )} is an integer.
Proof: Expanding f {\displaystyle f} as a sum of monomials, the coefficient of x k {\displaystyle x^{k}} is a number of the form c k / n ! {\displaystyle c_{k}/n!} where c k {\displaystyle c_{k}} is an integer, which is 0 {\displaystyle 0} if k < n . {\displaystyle k<n.} Therefore, f ( k ) ( 0 ) {\displaystyle f^{(k)}(0)} is 0 {\displaystyle 0} when k < n {\displaystyle k<n} and it is equal to ( k ! / n ! ) c k {\displaystyle (k!/n!)c_{k}} if n ≤ k ≤ 2 n {\displaystyle n\leq k\leq 2n} ; in each case, f ( k ) ( 0 ) {\displaystyle f^{(k)}(0)} is an integer and therefore F ( 0 ) {\displaystyle F(0)} is an integer.
On the other hand, f ( π − x ) = f ( x ) {\displaystyle f(\pi -x)=f(x)} and so ( − 1 ) k f ( k ) ( π − x ) = f ( k ) ( x ) {\displaystyle (-1)^{k}f^{(k)}(\pi -x)=f^{(k)}(x)} for each non-negative integer k . {\displaystyle k.} In particular, ( − 1 ) k f ( k ) ( π ) = f ( k ) ( 0 ) . {\displaystyle (-1)^{k}f^{(k)}(\pi )=f^{(k)}(0).} Therefore, f ( k ) ( π ) {\displaystyle f^{(k)}(\pi )} is also an integer and so F ( π ) {\displaystyle F(\pi )} is an integer (in fact, it is easy to see that F ( π ) = F ( 0 ) {\displaystyle F(\pi )=F(0)} ). Since F ( 0 ) {\displaystyle F(0)} and F ( π ) {\displaystyle F(\pi )} are integers, so is their sum.
Claim 2:
∫ 0 π f ( x ) sin x d x = F ( 0 ) + F ( π ) . {\displaystyle \int _{0}^{\pi }f(x)\sin x\,dx=F(0)+F(\pi ).}
Proof: Since f ( 2 n + 2 ) {\displaystyle f^{(2n+2)}} is the zero polynomial, we have
F ″ + F = f . {\displaystyle F''+F=f.}
The derivatives of the sine and cosine function are given by sin' = cos and cos' = −sin. Hence the product rule implies
( F ′ ⋅ sin − F ⋅ cos ) ′ = f ⋅ sin . {\displaystyle (F'\cdot \sin -F\cdot \cos )'=f\cdot \sin .}
By the fundamental theorem of calculus
∫ 0 π f ( x ) sin x d x = [ F ′ ( x ) sin x − F ( x ) cos x ] 0 π . {\displaystyle \int _{0}^{\pi }f(x)\sin x\,dx={\bigl [}F'(x)\sin x-F(x)\cos x{\bigr ]}_{0}^{\pi }.}
Since sin 0 = sin π = 0 {\displaystyle \sin 0=\sin \pi =0} and cos 0 = − cos π = 1 {\displaystyle \cos 0=-\cos \pi =1} (here we use the above-mentioned characterization of π {\displaystyle \pi } as a zero of the sine function), Claim 2 follows.
Conclusion: Since f ( x ) > 0 {\displaystyle f(x)>0} and sin x > 0 {\displaystyle \sin x>0} for 0 < x < π {\displaystyle 0<x<\pi } (because π {\displaystyle \pi } is the smallest positive zero of the sine function), Claims 1 and 2 show that F ( 0 ) + F ( π ) {\displaystyle F(0)+F(\pi )} is a positive integer. Since 0 ≤ x ( a − b x ) ≤ π a {\displaystyle 0\leq x(a-bx)\leq \pi a} and 0 ≤ sin x ≤ 1 {\displaystyle 0\leq \sin x\leq 1} for 0 ≤ x ≤ π , {\displaystyle 0\leq x\leq \pi ,} we have, by the original definition of f , {\displaystyle f,}
which is smaller than 1 {\displaystyle 1} for large n , {\displaystyle n,} hence F ( 0 ) + F ( π ) < 1 {\displaystyle F(0)+F(\pi )<1} for these n , {\displaystyle n,} by Claim 2. This is impossible for the positive integer F ( 0 ) + F ( π ) . {\displaystyle F(0)+F(\pi ).} This shows that the original assumption that π {\displaystyle \pi } is rational leads to a contradiction, which concludes the proof.
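The squeeze in Niven's argument can be watched numerically. With a rational stand-in a / b for π (here 22/7, a hypothetical choice made only for illustration), the integral of f ( x ) sin x over [0, π ] is strictly positive but is eventually crushed by the n ! in the denominator:

```python
# Niven's squeeze, numerically: the integral is positive for every n,
# yet the n! in the denominator eventually forces it below 1.
from mpmath import mp, quad, sin, factorial, pi

mp.dps = 60
a, b = 22, 7   # hypothetical "pi = a/b" of the contradiction argument

for n in (10, 30, 50, 70, 90):
    val = quad(lambda x: x**n * (a - b*x)**n / factorial(n) * sin(x),
               [0, pi])
    print(n, val)
# The printed values grow at first but then fall below 1 and rush to 0;
# under the assumption pi = a/b each would equal the positive integer
# F(0) + F(pi), which is the contradiction.
```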
The above proof is a polished version, which is kept as simple as possible concerning the prerequisites, of an analysis of the formula
which is obtained by 2 n + 2 {\displaystyle 2n+2} integrations by parts . Claim 2 essentially establishes this formula, where the use of F {\displaystyle F} hides the iterated integration by parts. The last integral vanishes because f ( 2 n + 2 ) {\displaystyle f^{(2n+2)}} is the zero polynomial. Claim 1 shows that the remaining sum is an integer.
Niven's proof is closer to Cartwright's (and therefore Hermite's) proof than it appears at first sight. [ 6 ] In fact,
Therefore, the substitution x z = y {\displaystyle xz=y} turns this integral into
In particular,
Another connection between the proofs lies in the fact that Hermite already mentions [ 3 ] that if f {\displaystyle f} is a polynomial function and
then
from which it follows that
Bourbaki 's proof is outlined as an exercise in his calculus treatise. [ 10 ] For each natural number b and each non-negative integer n , {\displaystyle n,} define
Since A n ( b ) {\displaystyle A_{n}(b)} is the integral of a function defined on [ 0 , π ] {\displaystyle [0,\pi ]} that takes the value 0 {\displaystyle 0} at 0 {\displaystyle 0} and π {\displaystyle \pi } and which is greater than 0 {\displaystyle 0} otherwise, A n ( b ) > 0. {\displaystyle A_{n}(b)>0.} Besides, for each natural number b , {\displaystyle b,} A n ( b ) < 1 {\displaystyle A_{n}(b)<1} if n {\displaystyle n} is large enough, because
and therefore
On the other hand, repeated integration by parts allows us to deduce that, if a {\displaystyle a} and b {\displaystyle b} are natural numbers such that π = a / b {\displaystyle \pi =a/b} and f {\displaystyle f} is the polynomial function from [ 0 , π ] {\displaystyle [0,\pi ]} into R {\displaystyle \mathbb {R} } defined by
then:
This last integral is 0 , {\displaystyle 0,} since f ( 2 n + 1 ) {\displaystyle f^{(2n+1)}} is the null function (because f {\displaystyle f} is a polynomial function of degree 2 n {\displaystyle 2n} ). Since each function f ( k ) {\displaystyle f^{(k)}} (with 0 ≤ k ≤ 2 n {\displaystyle 0\leq k\leq 2n} ) takes integer values at 0 {\displaystyle 0} and π {\displaystyle \pi } and since the same thing happens with the sine and the cosine functions, this proves that A n ( b ) {\displaystyle A_{n}(b)} is an integer. Since it is also greater than 0 , {\displaystyle 0,} it must be a natural number. But it was also proved that A n ( b ) < 1 {\displaystyle A_{n}(b)<1} if n {\displaystyle n} is large enough, thereby reaching a contradiction .
This proof is quite close to Niven's proof, the main difference between them being the way of proving that the numbers A n ( b ) {\displaystyle A_{n}(b)} are integers.
Miklós Laczkovich 's proof is a simplification of Lambert's original proof. [ 11 ] He considers the functions
f k ( x ) = 1 − x 2 k + x 4 2 ! k ( k + 1 ) − x 6 3 ! k ( k + 1 ) ( k + 2 ) + ⋯ {\displaystyle f_{k}(x)=1-{\frac {x^{2}}{k}}+{\frac {x^{4}}{2!\,k(k+1)}}-{\frac {x^{6}}{3!\,k(k+1)(k+2)}}+\cdots }
These functions are clearly defined for any real number x . {\displaystyle x.} Besides, f 1 / 2 ( x ) = cos 2 x {\displaystyle f_{1/2}(x)=\cos 2x} and f 3 / 2 ( x ) = sin 2 x 2 x {\displaystyle f_{3/2}(x)={\frac {\sin 2x}{2x}}} for x ≠ 0. {\displaystyle x\neq 0.}
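Since f k ( x ) is the hypergeometric value ₀F₁( k ; − x ²) mentioned at the end of this section, these special values are easy to sanity-check numerically:

```python
# Check f_{1/2}(x) = cos(2x) and f_{3/2}(x) = sin(2x)/(2x) numerically,
# using f_k(x) = 0F1(k; -x**2).
from mpmath import mp, hyp0f1, cos, sin, mpf

mp.dps = 30
for x in (mpf('0.3'), mpf('1.1'), mpf('2.5')):
    f_half  = hyp0f1(mpf(1)/2, -x**2)
    f_3half = hyp0f1(mpf(3)/2, -x**2)
    assert abs(f_half - cos(2*x)) < mpf('1e-25')
    assert abs(f_3half - sin(2*x)/(2*x)) < mpf('1e-25')
```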
Claim 1: The following recurrence relation holds for any real number x {\displaystyle x} :
x 2 k ( k + 1 ) f k + 2 ( x ) = f k + 1 ( x ) − f k ( x ) . {\displaystyle {\frac {x^{2}}{k(k+1)}}f_{k+2}(x)=f_{k+1}(x)-f_{k}(x).}
Proof: This can be proved by comparing the coefficients of the powers of x . {\displaystyle x.}
Claim 2: For each real number x , {\displaystyle x,}
lim k → + ∞ f k ( x ) = 1. {\displaystyle \lim _{k\to +\infty }f_{k}(x)=1.}
Proof: In fact, the sequence x 2 n / n ! {\displaystyle x^{2n}/n!} is bounded (since it converges to 0 {\displaystyle 0} ) and if C {\displaystyle C} is an upper bound and if k > 1 , {\displaystyle k>1,} then
Claim 3: If x ≠ 0 , {\displaystyle x\neq 0,} x 2 {\displaystyle x^{2}} is rational, and k ∈ Q ∖ { 0 , − 1 , − 2 , … } , {\displaystyle k\in \mathbb {Q} \smallsetminus \{0,-1,-2,\ldots \},} then f k ( x ) ≠ 0 {\displaystyle f_{k}(x)\neq 0} and f k + 1 ( x ) / f k ( x ) {\displaystyle f_{k+1}(x)/f_{k}(x)} is irrational.
Proof: Otherwise, there would be a number y ≠ 0 {\displaystyle y\neq 0} and integers a {\displaystyle a} and b {\displaystyle b} such that f k ( x ) = a y {\displaystyle f_{k}(x)=ay} and f k + 1 ( x ) = b y . {\displaystyle f_{k+1}(x)=by.} To see why, take y = f k + 1 ( x ) , {\displaystyle y=f_{k+1}(x),} a = 0 , {\displaystyle a=0,} and b = 1 {\displaystyle b=1} if f k ( x ) = 0 {\displaystyle f_{k}(x)=0} ; otherwise, choose integers a {\displaystyle a} and b {\displaystyle b} such that f k + 1 ( x ) / f k ( x ) = b / a {\displaystyle f_{k+1}(x)/f_{k}(x)=b/a} and define y = f k ( x ) / a = f k + 1 ( x ) / b . {\displaystyle y=f_{k}(x)/a=f_{k+1}(x)/b.} In each case, y {\displaystyle y} cannot be 0 , {\displaystyle 0,} because otherwise it would follow from claim 1 that each f k + n ( x ) {\displaystyle f_{k+n}(x)} ( n ∈ N {\displaystyle n\in \mathbb {N} } ) would be 0 , {\displaystyle 0,} which would contradict claim 2. Now, take a natural number c {\displaystyle c} such that all three numbers b c / k , {\displaystyle bc/k,} c k / x 2 , {\displaystyle ck/x^{2},} and c / x 2 {\displaystyle c/x^{2}} are integers and consider the sequence
Then
On the other hand, it follows from claim 1 that
which is a linear combination of g n + 1 {\displaystyle g_{n+1}} and g n {\displaystyle g_{n}} with integer coefficients. Therefore, each g n {\displaystyle g_{n}} is an integer multiple of y . {\displaystyle y.} Besides, it follows from claim 2 that each g n {\displaystyle g_{n}} is greater than 0 {\displaystyle 0} (and therefore that g n ≥ | y | {\displaystyle g_{n}\geq |y|} ) if n {\displaystyle n} is large enough and that the sequence of all g n {\displaystyle g_{n}} converges to 0. {\displaystyle 0.} But a sequence of numbers greater than or equal to | y | {\displaystyle |y|} cannot converge to 0. {\displaystyle 0.}
Since f 1 / 2 ( 1 4 π ) = cos 1 2 π = 0 , {\displaystyle f_{1/2}({\tfrac {1}{4}}\pi )=\cos {\tfrac {1}{2}}\pi =0,} it follows from claim 3 that 1 16 π 2 {\displaystyle {\tfrac {1}{16}}\pi ^{2}} is irrational and therefore that π {\displaystyle \pi } is irrational.
On the other hand, since
tan x = x f 3 / 2 ( x / 2 ) f 1 / 2 ( x / 2 ) , {\displaystyle \tan x={\frac {x\,f_{3/2}(x/2)}{f_{1/2}(x/2)}},}
another consequence of Claim 3 is that, if x ∈ Q ∖ { 0 } , {\displaystyle x\in \mathbb {Q} \smallsetminus \{0\},} then tan x {\displaystyle \tan x} is irrational.
Laczkovich's proof is really about the hypergeometric function . In fact, f k ( x ) = 0 F 1 ( k − x 2 ) {\displaystyle f_{k}(x)={}_{0}F_{1}(k-x^{2})} and Gauss found a continued fraction expansion of the hypergeometric function using its functional equation . [ 12 ] This allowed Laczkovich to find a new and simpler proof of the fact that the tangent function has the continued fraction expansion that Lambert had discovered.
Laczkovich's result can also be expressed in Bessel functions of the first kind J ν ( x ) {\displaystyle J_{\nu }(x)} . In fact, Γ ( k ) J k − 1 ( 2 x ) = x k − 1 f k ( x ) {\displaystyle \Gamma (k)J_{k-1}(2x)=x^{k-1}f_{k}(x)} (where Γ {\displaystyle \Gamma } is the gamma function ). So Laczkovich's result is equivalent to: If x ≠ 0 , {\displaystyle x\neq 0,} x 2 {\displaystyle x^{2}} is rational, and k ∈ Q ∖ { 0 , − 1 , − 2 , … } {\displaystyle k\in \mathbb {Q} \smallsetminus \{0,-1,-2,\ldots \}} then | https://en.wikipedia.org/wiki/Proof_that_π_is_irrational |
In mathematics , a proof without words (or visual proof ) is an illustration of an identity or mathematical statement which can be demonstrated as self-evident by a diagram without any accompanying explanatory text. Such proofs can be considered more elegant than formal or mathematically rigorous proofs due to their self-evident nature. [ 1 ] When the diagram demonstrates a particular case of a general statement, to be a proof, it must be generalisable. [ 2 ]
A proof without words is not the same as a mathematical proof , because it omits the details of the logical argument it illustrates. However, it can provide valuable intuitions to the viewer that can help them formulate or better understand a true proof.
The statement that the sum of all positive odd numbers up to 2 n − 1 is a perfect square —more specifically, the perfect square n 2 —can be demonstrated by a proof without words. [ 3 ]
In one corner of a grid, a single block represents 1, the first square. That can be wrapped on two sides by a strip of three blocks (the next odd number) to make a 2 × 2 block: 4, the second square. Adding a further five blocks makes a 3 × 3 block: 9, the third square. This process can be continued indefinitely.
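The same picture can be rendered in a few lines of code, labeling each cell of an n × n square with the step at which its L-shaped layer is added (an illustrative sketch, not a substitute for the diagram):

```python
# Layer k is the L-shaped set of cells with max(i, j) = k; it has
# 2k - 1 cells, and the layers together tile the n-by-n square.
n = 5
for i in range(1, n + 1):
    print(" ".join(str(max(i, j)) for j in range(1, n + 1)))

assert sum(2*k - 1 for k in range(1, n + 1)) == n**2
```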
The Pythagorean theorem that a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} can be proven without words. [ 4 ]
One method of doing so is to visualise a larger square of sides a + b {\displaystyle a+b} , with four right-angled triangles of sides a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} in its corners, such that the space in the middle is a diagonal square with an area of c 2 {\displaystyle c^{2}} . The four triangles can be rearranged within the larger square to split its unused space into two squares of a 2 {\displaystyle a^{2}} and b 2 {\displaystyle b^{2}} . [ 5 ]
Jensen's inequality can also be proven graphically. A dashed curve along the X axis is the hypothetical distribution of X , while a dashed curve along the Y axis is the corresponding distribution of Y values. The convex mapping Y ( X ) increasingly "stretches" the distribution for increasing values of X . [ 6 ]
Mathematics Magazine and The College Mathematics Journal run a regular feature titled "Proof without words" containing, as the title suggests, proofs without words. [ 3 ] The Art of Problem Solving and USAMTS websites run Java applets illustrating proofs without words. [ 7 ] [ 8 ]
For a proof to be accepted by the mathematical community, it must logically show how the statement it aims to prove follows totally and inevitably from a set of assumptions . [ 9 ] A proof without words might imply such an argument, but it does not make one directly, so it cannot take the place of a formal proof where one is required. [ 10 ] [ 11 ] Rather, mathematicians use proofs without words as illustrations and teaching aids for ideas that have already been proven formally. [ 12 ] [ 13 ] | https://en.wikipedia.org/wiki/Proof_without_words |
The term proofreading is used in genetics to refer to the error-correcting processes, first proposed by John Hopfield [ 1 ] and Jacques Ninio , [ 2 ] involved in DNA replication , immune system specificity, and enzyme-substrate recognition among many other processes that require enhanced specificity. The kinetic proofreading mechanisms of Hopfield and Ninio are non-equilibrium active processes that consume ATP to enhance specificity of various biochemical reactions.
In bacteria , all three DNA polymerases (I, II and III) have the ability to proofread, using 3’ → 5’ exonuclease activity. When an incorrect base pair is recognized, DNA polymerase reverses its direction by one base pair of DNA and excises the mismatched base. Following base excision, the polymerase can re-insert the correct base and replication can continue.
In eukaryotes , only the polymerases that deal with the elongation (delta and epsilon) have proofreading ability (3’ → 5’ exonuclease activity). [ 3 ]
Proofreading also occurs in mRNA translation for protein synthesis. [ 4 ] In this case, one mechanism is the release of any incorrect aminoacyl-tRNA before peptide bond formation. [ 5 ]
The extent of proofreading in DNA replication determines the mutation rate , and is different in different species. [ 6 ] For example, loss of proofreading due to mutations in the DNA polymerase epsilon gene results in a hyper-mutated genotype with >100 mutations per million bases of DNA in human colorectal cancers . [ 7 ]
The extent of proofreading in other molecular processes can depend on the effective population size of the species and the number of genes affected by the same proofreading mechanism. [ 8 ]
Bacteriophage (phage) T4 gene 43 encodes the phage's DNA polymerase replicative enzyme. Temperature-sensitive ( ts ) gene 43 mutants have been identified that have an antimutator phenotype , that is, a lower rate of spontaneous mutation than wild type. [ 9 ] Studies of one of these mutants, tsB120 , showed that the DNA polymerase specified by this mutant copies DNA templates at a slower rate than the wild-type polymerase. [ 10 ] However, the 3’ to 5’ exonuclease activity was no higher than wild-type. During DNA replication the ratio of nucleotides turned over to those stably incorporated into newly formed DNA is 10 to 100 times higher in the case of the tsB120 mutant than in wild-type. [ 10 ] It was proposed that the antimutator effect may be explained by both greater accuracy in nucleotide selection and an increased efficiency of removal of noncomplementary nucleotides (proofreading) by the tsB120 polymerase.
When phage T4 virions with a wild-type gene 43 DNA polymerase are exposed to either ultraviolet light, which introduces cyclobutane pyrimidine dimer damages in DNA, or psoralen -plus-light, which introduces pyrimidine adducts, the rate of mutation increases. However, these mutagenic effects are inhibited when the phage's DNA synthesis is catalyzed by the tsCB120 antimutator polymerase, or another antimutator polymerase, tsCB87 . [ 11 ] These findings indicate that the level of induction of mutations by DNA damage can be strongly influenced by the gene 43 DNA polymerase proofreading function.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the causative agent of the COVID-19 pandemic. The SARS-CoV-2 RNA virus genome encodes a replication-and-transcription complex, a multisubunit protein machine that carries out viral genome replication and transcription, processes essential to the virus life cycle. One of the proteins specified by the coronavirus genome is a non-structural protein, nsp14, that is a 3’-to-5’ exoribonuclease (ExoN). This protein resides in the protein complex nsp10-nsp14 that enhances replication fidelity by proofreading RNA synthesis, an activity critical for the virus life cycle. [ 12 ] Furthermore, the coronavirus proofreading exoribonuclease nsp14-ExoN is required for maintaining genetic recombination generated during infection. [ 13 ]
Proofs That Really Count: the Art of Combinatorial Proof is an undergraduate-level mathematics book on combinatorial proofs of mathematical identities . That is, it concerns equations between two integer -valued formulas, shown to be equal either by showing that both sides of the equation count the same type of mathematical objects, or by finding a one-to-one correspondence between the different types of object that they count. It was written by Arthur T. Benjamin and Jennifer Quinn , and published in 2003 by the Mathematical Association of America as volume 27 of their Dolciani Mathematical Expositions series. It won the Beckenbach Book Prize of the Mathematical Association of America.
The book provides combinatorial proofs of thirteen theorems in combinatorics and 246 numbered identities (collated in an appendix). [ 1 ] Several additional "uncounted identities" are also included. [ 2 ] Many proofs are based on a visual-reasoning method that the authors call "tiling", [ 1 ] [ 3 ] and in a foreword, the authors describe their work as providing a follow-up for counting problems of the Proof Without Words books by Roger B. Nelson. [ 3 ]
The first three chapters of the book start with integer sequences defined by linear recurrence relations , the prototypical example of which is the sequence of Fibonacci numbers . These numbers can be given a combinatorial interpretation as the number of ways of tiling a 1 × n {\displaystyle 1\times n} strip of squares with tiles of two types, single squares and dominos; this interpretation can be used to prove many of the fundamental identities involving the Fibonacci numbers, and generalized to similar relations about other sequences defined similarly, [ 4 ] such as the Lucas numbers , [ 5 ] using "circular tilings and colored tilings". [ 6 ] For instance, for the Fibonacci numbers, considering whether a tiling does or does not connect positions a − 1 {\displaystyle a-1} and a {\displaystyle a} of a strip of length a + b − 1 {\displaystyle a+b-1} immediately leads to the identity
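The tiling interpretation itself is easy to confirm computationally. The sketch below uses our own conventions (hypothetically, f ( n ) counts tilings of a 1 × n strip, so f ( n ) = F ( n +1) in standard Fibonacci indexing) and compares a direct recursive count with the Fibonacci sequence:

```python
# Count tilings of a 1-by-n strip by squares (length 1) and dominoes
# (length 2): recurse on whether the last tile is a square or a domino.
from functools import lru_cache

@lru_cache(maxsize=None)
def tilings(n):
    if n < 0:
        return 0
    if n in (0, 1):
        return 1
    return tilings(n - 1) + tilings(n - 2)

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(tilings(n) == fib(n + 1) for n in range(20))
```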
Chapters four through seven of the book concern identities involving continued fractions , binomial coefficients , harmonic numbers , Stirling numbers , and factorials . The eighth chapter branches out from combinatorics to number theory and abstract algebra , and the final chapter returns to the Fibonacci numbers with more advanced material on their identities. [ 4 ]
The book is aimed at undergraduate mathematics students, but the material is largely self-contained, and could also be read by advanced high school students. [ 4 ] [ 6 ] Additionally, many of the book's chapters are themselves self-contained, allowing for arbitrary reading orders or for excerpts of this material to be used in classes. [ 2 ] Although it is structured as a textbook with exercises in each chapter, [ 4 ] reviewer Robert Beezer writes that it is "not meant as a textbook", but rather intended as a "resource" for teachers and researchers. [ 2 ] Echoing this, reviewer Joe Roberts writes that despite its elementary nature, this book should be "valuable as a reference ... for anyone working with such identities". [ 1 ]
In an initial review, Darren Glass complained that many of the results are presented as dry formulas, without any context or explanation for why they should be interesting or useful, and that this lack of context would be an obstacle for using it as the main text for a class. [ 4 ] Nevertheless, in a second review after a year of owning the book, he wrote that he was "lending it out to person after person". [ 7 ] Reviewer Peter G. Anderson praises the book's "beautiful ways of seeing old, familiar mathematics and some new mathematics too", calling it "a treasure". [ 5 ] Reviewer Gerald L. Alexanderson describes the book's proofs as "ingenious, concrete and memorable". [ 3 ] The award citation for the book's 2006 Beckenbach Book Prize states that it "illustrates in a magical way the pervasiveness and power of counting techniques throughout mathematics. It is one of those rare books that will appeal to the mathematical professional and seduce the neophyte." [ 8 ]
One of the open problems from the book, seeking a bijective proof of an identity combining binomial coefficients with Fibonacci numbers, was subsequently answered positively by Doron Zeilberger . In the web site where he links a preprint of his paper, Zeilberger writes,
"When I was young and handsome, I couldn't see an identity without trying to prove it bijectively. Somehow, I weaned myself of this addiction. But the urge got rekindled, when I read Arthur Benjamin and Jennifer Quinn's masterpiece Proofs that Really Count ." [ 9 ]
Proofs That Really Count won the 2006 Beckenbach Book Prize of the Mathematical Association of America, [ 8 ] and the 2010 CHOICE Award for Outstanding Academic Title of the American Library Association . [ 10 ] It has been listed by the Basic Library List Committee of the Mathematical Association of America as essential for inclusion in any undergraduate mathematics library. [ 4 ] | https://en.wikipedia.org/wiki/Proofs_That_Really_Count |
Proofs from THE BOOK is a book of mathematical proofs by Martin Aigner and Günter M. Ziegler . The book is dedicated to the mathematician Paul Erdős , who often referred to "The Book" in which God keeps the most elegant proof of each mathematical theorem . During a lecture in 1985, Erdős said, "You don't have to believe in God, but you should believe in The Book." [ 1 ]
Proofs from THE BOOK contains 32 sections (45 in the sixth edition), each devoted to one theorem but often containing multiple proofs and related results. It spans a broad range of mathematical fields: number theory , geometry , analysis , combinatorics and graph theory . Erdős himself made many suggestions for the book, but died before its publication. The book is illustrated by Karl Heinrich Hofmann [ de ] . It has gone through six editions in English, and has been translated into Persian, French, German, Hungarian, Italian, Japanese, Chinese, Polish, Portuguese, Korean, Turkish, Russian, Spanish and Greek.
The American Mathematical Society awarded the 2018 Leroy P. Steele Prize for Mathematical Exposition to Aigner and Ziegler for this book.
The proofs include:
This article contains mathematical proofs for some properties of addition of the natural numbers : the additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers .
This article will use the Peano axioms for the definition of natural numbers. With these axioms, addition is defined from the constant 0 and the successor function S ( a ) by the two rules
[A1] a + 0 = a {\displaystyle a+0=a}
[A2] a + S ( b ) = S ( a + b ) {\displaystyle a+S(b)=S(a+b)}
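Read as a program, this definition is structural recursion on the second argument; a minimal sketch (the concrete representation of naturals here is, of course, just one possible choice):

```python
# Peano naturals and the recursive definition of addition ([A1], [A2]).
from dataclasses import dataclass

@dataclass(frozen=True)
class Nat:
    """A natural number: zero, or the successor of another Nat."""
    pred: "Nat | None" = None      # None encodes 0; otherwise S(pred)

def S(a):
    return Nat(a)

ZERO = Nat()
ONE = S(ZERO)                      # the name "1" for S(0), as below

def add(a, b):
    if b.pred is None:             # [A1]: a + 0 = a
        return a
    return S(add(a, b.pred))       # [A2]: a + S(b) = S(a + b)

def to_int(a):
    return 0 if a.pred is None else 1 + to_int(a.pred)

assert to_int(add(S(S(ZERO)), S(ONE))) == 4   # 2 + 2 = 4
```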
For the proof of commutativity, it is useful to give the name "1" to the successor of 0; that is,
For every natural number a , one has
We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c .
For the base case c = 0,
( a + b ) + 0 = a + b = a + ( b + 0 ) . {\displaystyle (a+b)+0=a+b=a+(b+0).}
Each equation follows by definition [A1]; the first with a + b , the second with b .
Now, for the induction. We assume the induction hypothesis, namely that for some natural number c ,
( a + b ) + c = a + ( b + c ) . {\displaystyle (a+b)+c=a+(b+c).}
Then it follows,
( a + b ) + S ( c ) = S ( ( a + b ) + c ) = S ( a + ( b + c ) ) = a + S ( b + c ) = a + ( b + S ( c ) ) . {\displaystyle (a+b)+S(c)=S((a+b)+c)=S(a+(b+c))=a+S(b+c)=a+(b+S(c)).}
In other words, the identity holds for S ( c ). Therefore, the induction on c is complete.
Definition [A1] states directly that 0 is a right identity .
We prove that 0 is a left identity by induction on the natural number a .
For the base case a = 0, 0 + 0 = 0 by definition [A1].
Now we assume the induction hypothesis, that 0 + a = a .
Then
0 + S ( a ) = S ( 0 + a ) = S ( a ) . {\displaystyle 0+S(a)=S(0+a)=S(a).}
This completes the induction on a .
We prove commutativity ( a + b = b + a ) by applying induction on the natural number b . First we prove the base cases b = 0 and b = S (0) = 1 (i.e. we prove that 0 and 1 commute with everything).
The base case b = 0 follows immediately from the identity element property (0 is an additive identity ), which has been proved above: a + 0 = a = 0 + a .
Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a , we have a + 1 = 1 + a . We will prove this by induction on a (an induction proof within an induction proof). We have proved that 0 commutes with everything, so in particular, 0 commutes with 1: for a = 0, we have 0 + 1 = 1 + 0. Now, suppose a + 1 = 1 + a . Then
This completes the induction on a , and so we have proved the base case b = 1. Now, suppose that for all natural numbers a , we have a + b = b + a . We must show that for all natural numbers a , we have a + S ( b ) = S ( b ) + a . We have
This completes the induction on b . | https://en.wikipedia.org/wiki/Proofs_involving_the_addition_of_natural_numbers |
This article collects together a variety of proofs of Fermat's little theorem , which states that
a p ≡ a ( mod p ) {\displaystyle a^{p}\equiv a{\pmod {p}}}
for every prime number p and every integer a (see modular arithmetic ).
Some of the proofs of Fermat's little theorem given below depend on two simplifications.
The first is that we may assume that a is in the range 0 ≤ a ≤ p − 1 . This is a simple consequence of the laws of modular arithmetic ; we are simply saying that we may first reduce a modulo p . This is consistent with reducing a p {\displaystyle a^{p}} modulo p , as one can check.
Secondly, it suffices to prove that
a p − 1 ≡ 1 ( mod p ) {\displaystyle a^{p-1}\equiv 1{\pmod {p}}}
for a in the range 1 ≤ a ≤ p − 1 . Indeed, if the previous assertion holds for such a , multiplying both sides by a yields the original form of the theorem,
On the other hand, if a = 0 , the theorem holds trivially.
This is perhaps the simplest known proof, requiring the least mathematical background. It is an attractive example of a combinatorial proof (a proof that involves counting a collection of objects in two different ways ).
The proof given here is an adaptation of Golomb 's proof. [ 1 ]
To keep things simple, let us assume that a is a positive integer . Consider all the possible strings of p symbols, using an alphabet with a different symbols. The total number of such strings is a p since there are a possibilities for each of p positions (see rule of product ).
For example, if p = 5 and a = 2 , then we can use an alphabet with two symbols (say A and B ), and there are 2 5 = 32 strings of length 5:
We will argue below that if we remove the strings consisting of a single symbol from the list (in our example, AAAAA and BBBBB ), the remaining a p − a strings can be arranged into groups, each group containing exactly p strings. It follows that a p − a is divisible by p .
Let us think of each such string as representing a necklace . That is, we connect the two ends of the string together and regard two strings as the same necklace if we can rotate one string to obtain the second string; in this case we will say that the two strings are friends . In our example, the following strings are all friends:
In full, each line of the following list corresponds to a single necklace, and the entire list comprises all 32 strings.
Notice that in the above list, each necklace with more than one symbol is represented by 5 different strings, and the number of necklaces represented by just one string is 2, i.e. is the number of distinct symbols. Thus the list shows very clearly why 32 − 2 is divisible by 5 .
One can use the following rule to work out how many friends a given string S has: if S is built up of several copies of a shorter string T , and T cannot itself be broken down further into repeating strings, then the number of friends of S (counting S itself) is equal to the length of T .
For example, suppose we start with the string S = ABBABBABBABB , which is built up of several copies of the shorter string T = ABB . If we rotate it one symbol at a time, we obtain the following 3 strings:
There aren't any others because ABB is exactly 3 symbols long and cannot be broken down into further repeating strings.
Using the above rule, we can complete the proof of Fermat's little theorem quite easily, as follows. Our starting pool of a p strings may be split into two categories: the a strings that consist of a single symbol repeated p times, and the strings that contain at least two distinct symbols.
The second category contains a p − a strings, and they may be arranged into groups of p strings, one group for each necklace. Therefore, a p − a must be divisible by p , as promised.
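For small a and p the entire grouping can be carried out explicitly; the following sketch builds the necklaces and verifies the count:

```python
# Group all strings of length p over an a-letter alphabet into necklaces
# (orbits under rotation) and verify that the a**p - a non-constant
# strings fall into orbits of size exactly p.
from itertools import product

def necklace_orbits(a, p):
    seen, orbits = set(), []
    for s in product(range(a), repeat=p):
        if s in seen:
            continue
        orbit = {s[i:] + s[:i] for i in range(p)}   # all rotations of s
        seen |= orbit
        orbits.append(orbit)
    return orbits

a, p = 3, 5
orbits = necklace_orbits(a, p)
single = [o for o in orbits if len(o) == 1]         # constant strings
multi = [o for o in orbits if len(o) > 1]
assert len(single) == a
assert all(len(o) == p for o in multi)
assert (a**p - a) % p == 0 and len(multi) == (a**p - a) // p
```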
This proof uses some basic concepts from dynamical systems . [ 2 ]
We start by considering a family of functions T n ( x ), where n ≥ 2 is an integer , mapping the interval [0, 1] to itself by the formula
T n ( x ) = { n x } for 0 ≤ x < 1 , T n ( 1 ) = 1 , {\displaystyle T_{n}(x)=\{nx\}{\text{ for }}0\leq x<1,\qquad T_{n}(1)=1,}
where { y } denotes the fractional part of y . For example, the function T 3 ( x ) is illustrated below:
A number x 0 is said to be a fixed point of a function f ( x ) if f ( x 0 ) = x 0 ; in other words, if f leaves x 0 fixed. The fixed points of a function can be easily found graphically: they are simply the x coordinates of the points where the graph of f ( x ) intersects the graph of the line y = x . For example, the fixed points of the function T 3 ( x ) are 0, 1/2, and 1; they are marked by black circles on the following diagram:
We will require the following two lemmas.
Lemma 1. For any n ≥ 2, the function T n ( x ) has exactly n fixed points.
Proof. There are 3 fixed points in the illustration above, and the same sort of geometrical argument applies for any n ≥ 2.
Lemma 2. For any positive integers n and m , and any 0 ≤ x ≤ 1,
T m ( T n ( x ) ) = T m n ( x ) . {\displaystyle T_{m}(T_{n}(x))=T_{mn}(x).}
In other words, T mn ( x ) is the composition of T n ( x ) and T m ( x ).
Proof. The proof of this lemma is not difficult, but we need to be slightly careful with the endpoint x = 1. For this point the lemma is clearly true, since
T m ( T n ( 1 ) ) = T m ( 1 ) = 1 = T m n ( 1 ) . {\displaystyle T_{m}(T_{n}(1))=T_{m}(1)=1=T_{mn}(1).}
So let us assume that 0 ≤ x < 1. In this case,
T n ( x ) = { n x } < 1 , {\displaystyle T_{n}(x)=\{nx\}<1,}
so T m ( T n ( x )) is given by
T m ( T n ( x ) ) = { m { n x } } . {\displaystyle T_{m}(T_{n}(x))=\{m\{nx\}\}.}
Therefore, what we really need to show is that
{ m { n x } } = { m n x } . {\displaystyle \{m\{nx\}\}=\{mnx\}.}
To do this we observe that { nx } = nx − k , where k is the integer part of nx ; then
{ m { n x } } = { m n x − m k } = { m n x } , {\displaystyle \{m\{nx\}\}=\{mnx-mk\}=\{mnx\},}
since mk is an integer.
Now let us properly begin the proof of Fermat's little theorem, by studying the function T a p ( x ). We will assume that a ≥ 2. From Lemma 1, we know that it has a p fixed points. By Lemma 2 we know that
T a p ( x ) = T a ( T a ( ⋯ T a ( x ) ⋯ ) ) ( p -fold composition ) , {\displaystyle T_{a^{p}}(x)=T_{a}(T_{a}(\cdots T_{a}(x)\cdots )),}
so any fixed point of T a ( x ) is automatically a fixed point of T a p ( x ).
We are interested in the fixed points of T a p ( x ) that are not fixed points of T a ( x ). Let us call the set of such points S . There are a p − a points in S , because by Lemma 1 again, T a ( x ) has exactly a fixed points. The following diagram illustrates the situation for a = 3 and p = 2. The black circles are the points of S , of which there are 3 2 − 3 = 6.
The main idea of the proof is now to split the set S up into its orbits under T a . What this means is that we pick a point x 0 in S , and repeatedly apply T a (x) to it, to obtain the sequence of points
This sequence is called the orbit of x 0 under T a . By Lemma 2, this sequence can be rewritten as
Since we are assuming that x 0 is a fixed point of T a p ( x ), after p steps we hit T a p ( x 0 ) = x 0 , and from that point onwards the sequence repeats itself.
However, the sequence cannot begin repeating itself any earlier than that. If it did, the length of the repeating section would have to be a divisor of p , so it would have to be 1 (since p is prime). But this contradicts our assumption that x 0 is not a fixed point of T a .
In other words, the orbit contains exactly p distinct points. This holds for every orbit of S . Therefore, the set S , which contains a^p − a points, can be broken up into orbits, each containing p points, so a^p − a is divisible by p .
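A sketch of this orbit decomposition for the illustrated case a = 3, p = 2, using the fact (from the geometrical argument of Lemma 1) that the fixed points of T_n are k/(n − 1) for k = 0, ..., n − 1:

```python
from fractions import Fraction

def T(n, x):
    return Fraction(1) if x == 1 else (n * x) % 1

def fixed_points(n):
    # The n fixed points of T_n are k/(n-1) for k = 0, ..., n-1 (cf. Lemma 1).
    return {Fraction(k, n - 1) for k in range(n)}

a, p = 3, 2
S = fixed_points(a**p) - fixed_points(a)           # the 3**2 - 3 = 6 points of S
orbits = set()
for x0 in S:
    orbit, x = [], x0
    for _ in range(p):
        orbit.append(x)
        x = T(a, x)
    assert x == x0                                  # back to the start after p steps
    orbits.add(frozenset(orbit))
assert all(len(o) == p for o in orbits) and len(S) == p * len(orbits)
print(sorted(S), len(orbits))                       # 6 points split into 3 orbits of size 2
```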
(This proof is essentially the same as the necklace-counting proof given above, simply viewed through a different lens: one may think of the interval [0, 1] as given by sequences of digits in base a (our distinction between 0 and 1 corresponding to the familiar distinction between representing integers as ending in ".0000..." and ".9999..."). T_{a^n} amounts to shifting such a sequence by n digits. The fixed points of this will be sequences that are cyclic with period dividing n . In particular, the fixed points of T_{a^p} can be thought of as the necklaces of length p , with T_{a^n} corresponding to rotation of such necklaces by n spots.
This proof could also be presented without distinguishing between 0 and 1, simply using the half-open interval [0, 1); then T_n would only have n − 1 fixed points, but T_{a^p} − T_a would still work out to a^p − a, as needed.)
This proof, due to Euler , [ 3 ] uses induction to prove the theorem for all integers a ≥ 0 .
The base step, that 0^p ≡ 0 (mod p ), is trivial. Next, we must show that if the theorem is true for a = k , then it is also true for a = k + 1 . For this inductive step, we need the following lemma.
Lemma . For any integers x and y and for any prime p , (x + y)^p ≡ x^p + y^p (mod p ).
The lemma is a case of the freshman's dream . Leaving the proof for later on, we proceed with the induction.
Proof . Assume k^p ≡ k (mod p ), and consider ( k + 1)^p. By the lemma we have
(k + 1)^p ≡ k^p + 1^p (mod p).
Using the induction hypothesis, we have that k^p ≡ k (mod p ); and, trivially, 1^p = 1. Thus
(k + 1)^p ≡ k + 1 (mod p),
which is the statement of the theorem for a = k + 1. ∎
In order to prove the lemma, we must introduce the binomial theorem , which states that for any positive integer n ,
(x + y)^n = ∑_{i=0}^{n} C(n, i) x^{n−i} y^i,
where the coefficients are the binomial coefficients ,
C(n, i) = n! / (i! (n − i)!),
described in terms of the factorial function, n ! = 1×2×3×⋯× n .
Proof of Lemma. We consider the binomial coefficient when the exponent is a prime p :
C(p, i) = p! / (i! (p − i)!).
The binomial coefficients are all integers. The numerator contains a factor p by the definition of factorial. When 0 < i < p , neither of the terms in the denominator includes a factor of p (relying on the primality of p ), so the coefficient itself retains the prime factor p from the numerator, implying that
C(p, i) ≡ 0 (mod p), for 0 < i < p.
Modulo p , this eliminates all but the first and last terms of the sum on the right-hand side of the binomial theorem for prime p . ∎
The primality of p is essential to the lemma; otherwise, we have examples like
C(4, 2) = 6,
which is not divisible by 4.
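A quick check of both the lemma and the counterexample, as a minimal sketch using the standard library:

```python
from math import comb

# For prime p, every interior binomial coefficient C(p, i), 0 < i < p,
# is divisible by p; for composite exponents this can fail, e.g. C(4, 2) = 6.
for p in (2, 3, 5, 7, 11, 13):
    assert all(comb(p, i) % p == 0 for i in range(1, p))
print(comb(4, 2) % 4)  # 2, so C(4, 2) = 6 is not divisible by 4
```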
Using the Lemma, we have:
(k + 1)^p ≡ k^p + 1^p ≡ k + 1 (mod p).
The proof, which was first discovered by Leibniz (who did not publish it) [ 4 ] and later rediscovered by Euler , [ 3 ] is a very simple application of the multinomial theorem , which states
(x_1 + x_2 + ⋯ + x_m)^n = ∑ C(n; k_1, k_2, …, k_m) x_1^{k_1} x_2^{k_2} ⋯ x_m^{k_m},
where
C(n; k_1, k_2, …, k_m) = n! / (k_1! k_2! ⋯ k_m!)
and the summation is taken over all sequences of nonnegative integer indices k_1, k_2, ..., k_m such that the sum of all k_i is n .
Thus if we express a as a sum of 1s (ones), we obtain
a^p = (1 + 1 + ⋯ + 1)^p = ∑ C(p; k_1, k_2, …, k_a).
Clearly, if p is prime, and if k_j is not equal to p for any j , we have
C(p; k_1, k_2, …, k_a) ≡ 0 (mod p),
and if k_j is equal to p for some j then
C(p; k_1, k_2, …, k_a) = 1.
Since there are exactly a elements such that k j = p for some j , the theorem follows.
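This bookkeeping can be verified by brute force for small a and p; the helper below is an illustrative sketch, not part of the original proof:

```python
from math import factorial
from itertools import product

def multinomial(ks):
    # n! / (k_1! k_2! ... k_m!) for the composition ks of n = sum(ks)
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

a, p = 3, 5
comps = [ks for ks in product(range(p + 1), repeat=a) if sum(ks) == p]
assert sum(multinomial(ks) for ks in comps) == a**p        # multinomial theorem at (1,...,1)
trivial = [ks for ks in comps if p in ks]                  # some k_j equals p
assert len(trivial) == a and all(multinomial(ks) == 1 for ks in trivial)
assert all(multinomial(ks) % p == 0 for ks in comps if p not in ks)
```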
(This proof is essentially a coarser-grained variant of the necklace-counting proof given earlier; the multinomial coefficients count the number of ways a string can be permuted into arbitrary anagrams, while the necklace argument counts the number of ways a string can be rotated into cyclic anagrams. That is to say, that the nontrivial multinomial coefficients here are divisible by p can be seen as a consequence of the fact that each nontrivial necklace of length p can be unwrapped into a string in p many ways.
This multinomial expansion is also, of course, what essentially underlies the binomial-theorem-based proof above.)
An additive-combinatorial proof based on formal power product expansions was given by Giedrius Alkauskas. [ 5 ] This proof uses neither the Euclidean algorithm nor the binomial theorem , but rather it employs formal power series with rational coefficients.
This proof, [ 3 ] [ 6 ] discovered by James Ivory [ 7 ] and rediscovered by Dirichlet , [ 8 ] requires some background in modular arithmetic .
Let us assume that a is positive and not divisible by p .
The idea is that if we write down the sequence of numbers
a, 2a, 3a, …, (p − 1)a
and reduce each one modulo p , the resulting sequence turns out to be a rearrangement of
1, 2, 3, …, p − 1.
Therefore, if we multiply together the numbers in each sequence, the results must be identical modulo p :
a × 2a × 3a × ⋯ × (p − 1)a ≡ 1 × 2 × 3 × ⋯ × (p − 1) (mod p).
Collecting together the a terms yields
a^{p−1} × (p − 1)! ≡ (p − 1)! (mod p).
Finally, we may “cancel out” the numbers 1, 2, ..., p − 1 from both sides of this equation, obtaining
a^{p−1} ≡ 1 (mod p).
There are two steps in the above proof that we need to justify: first, why the numbers a , 2 a , ..., ( p − 1) a , when reduced modulo p , are a rearrangement of 1, 2, ..., p − 1; and second, why it is valid to “cancel” in the setting of modular arithmetic.
We will prove these things below; let us first see an example of this proof in action.
If a = 3 and p = 7 , then the sequence in question is
3, 6, 9, 12, 15, 18;
reducing modulo 7 gives
3, 6, 2, 5, 1, 4,
which is just a rearrangement of
1, 2, 3, 4, 5, 6.
Multiplying them together gives
3 × 6 × 9 × 12 × 15 × 18 ≡ 1 × 2 × 3 × 4 × 5 × 6 (mod 7),
that is,
3^6 × (1 × 2 × 3 × 4 × 5 × 6) ≡ 1 × 2 × 3 × 4 × 5 × 6 (mod 7).
Canceling out 1 × 2 × 3 × 4 × 5 × 6 yields
3^6 ≡ 1 (mod 7),
which is Fermat's little theorem for the case a = 3 and p = 7 .
Let us first explain why it is valid, in certain situations, to “cancel”. The exact statement is as follows. If u , x , and y are integers, and u is not divisible by a prime number p , and if
ux ≡ uy (mod p),   (C)
then we may “cancel” u to obtain
x ≡ y (mod p).   (D)
Our use of this cancellation law in the above proof of Fermat's little theorem was valid because the numbers 1, 2, ..., p − 1 are certainly not divisible by p (indeed they are smaller than p ).
We can prove the cancellation law easily using Euclid's lemma , which generally states that if a prime p divides a product ab (where a and b are integers), then p must divide a or b . Indeed, the assertion ( C ) simply means that p divides ux − uy = u ( x − y ) . Since p is a prime which does not divide u , Euclid's lemma tells us that it must divide x − y instead; that is, ( D ) holds.
Note that the conditions under which the cancellation law holds are quite strict, and this explains why Fermat's little theorem demands that p is a prime. For example, 2×2 ≡ 2×5 (mod 6) , but it is not true that 2 ≡ 5 (mod 6) . However, the following generalization of the cancellation law holds: if u , x , y , and z are integers, if u and z are relatively prime , and if
ux ≡ uy (mod z),
then we may “cancel” u to obtain
x ≡ y (mod z).
This follows from a generalization of Euclid's lemma .
Finally, we must explain why the sequence
a, 2a, 3a, …, (p − 1)a,
when reduced modulo p , becomes a rearrangement of the sequence
1, 2, 3, …, p − 1.
To start with, none of the terms a , 2 a , ..., ( p − 1) a can be congruent to zero modulo p , since if k is one of the numbers 1, 2, ..., p − 1 , then k is relatively prime with p , and so is a , so Euclid's lemma tells us that p does not divide ka . Therefore, at least we know that the numbers a , 2 a , ..., ( p − 1) a , when reduced modulo p , must be found among the numbers 1, 2, 3, ..., p − 1 .
Furthermore, the numbers a , 2 a , ..., ( p − 1) a must all be distinct after reducing them modulo p , because if
ka ≡ ma (mod p),
where k and m are one of 1, 2, ..., p − 1 , then the cancellation law tells us that
k ≡ m (mod p).
Since both k and m are between 1 and p − 1 , they must be equal. Therefore, the terms a , 2 a , ..., ( p − 1) a when reduced modulo p must be distinct.
To summarise: when we reduce the p − 1 numbers a , 2 a , ..., ( p − 1) a modulo p , we obtain distinct members of the sequence 1 , 2 , ..., p − 1 . Since there are exactly p − 1 of these, the only possibility is that the former are a rearrangement of the latter.
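The whole rearrangement argument is easy to test mechanically; the following sketch (function name and test pairs are arbitrary choices) checks both the rearrangement property and the resulting congruence:

```python
def fermat_via_rearrangement(a, p):
    """Sketch of Ivory's argument: a, 2a, ..., (p-1)a reduced mod p is a
    permutation of 1, 2, ..., p-1, from which Fermat's little theorem follows."""
    residues = sorted((k * a) % p for k in range(1, p))
    assert residues == list(range(1, p))      # the rearrangement property
    assert pow(a, p - 1, p) == 1              # a**(p-1) == 1 (mod p)

fermat_via_rearrangement(3, 7)
fermat_via_rearrangement(10, 13)
```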
This method can also be used to prove Euler's theorem , with a slight alteration in that the numbers from 1 to p − 1 are substituted by the numbers less than and coprime with some number m (not necessarily prime). Both the rearrangement property and the cancellation law (under the generalized form mentioned above ) are still satisfied and can be utilized.
For example, if m = 10 , then the numbers less than m and coprime with m are 1 , 3 , 7 , and 9 . Thus, for any a coprime with 10, we have:
a×1 × a×3 × a×7 × a×9 ≡ 1 × 3 × 7 × 9 (mod 10).
Therefore,
a^4 ≡ 1 (mod 10).
This proof [ 9 ] requires the most basic elements of group theory .
The idea is to recognise that the set G = {1, 2, ..., p − 1 }, with the operation of multiplication (taken modulo p ), forms a group . The only group axiom that requires some effort to verify is that each element of G is invertible. Taking this on faith for the moment, let us assume that a is in the range 1 ≤ a ≤ p − 1 , that is, a is an element of G . Let k be the order of a , that is, k is the smallest positive integer such that a^k ≡ 1 (mod p ). Then the numbers 1, a , a², ..., a^{k−1} reduced modulo p form a subgroup of G whose order is k and therefore, by Lagrange's theorem , k divides the order of G , which is p − 1 . So p − 1 = km for some positive integer m and then
a^{p−1} = a^{km} = (a^k)^m ≡ 1^m = 1 (mod p).
To prove that every element b of G is invertible, we may proceed as follows. First, b is coprime to p . Thus Bézout's identity assures us that there are integers x and y such that bx + py = 1 . Reading this equality modulo p , we see that x is an inverse for b , since bx ≡ 1 (mod p ) . Therefore, every element of G is invertible. So, as remarked earlier, G is a group.
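The invertibility argument is constructive: the Bézout coefficients can be computed with the extended Euclidean algorithm. A minimal sketch (the function name is an arbitrary choice):

```python
def inverse_mod(b, p):
    """Invert b modulo p via the extended Euclidean algorithm, mirroring the
    Bezout argument above: find x, y with b*x + p*y = 1; then x inverts b."""
    old_r, r = b, p
    old_x, x = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    assert old_r == 1        # gcd(b, p) = 1, since p is prime and p does not divide b
    return old_x % p

print([inverse_mod(b, 11) for b in range(1, 11)])
# [1, 6, 4, 3, 9, 2, 8, 7, 5, 10] -- the table for p = 11 given below
```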
For example, when p = 11 , the inverses of each element are given as follows:
b: 1 2 3 4 5 6 7 8 9 10
b^{−1}: 1 6 4 3 9 2 8 7 5 10
If we take the previous proof and, instead of using Lagrange's theorem, we try to prove it in this specific situation, then we get Euler's third proof, which is the one that he found more natural. [ 10 ] [ 11 ] Let A be the set whose elements are the numbers 1, a , a², ..., a^{k−1} reduced modulo p . If A = G , then k = p − 1 and therefore k divides p − 1 . Otherwise, there is some b_1 ∈ G \ A .
Let A_1 be the set whose elements are the numbers b_1, ab_1, a²b_1, ..., a^{k−1}b_1 reduced modulo p . Then A_1 has k distinct elements, because otherwise there would be two distinct numbers m , n ∈ {0, 1, ..., k − 1 } such that a^m b_1 ≡ a^n b_1 (mod p ), which is impossible, since it would follow that a^m ≡ a^n (mod p ). On the other hand, no element of A_1 can be an element of A , because otherwise there would be numbers m , n ∈ {0, 1, ..., k − 1 } such that a^m b_1 ≡ a^n (mod p ), and then b_1 ≡ a^n a^{k−m} ≡ a^{n+k−m} (mod p ), which is impossible, since b_1 ∉ A .
So, the set A ∪ A_1 has 2 k elements. If it turns out to be equal to G , then 2 k = p − 1 and therefore k divides p − 1 . Otherwise, there is some b_2 ∈ G \ ( A ∪ A_1 ) and we can start all over again, defining A_2 as the set whose elements are the numbers b_2, ab_2, a²b_2, ..., a^{k−1}b_2 reduced modulo p . Since G is finite, this process must stop at some point and this proves that k divides p − 1 .
For instance, if a = 5 and p = 13 , then, since
5^1 = 5, 5^2 = 25 ≡ 12, 5^3 ≡ 5 × 12 = 60 ≡ 8, 5^4 ≡ 5 × 8 = 40 ≡ 1 (mod 13),
we have k = 4 and A = {1, 5, 8, 12 }. Clearly, A ≠ G = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 }. Let b_1 be an element of G \ A ; for instance, take b_1 = 2 . Then, since
2 × 1 = 2, 2 × 5 = 10, 2 × 8 = 16 ≡ 3, 2 × 12 = 24 ≡ 11 (mod 13),
we have A_1 = {2, 3, 10, 11 }. Clearly, A ∪ A_1 ≠ G . Let b_2 be an element of G \ ( A ∪ A_1 ); for instance, take b_2 = 4 . Then, since
4 × 1 = 4, 4 × 5 = 20 ≡ 7, 4 × 8 = 32 ≡ 6, 4 × 12 = 48 ≡ 9 (mod 13),
we have A_2 = {4, 6, 7, 9 }. And now G = A ∪ A_1 ∪ A_2 .
Note that the sets A , A_1, and so on are in fact the cosets of A in G .
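A compact sketch of Euler's procedure, reproducing the a = 5, p = 13 example above (the function name is an arbitrary choice):

```python
def coset_decomposition(a, p):
    """Euler-style decomposition of G = {1, ..., p-1} into cosets of
    A = {1, a, a**2, ...} mod p, as in the a = 5, p = 13 example above."""
    A, x = [], 1
    while x not in A:
        A.append(x)
        x = (x * a) % p
    k = len(A)                            # the order of a modulo p
    cosets, remaining = [set(A)], set(range(1, p)) - set(A)
    while remaining:
        b = min(remaining)                # any element outside the cosets so far
        coset = {(b * t) % p for t in A}
        cosets.append(coset)
        remaining -= coset
    assert (p - 1) % k == 0 and len(cosets) * k == p - 1
    return k, cosets

print(coset_decomposition(5, 13))  # k = 4; cosets {1,5,8,12}, {2,3,10,11}, {4,6,7,9}
```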
This article is supplemental for “ Convergence of random variables ” and provides proofs for selected results.
Several results will be established using the portmanteau lemma : A sequence { X n } converges in distribution to X if and only if any of the following conditions are met:
(A) E[f(X_n)] → E[f(X)] for all bounded, continuous functions f ;
(B) E[f(X_n)] → E[f(X)] for all bounded, Lipschitz functions f ;
(C) lim sup Pr(X_n ∈ C) ≤ Pr(X ∈ C) for all closed sets C .
Proof: If { X_n } converges to X almost surely, it means that the set of points O = {ω | lim X_n(ω) ≠ X(ω)} has measure zero. Now fix ε > 0 and consider the sequence of sets
A_n = ⋃_{m ≥ n} {ω | |X_m(ω) − X(ω)| > ε}.
This sequence of sets is decreasing ( A_n ⊇ A_{n+1} ⊇ … ) towards the set
A_∞ = ⋂_{n ≥ 1} A_n.
The probabilities of this sequence are also decreasing, so lim Pr ( A n ) = Pr ( A ∞ ) {\displaystyle \lim \operatorname {Pr} (A_{n})=\operatorname {Pr} (A_{\infty })} ; we shall show now that this number is equal to zero. Now for any point ω {\displaystyle \omega } outside of O {\displaystyle O} we have lim X n ( ω ) = X ( ω ) {\displaystyle \lim X_{n}(\omega )=X(\omega )} , which implies that | X n ( ω ) − X ( ω ) | < ε {\displaystyle \left|X_{n}(\omega )-X(\omega )\right|<\varepsilon } for all n ≥ N {\displaystyle n\geq N} for some N {\displaystyle N} . In particular, for such n {\displaystyle n} the point ω {\displaystyle \omega } will not lie in A n {\displaystyle A_{n}} , and hence won't lie in A ∞ {\displaystyle A_{\infty }} . Therefore, A ∞ ⊆ O {\displaystyle A_{\infty }\subseteq O} and so Pr ( A ∞ ) = 0 {\displaystyle \operatorname {Pr} (A_{\infty })=0} .
Finally, by continuity from above and the inclusion {ω | |X_n(ω) − X(ω)| > ε} ⊆ A_n,
lim Pr(|X_n − X| > ε) ≤ lim Pr(A_n) = Pr(A_∞) = 0,
which by definition means that X n {\displaystyle X_{n}} converges in probability to X {\displaystyle X} .
If X n are independent random variables assuming value one with probability 1/ n and zero otherwise, then X n converges to zero in probability but not almost surely. This can be verified using the Borel–Cantelli lemmas .
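A simulation sketch of this example (seed and sample size are arbitrary choices): the hit probabilities 1/n tend to zero, yet, in line with the second Borel–Cantelli lemma, ones keep turning up because ∑ 1/n diverges.

```python
import numpy as np

# X_n ~ Bernoulli(1/n), independent: Pr(X_n = 1) = 1/n -> 0 (convergence in
# probability), but since sum 1/n diverges, X_n = 1 happens infinitely often
# almost surely, so the sequence does not converge almost surely.
rng = np.random.default_rng(1)
N = 100_000
x = rng.random(N) < 1 / np.arange(1, N + 1)
print(np.flatnonzero(x))   # indices where X_n = 1; they keep appearing as n grows
```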
Lemma. Let X , Y be random variables, let a be a real number and ε > 0. Then
Pr(Y ≤ a) ≤ Pr(X ≤ a + ε) + Pr(|Y − X| > ε).
Proof of lemma:
Shorter proof of the lemma:
We have
{Y ≤ a} ⊆ {X ≤ a + ε} ∪ {|Y − X| > ε},
for if Y ≤ a and | Y − X | ≤ ε , then X ≤ a + ε . Hence by the union bound,
Pr(Y ≤ a) ≤ Pr(X ≤ a + ε) + Pr(|Y − X| > ε).
Proof of the theorem: Recall that in order to prove convergence in distribution, one must show that the sequence of cumulative distribution functions converges to F_X at every point where F_X is continuous. Let a be such a point. For every ε > 0, due to the preceding lemma, we have:
Pr(X_n ≤ a) ≤ Pr(X ≤ a + ε) + Pr(|X_n − X| > ε),
Pr(X ≤ a − ε) ≤ Pr(X_n ≤ a) + Pr(|X_n − X| > ε).
So, we have
Pr(X ≤ a − ε) − Pr(|X_n − X| > ε) ≤ Pr(X_n ≤ a) ≤ Pr(X ≤ a + ε) + Pr(|X_n − X| > ε).
Taking the limit as n → ∞, we obtain:
F_X(a − ε) ≤ lim inf Pr(X_n ≤ a) ≤ lim sup Pr(X_n ≤ a) ≤ F_X(a + ε),
where F_X(a) = Pr( X ≤ a ) is the cumulative distribution function of X . This function is continuous at a by assumption, and therefore both F_X(a − ε) and F_X(a + ε) converge to F_X(a) as ε → 0⁺. Taking this limit, we obtain
lim Pr(X_n ≤ a) = F_X(a),
which means that { X n } converges to X in distribution.
The implication follows for the case when X_n is a random vector, by using this property proved later on this page and by taking X_n = X in the statement of that property.
Proof: Fix ε > 0. Let B_ε(c) be the open ball of radius ε around point c , and B_ε(c)^c its complement. Then
Pr(|X_n − c| ≥ ε) = Pr(X_n ∈ B_ε(c)^c).
By the portmanteau lemma (part C), if X_n converges in distribution to c , then the limsup of the latter probability must be less than or equal to Pr( c ∈ B_ε(c)^c ), which is obviously equal to zero. Therefore,
lim Pr(|X_n − c| ≥ ε) = 0,
which by definition means that X_n converges to c in probability.
Proof: We will prove this theorem using the portmanteau lemma, part B. As required in that lemma, consider any bounded function f (i.e. | f ( x )| ≤ M ) which is also Lipschitz: there exists K > 0 such that |f(x) − f(y)| ≤ K|x − y| for all x , y .
Take some ε > 0 and majorize the expression |E[ f ( Y_n )] − E[ f ( X_n )]| as
|E[f(Y_n)] − E[f(X_n)]| ≤ E[|f(Y_n) − f(X_n)|]
= E[|f(Y_n) − f(X_n)| · 1{|Y_n − X_n| < ε}] + E[|f(Y_n) − f(X_n)| · 1{|Y_n − X_n| ≥ ε}]
≤ Kε + 2M · Pr(|Y_n − X_n| ≥ ε)
(here 1 {...} denotes the indicator function ; the expectation of the indicator function is equal to the probability of the corresponding event). Therefore,
|E[f(Y_n)] − E[f(X)]| ≤ Kε + 2M · Pr(|Y_n − X_n| ≥ ε) + |E[f(X_n)] − E[f(X)]|.
If we take the limit in this expression as n → ∞, the second term will go to zero since { Y n −X n } converges to zero in probability; and the third term will also converge to zero, by the portmanteau lemma and the fact that X n converges to X in distribution. Thus
lim sup |E[f(Y_n)] − E[f(X)]| ≤ Kε.
Since ε was arbitrary, we conclude that the limit must in fact be equal to zero, and therefore E[ f ( Y n )] → E[ f ( X )], which again by the portmanteau lemma implies that { Y n } converges to X in distribution. QED.
Proof: We will prove this statement using the portmanteau lemma, part A.
First we want to show that ( X n , c ) converges in distribution to ( X , c ). By the portmanteau lemma this will be true if we can show that E[ f ( X n , c )] → E[ f ( X , c )] for any bounded continuous function f ( x , y ). So let f be such arbitrary bounded continuous function. Now consider the function of a single variable g ( x ) := f ( x , c ). This will obviously be also bounded and continuous, and therefore by the portmanteau lemma for sequence { X n } converging in distribution to X , we will have that E[ g ( X n )] → E[ g ( X )]. However the latter expression is equivalent to “E[ f ( X n , c )] → E[ f ( X , c )]”, and therefore we now know that ( X n , c ) converges in distribution to ( X , c ).
Secondly, consider |( X n , Y n ) − ( X n , c )| = | Y n − c |. This expression converges in probability to zero because Y n converges in probability to c . Thus we have demonstrated two facts: ( X n , c ) converges in distribution to ( X , c ), and |( X n , Y n ) − ( X n , c )| converges in probability to zero.
By the property proved earlier , these two facts imply that ( X n , Y n ) converge in distribution to ( X , c ).
Proof: Fix ε > 0. Then
Pr(|(X_n, Y_n) − (X, Y)| ≥ ε) ≤ Pr(|X_n − X| + |Y_n − Y| ≥ ε) ≤ Pr(|X_n − X| ≥ ε/2) + Pr(|Y_n − Y| ≥ ε/2),
where the last step follows by the pigeonhole principle and the sub-additivity of the probability measure. Each of the probabilities on the right-hand side converge to zero as n → ∞ by definition of the convergence of { X n } and { Y n } in probability to X and Y respectively. Taking the limit we conclude that the left-hand side also converges to zero, and therefore the sequence {( X n , Y n )} converges in probability to {( X , Y )}. | https://en.wikipedia.org/wiki/Proofs_of_convergence_of_random_variables |
In number theory , the law of quadratic reciprocity , like the Pythagorean theorem , has lent itself to an unusually large number of proofs . Several hundred proofs of the law of quadratic reciprocity have been published.
Of the elementary combinatorial proofs, there are two which apply types of double counting . One by Gotthold Eisenstein counts lattice points . Another applies Zolotarev's lemma to ( Z / p q Z ) × {\displaystyle (\mathbb {Z} /pq\mathbb {Z} )^{\times }} , expressed by the Chinese remainder theorem as ( Z / p Z ) × × ( Z / q Z ) × {\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }\times (\mathbb {Z} /q\mathbb {Z} )^{\times }} and calculates the signature of a permutation . The shortest known proof also uses a simplified version of double counting, namely double counting modulo a fixed prime.
Eisenstein's proof of quadratic reciprocity is a simplification of Gauss's third proof. It is more geometrically intuitive and requires less technical manipulation.
The point of departure is "Eisenstein's lemma", which states that for odd prime p and positive integer a not divisible by p ,
(a/p) = (−1)^{∑_u ⌊au/p⌋},
where ⌊ x ⌋ denotes the floor function (the largest integer less than or equal to x ), and where the sum is taken over the even integers u = 2, 4, 6, ..., p −1. For example,
(7/11) = (−1)^{⌊14/11⌋ + ⌊28/11⌋ + ⌊42/11⌋ + ⌊56/11⌋ + ⌊70/11⌋} = (−1)^{1+2+3+5+6} = (−1)^{17} = −1.
This result is very similar to Gauss's lemma , and can be proved in a similar fashion (proof given below ).
Using this representation of ( q / p ), the main argument is quite elegant. The sum ∑ u ⌊ q u / p ⌋ {\textstyle \sum _{u}\left\lfloor qu/p\right\rfloor } counts the number of lattice points with even x -coordinate in the interior of the triangle ABC in the following diagram:
Because each column has an even number of points (namely q −1 points), the number of such lattice points in the region BCYX is the same modulo 2 as the number of such points in the region CZY.
Then by flipping the diagram in both axes, we see that the number of points with even x -coordinate inside CZY is the same as the number of points inside AXY having odd x -coordinates. This can be justified mathematically by noting that q − 1 − ⌊ 2 k q p ⌋ = ⌊ ( p − 2 k ) q p ⌋ {\displaystyle \textstyle q-1-\left\lfloor {\frac {2kq}{p}}\right\rfloor =\left\lfloor {\frac {(p-2k)q}{p}}\right\rfloor } . [ 1 ]
The conclusion is that
(q/p) = (−1)^μ,
where μ is the total number of lattice points in the interior of AXY.
Switching p and q , the same argument shows that
(p/q) = (−1)^ν,
where ν is the number of lattice points in the interior of WYA. Since there are no lattice points on the line AY itself (because p and q are relatively prime ), and since the total number of points in the rectangle WYXA is
((p − 1)/2) × ((q − 1)/2),
we obtain
(q/p)(p/q) = (−1)^{μ+ν} = (−1)^{(p−1)(q−1)/4}.
For an even integer u in the range 1 ≤ u ≤ p −1, denote by r ( u ) the least positive residue of au modulo p . (For example, for p = 11, a = 7, we allow u = 2, 4, 6, 8, 10, and the corresponding values of r ( u ) are 3, 6, 9, 1, 4.)
The numbers (−1) r ( u ) r ( u ), again treated as least positive residues modulo p , are all even (in our running example, they are 8, 6, 2, 10, 4.) Furthermore, they are all distinct, because if (−1) r ( u ) r ( u ) ≡ (−1) r ( t ) r ( t ) (mod p ), then we may divide out by a to obtain u ≡ ± t (mod p ). This forces u ≡ t (mod p ), because both u and t are even , whereas p is odd. Since there are exactly ( p −1)/2 of them and they are distinct, they must be simply a rearrangement of the even integers 2, 4, ..., p −1. Multiplying them together, we obtain
(−1)^{r(2) + r(4) + ⋯ + r(p−1)} × 2a × 4a × ⋯ × (p − 1)a ≡ 2 × 4 × ⋯ × (p − 1) (mod p).
Dividing out successively by 2, 4, ..., p −1 on both sides (which is permissible since none of them are divisible by p ) and rearranging, we have
a^{(p−1)/2} ≡ (−1)^{r(2) + r(4) + ⋯ + r(p−1)} (mod p).
On the other hand, by the definition of r ( u ) and the floor function,
au = p ⌊au/p⌋ + r(u),
and since p is odd and u is even, the resulting congruence
0 ≡ au = p ⌊au/p⌋ + r(u) ≡ ⌊au/p⌋ + r(u) (mod 2)
implies that ⌊ au/p ⌋ and r ( u ) are congruent modulo 2.
Finally this shows that
a^{(p−1)/2} ≡ (−1)^{∑_u ⌊au/p⌋} (mod p).
We are finished because the left hand side is just an alternative expression for ( a / p ), per Euler's criterion .
This lemma essentially states that the number of least residues after doubling that are odd gives the value of ( q / p ). This follows easily from Gauss' lemma.
Also, q u = p ⌊ q u p ⌋ + r ( u ) {\displaystyle qu=p\left\lfloor {\frac {qu}{p}}\right\rfloor +r(u)} implies that ⌊ q u / p ⌋ {\displaystyle \left\lfloor qu/p\right\rfloor } and r ( u ) are either congruent modulo 2, or incongruent, depending solely on the parity of u .
This means that the residues r ( u ), for u = 1, 2, …, ( p − 1)/2, are congruent or incongruent to ⌊ qu/p ⌋ according to the parity of u , and so
(q/p) = (−1)^{∑_u ⌊qu/p⌋} = (−1)^{∑_u (r(u) + u)},
where 1 ≤ u ≤ ( p − 1)/2.
For example, taking p = 11 and q = 7, with u = 1, 2, 3, 4, 5, the residues r ( u ) are 7, 3, 10, 6, 2 and the floor function gives 0, 1, 1, 2, 3; r ( u ) and ⌊ qu/p ⌋ are incongruent modulo 2 exactly for the odd u , giving the pattern 1, 0, 1, 0, 1.
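Eisenstein's lemma is easy to test against Euler's criterion; the following sketch checks the even-u form stated at the start of this section for a few odd primes (the helper names are arbitrary):

```python
def legendre(a, p):
    # Euler's criterion: a**((p-1)/2) mod p is 1 or p-1 for a coprime to p.
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def eisenstein(q, p):
    # (-1) raised to the sum of floor(qu/p) over even u = 2, 4, ..., p-1.
    return (-1) ** sum((q * u) // p for u in range(2, p, 2))

for p in (7, 11, 13):
    for q in (3, 5, 17):
        assert eisenstein(q, p) == legendre(q, p)
```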
The proof of Quadratic Reciprocity using Gauss sums is one of the more common and classic proofs. These proofs work by comparing computations of single values in two different ways, one using Euler's Criterion and the other using the Binomial theorem . As an example of how Euler's criterion is used, we can use it to give a quick proof of the first supplemental case of determining ( − 1 p ) {\textstyle \left({\frac {-1}{p}}\right)} for an odd prime p : By Euler's criterion ( − 1 p ) ≡ ( − 1 ) p − 1 2 ( mod p ) {\textstyle \left({\frac {-1}{p}}\right)\equiv (-1)^{\frac {p-1}{2}}{\pmod {p}}} , but since both sides of the equivalence are ±1 and p is odd, we can deduce that ( − 1 p ) = ( − 1 ) p − 1 2 {\textstyle \left({\frac {-1}{p}}\right)=(-1)^{\frac {p-1}{2}}} .
Let ζ 8 = e 2 π i / 8 {\textstyle \zeta _{8}=e^{2\pi i/8}} , a primitive 8th root of unity and set τ = ζ 8 + ζ 8 − 1 {\textstyle \tau =\zeta _{8}+\zeta _{8}^{-1}} . Since ζ 8 2 = i {\textstyle \zeta _{8}^{2}=i} and ζ 8 − 2 = − i {\textstyle \zeta _{8}^{-2}=-i} we see that τ 2 = 2 {\textstyle \tau ^{2}=2} . Because τ {\displaystyle \tau } is an algebraic integer , if p is an odd prime it makes sense to talk about it modulo p . (Formally we are considering the commutative ring formed by factoring the algebraic integers A {\displaystyle \mathbf {A} } with the ideal generated by p . Because p − 1 {\displaystyle p^{-1}} is not an algebraic integer, 1, 2, ..., p are distinct elements of A / p A {\displaystyle {\mathbf {A} }/p{\mathbf {A} }} .) Using Euler's criterion, it follows that τ p − 1 = ( τ 2 ) p − 1 2 = 2 p − 1 2 ≡ ( 2 p ) ( mod p ) {\displaystyle \tau ^{p-1}=(\tau ^{2})^{\frac {p-1}{2}}=2^{\frac {p-1}{2}}\equiv \left({\frac {2}{p}}\right){\pmod {p}}} We can then say that τ p ≡ ( 2 p ) τ ( mod p ) {\displaystyle \tau ^{p}\equiv \left({\frac {2}{p}}\right)\tau {\pmod {p}}} But we can also compute τ p ( mod p ) {\textstyle \tau ^{p}{\pmod {p}}} using the binomial theorem. Because the cross terms in the binomial expansion all contain factors of p , we find that τ p ≡ ζ 8 p + ζ 8 − p ( mod p ) {\textstyle \tau ^{p}\equiv \zeta _{8}^{p}+\zeta _{8}^{-p}{\pmod {p}}} . We can evaluate this more exactly by breaking this up into two cases
If p ≡ ±1 (mod 8), then ζ_8^p + ζ_8^{−p} = ζ_8 + ζ_8^{−1} = τ, while if p ≡ ±3 (mod 8), then ζ_8^p + ζ_8^{−p} = −(ζ_8 + ζ_8^{−1}) = −τ. These are the only options for a prime modulo 8 and both of these cases can be computed using the exponential form ζ_8 = e^{2πi/8}. We can write this succinctly for all odd primes p as
τ^p ≡ (−1)^{(p² − 1)/8} τ (mod p).
Combining these two expressions for τ^p (mod p ) and multiplying through by τ we find that 2 · (2/p) ≡ 2 · (−1)^{(p² − 1)/8} (mod p ). Since both (2/p) and (−1)^{(p² − 1)/8} are ±1 and 2 is invertible modulo p , we can conclude that
(2/p) = (−1)^{(p² − 1)/8}.
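The supplement just derived can be spot-checked numerically; a two-line sketch using Euler's criterion:

```python
# Check (2/p) = (-1)**((p**2 - 1)//8) via Euler's criterion, as concluded above.
for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31):
    euler = pow(2, (p - 1) // 2, p)                  # 1 or p-1
    assert (euler == 1) == ((-1) ** ((p * p - 1) // 8) == 1)
```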
The idea for the general proof follows the above supplemental case: Find an algebraic integer that somehow encodes the Legendre symbols for p , then find a relationship between Legendre symbols by computing the q th power of this algebraic integer modulo q in two different ways, one using Euler's criterion the other using the binomial theorem.
Let g p = ∑ k = 1 p − 1 ( k p ) ζ p k {\displaystyle g_{p}=\sum _{k=1}^{p-1}\left({\frac {k}{p}}\right)\zeta _{p}^{k}} where ζ p = e 2 π i / p {\displaystyle \zeta _{p}=e^{2\pi i/p}} is a primitive p th root of unity. This is a quadratic Gauss sum . A fundamental property of these Gauss sums is that g p 2 = p ∗ {\displaystyle g_{p}^{2}=p^{*}} where p ∗ = ( − 1 p ) p {\textstyle p^{*}=\left({\frac {-1}{p}}\right)p} . To put this in context of the next proof, the individual elements of the Gauss sum are in the cyclotomic field L = Q ( ζ p ) {\displaystyle L=\mathbb {Q} (\zeta _{p})} but the above formula shows that the sum itself is a generator of the unique quadratic field contained in L . Again, since the quadratic Gauss sum is an algebraic integer, we can use modular arithmetic with it. Using this fundamental formula and Euler's criterion we find that g p q − 1 = ( g p 2 ) q − 1 2 = ( p ∗ ) q − 1 2 ≡ ( p ∗ q ) ( mod q ) {\displaystyle g_{p}^{q-1}=(g_{p}^{2})^{\frac {q-1}{2}}=(p^{*})^{\frac {q-1}{2}}\equiv \left({\frac {p^{*}}{q}}\right){\pmod {q}}} Therefore g p q ≡ ( p ∗ q ) g p ( mod q ) {\displaystyle g_{p}^{q}\equiv \left({\frac {p^{*}}{q}}\right)g_{p}{\pmod {q}}} Using the binomial theorem, we also find that g p q ≡ ∑ k = 1 p − 1 ( k p ) ζ p q k ( mod q ) {\textstyle g_{p}^{q}\equiv \sum _{k=1}^{p-1}\left({\frac {k}{p}}\right)\zeta _{p}^{qk}{\pmod {q}}} , If we let a be a multiplicative inverse of q ( mod p ) {\displaystyle q{\pmod {p}}} , then we can rewrite this sum as ( a p ) ∑ t = 1 p − 1 ( t p ) ζ p t {\textstyle \left({\frac {a}{p}}\right)\sum _{t=1}^{p-1}\left({\frac {t}{p}}\right)\zeta _{p}^{t}} using the substitution t = q k {\displaystyle t=qk} , which doesn't affect the range of the sum. Since ( a p ) = ( q p ) {\textstyle \left({\frac {a}{p}}\right)=\left({\frac {q}{p}}\right)} , we can then write g p q ≡ ( q p ) g p ( mod q ) {\displaystyle g_{p}^{q}\equiv \left({\frac {q}{p}}\right)g_{p}{\pmod {q}}} Using these two expressions for g p q ( mod q ) {\textstyle g_{p}^{q}{\pmod {q}}} , and multiplying through by g p {\displaystyle g_{p}} gives ( q p ) p ∗ ≡ ( p ∗ q ) p ∗ ( mod q ) {\displaystyle \left({\frac {q}{p}}\right)p^{*}\equiv \left({\frac {p^{*}}{q}}\right)p^{*}{\pmod {q}}} Since p ∗ {\displaystyle p^{*}} is invertible modulo q , and the Legendre symbols are either ±1, we can then conclude that ( q p ) = ( p ∗ q ) {\displaystyle \left({\frac {q}{p}}\right)=\left({\frac {p^{*}}{q}}\right)}
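The conclusion (q/p) = (p*/q) can likewise be verified directly for small primes; the helper below is an illustrative sketch, with both symbols evaluated via Euler's criterion:

```python
def legendre(a, n):
    # Euler's criterion; a % n handles negative a (needed for p* = -p).
    r = pow(a % n, (n - 1) // 2, n)
    return -1 if r == n - 1 else r

for p in (3, 5, 7, 11, 13):
    for q in (3, 5, 7, 11, 13):
        if p != q:
            p_star = (-1) ** ((p - 1) // 2) * p      # p* = (-1)**((p-1)/2) * p
            assert legendre(q, p) == legendre(p_star, q)
```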
The proof presented here is by no means the simplest known; however, it is quite a deep one, in the sense that it motivates some of the ideas of Artin reciprocity .
Suppose that p is an odd prime. The action takes place inside the cyclotomic field L = Q(ζ_p), where ζ_p is a primitive p th root of unity . The basic theory of cyclotomic fields informs us that there is a canonical isomorphism
G := Gal(L/Q) ≅ (Z/pZ)^×,
which sends the automorphism σ_a satisfying σ_a(ζ_p) = ζ_p^a to the element a ∈ (Z/pZ)^×. In particular, G is cyclic of order p − 1, because the multiplicative group of the finite field Z/pZ is cyclic: (Z/pZ)^× ≅ C_{p−1}.
Now consider the subgroup H of squares of elements of G . Since G is cyclic, H has index 2 in G , so the subfield corresponding to H under the Galois correspondence must be a quadratic extension of Q . (In fact it is the unique quadratic extension of Q contained in L .) The Gaussian period theory determines which one; it turns out to be Q(√p*), where
p* = (−1)^{(p−1)/2} p,
so that p* = p if p ≡ 1 (mod 4) and p* = −p if p ≡ 3 (mod 4).
At this point we start to see a hint of quadratic reciprocity emerging from our framework. On one hand, the image of H in (Z/pZ)^× consists precisely of the (nonzero) quadratic residues modulo p . On the other hand, H is related to an attempt to take the square root of p (or possibly of − p ). In other words, if now q is a prime (different from p ), we have shown that
(q/p) = 1 ⟺ σ_q ∈ H ⟺ σ_q fixes Q(√p*).
In the ring of integers O_L = Z[ζ_p], choose any unramified prime ideal β of O_L lying over q , and let ϕ ∈ Gal(L/Q) be the Frobenius automorphism associated to β; the characteristic property of ϕ is that
ϕ(x) ≡ x^q (mod β) for any x ∈ O_L.
(The existence of such a Frobenius element depends on quite a bit of algebraic number theory machinery.)
The key fact about ϕ that we need is that for any subfield K of L ,
ϕ fixes K ⟺ q splits completely in K .
Indeed, let δ be any ideal of O_K below β (and hence above q ). Then, since ϕ(x) ≡ x^q (mod δ) for any x ∈ O_K, we see that ϕ|_K ∈ Gal(K/Q) is a Frobenius for δ. A standard result concerning ϕ is that its order is equal to the corresponding inertial degree; that is,
ord(ϕ|_K) = f(δ | q).
The left hand side is equal to 1 if and only if ϕ fixes K , and the right hand side is equal to one if and only if q splits completely in K , so we are done.
Now, since the p th roots of unity are distinct modulo β (i.e. the polynomial X p − 1 is separable in characteristic q ), we must have
ϕ(ζ_p) = ζ_p^q;
that is, ϕ coincides with the automorphism σ_q defined earlier. Taking K to be the quadratic field in which we are interested, we obtain the equivalence
(q/p) = 1 ⟺ q splits completely in Q(√p*).
Finally we must show that
q splits completely in Q(√p*) ⟺ (p*/q) = 1.
Once we have done this, the law of quadratic reciprocity falls out immediately, since the chain of equivalences gives (q/p) = (p*/q), and
(p*/q) = (p/q) for p ≡ 1 (mod 4),
and
(p*/q) = (−p/q) = (−1)^{(q−1)/2} (p/q)
for p ≡ 3 (mod 4).
To show the last equivalence, suppose first that (p*/q) = 1. In this case, there is some integer x (not divisible by q ) such that x² ≡ p* (mod q ), say x² − p* = cq for some integer c . Let K = Q(√p*), and consider the ideal (x − √p*, q) of K . It certainly divides the principal ideal ( q ). It cannot be equal to ( q ), since x − √p* is not divisible by q . It cannot be the unit ideal, because then
(x + √p*) = (x + √p*)(x − √p*, q) = (x² − p*, q(x + √p*)) = (q)(c, x + √p*)
is divisible by q , which is again impossible. Therefore ( q ) must split in K .
Conversely, suppose that ( q ) splits, and let β be a prime of K above q . Then (q) ⊊ β, so we may choose some
a + b√p* ∈ β ∖ (q), where a and b are rational numbers.
Actually, since p* ≡ 1 (mod 4), elementary theory of quadratic fields implies that the ring of integers of K is precisely Z[(1 + √p*)/2], so the denominators of a and b are at worst equal to 2. Since q ≠ 2, we may safely multiply a and b by 2, and assume that a + b√p* ∈ β ∖ (q), where now a and b are in Z . In this case we have
(a + b√p*)(a − b√p*) = a² − b²p*, and since a + b√p* lies in β while its conjugate a − b√p* lies in the conjugate prime ideal β̄, with ββ̄ = (q), the product lies in ( q ),
so q ∣ a 2 − b 2 p ∗ . {\displaystyle q\mid a^{2}-b^{2}p^{*}.} However, q cannot divide b , since then also q divides a , which contradicts our choice of a + b p ∗ . {\displaystyle a+b{\sqrt {p^{*}}}.} Therefore, we may divide by b modulo q , to obtain p ∗ ≡ ( a b − 1 ) 2 ( mod q ) {\displaystyle p^{*}\equiv (ab^{-1})^{2}\!\!\!{\pmod {q}}} as desired.
Every textbook on elementary number theory (and quite a few on algebraic number theory ) has a proof of quadratic reciprocity. Two are especially noteworthy:
Lemmermeyer (2000) has many proofs (some in exercises) of both quadratic and higher-power reciprocity laws and a discussion of their history. Its immense bibliography includes literature citations for 196 different published proofs.
Ireland & Rosen (1990) also has many proofs of quadratic reciprocity (and many exercises), and covers the cubic and biquadratic cases as well. Exercise 13.26 (p 202) says it all
Count the number of proofs to the law of quadratic reciprocity given thus far in this book and devise another one. | https://en.wikipedia.org/wiki/Proofs_of_quadratic_reciprocity |
The following are proofs of several characteristics related to the chi-squared distribution .
Let random variable Y be defined as Y = X 2 where X has normal distribution with mean 0 and variance 1 (that is X ~ N (0,1)).
Then, for y < 0 , F Y ( y ) = P ( Y < y ) = 0 and for y ≥ 0 , F Y ( y ) = P ( Y < y ) = P ( X 2 < y ) = P ( | X | < y ) = P ( − y < X < y ) = F X ( y ) − F X ( − y ) = F X ( y ) − ( 1 − F X ( y ) ) = 2 F X ( y ) − 1 {\displaystyle {\begin{alignedat}{2}{\text{for}}~y<0,&~~F_{Y}(y)=P(Y<y)=0~~{\text{and}}\\{\text{for}}~y\geq 0,&~~F_{Y}(y)=P(Y<y)=P(X^{2}<y)=P(|X|<{\sqrt {y}})=P(-{\sqrt {y}}<X<{\sqrt {y}})\\~~&=F_{X}({\sqrt {y}})-F_{X}(-{\sqrt {y}})=F_{X}({\sqrt {y}})-(1-F_{X}({\sqrt {y}}))=2F_{X}({\sqrt {y}})-1\end{alignedat}}}
where F and f are the cdf and pdf of the corresponding random variables. Differentiating for y > 0 gives
f_Y(y) = 2 f_X(√y) · d(√y)/dy = (1/√(2π)) y^{−1/2} e^{−y/2},
which is the density of the chi-squared distribution with one degree of freedom. Then Y = X² ∼ χ²_1.
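A Monte Carlo sketch of this fact, checking the cdf identity F_Y(y) = 2F_X(√y) − 1 derived above (the seed and the test interval are arbitrary choices):

```python
import numpy as np
from math import erf, sqrt

# If X ~ N(0, 1), then P(a < X**2 < b) should match
# F_Y(b) - F_Y(a) = 2*(Phi(sqrt(b)) - Phi(sqrt(a))).
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)

Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))    # standard normal cdf
a, b = 0.5, 1.5
empirical = np.mean((x**2 > a) & (x**2 < b))
theoretical = 2 * (Phi(sqrt(b)) - Phi(sqrt(a)))
print(empirical, theoretical)                   # both close to 0.257
```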
The change of variable formula (implicitly derived above), for a monotonic transformation y = g ( x ), is:
f_Y(y) = f_X(g^{−1}(y)) |d g^{−1}(y)/dy|.
In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.
f_Y(y) = 2 f_X(g^{−1}(y)) |d g^{−1}(y)/dy|.
In this case, the transformation is: x = g − 1 ( y ) = y {\displaystyle x=g^{-1}(y)={\sqrt {y}}} , and its derivative is d g − 1 ( y ) d y = 1 2 y . {\displaystyle {\frac {dg^{-1}(y)}{dy}}={\frac {1}{2{\sqrt {y}}}}.}
So here:
f_Y(y) = 2 · (1/√(2π)) e^{−y/2} · (1/(2√y)) = (1/√(2π)) y^{−1/2} e^{−y/2}.
And one gets the chi-squared distribution, noting the property of the gamma function : Γ ( 1 / 2 ) = π {\displaystyle \Gamma (1/2)={\sqrt {\pi }}} .
There are several methods to derive chi-squared distribution with 2 degrees of freedom. Here is one based on the distribution with 1 degree of freedom.
Suppose that X and Y are two independent variables satisfying X ∼ χ²_1 and Y ∼ χ²_1, so that the probability density functions of X and Y are respectively:
f_X(x) = (1/(√2 Γ(1/2))) x^{−1/2} e^{−x/2} for x > 0,
and of course f_Y(y) = f_X(y). Then, we can derive the joint distribution of ( X , Y ):
f(x, y) = f_X(x) f_Y(y) = (1/(2 Γ(1/2)²)) (xy)^{−1/2} e^{−(x+y)/2},
where Γ(1/2)² = π. Further, let A = xy and B = x + y ; then x and y are the two roots of t² − Bt + A = 0, so that:
x = (B + √(B² − 4A))/2 and y = (B − √(B² − 4A))/2,
or, inversely,
x = (B − √(B² − 4A))/2 and y = (B + √(B² − 4A))/2.
Since the two variable change policies are symmetric, we take the upper one and multiply the result by 2. The Jacobian determinant can be calculated as:
J = ∂(x, y)/∂(A, B) = −1/√(B² − 4A), so that |J| = 1/√(B² − 4A).
Now we can change f ( x , y ) to f ( A , B ):
f(A, B) = 2 · (1/(2π)) A^{−1/2} e^{−B/2} · (1/√(B² − 4A)),
where the leading constant 2 is to take both the two variable change policies into account. Finally, we integrate out A to get the distribution of B , i.e. x + y :
f(B) = (1/π) e^{−B/2} ∫_0^{B²/4} A^{−1/2} (B² − 4A)^{−1/2} dA.
Substituting A = (B²/4) sin²(t) gives dA = (B²/2) sin(t) cos(t) dt, A^{−1/2} = 2/(B sin t) and (B² − 4A)^{−1/2} = 1/(B cos t), so:
f(B) = (1/π) e^{−B/2} ∫_0^{π/2} dt = (1/2) e^{−B/2}.
So, the result is:
f(B) = (1/2) e^{−B/2} for B > 0,
which is the chi-squared distribution with 2 degrees of freedom.
Consider the k samples x_i to represent a single point in a k -dimensional space. The chi square distribution for k degrees of freedom will then be given by:
P(Q) dQ = ∫_𝒱 ∏_{i=1}^{k} N(x_i) dx_i = ∫_𝒱 (2π)^{−k/2} e^{−(x_1² + ⋯ + x_k²)/2} dx_1 ⋯ dx_k,
where N ( x ) is the standard normal distribution and 𝒱 is that elemental shell volume at Q ( x ), which is proportional to the ( k − 1)-dimensional surface in k -space for which
Q = ∑_{i=1}^{k} x_i².
It can be seen that this surface is the surface of a k -dimensional ball or, alternatively, an n-sphere where n = k - 1 with radius R = Q {\displaystyle R={\sqrt {Q}}} , and that the term in the exponent is simply expressed in terms of Q . Since it is a constant, it may be removed from inside the integral.
The integral is now simply the surface area A of the ( k − 1)-sphere times the infinitesimal thickness of the sphere, which is
dR = dQ / (2 Q^{1/2}).
The area of a ( k − 1)-sphere is:
A = 2 π^{k/2} R^{k−1} / Γ(k/2).
Substituting, realizing that Γ(z + 1) = z Γ(z), and cancelling terms yields:
P(Q) dQ = (1/(2^{k/2} Γ(k/2))) Q^{k/2 − 1} e^{−Q/2} dQ.
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage , the current in a circuit , or a field vector such as electric field strength or flux density . The propagation constant itself measures the dimensionless change in magnitude or phase per unit length . In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The propagation constant's value is expressed logarithmically , almost universally to the base e , rather than base 10 that is used in telecommunications in other situations. The quantity measured, such as voltage, is expressed as a sinusoidal phasor . The phase of the sinusoid varies with distance which results in the propagation constant being a complex number , the imaginary part being caused by the phase change.
The term "propagation constant" is somewhat of a misnomer as it usually varies strongly with ω . It is probably the most widely used term but there are a large variety of alternative names used by various authors for this quantity. These include transmission parameter , transmission function , propagation parameter , propagation coefficient and transmission constant . If the plural is used, it suggests that α and β are being referenced separately but collectively as in transmission parameters , propagation parameters , etc. In transmission line theory, α and β are counted among the "secondary coefficients", the term secondary being used to contrast to the primary line coefficients . The primary coefficients are the physical properties of the line, namely R,C,L and G, from which the secondary coefficients may be derived using the telegrapher's equation . In the field of transmission lines, the term transmission coefficient has a different meaning despite the similarity of name: it is the companion of the reflection coefficient .
The propagation constant, symbol γ , for a given system is defined by the ratio of the complex amplitude at the source of the wave to the complex amplitude at some distance x , such that,
A_0 / A_x = e^{γx}.
Inverting the above equation and isolating γ results in the quotient of the complex amplitude ratio's natural logarithm and the distance x traveled:
γ = (1/x) ln(A_0 / A_x).
Since the propagation constant is a complex quantity we can write:
γ = α + iβ,
where α, the real part, is called the attenuation constant, and β, the imaginary part, is called the phase constant.
That β does indeed represent phase can be seen from Euler's formula :
e^{iθ} = cos θ + i sin θ,
which is a sinusoid which varies in phase as θ varies but does not vary in amplitude because
|e^{iθ}| = √(cos²θ + sin²θ) = 1.
The reason for the use of base e is also now made clear. The imaginary phase constant, i β , can be added directly to the attenuation constant, α , to form a single complex number that can be handled in one mathematical operation provided they are to the same base. Angles measured in radians require base e , so the attenuation is likewise in base e .
The propagation constant for conducting lines can be calculated from the primary line coefficients by means of the relationship
γ = √((R + iωL)(G + iωC)),
where R is the series resistance, L the series inductance, G the shunt conductance and C the shunt capacitance of the line, all per unit length, and ω = 2πf is the angular frequency.
The propagation factor of a plane wave traveling in a linear media in the x direction is given by P = e^{−γx}, where γ = α + iβ as above (α the attenuation constant in nepers per unit length, β the phase constant in radians per unit length) and x is the distance traveled in the x direction.
The sign convention is chosen for consistency with propagation in lossy media. If the attenuation constant is positive, then the wave amplitude decreases as the wave propagates in the x direction.
Wavelength , phase velocity , and skin depth have simple relationships to the components of the propagation constant: λ = 2 π β v p = ω β δ = 1 α {\displaystyle \lambda ={\frac {2\pi }{\beta }}\qquad v_{p}={\frac {\omega }{\beta }}\qquad \delta ={\frac {1}{\alpha }}}
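A numerical sketch tying these relations together; the primary line constants below are made-up example values, not data for any particular cable:

```python
import cmath

# Secondary coefficients from assumed per-metre primary constants (hypothetical).
R, L, G, C = 0.1, 250e-9, 1e-6, 100e-12   # ohm/m, H/m, S/m, F/m
f = 100e6                                  # 100 MHz
w = 2 * cmath.pi * f

gamma = cmath.sqrt((R + 1j * w * L) * (G + 1j * w * C))
alpha, beta = gamma.real, gamma.imag       # Np/m and rad/m
wavelength = 2 * cmath.pi / beta           # lambda = 2*pi/beta
v_phase = w / beta                         # v_p = omega/beta
print(alpha, beta, wavelength, v_phase)
```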
In telecommunications , the term attenuation constant , also called attenuation parameter or attenuation coefficient , is the attenuation of an electromagnetic wave propagating through a medium per unit distance from the source. It is the real part of the propagation constant and is measured in nepers per metre. A neper is approximately 8.7 dB . Attenuation constant can be defined by the amplitude ratio
|A_0 / A_x| = e^{αx}.
The propagation constant per unit length is defined as the natural logarithm of the ratio of the sending end current or voltage to the receiving end current or voltage, divided by the distance x involved:
γ = (1/x) ln(V_0 / V_x) = (1/x) ln(I_0 / I_x).
The attenuation constant for conductive lines can be calculated from the primary line coefficients as shown above. For a line meeting the distortionless condition , with a conductance G in the insulator, the attenuation constant is given by
α = √(RG),
however, a real line is unlikely to meet this condition without the addition of loading coils and, furthermore, there are some frequency dependent effects operating on the primary "constants" which cause a frequency dependence of the loss. There are two main components to these losses, the metal loss and the dielectric loss.
The loss of most transmission lines is dominated by the metal loss, which causes a frequency dependency due to the finite conductivity of metals and the skin effect inside a conductor. The skin effect causes R along the conductor to be approximately dependent on frequency according to
R ∝ √ω.
Losses in the dielectric depend on the loss tangent (tan δ ) of the material divided by the wavelength of the signal. Thus they are directly proportional to the frequency.
The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant.
In electromagnetic theory , the phase constant , also called phase change constant , parameter or coefficient is the imaginary component of the propagation constant for a plane wave. It represents the change in phase per unit length along the path traveled by the wave at any instant and is equal to the real part of the angular wavenumber of the wave. It is represented by the symbol β and is measured in units of radians per unit length.
From the definition of (angular) wavenumber for transverse electromagnetic (TEM) waves in lossless media,
k = 2π/λ = β.
For a transmission line , the telegrapher's equations tell us that the wavenumber must be proportional to frequency for the transmission of the wave to be undistorted in the time domain . This includes, but is not limited to, the ideal case of a lossless line. The reason for this condition can be seen by considering that a useful signal is composed of many different wavelengths in the frequency domain. For there to be no distortion of the waveform , all these waves must travel at the same velocity so that they arrive at the far end of the line at the same time as a group . Since wave phase velocity is given by
v_p = λf = ω/β,
it is proved that β is required to be proportional to ω . In terms of primary coefficients of the line, this yields from the telegrapher's equation for a distortionless line the condition
β = ω √(LC),
where L and C are, respectively, the inductance and capacitance per unit length of the line. However, practical lines can only be expected to approximately meet this condition over a limited frequency band.
In particular, the phase constant β is not always equivalent to the wavenumber k . The relation
β = k
applies to the TEM wave, which travels in free space or TEM-devices such as the coaxial cable and two parallel wires transmission lines . Nevertheless, it does not apply to the TE wave (transverse electric wave) and TM wave (transverse magnetic wave). For example, [ 2 ] in a hollow waveguide where the TEM wave cannot exist but TE and TM waves can propagate,
β = (ω/c) √(1 − (ω_c/ω)²).
Here ω_c is the cutoff frequency . In a rectangular waveguide, the cutoff frequency is
ω_c = c √((mπ/a)² + (nπ/b)²),
where m , n ≥ 0 {\displaystyle m,n\geq 0} are the mode numbers for the rectangle's sides of length a {\displaystyle a} and b {\displaystyle b} respectively. For TE modes, m , n ≥ 0 {\displaystyle m,n\geq 0} (but m = n = 0 {\displaystyle m=n=0} is not allowed), while for TM modes m , n ≥ 1 {\displaystyle m,n\geq 1} .
The phase velocity equals
v_p = ω/β = c / √(1 − (ω_c/ω)²) > c.
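A short sketch computing β and the phase velocity for the dominant TE₁₀ mode from these formulas (the WR-90 dimensions are standard; the function name and test frequency are arbitrary choices):

```python
from math import pi, sqrt

c = 299_792_458.0                      # speed of light in vacuum, m/s

def beta_te(f, a_len, b_len, m=1, n=0):
    """Phase constant of the TE_mn mode in an air-filled rectangular
    waveguide, using the cutoff and beta formulas above."""
    w = 2 * pi * f
    w_c = c * sqrt((m * pi / a_len) ** 2 + (n * pi / b_len) ** 2)
    if w <= w_c:
        raise ValueError("below cutoff: the mode does not propagate")
    return (w / c) * sqrt(1 - (w_c / w) ** 2)

# WR-90 waveguide (22.86 mm x 10.16 mm), dominant TE10 mode at 10 GHz:
b = beta_te(10e9, 0.02286, 0.01016)
print(b, 2 * pi * 10e9 / b)            # beta in rad/m; phase velocity exceeds c
```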
The term propagation constant or propagation function is applied to filters and other two-port networks used for signal processing . In these cases, however, the attenuation and phase coefficients are expressed in terms of nepers and radians per network section rather than per unit length. Some authors [ 3 ] make a distinction between per unit length measures (for which "constant" is used) and per section measures (for which "function" is used).
The propagation constant is a useful concept in filter design which invariably uses a cascaded section topology . In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc.
The ratio of output to input voltage for each network is given by [ 4 ]
The terms Z I n Z I m {\displaystyle {\sqrt {\frac {Z_{In}}{Z_{Im}}}}} are impedance scaling terms [ 5 ] and their use is explained in the image impedance article.
The overall voltage ratio is given by
Thus for n cascaded sections all having matching impedances facing each other, the overall propagation constant is given by
γ_total = γ_1 + γ_2 + ⋯ + γ_n.
The concept of penetration depth is one of many ways to describe the absorption of electromagnetic waves. For the others, and their interrelationships, see the article: Mathematical descriptions of opacity . | https://en.wikipedia.org/wiki/Propagation_constant |
In database systems, a propagation constraint "details what should happen to a related table when we update a row or rows of a target table" (Paul Beynon-Davies, 2004, p.108). Tables are linked using primary key to foreign key relationships. It is possible for users to update one table in a relationship in such a way that the relationship is no longer consistent and this is known as breaking referential integrity . An example of breaking referential integrity: if a table of employees includes a department number for 'Housewares' which is a foreign key to a table of departments and a user deletes that department from the department table then Housewares employees records would refer to a non-existent department number.
Propagation constraints are methods used by relational database management systems (RDBMS) to solve this problem by ensuring that relationships between tables are preserved without error. In his database textbook, Beynon-Davies explains the three ways that RDBMS handle deletions of target and related tuples : a restricted delete (the deletion is forbidden while dependent rows exist), a cascading delete (the dependent rows are deleted as well), and a nullifying delete (the foreign keys of the dependent rows are set to null).
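For illustration, SQLite exposes these propagation constraints as referential actions; the following sketch, with a made-up employee/department schema echoing the example above, shows a cascading delete:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")        # enable referential actions in SQLite
con.execute("CREATE TABLE dept (dno INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE emp (
    eno INTEGER PRIMARY KEY, name TEXT,
    dno INTEGER REFERENCES dept(dno) ON DELETE CASCADE)""")
con.execute("INSERT INTO dept VALUES (1, 'Housewares')")
con.execute("INSERT INTO emp VALUES (10, 'Smith', 1)")
con.execute("DELETE FROM dept WHERE dno = 1")  # deleting the target row...
print(con.execute("SELECT COUNT(*) FROM emp").fetchone())  # (0,) -- ...cascades to emp
```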
In quantum mechanics and quantum field theory , the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time, or to travel with a certain energy and momentum. In Feynman diagrams , which serve to calculate the rate of collisions in quantum field theory , virtual particles contribute their propagator to the rate of the scattering event described by the respective diagram. Propagators may also be viewed as the inverse of the wave operator appropriate to the particle, and are, therefore, often called (causal) Green's functions (called " causal " to distinguish it from the elliptic Laplacian Green's function). [ 1 ] [ 2 ]
In non-relativistic quantum mechanics, the propagator gives the probability amplitude for a particle to travel from one spatial point (x') at one time (t') to another spatial point (x) at a later time (t).
The Green's function G for the Schrödinger equation is a function G ( x , t ; x ′ , t ′ ) = 1 i ℏ Θ ( t − t ′ ) K ( x , t ; x ′ , t ′ ) {\displaystyle G(x,t;x',t')={\frac {1}{i\hbar }}\Theta (t-t')K(x,t;x',t')} satisfying ( i ℏ ∂ ∂ t − H x ) G ( x , t ; x ′ , t ′ ) = δ ( x − x ′ ) δ ( t − t ′ ) , {\displaystyle \left(i\hbar {\frac {\partial }{\partial t}}-H_{x}\right)G(x,t;x',t')=\delta (x-x')\delta (t-t'),} where H denotes the Hamiltonian , δ ( x ) denotes the Dirac delta-function and Θ( t ) is the Heaviside step function . The kernel of the above Schrödinger differential operator in the big parentheses is denoted by K ( x , t ; x′ , t′ ) and called the propagator . [ nb 1 ]
This propagator may also be written as the transition amplitude
K(x, t; x′, t′) = ⟨x | U(t, t′) | x′⟩,
where U ( t , t′ ) is the unitary time-evolution operator for the system taking states at time t′ to states at time t . [ 3 ] Note the initial condition enforced by lim_{t→t′} K(x, t; x′, t′) = δ(x − x′). The propagator may also be found by using a path integral :
K(x, t; x′, t′) = ∫ exp[(i/ħ) ∫_{t′}^{t} L(q, q̇) dt] D[q(t)],
where L denotes the Lagrangian and the boundary conditions are given by q ( t ) = x , q ( t′ ) = x′ . The paths that are summed over move only forwards in time and are integrated with the differential D [ q ( t ) ] {\displaystyle D[q(t)]} following the path in time. [ 4 ]
The propagator lets one find the wave function of a system, given an initial wave function and a time interval. The new wave function is given by
ψ(x, t) = ∫_{−∞}^{∞} K(x, t; x′, t′) ψ(x′, t′) dx′.
If K ( x , t ; x ′, t ′) only depends on the difference x − x′ , this is a convolution of the initial wave function and the propagator.
For a time-translationally invariant system, the propagator only depends on the time difference t − t ′ , so it may be rewritten as K ( x , t ; x ′ , t ′ ) = K ( x , x ′ ; t − t ′ ) . {\displaystyle K(x,t;x',t')=K(x,x';t-t').}
The propagator of a one-dimensional free particle , obtainable from, e.g., the path integral , is then
K ( x , x ′ ; t ) = 1 2 π ∫ − ∞ + ∞ d k e i k ( x − x ′ ) e − i ℏ k 2 t 2 m = ( m 2 π i ℏ t ) 1 2 e − m ( x − x ′ ) 2 2 i ℏ t . {\displaystyle K(x,x';t)={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }dk\,e^{ik(x-x')}e^{-{\frac {i\hbar k^{2}t}{2m}}}=\left({\frac {m}{2\pi i\hbar t}}\right)^{\frac {1}{2}}e^{-{\frac {m(x-x')^{2}}{2i\hbar t}}}.}
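Setting ħ = m = 1, one can verify by finite differences that this kernel solves the free Schrödinger equation; the following sketch is purely illustrative, with an arbitrary sample point and step size:

```python
import numpy as np

# Check (hbar = m = 1) that the free-particle kernel
#   K(x, x'; t) = sqrt(1/(2 pi i t)) * exp(i (x - x')**2 / (2 t))
# satisfies i dK/dt = -(1/2) d^2K/dx^2 (the free Schrodinger equation).
def K(x, t, xp=0.0):
    return np.sqrt(1 / (2j * np.pi * t)) * np.exp(1j * (x - xp) ** 2 / (2 * t))

x, t, h = 0.7, 1.3, 1e-4
dK_dt = (K(x, t + h) - K(x, t - h)) / (2 * h)                     # central difference in t
d2K_dx2 = (K(x + h, t) - 2 * K(x, t) + K(x - h, t)) / h**2        # central difference in x
print(1j * dK_dt, -0.5 * d2K_dx2)   # the two sides agree up to discretisation error
```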
Similarly, the propagator of a one-dimensional quantum harmonic oscillator is the Mehler kernel , [ 5 ] [ 6 ]
K ( x , x ′ ; t ) = ( m ω 2 π i ℏ sin ω t ) 1 2 exp ( − m ω ( ( x 2 + x ′ 2 ) cos ω t − 2 x x ′ ) 2 i ℏ sin ω t ) . {\displaystyle K(x,x';t)=\left({\frac {m\omega }{2\pi i\hbar \sin \omega t}}\right)^{\frac {1}{2}}\exp \left(-{\frac {m\omega {\big (}(x^{2}+x'^{2})\cos \omega t-2xx'{\big )}}{2i\hbar \sin \omega t}}\right).}
The latter may be obtained from the previous free-particle result upon making use of van Kortryk's SU(1,1) Lie-group identity, [ 7 ] exp ( − i t ℏ ( 1 2 m p 2 + 1 2 m ω 2 x 2 ) ) = exp ( − i m ω 2 ℏ x 2 tan ω t 2 ) exp ( − i 2 m ω ℏ p 2 sin ( ω t ) ) exp ( − i m ω 2 ℏ x 2 tan ω t 2 ) , {\displaystyle {\begin{aligned}&\exp \left(-{\frac {it}{\hbar }}\left({\frac {1}{2m}}{\mathsf {p}}^{2}+{\frac {1}{2}}m\omega ^{2}{\mathsf {x}}^{2}\right)\right)\\&=\exp \left(-{\frac {im\omega }{2\hbar }}{\mathsf {x}}^{2}\tan {\frac {\omega t}{2}}\right)\exp \left(-{\frac {i}{2m\omega \hbar }}{\mathsf {p}}^{2}\sin(\omega t)\right)\exp \left(-{\frac {im\omega }{2\hbar }}{\mathsf {x}}^{2}\tan {\frac {\omega t}{2}}\right),\end{aligned}}} valid for operators x {\displaystyle {\mathsf {x}}} and p {\displaystyle {\mathsf {p}}} satisfying the Heisenberg relation [ x , p ] = i ℏ {\displaystyle [{\mathsf {x}},{\mathsf {p}}]=i\hbar } .
For the N -dimensional case, the propagator can be simply obtained by the product K ( x → , x → ′ ; t ) = ∏ q = 1 N K ( x q , x q ′ ; t ) . {\displaystyle K({\vec {x}},{\vec {x}}';t)=\prod _{q=1}^{N}K(x_{q},x_{q}';t).}
In relativistic quantum mechanics and quantum field theory the propagators are Lorentz-invariant . They give the amplitude for a particle to travel between two spacetime events.
In quantum field theory, the theory of a free (or non-interacting) scalar field is a useful and simple example which serves to illustrate the concepts needed for more complicated theories. It describes spin -zero particles. There are a number of possible propagators for free scalar field theory. We now describe the most common ones.
The position space propagators are Green's functions for the Klein–Gordon equation . This means that they are functions G ( x , y ) satisfying
(□_x + m²) G(x, y) = −δ(x − y),
where □_x is the d'Alembert operator acting on the coordinates of the first point x , m is the mass of the particle, and δ( x − y ) is the four-dimensional Dirac delta function.
(As typical in relativistic quantum field theory calculations, we use units where the speed of light c and the reduced Planck constant ħ are set to unity.)
We shall restrict attention to 4-dimensional Minkowski spacetime . We can perform a Fourier transform of the equation for the propagator, obtaining ( − p 2 + m 2 ) G ( p ) = − 1. {\displaystyle \left(-p^{2}+m^{2}\right)G(p)=-1.}
This equation can be inverted in the sense of distributions , noting that the equation xf ( x ) = 1 has the solution (see Sokhotski–Plemelj theorem ) f ( x ) = 1 x ± i ε = 1 x ∓ i π δ ( x ) , {\displaystyle f(x)={\frac {1}{x\pm i\varepsilon }}={\frac {1}{x}}\mp i\pi \delta (x),} with ε implying the limit to zero. Below, we discuss the right choice of the sign arising from causality requirements.
The solution is
G ( x , y ) = 1 ( 2 π ) 4 ∫ d 4 p e − i p ( x − y ) p 2 − m 2 ± i ε , {\displaystyle G(x,y)={\frac {1}{(2\pi )^{4}}}\int d^{4}p\,{\frac {e^{-ip(x-y)}}{p^{2}-m^{2}\pm i\varepsilon }},}
where p ( x − y ) := p 0 ( x 0 − y 0 ) − p → ⋅ ( x → − y → ) {\displaystyle p(x-y):=p_{0}(x^{0}-y^{0})-{\vec {p}}\cdot ({\vec {x}}-{\vec {y}})} is the 4-vector inner product.
The different choices for how to deform the integration contour in the above expression lead to various forms for the propagator. The choice of contour is usually phrased in terms of the p 0 {\displaystyle p_{0}} integral.
The integrand then has two poles at p 0 = ± p → 2 + m 2 , {\displaystyle p_{0}=\pm {\sqrt {{\vec {p}}^{2}+m^{2}}},} so different choices of how to avoid these lead to different propagators.
A contour going clockwise over both poles gives the causal retarded propagator . This is zero if x-y is spacelike or y is to the future of x , so it is zero if x ⁰< y ⁰ .
This choice of contour is equivalent to calculating the limit , G ret ( x , y ) = lim ε → 0 1 ( 2 π ) 4 ∫ d 4 p e − i p ( x − y ) ( p 0 + i ε ) 2 − p → 2 − m 2 = − Θ ( x 0 − y 0 ) 2 π δ ( τ x y 2 ) + Θ ( x 0 − y 0 ) Θ ( τ x y 2 ) m J 1 ( m τ x y ) 4 π τ x y . {\displaystyle G_{\text{ret}}(x,y)=\lim _{\varepsilon \to 0}{\frac {1}{(2\pi )^{4}}}\int d^{4}p\,{\frac {e^{-ip(x-y)}}{(p_{0}+i\varepsilon )^{2}-{\vec {p}}^{2}-m^{2}}}=-{\frac {\Theta (x^{0}-y^{0})}{2\pi }}\delta (\tau _{xy}^{2})+\Theta (x^{0}-y^{0})\Theta (\tau _{xy}^{2}){\frac {mJ_{1}(m\tau _{xy})}{4\pi \tau _{xy}}}.}
Here Θ ( x ) := { 1 x ≥ 0 0 x < 0 {\displaystyle \Theta (x):={\begin{cases}1&x\geq 0\\0&x<0\end{cases}}} is the Heaviside step function , τ x y := ( x 0 − y 0 ) 2 − ( x → − y → ) 2 {\displaystyle \tau _{xy}:={\sqrt {(x^{0}-y^{0})^{2}-({\vec {x}}-{\vec {y}})^{2}}}} is the proper time from x to y , and J 1 {\displaystyle J_{1}} is a Bessel function of the first kind . The propagator is non-zero only if y ≺ x {\displaystyle y\prec x} , i.e., y causally precedes x , which, for Minkowski spacetime, means x 0 ≥ y 0 {\displaystyle x^{0}\geq y^{0}} and τ x y 2 ≥ 0 {\displaystyle \tau _{xy}^{2}\geq 0} .
This expression can be related to the vacuum expectation value of the commutator of the free scalar field operator, G ret ( x , y ) = − i ⟨ 0 | [ Φ ( x ) , Φ ( y ) ] | 0 ⟩ Θ ( x 0 − y 0 ) , {\displaystyle G_{\text{ret}}(x,y)=-i\langle 0|\left[\Phi (x),\Phi (y)\right]|0\rangle \Theta (x^{0}-y^{0}),} where [ Φ ( x ) , Φ ( y ) ] := Φ ( x ) Φ ( y ) − Φ ( y ) Φ ( x ) . {\displaystyle \left[\Phi (x),\Phi (y)\right]:=\Phi (x)\Phi (y)-\Phi (y)\Phi (x).}
A contour going anti-clockwise under both poles gives the causal advanced propagator . This is zero if x − y is spacelike or if y is to the past of x , so it is zero if x 0 > y 0 .
This choice of contour is equivalent to calculating the limit [ 8 ] G adv ( x , y ) = lim ε → 0 1 ( 2 π ) 4 ∫ d 4 p e − i p ( x − y ) ( p 0 − i ε ) 2 − p → 2 − m 2 = − Θ ( y 0 − x 0 ) 2 π δ ( τ x y 2 ) + Θ ( y 0 − x 0 ) Θ ( τ x y 2 ) m J 1 ( m τ x y ) 4 π τ x y . {\displaystyle G_{\text{adv}}(x,y)=\lim _{\varepsilon \to 0}{\frac {1}{(2\pi )^{4}}}\int d^{4}p\,{\frac {e^{-ip(x-y)}}{(p_{0}-i\varepsilon )^{2}-{\vec {p}}^{2}-m^{2}}}=-{\frac {\Theta (y^{0}-x^{0})}{2\pi }}\delta (\tau _{xy}^{2})+\Theta (y^{0}-x^{0})\Theta (\tau _{xy}^{2}){\frac {mJ_{1}(m\tau _{xy})}{4\pi \tau _{xy}}}.}
This expression can also be expressed in terms of the vacuum expectation value of the commutator of the free scalar field.
In this case, G adv ( x , y ) = i ⟨ 0 | [ Φ ( x ) , Φ ( y ) ] | 0 ⟩ Θ ( y 0 − x 0 ) . {\displaystyle G_{\text{adv}}(x,y)=i\langle 0|\left[\Phi (x),\Phi (y)\right]|0\rangle \Theta (y^{0}-x^{0})~.}
A contour going under the left pole and over the right pole gives the Feynman propagator , introduced by Richard Feynman in 1948. [ 9 ]
This choice of contour is equivalent to calculating the limit [ 10 ] G F ( x , y ) = lim ε → 0 1 ( 2 π ) 4 ∫ d 4 p e − i p ( x − y ) p 2 − m 2 + i ε = { − 1 4 π δ ( τ x y 2 ) + m 8 π τ x y H 1 ( 1 ) ( m τ x y ) τ x y 2 ≥ 0 − i m 4 π 2 − τ x y 2 K 1 ( m − τ x y 2 ) τ x y 2 < 0. {\displaystyle G_{F}(x,y)=\lim _{\varepsilon \to 0}{\frac {1}{(2\pi )^{4}}}\int d^{4}p\,{\frac {e^{-ip(x-y)}}{p^{2}-m^{2}+i\varepsilon }}={\begin{cases}-{\frac {1}{4\pi }}\delta (\tau _{xy}^{2})+{\frac {m}{8\pi \tau _{xy}}}H_{1}^{(1)}(m\tau _{xy})&\tau _{xy}^{2}\geq 0\\-{\frac {im}{4\pi ^{2}{\sqrt {-\tau _{xy}^{2}}}}}K_{1}(m{\sqrt {-\tau _{xy}^{2}}})&\tau _{xy}^{2}<0.\end{cases}}}
Here, H 1 (1) is a Hankel function and K 1 is a modified Bessel function .
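The spacelike branch quoted above decays roughly like e^(−m√(−τ²)), which can be checked directly. A minimal sketch (assuming SciPy is available; the unit mass and the sample separations are arbitrary choices):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

# Evaluate the spacelike branch of the Feynman propagator quoted above,
#   |G_F| = m / (4*pi^2*r) * K_1(m*r)  with  r = sqrt(-tau^2),
# and compare with the asymptotic exponential falloff e^(-m*r).
m = 1.0
for r in (1.0, 2.0, 5.0, 10.0):
    G = m / (4 * np.pi**2 * r) * kv(1, m * r)
    print(f"r = {r:4.1f}:  |G_F| = {G:.3e}   e^(-m r) = {np.exp(-m * r):.3e}")
```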
This expression can be derived directly from the field theory as the vacuum expectation value of the time-ordered product of the free scalar field, that is, the product always taken such that the time ordering of the spacetime points is the same, G F ( x − y ) = − i ⟨ 0 | T ( Φ ( x ) Φ ( y ) ) | 0 ⟩ = − i ⟨ 0 | [ Θ ( x 0 − y 0 ) Φ ( x ) Φ ( y ) + Θ ( y 0 − x 0 ) Φ ( y ) Φ ( x ) ] | 0 ⟩ . {\displaystyle {\begin{aligned}G_{F}(x-y)&=-i\langle 0|T(\Phi (x)\Phi (y))|0\rangle \\[4pt]&=-i\left\langle 0|\left[\Theta (x^{0}-y^{0})\Phi (x)\Phi (y)+\Theta (y^{0}-x^{0})\Phi (y)\Phi (x)\right]|0\right\rangle .\end{aligned}}}
This expression is Lorentz invariant , as long as the field operators commute with one another when the points x and y are separated by a spacelike interval.
The usual derivation is to insert a complete set of single-particle momentum states between the fields with Lorentz covariant normalization, and then to show that the Θ functions providing the causal time ordering may be obtained by a contour integral along the energy axis, if the integrand is as above (hence the infinitesimal imaginary part), to move the pole off the real line.
The propagator may also be derived using the path integral formulation of quantum theory.
The Dirac propagator was introduced by Paul Dirac in 1938. [ 11 ] [ 12 ]
The Fourier transform of the position space propagators can be thought of as propagators in momentum space . These take a much simpler form than the position space propagators.
They are often written with an explicit ε term, understood as a reminder of which integration contour is appropriate (see above). This ε term also incorporates the boundary conditions and causality (see below).
For a 4-momentum p the causal and Feynman propagators in momentum space are: G ret ( p ) = 1 ( p 0 + i ε ) 2 − p → 2 − m 2 , G adv ( p ) = 1 ( p 0 − i ε ) 2 − p → 2 − m 2 , G F ( p ) = 1 p 2 − m 2 + i ε . {\displaystyle G_{\text{ret}}(p)={\frac {1}{(p_{0}+i\varepsilon )^{2}-{\vec {p}}^{2}-m^{2}}},\qquad G_{\text{adv}}(p)={\frac {1}{(p_{0}-i\varepsilon )^{2}-{\vec {p}}^{2}-m^{2}}},\qquad G_{F}(p)={\frac {1}{p^{2}-m^{2}+i\varepsilon }}.}
For purposes of Feynman diagram calculations, it is usually convenient to write these with an additional overall factor of i (conventions vary).
The Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is nonzero outside of the light cone , though it falls off rapidly for spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle travelling faster than light. It is not immediately obvious how this can be reconciled with causality: can we use faster-than-light virtual particles to send faster-than-light messages?
The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is commutators that determine which operators can affect one another.
So what does the spacelike part of the propagator represent? In QFT the vacuum is an active participant, and particle numbers and field values are related by an uncertainty principle ; field values are uncertain even for particle number zero . There is a nonzero probability amplitude to find a significant fluctuation in the vacuum value of the field Φ( x ) if one measures it locally (or, to be more precise, if one measures an operator obtained by averaging the field over a small region). Furthermore, the dynamics of the fields tend to favor spatially correlated fluctuations to some extent. The nonzero time-ordered product for spacelike-separated fields then just measures the amplitude for a nonlocal correlation in these vacuum fluctuations, analogous to an EPR correlation . Indeed, the propagator is often called a two-point correlation function for the free field .
Since, by the postulates of quantum field theory, all observable operators commute with each other at spacelike separation, messages can no more be sent through these correlations than they can through any other EPR correlations; the correlations are in random variables.
Regarding virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle- antiparticle pair that eventually disappears into the vacuum, or for detecting a virtual pair emerging from the vacuum. In Feynman 's language, such creation and annihilation processes are equivalent to a virtual particle wandering backward and forward through time, which can take it outside of the light cone. However, no signaling back in time is allowed.
This can be made clearer by writing the propagator in the following form for a massless particle: G F ε ( x , y ) = ε ( x − y ) 2 + i ε 2 . {\displaystyle G_{F}^{\varepsilon }(x,y)={\frac {\varepsilon }{(x-y)^{2}+i\varepsilon ^{2}}}.}
This is the usual definition but normalised by a factor of ε {\displaystyle \varepsilon } . Then the rule is that one only takes the limit ε → 0 {\displaystyle \varepsilon \to 0} at the end of a calculation.
One sees that G F ε ( x , y ) = 1 ε if ( x − y ) 2 = 0 , {\displaystyle G_{F}^{\varepsilon }(x,y)={\frac {1}{\varepsilon }}\quad {\text{if}}~~~(x-y)^{2}=0,} and lim ε → 0 G F ε ( x , y ) = 0 if ( x − y ) 2 ≠ 0. {\displaystyle \lim _{\varepsilon \to 0}G_{F}^{\varepsilon }(x,y)=0\quad {\text{if}}~~~(x-y)^{2}\neq 0.} Hence a single massless particle always stays on the light cone. One can also show that the total probability for a photon at any time must be normalised by the reciprocal of the following factor: lim ε → 0 ∫ | G F ε ( 0 , x ) | 2 d x 3 = lim ε → 0 ∫ ε 2 ( x 2 − t 2 ) 2 + ε 4 d x 3 = 2 π 2 | t | . {\displaystyle \lim _{\varepsilon \to 0}\int |G_{F}^{\varepsilon }(0,x)|^{2}\,dx^{3}=\lim _{\varepsilon \to 0}\int {\frac {\varepsilon ^{2}}{(\mathbf {x} ^{2}-t^{2})^{2}+\varepsilon ^{4}}}\,dx^{3}=2\pi ^{2}|t|.} The parts outside the light cone vanish in the limit and become important only within Feynman diagrams.
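The normalization integral above can be checked numerically. A small sketch (assuming SciPy is available; the ε values are illustrative), using spherical symmetry to reduce d³x to 4πr² dr:

```python
import numpy as np
from scipy.integrate import quad

# Check that  ∫ eps^2 / ((x^2 - t^2)^2 + eps^4) d^3x  approaches 2*pi^2*|t|
# as eps -> 0.  By spherical symmetry d^3x = 4*pi*r^2 dr, and the integrand
# peaks sharply at r = |t|, so the integration is split at the peak.
t = 1.0
for eps in (0.3, 0.1, 0.03):
    def f(r, e=eps):
        return 4 * np.pi * r**2 * e**2 / ((r**2 - t**2) ** 2 + e**4)
    near, _ = quad(f, 0.0, 2 * t, points=[t], limit=500)
    far, _ = quad(f, 2 * t, np.inf, limit=200)
    print(f"eps = {eps}: {near + far:.4f}   (expected {2 * np.pi**2 * t:.4f})")
```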
The most common use of the propagator is in calculating probability amplitudes for particle interactions using Feynman diagrams . These calculations are usually carried out in momentum space. In general, the amplitude gets a factor of the propagator for every internal line , that is, every line that does not represent an incoming or outgoing particle in the initial or final state. It will also get a factor proportional to, and similar in form to, an interaction term in the theory's Lagrangian for every internal vertex where lines meet. These prescriptions are known as Feynman rules .
Internal lines correspond to virtual particles. Since the propagator does not vanish for combinations of energy and momentum disallowed by the classical equations of motion, we say that the virtual particles are allowed to be off shell . In fact, since the propagator is obtained by inverting the wave equation, in general, it will have singularities on shell.
The energy carried by the particle in the propagator can even be negative . This can be interpreted simply as the case in which, instead of a particle going one way, its antiparticle is going the other way, and therefore carrying an opposing flow of positive energy. The propagator encompasses both possibilities. It does mean that one has to be careful about minus signs for the case of fermions , whose propagators are not even functions in the energy and momentum (see below).
Virtual particles conserve energy and momentum. However, since they can be off shell, wherever the diagram contains a closed loop , the energies and momenta of the virtual particles participating in the loop will be partly unconstrained, since a change in a quantity for one particle in the loop can be balanced by an equal and opposite change in another. Therefore, every loop in a Feynman diagram requires an integral over a continuum of possible energies and momenta. In general, these integrals of products of propagators can diverge, a situation that must be handled by the process of renormalization .
If the particle possesses spin then its propagator is in general somewhat more complicated, as it will involve the particle's spin or polarization indices. The differential equation satisfied by the propagator for a spin 1 ⁄ 2 particle is given by [ 13 ] ( i ∂̸ ′ − m ) S F ( x ′ , x ) = I 4 δ 4 ( x ′ − x ) , {\displaystyle (i\not \partial '-m)S_{F}(x',x)=I_{4}\delta ^{4}(x'-x),}
where I 4 is the unit matrix in four dimensions, and employing the Feynman slash notation . This is the Dirac equation for a delta function source in spacetime. Using the momentum representation, S F ( x ′ , x ) = ∫ d 4 p ( 2 π ) 4 exp ⁡ [ − i p ⋅ ( x ′ − x ) ] S ~ F ( p ) , {\displaystyle S_{F}(x',x)=\int {\frac {d^{4}p}{(2\pi )^{4}}}\exp {\left[-ip\cdot (x'-x)\right]}{\tilde {S}}_{F}(p),} the equation becomes ( p̸ − m ) S ~ F ( p ) = I 4 , {\displaystyle (\not p-m){\tilde {S}}_{F}(p)=I_{4},}
where on the right-hand side an integral representation of the four-dimensional delta function is used. Thus S ~ F ( p ) = ( p̸ − m ) − 1 . {\displaystyle {\tilde {S}}_{F}(p)=(\not p-m)^{-1}.}
By multiplying from the left with ( p̸ + m ) {\displaystyle (\not p+m)} (dropping unit matrices from the notation) and using properties of the gamma matrices , p̸ p̸ = 1 2 ( p̸ p̸ + p̸ p̸ ) = 1 2 ( γ μ p μ γ ν p ν + γ ν p ν γ μ p μ ) = 1 2 ( γ μ γ ν + γ ν γ μ ) p μ p ν = g μ ν p μ p ν = p ν p ν = p 2 , {\displaystyle {\begin{aligned}\not p\not p&={\tfrac {1}{2}}(\not p\not p+\not p\not p)\\[6pt]&={\tfrac {1}{2}}(\gamma _{\mu }p^{\mu }\gamma _{\nu }p^{\nu }+\gamma _{\nu }p^{\nu }\gamma _{\mu }p^{\mu })\\[6pt]&={\tfrac {1}{2}}(\gamma _{\mu }\gamma _{\nu }+\gamma _{\nu }\gamma _{\mu })p^{\mu }p^{\nu }\\[6pt]&=g_{\mu \nu }p^{\mu }p^{\nu }=p_{\nu }p^{\nu }=p^{2},\end{aligned}}}
the momentum-space propagator used in Feynman diagrams for a Dirac field representing the electron in quantum electrodynamics is found to have form S ~ F ( p ) = γ μ p μ + m p 2 − m 2 + i ε . {\displaystyle {\tilde {S}}_{F}(p)={\frac {\gamma ^{\mu }p_{\mu }+m}{p^{2}-m^{2}+i\varepsilon }}.}
The iε downstairs is a prescription for how to handle the poles in the complex p 0 -plane. It automatically yields the Feynman contour of integration by shifting the poles appropriately. It is sometimes written S ~ F ( p ) = 1 p̸ − m + i ε {\displaystyle {\tilde {S}}_{F}(p)={\frac {1}{\not p-m+i\varepsilon }}}
for short. It should be remembered that this expression is just shorthand notation for ( γ μ p μ − m ) −1 . "One over matrix" is otherwise nonsensical. In position space one has S F ( x − y ) = ∫ d 4 p ( 2 π ) 4 e − i p ⋅ ( x − y ) γ μ p μ + m p 2 − m 2 + i ε = ( γ μ ( x − y ) μ | x − y | 5 + m | x − y | 3 ) J 1 ( m | x − y | ) . {\displaystyle S_{F}(x-y)=\int {\frac {d^{4}p}{(2\pi )^{4}}}\,e^{-ip\cdot (x-y)}{\frac {\gamma ^{\mu }p_{\mu }+m}{p^{2}-m^{2}+i\varepsilon }}=\left({\frac {\gamma ^{\mu }(x-y)_{\mu }}{|x-y|^{5}}}+{\frac {m}{|x-y|^{3}}}\right)J_{1}(m|x-y|).}
This is related to the Feynman propagator by S F ( x − y ) = ( i ∂̸ + m ) G F ( x − y ) , {\displaystyle S_{F}(x-y)=(i\not \partial +m)G_{F}(x-y),}
where ∂ ̸ := γ μ ∂ μ {\displaystyle \not \partial :=\gamma ^{\mu }\partial _{\mu }} .
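The slash identity p̸ p̸ = p² used in the derivation above can be verified with an explicit matrix representation. A minimal sketch (assuming NumPy is available; the Dirac representation and the sample momentum are arbitrary choices):

```python
import numpy as np

# Verify  pslash @ pslash == (p.p) * I_4  in the Dirac representation
# of the gamma matrices, metric signature (+, -, -, -).
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))

p = np.array([1.3, 0.2, -0.7, 0.5])  # contravariant components (p0, px, py, pz)
pslash = p[0] * g0 - p[1] * g1 - p[2] * g2 - p[3] * g3   # gamma^mu p_mu
p2 = p[0] ** 2 - p[1] ** 2 - p[2] ** 2 - p[3] ** 2        # invariant p.p

print(np.allclose(pslash @ pslash, p2 * np.eye(4)))        # True
```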
The propagator for a gauge boson in a gauge theory depends on the choice of convention to fix the gauge. For the gauge used by Feynman and Stueckelberg , the propagator for a photon is − i g μ ν p 2 + i ε . {\displaystyle {\frac {-ig^{\mu \nu }}{p^{2}+i\varepsilon }}.}
The general form with gauge parameter λ , up to overall sign and the factor of i {\displaystyle i} , reads g μ ν − ( 1 − 1 λ ) p μ p ν p 2 p 2 + i ε . {\displaystyle {\frac {g^{\mu \nu }-\left(1-{\frac {1}{\lambda }}\right){\frac {p^{\mu }p^{\nu }}{p^{2}}}}{p^{2}+i\varepsilon }}.}
The propagator for a massive vector field can be derived from the Stueckelberg Lagrangian. The general form with gauge parameter λ , up to overall sign and the factor of i {\displaystyle i} , reads g μ ν − ( 1 − λ ) p μ p ν m 2 − λ p 2 p 2 − m 2 + i ε . {\displaystyle {\frac {g^{\mu \nu }-(1-\lambda ){\frac {p^{\mu }p^{\nu }}{m^{2}-\lambda p^{2}}}}{p^{2}-m^{2}+i\varepsilon }}.}
With these general forms one obtains the propagators in unitary gauge for λ = 0 , the propagator in Feynman or 't Hooft gauge for λ = 1 and in Landau or Lorenz gauge for λ = ∞ . There are also other notations where the gauge parameter is the inverse of λ , usually denoted ξ (see R ξ gauges ). The name of the propagator, however, refers to its final form and not necessarily to the value of the gauge parameter.
Unitary gauge ( λ = 0 ): g μ ν − p μ p ν m 2 p 2 − m 2 + i ε {\displaystyle {\frac {g^{\mu \nu }-{\frac {p^{\mu }p^{\nu }}{m^{2}}}}{p^{2}-m^{2}+i\varepsilon }}}
Feynman ('t Hooft) gauge ( λ = 1 ): g μ ν p 2 − m 2 + i ε {\displaystyle {\frac {g^{\mu \nu }}{p^{2}-m^{2}+i\varepsilon }}}
Landau (Lorenz) gauge ( λ = ∞ ): g μ ν − p μ p ν p 2 p 2 − m 2 + i ε {\displaystyle {\frac {g^{\mu \nu }-{\frac {p^{\mu }p^{\nu }}{p^{2}}}}{p^{2}-m^{2}+i\varepsilon }}}
The graviton propagator for Minkowski space in general relativity is [ 14 ] G α β μ ν = P α β μ ν 2 k 2 − P s 0 α β μ ν 2 k 2 = g α μ g β ν + g β μ g α ν − 2 D − 2 g μ ν g α β k 2 , {\displaystyle G_{\alpha \beta ~\mu \nu }={\frac {{\mathcal {P}}_{\alpha \beta ~\mu \nu }^{2}}{k^{2}}}-{\frac {{\mathcal {P}}_{s}^{0}{}_{\alpha \beta ~\mu \nu }}{2k^{2}}}={\frac {g_{\alpha \mu }g_{\beta \nu }+g_{\beta \mu }g_{\alpha \nu }-{\frac {2}{D-2}}g_{\mu \nu }g_{\alpha \beta }}{k^{2}}},} where D {\displaystyle D} is the number of spacetime dimensions, P 2 {\displaystyle {\mathcal {P}}^{2}} is the transverse and traceless spin-2 projection operator and P s 0 {\displaystyle {\mathcal {P}}_{s}^{0}} is a spin-0 scalar multiplet .
The graviton propagator for (Anti) de Sitter space is G = P 2 2 H 2 − ◻ + P s 0 2 ( ◻ + 4 H 2 ) , {\displaystyle G={\frac {{\mathcal {P}}^{2}}{2H^{2}-\Box }}+{\frac {{\mathcal {P}}_{s}^{0}}{2(\Box +4H^{2})}},} where H {\displaystyle H} is the Hubble constant . Note that upon taking the limit H → 0 {\displaystyle H\to 0} and ◻ → − k 2 {\displaystyle \Box \to -k^{2}} , the AdS propagator reduces to the Minkowski propagator. [ 15 ]
The scalar propagators are Green's functions for the Klein–Gordon equation. There are related singular functions which are important in quantum field theory . These functions are most simply defined in terms of the vacuum expectation value of products of field operators.
The commutator of two scalar field operators defines the Pauli–Jordan function Δ ( x − y ) {\displaystyle \Delta (x-y)} by [ 16 ] [ 17 ] ⟨ 0 | [ Φ ( x ) , Φ ( y ) ] | 0 ⟩ = i Δ ( x − y ) , {\displaystyle \langle 0|\left[\Phi (x),\Phi (y)\right]|0\rangle =i\,\Delta (x-y),}
with Δ ( x − y ) = − i ( 2 π ) 3 ∫ d 4 p ε ( p 0 ) δ ( p 2 − m 2 ) e − i p ( x − y ) , {\displaystyle \Delta (x-y)={\frac {-i}{(2\pi )^{3}}}\int d^{4}p\,\varepsilon (p_{0})\,\delta (p^{2}-m^{2})\,e^{-ip(x-y)},} where ε ( p 0 ) {\displaystyle \varepsilon (p_{0})} is the sign of p 0 {\displaystyle p_{0}} .
This satisfies the homogeneous Klein–Gordon equation, ( ◻ x + m 2 ) Δ ( x − y ) = 0 , {\displaystyle \left(\square _{x}+m^{2}\right)\Delta (x-y)=0,}
and is zero if ( x − y ) 2 < 0 {\displaystyle (x-y)^{2}<0} .
We can define the positive and negative frequency parts of Δ ( x − y ) {\displaystyle \Delta (x-y)} , sometimes called cut propagators, in a relativistically invariant way.
This allows us to define the positive frequency part: Δ + ( x − y ) = − i ( 2 π ) 3 ∫ d 4 p Θ ( p 0 ) δ ( p 2 − m 2 ) e − i p ( x − y ) , {\displaystyle \Delta _{+}(x-y)={\frac {-i}{(2\pi )^{3}}}\int d^{4}p\,\Theta (p_{0})\,\delta (p^{2}-m^{2})\,e^{-ip(x-y)},}
and the negative frequency part: Δ − ( x − y ) = i ( 2 π ) 3 ∫ d 4 p Θ ( − p 0 ) δ ( p 2 − m 2 ) e − i p ( x − y ) . {\displaystyle \Delta _{-}(x-y)={\frac {i}{(2\pi )^{3}}}\int d^{4}p\,\Theta (-p_{0})\,\delta (p^{2}-m^{2})\,e^{-ip(x-y)}.}
These satisfy [ 17 ] Δ ( x − y ) = Δ + ( x − y ) + Δ − ( x − y ) {\displaystyle \Delta (x-y)=\Delta _{+}(x-y)+\Delta _{-}(x-y)}
and Δ + ( x − y ) = − Δ − ( y − x ) . {\displaystyle \Delta _{+}(x-y)=-\Delta _{-}(y-x).}
The anti-commutator of two scalar field operators defines the function Δ 1 ( x − y ) {\displaystyle \Delta _{1}(x-y)} by ⟨ 0 | { Φ ( x ) , Φ ( y ) } | 0 ⟩ = Δ 1 ( x − y ) , {\displaystyle \langle 0|\left\{\Phi (x),\Phi (y)\right\}|0\rangle =\Delta _{1}(x-y),}
with Δ 1 ( x − y ) = 1 ( 2 π ) 3 ∫ d 4 p δ ( p 2 − m 2 ) e − i p ( x − y ) . {\displaystyle \Delta _{1}(x-y)={\frac {1}{(2\pi )^{3}}}\int d^{4}p\,\delta (p^{2}-m^{2})\,e^{-ip(x-y)}.}
This satisfies Δ 1 ( x − y ) = Δ 1 ( y − x ) . {\displaystyle \,\Delta _{1}(x-y)=\Delta _{1}(y-x).}
The retarded, advanced and Feynman propagators defined above are all Green's functions for the Klein–Gordon equation.
They are related to the singular functions by [ 17 ] G ret ( x , y ) = Θ ( x 0 − y 0 ) Δ ( x − y ) , G adv ( x , y ) = − Θ ( y 0 − x 0 ) Δ ( x − y ) , G F ( x , y ) = 1 2 [ ε ( x 0 − y 0 ) Δ ( x − y ) − i Δ 1 ( x − y ) ] , {\displaystyle G_{\text{ret}}(x,y)=\Theta (x^{0}-y^{0})\,\Delta (x-y),\qquad G_{\text{adv}}(x,y)=-\Theta (y^{0}-x^{0})\,\Delta (x-y),\qquad G_{F}(x,y)={\tfrac {1}{2}}\left[\varepsilon (x^{0}-y^{0})\,\Delta (x-y)-i\,\Delta _{1}(x-y)\right],}
where ε ( x 0 − y 0 ) {\displaystyle \varepsilon (x^{0}-y^{0})} is the sign of x 0 − y 0 {\displaystyle x^{0}-y^{0}} . | https://en.wikipedia.org/wiki/Propagator |
Propane ( / ˈ p r oʊ p eɪ n / ) is a three- carbon alkane with the molecular formula C 3 H 8 . It is a gas at standard temperature and pressure , but compressible to a transportable liquid. A by-product of natural gas processing and petroleum refining, it is often a constituent of liquefied petroleum gas (LPG), which is commonly used as a fuel in domestic and industrial applications and in low-emissions public transportation; other constituents of LPG may include propylene , butane , butylene , butadiene , and isobutylene . Discovered in 1857 by the French chemist Marcellin Berthelot , it became commercially available in the US by 1911. Propane has lower volumetric energy density than gasoline or coal, but has higher gravimetric energy density than them and burns more cleanly. [ 6 ]
Propane gas has become a popular choice for barbecues and portable stoves because its low −42 °C boiling point makes it vaporise inside pressurised liquid containers (it exists in two phases, vapor above liquid). It retains its ability to vaporise even in cold weather, making it better-suited for outdoor use in cold climates than alternatives with higher boiling points like butane. [ 7 ] LPG powers buses, forklifts, automobiles, outboard boat motors, and ice resurfacing machines , and is used for heat and cooking in recreational vehicles and campers . Propane (R290) is also becoming popular as a replacement refrigerant for heat pumps, as it offers greater efficiency than current refrigerants such as R410A / R32 , higher-temperature heat output, and less damage to the atmosphere from escaped gas, at the expense of high flammability. [ 8 ]
Propane was first synthesized by the French chemist Marcellin Berthelot in 1857 during his researches on hydrogenation . Berthelot made propane by heating propylene dibromide (C 3 H 6 Br 2 ) with potassium iodide and water. [ 9 ] [ 10 ] : p. 9, §1.1 [ 11 ] Propane was found dissolved in Pennsylvanian light crude oil by Edmund Ronalds in 1864. [ 12 ] [ 13 ] Walter O. Snelling of the U.S. Bureau of Mines highlighted it as a volatile component in gasoline in 1910, which marked the "birth of the propane industry" in the United States. [ 14 ] The volatility of these lighter hydrocarbons caused them to be known as "wild" because of the high vapor pressures of unrefined gasoline. On March 31, 1912, The New York Times reported on Snelling's work with liquefied gas, saying "a steel bottle will carry enough gas to light an ordinary home for three weeks". [ 15 ]
It was during this time that Snelling — in cooperation with Frank P. Peterson, Chester Kerr, and Arthur Kerr — developed ways to liquefy the LP gases during the refining of gasoline. [ 14 ] Together, they established American Gasol Co., the first commercial marketer of propane. Snelling had produced relatively pure propane by 1911, and on March 25, 1913, his method of processing and producing LP gases was issued patent #1,056,845. [ 14 ] A separate method of producing LP gas through compression was developed by Frank Peterson and its patent was granted on July 2, 1912. [ 16 ]
The 1920s saw increased production of LP gases, with the first year of recorded production totaling 223,000 US gallons (840 m 3 ) in 1922. In 1927, annual marketed LP gas production reached 1 million US gallons (3,800 m 3 ), and by 1935, the annual sales of LP gas had reached 56 million US gallons (210,000 m 3 ). Major industry developments in the 1930s included the introduction of railroad tank car transport, gas odorization, and the construction of local bottle-filling plants. The year 1945 marked the first year that annual LP gas sales reached a billion gallons. By 1947, 62% of all U.S. homes had been equipped with either natural gas or propane for cooking. [ 14 ]
In 1950, 1,000 propane-fueled buses were ordered by the Chicago Transit Authority , and by 1958, sales in the U.S. had reached 7 billion US gallons (26,000,000 m 3 ) annually. In 2004, it was reported to be a growing $8-billion to $10-billion industry with over 15 billion US gallons (57,000,000 m 3 ) of propane being used annually in the U.S. [ 17 ]
During the COVID-19 pandemic , propane shortages were reported in the United States due to increased demand. [ 18 ] [ 19 ] [ 20 ]
The " prop- " root found in "propane" and names of other compounds with three-carbon chains was derived from " propionic acid ", [ 21 ] which in turn was named after the Greek words protos (meaning first) and pion (fat), as it was the "first" member of the series of fatty acids . [ 22 ]
Propane is a colorless, odorless gas. Ethyl mercaptan is added as an odorizer as a safety precaution; [ 23 ] its smell is commonly described as that of rotten eggs. [ 24 ] At normal pressure it liquefies below its boiling point at −42 °C and solidifies below its melting point at −187.7 °C. Propane crystallizes in the space group P2 1 /n. [ 25 ] [ 26 ] The low space-filling of 58.5% (at 90 K), due to the poor stacking properties of the molecule, is the reason for the particularly low melting point.
Propane undergoes combustion reactions in a similar fashion to other alkanes . In the presence of excess oxygen, propane burns to form water and carbon dioxide . C 3 H 8 + 5 O 2 ⟶ 3 CO 2 + 4 H 2 O + heat {\displaystyle {\ce {C3H8 + 5 O2 -> 3 CO2 + 4 H2O + heat}}} When insufficient oxygen is present for complete combustion, carbon monoxide , soot ( carbon ), or both, are formed as well: C 3 H 8 + 9 2 O 2 ⟶ 2 CO 2 + CO + 4 H 2 O + heat {\displaystyle {\ce {C3H8 + 9/2 O2 -> 2 CO2 + CO + 4 H2O + heat}}} C 3 H 8 + 2 O 2 ⟶ 3 C + 4 H 2 O + heat {\displaystyle {\ce {C3H8 + 2 O2 -> 3 C + 4 H2O + heat}}} The complete combustion of propane produces about 50 MJ/kg of heat. [ 27 ]
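The stoichiometry of the complete-combustion equation above fixes how much oxygen, and approximately how much air, is needed per kilogram of propane. A small sketch (atomic masses are standard values; the ~23% oxygen-by-mass composition of air is an added assumption, not from the text):

```python
# Oxygen and air demand per kg of propane from  C3H8 + 5 O2 -> 3 CO2 + 4 H2O.
M_C, M_H, M_O = 12.011, 1.008, 15.999          # g/mol
m_propane = 3 * M_C + 8 * M_H                  # ~44.10 g/mol
m_oxygen = 5 * (2 * M_O)                       # 5 mol O2 per mol propane

kg_O2_per_kg = m_oxygen / m_propane            # ~3.63 kg O2 per kg propane
kg_air_per_kg = kg_O2_per_kg / 0.23            # assuming ~23% O2 by mass in air
print(f"{kg_O2_per_kg:.2f} kg O2, ~{kg_air_per_kg:.1f} kg air per kg propane")
```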
Propane combustion is much cleaner than that of coal or unleaded gasoline. Propane's per-BTU production of CO 2 is almost as low as that of natural gas. [ 28 ] Propane burns hotter than home heating oil or diesel fuel because of the very high hydrogen content. The presence of C–C bonds , plus the multiple bonds of propylene and butylene , produce organic exhausts besides carbon dioxide and water vapor during typical combustion. These bonds also cause propane to burn with a visible flame.
The enthalpy of combustion of propane gas where all products return to standard state, for example where water returns to its liquid state at standard temperature (known as higher heating value ), is (2,219.2 ± 0.5) kJ/mol, or (50.33 ± 0.01) MJ/kg. [ 27 ]
The enthalpy of combustion of propane gas where products do not return to standard state, for example where the hot gases including water vapor exit a chimney, (known as lower heating value ) is −2043.455 kJ/mol. [ 29 ] The lower heat value is the amount of heat available from burning the substance where the combustion products are vented to the atmosphere; for example, the heat from a fireplace when the flue is open.
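The per-mole and per-kilogram heating values quoted above are consistent, as a quick conversion using the molar mass of propane shows (a small sketch; atomic masses are standard values):

```python
# Convert the molar heating values quoted above to a mass basis.
# kJ/mol divided by g/mol gives kJ/g, which equals MJ/kg.
M_PROPANE = 3 * 12.011 + 8 * 1.008    # g/mol, ~44.10

hhv = 2219.2 / M_PROPANE              # higher heating value, MJ/kg
lhv = 2043.455 / M_PROPANE            # lower heating value, MJ/kg
print(f"HHV ~ {hhv:.2f} MJ/kg (quoted: 50.33), LHV ~ {lhv:.2f} MJ/kg")
```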
The density of propane gas at 25 °C (77 °F) is 1.808 kg/m 3 , about 1.5× the density of air at the same temperature. The density of liquid propane at 25 °C (77 °F) is 0.493 g/cm 3 , which is equivalent to 4.11 pounds per U.S. liquid gallon or 493 g/L. Propane expands at 1.5% per 10 °F. Thus, liquid propane has a density of approximately 4.2 pounds per gallon (504 g/L) at 60 °F (15.6 °C). [ 30 ]
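The density figures quoted above are mutually consistent, as the following unit-conversion sketch shows (conversion constants are standard values; the 1.5%-per-10 °F expansion figure is taken from the text):

```python
# Check the liquid-propane density conversions quoted above.
G_PER_LB = 453.592
CM3_PER_US_GAL = 3785.41

rho = 0.493                                        # g/cm^3 at 25 C (77 F)
lb_per_gal = rho * CM3_PER_US_GAL / G_PER_LB
print(f"{lb_per_gal:.2f} lb/US gal at 77 F")       # ~4.11

# Cooling from 77 F to 60 F at ~1.5% volume change per 10 F:
lb_per_gal_60F = lb_per_gal * (1 + 0.015 * (77 - 60) / 10)
print(f"{lb_per_gal_60F:.2f} lb/US gal at 60 F")   # ~4.2
```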
As the density of propane changes with temperature, this must be taken into account whenever the application involves safety or custody-transfer operations. [ 31 ]
Propane is a popular choice for barbecues and portable stoves because the low boiling point of −42 °C (−44 °F) makes it vaporize as soon as it is released from its pressurized container. Therefore, no carburetor or other vaporizing device is required; a simple metering nozzle suffices.
Blends of pure, dry propane (R-290) and isobutane (R-600a), sometimes marketed as "isopropane", can be used as the circulating refrigerant in suitably constructed compressor-based refrigeration. [ 32 ] Compared to fluorocarbons, propane has a negligible ozone depletion potential and very low global warming potential (having a GWP value of 0.072, [ 33 ] 13.9 times lower than the GWP of carbon dioxide) and can serve as a functional replacement for R-12 , R-22 , R-134a , and other chlorofluorocarbon or hydrofluorocarbon refrigerants in conventional stationary refrigeration and air conditioning systems. [ 34 ] Because its global warming effect is far less than current refrigerants, propane was chosen as one of five replacement refrigerants approved by the EPA in 2015, for use in systems specially designed to handle its flammability. [ 35 ]
Such substitution is widely prohibited or discouraged in motor vehicle air conditioning systems, on the grounds that using flammable hydrocarbons in systems originally designed to carry non-flammable refrigerant presents a significant risk of fire or explosion. [ 36 ]
Vendors and advocates of hydrocarbon refrigerants argue against such bans on the grounds that there have been very few such incidents relative to the number of vehicle air conditioning systems filled with hydrocarbons. [ 37 ] [ 38 ]
Propane is also instrumental in providing off-the-grid refrigeration, as the energy source for a gas absorption refrigerator and is commonly used for camping and recreational vehicles.
It has also been proposed to use propane as a refrigerant in heat pumps . [ 39 ]
Since it can be transported easily, it is a popular fuel for home heat and backup electrical generation in sparsely populated areas that do not have natural gas pipelines. In June 2023, Stanford researchers found propane combustion emitted detectable and repeatable levels of benzene that in some homes raised indoor benzene concentrations above well-established health benchmarks. The research also shows that gas and propane fuels appear to be the dominant source of benzene produced by cooking. [ 40 ]
In rural areas of North America, as well as northern Australia, propane is used to heat livestock facilities, in grain dryers, and other heat-producing appliances. When used for heating or grain drying it is usually stored in a large, permanently-placed cylinder which is refilled by a propane-delivery truck. As of 2014 [update] , 6.2 million American households use propane as their primary heating fuel. [ 41 ]
In North America, local delivery trucks (known as bobtails), with an average cylinder size of 3,000 US gallons (11 m 3 ), fill up large cylinders that are permanently installed on the property, or other service trucks exchange empty cylinders of propane with filled cylinders. Large tractor-trailer trucks, with an average cylinder size of 10,000 US gallons (38 m 3 ), transport propane from the pipeline or refinery to the local bulk plant. The bobtail tank truck is not unique to the North American market, though the practice is not as common elsewhere, and the vehicles are generally called tankers . In many countries, propane is delivered to end-users via small or medium-sized individual cylinders, while empty cylinders are removed for refilling at a central location.
There are also community propane systems, with a central cylinder feeding individual homes. [ 42 ]
In the U.S., over 190,000 on-road vehicles use propane, and over 450,000 forklifts use it for power. It is the third most popular vehicle fuel in the world, [ 43 ] behind gasoline and diesel fuel . In other parts of the world, propane used in vehicles is known as autogas. In 2007, approximately 13 million vehicles worldwide used autogas. [ 43 ]
The advantage of propane in cars is its liquid state at a moderate pressure. This allows fast refill times, affordable fuel cylinder construction, and a price typically just over half that of gasoline. Meanwhile, it is noticeably cleaner (both in handling and in combustion), results in less engine wear from carbon deposits without diluting engine oil (often extending oil-change intervals), and until recently [ when? ] was relatively low-cost in North America. The octane rating of propane is relatively high at 110. In the United States the propane fueling infrastructure is the most developed of all alternative vehicle fuels. Many converted vehicles have provisions for topping off from "barbecue bottles". Purpose-built vehicles are often in commercially owned fleets, and have private fueling facilities. A further saving for propane fuel vehicle operators, especially in fleets, is that theft is much more difficult than with gasoline or diesel fuels.
Propane is also used as fuel for small engines , especially those used indoors or in areas with insufficient fresh air and ventilation to carry away the more toxic exhaust of an engine running on gasoline or diesel fuel. More recently, [ when? ] there have been lawn-care products like string trimmers , lawn mowers and leaf blowers intended for outdoor use, but fueled by propane in order to reduce air pollution . [ 44 ]
Many heavy-duty highway trucks use propane as a boost, where it is added through the turbocharger, to mix with diesel fuel droplets. The very high hydrogen content of propane helps the diesel fuel to burn hotter and therefore more completely. This provides more torque, more horsepower, and a cleaner exhaust for the trucks. It is normal for a 7-liter medium-duty diesel truck engine to increase fuel economy by 20 to 33 percent when a propane boost system is used. Operating cost is lower because propane is much cheaper than diesel fuel. The longer distance a cross-country trucker can travel on a full load of combined diesel and propane fuel means they can comply with federal hours-of-work rules with two fewer fuel stops in a cross-country trip. Truckers, tractor pulling competitions, and farmers have been using a propane boost system for over forty years [ when? ] in North America.
The North American standard grade of automotive-use propane is rated HD-5 (Heavy Duty 5%). HD-5 grade has a maximum of 5 percent butane, but propane sold in Europe has a maximum allowable amount of butane of 30 percent, meaning it is not the same fuel as HD-5. The LPG used as auto fuel and cooking gas in Asia and Australia also has very high butane content.
Propylene (also called propene) can be a contaminant of commercial propane. Propane containing too much propene is not suitable for most vehicle-fuel applications. HD-5 is a specification that establishes a maximum concentration of 5% propene in propane. Propane and other LP gas specifications are established in ASTM D-1835. [ 46 ] All propane fuels include an odorant , almost always ethanethiol , so that the gas can be smelled easily in case of a leak. Propane as HD-5 was originally intended for use as vehicle fuel; HD-5 is currently used in all propane applications.
Typically in the United States and Canada, LPG is primarily propane (at least 90%), while the rest is mostly ethane , propylene , butane , and odorants including ethyl mercaptan . [ 47 ] [ 48 ] This is the HD-5 standard, (maximum allowable propylene content, and no more than 5% butanes and ethane) defined by the American Society for Testing and Materials by its Standard 1835 for internal combustion engines. Not all products labeled "LPG" conform to this standard, however. In Mexico, for example, gas labeled "LPG" may consist of 60% propane and 40% butane. "The exact proportion of this combination varies by country, depending on international prices, on the availability of components and, especially, on the climatic conditions that favor LPG with higher butane content in warmer regions and propane in cold areas". [ 49 ]
Propane is bought and stored in a liquid form, LPG. It can easily be stored in a relatively small space.
By comparison, compressed natural gas (CNG) cannot be liquefied by compression at normal temperatures, as these are well above its critical temperature . As a gas, very high pressure is required to store useful quantities. This poses the hazard that, in an accident, just as with any compressed gas cylinder (such as a CO 2 cylinder used for a soda concession) a CNG cylinder may burst with great force, or leak rapidly enough to become a self-propelled missile. Therefore, CNG is much less efficient to store than propane, due to the large cylinder volume required. An alternative means of storing natural gas is as a cryogenic liquid in an insulated container as liquefied natural gas (LNG). This form of storage is at low pressure and is around 3.5 times as efficient as storing it as CNG.
Unlike propane, if a spill occurs, CNG will evaporate and dissipate because it is lighter than air.
Propane is much more commonly used to fuel vehicles than is natural gas, because that equipment costs less. Propane requires just 1,220 kilopascals (177 psi) of pressure to keep it liquid at 37.8 °C (100 °F). [ 50 ]
Propane is a simple asphyxiant . [ 51 ] Unlike natural gas , it is denser than air. It may accumulate in low spaces and near the floor. When abused as an inhalant , it may cause hypoxia (lack of oxygen), pneumonia , cardiac failure or cardiac arrest . [ 52 ] [ 53 ] Propane has low toxicity since it is not readily absorbed and is not biologically active . Commonly stored under pressure at room temperature, propane and its mixtures will flash evaporate at atmospheric pressure and cool well below the freezing point of water. The cold gas, which appears white due to moisture condensing from the air, may cause frostbite.
Propane is denser than air. If a leak in a propane fuel system occurs, the vaporized gas will have a tendency to sink into any enclosed area and thus poses a risk of explosion and fire. The typical scenario is a leaking cylinder stored in a basement; the propane leak drifts across the floor to the pilot light on the furnace or water heater, and results in an explosion or fire. This property makes propane generally unsuitable as a fuel for boats. In 2007, a heavily investigated vapor-related explosion occurred in Ghent, West Virginia, U.S., killing four people, injuring several others, and completely destroying the Little General convenience store on Flat Top Road . [ 54 ] [ 55 ]
Another hazard associated with propane storage and transport is known as a BLEVE or boiling liquid expanding vapor explosion . The Kingman Explosion involved a railroad tank car in Kingman, Arizona, U.S., in 1973 during a propane transfer. The fire and subsequent explosions resulted in twelve fatalities and numerous injuries. [ 56 ]
Propane is produced as a by-product of two other processes, natural gas processing and petroleum refining . The processing of natural gas involves removal of butane , propane, and large amounts of ethane from the raw gas, to prevent condensation of these volatiles in natural gas pipelines. Additionally, oil refineries produce some propane as a by-product of cracking petroleum into gasoline or heating oil.
The supply of propane cannot easily be adjusted to meet increased demand, because of the by-product nature of propane production. About 90% of U.S. propane is domestically produced. [ 41 ] The United States imports about 10% of the propane consumed each year, with about 70% of that coming from Canada via pipeline and rail. The remaining 30% of imported propane comes to the United States from other sources via ocean transport.
After it is separated from the crude oil, North American propane is stored in huge salt caverns . Examples of these are Fort Saskatchewan , Alberta ; Mont Belvieu, Texas ; and Conway, Kansas . These salt caverns [ 57 ] can store 80,000,000 barrels (13,000,000 m 3 ) of propane.
As of October 2013 [update] , the retail cost of propane was approximately $2.37 per gallon, or roughly $25.95 per 1 million BTUs. [ 58 ] This means that filling a 500-gallon propane tank, which is what households that use propane as their main source of energy usually require, cost $948 (tanks are filled to 80% of capacity, i.e., 400 gallons), a 7.5% increase on the 2012–2013 winter season average US price. [ 59 ] However, propane costs per gallon change significantly from one state to another: the Energy Information Administration (EIA) quotes a $2.995 per gallon average on the East Coast for October 2013, [ 60 ] while the figure for the Midwest was $1.860 for the same period. [ 61 ]
As of December 2015 [update] , the propane retail cost was approximately $1.97 per gallon, [ 62 ] which meant filling a 500-gallon propane tank to 80% capacity cost $788, a 16.9% decrease, or $160 less, from November 2013. Similar regional differences in prices are present, with the December 2015 EIA figure for the East Coast at $2.67 per gallon and the Midwest at $1.43 per gallon. [ 62 ]
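The tank-filling figures above follow from a simple rule: residential tanks are filled to 80% of rated capacity to leave room for thermal expansion. A minimal sketch of the arithmetic:

```python
# Cost to fill a residential propane tank to the standard 80% level.
def fill_cost(capacity_gal: float, price_per_gal: float) -> float:
    return 0.8 * capacity_gal * price_per_gal

print(fill_cost(500, 2.37))   # Oct 2013: 400 gal x $2.37/gal = $948.00
print(fill_cost(500, 1.97))   # Dec 2015: 400 gal x $1.97/gal = $788.00
```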
As of August 2018 [update] , the average US propane retail cost was approximately $2.48 per gallon. The wholesale price of propane in the U.S. always drops in the summer, as most homes do not require it for home heating. The wholesale price of propane in the summer of 2018 was between 86 and 96 cents per U.S. gallon, based on a truckload or railway car load. The price for home heating was exactly double that; at 95 cents per gallon wholesale, a home-delivered price was $1.90 per gallon when ordered 500 gallons at a time. Prices in the Midwest are always lower than in California. Prices for home delivery always go up near the end of August or the first few days of September, when people start ordering their home tanks to be filled. [ 63 ] | https://en.wikipedia.org/wiki/Propane |
1,3-Propanedithiol is the chemical compound with the formula HSCH 2 CH 2 CH 2 SH. This dithiol is a useful reagent in organic synthesis . This liquid, which is readily available commercially, has an intense stench.
1,3-Propanedithiol has been used for the protection of aldehydes and ketones via their reversible formation of dithianes . [ 1 ] A prototypical reaction is its formation of 1,3-dithiane from formaldehyde . [ 2 ] The reactivity of this dithiane illustrates the concept of umpolung . Alkylation gives thioethers, e.g. 1,5-dithiacyclooctane . The unpleasant odour of 1,3-propanedithiol has encouraged the development of alternative reagents that generate similar derivatives. [ 3 ]
Oxidation gives not the 1,2-dithiolane , but the bis( disulfide ).
1,3-Propanedithiol is used in the synthesis of tiapamil .
1,3-Propanedithiol reacts with metal ions to form dithiolates. Illustrative is the synthesis of the derivative diiron propanedithiolate hexacarbonyl upon reaction with triiron dodecacarbonyl : [ 5 ]
The stench of 1,3-propanedithiol can be minimized with bleach . | https://en.wikipedia.org/wiki/Propane-1,3-dithiol |
This page provides supplementary chemical data on propane .
The density of propane, both liquid and gaseous, is highly temperature dependent. [ 3 ]
Table data obtained from CRC Handbook of Chemistry and Physics 44th ed.
Propane does not have health effects other than the danger of frostbite or asphyxiation. The National Propane Gas Association has a generic MSDS (issued 1996) available online. | https://en.wikipedia.org/wiki/Propane_(data_page) |
Propane refrigeration is a type of compression refrigeration . Propane (R290) has been used successfully in industrial refrigeration for many years, and is emerging as an increasingly viable alternative for homes and businesses. Propane's operating pressures and temperatures are well suited for use in air conditioning equipment, but because of propane’s flammability, great care is required in the manufacture, installation and servicing of equipment that uses it as a refrigerant.
Propane that is supplied for general use – such as in barbecues and patio heaters – is not suitable for use in refrigeration systems because it can contain high levels of contaminants, including moisture and unsaturated hydrocarbons. Only propane produced specifically for use in refrigeration systems – with a purity of at least 98.5% and moisture content below 10 ppm (by weight) – should be used. [ 1 ]
With a global warming potential (GWP) of 0.072 [ 2 ] and an ozone depletion potential (ODP) of 0, R-290 is of very little threat to the environment. The Environmental Protection Agency (EPA) has listed R-290 as an acceptable refrigerant substitute under its Significant New Alternatives Policy (SNAP), and recently exempted it from the venting prohibition in Section 608 of the Clean Air Act. [ 3 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Propane_refrigeration |
Propanephosphonic acid anhydride ( PPAA , T3P ) is an anhydride of propane phosphonic acid . Its structure is a cyclic trimer , with a phosphorus–oxygen core and propyl groups and additional oxygens attached. [ 1 ] The chemical is a useful reagent for peptide synthesis reactions, where it activates the carboxylic acid partner for subsequent reaction with amines . It is commercially available as a slightly yellow, 50% solution in DMF or ethyl acetate.
This article about an organic compound is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Propanephosphonic_acid_anhydride |
In organic chemistry , the propargyl group is a functional group of 2- propynyl with the structure CH≡C−CH 2 − . It is an alkyl group derived from propyne ( HC≡C−CH 3 ).
The term propargylic refers to a saturated position ( sp 3 -hybridized ) on a molecular framework next to an alkynyl group. The name comes from a blend of propene and argentum , which refers to the typical reaction of terminal alkynes with silver salts.
The term homopropargylic designates, in the same manner, a position one carbon further removed from the alkynyl group. | https://en.wikipedia.org/wiki/Propargyl_group |
A propellant (or propellent ) is a mass that is expelled or expanded in such a way as to create a thrust or another motive force in accordance with Newton's third law of motion , and "propel" a vehicle, projectile , or fluid payload. In vehicles, the engine that expels the propellant is called a reaction engine . Although technically a propellant is the reaction mass used to create thrust, the term "propellant" is often used to describe a substance which contains both the reaction mass and the fuel that holds the energy used to accelerate the reaction mass. For example, the term "propellant" is often used in chemical rocket design to describe a combined fuel/propellant, although the propellants should not be confused with the fuel that is used by an engine to produce the energy that expels the propellant. Even though the byproducts of substances used as fuel are also often used as a reaction mass to create the thrust, such as with a chemical rocket engine, propellant and fuel are two distinct concepts.
Vehicles can use propellants to move by ejecting a propellant backwards which creates an opposite force that moves the vehicle forward. Projectiles can use propellants that are expanding gases which provide the motive force to set the projectile in motion. Aerosol cans use propellants which are fluids that are compressed so that when the propellant is allowed to escape by releasing a valve, the energy stored by the compression moves the propellant out of the can and that propellant forces the aerosol payload out along with the propellant. Compressed fluid may also be used as a simple vehicle propellant, with the potential energy that is stored in the compressed fluid used to expel the fluid as the propellant. The energy stored in the fluid was added to the system when the fluid was compressed, such as compressed air . The energy applied to the pump or thermal system that is used to compress the air is stored until it is released by allowing the propellant to escape. Compressed fluid may also be used only as energy storage along with some other substance as the propellant, such as with a water rocket , where the energy stored in the compressed air is the fuel and the water is the propellant.
In electrically powered spacecraft , electricity is used to accelerate the propellant. An electrostatic force may be used to expel positive ions, or the Lorentz force may be used to expel negative ions and electrons as the propellant. Electrothermal engines use the electromagnetic force to heat low molecular weight gases (e.g. hydrogen, helium, ammonia) into a plasma and expel the plasma as propellant. In the case of a resistojet rocket engine, the compressed propellant is simply heated using resistive heating as it is expelled to create more thrust.
In chemical rockets and aircraft, fuels are used to produce an energetic gas that can be directed through a nozzle , thereby producing thrust. In rockets, the burning of rocket fuel produces an exhaust, and the exhausted material is usually expelled as a propellant under pressure through a nozzle . The exhaust material may be a gas , liquid , plasma , or a solid . In powered aircraft without propellers such as jets , the propellant is usually the product of the burning of fuel with atmospheric oxygen so that the resulting propellant product has more mass than the fuel carried on the vehicle.
Proposed photon rockets would use the relativistic momentum of photons to create thrust. Even though photons do not have mass, they can still act as a propellant because they move at relativistic speed, i.e., the speed of light. In this case Newton's third Law of Motion is inadequate to model the physics involved and relativistic physics must be used.
In chemical rockets, chemical reactions are used to produce energy which creates movement of a fluid which is used to expel the products of that chemical reaction (and sometimes other substances) as propellants. For example, in a simple hydrogen/oxygen engine, hydrogen is burned (oxidized) to create H 2 O and the energy from the chemical reaction is used to expel the water (steam) to provide thrust. Often in chemical rocket engines, a higher molecular mass substance is included in the fuel to provide more reaction mass.
Rocket propellant may be expelled through an expansion nozzle as a cold gas, that is, without energetic mixing and combustion, to provide small changes in velocity to spacecraft by the use of cold gas thrusters , usually as maneuvering thrusters.
To attain a useful density for storage, most propellants are stored as either a solid or a liquid.
Propellants may be energized by chemical reactions to expel solid, liquid or gas. Electrical energy may be used to expel gases, plasmas, ions, solids or liquids. Photons may be used to provide thrust via relativistic momentum.
Propellants that explode in operation are currently of little practical use, although there have been experiments with Pulse Detonation Engines . Also, the newly synthesized bishomocubane -based compounds are under consideration in the research stage as both solid and liquid propellants of the future. [ 1 ] [ 2 ]
Solid fuel/propellants are used in forms called grains . A grain is any individual particle of fuel/propellant regardless of its size or shape. The shape and size of a grain determine the burn time, the amount of gas produced, and the rate of energy release from burning the fuel and, as a consequence, the thrust-versus-time profile.
There are three types of burns that can be achieved with different grains: progressive, regressive, and neutral.
There are four different types of solid fuel/propellant compositions:
In rockets, three main liquid bipropellant combinations are used: cryogenic oxygen and hydrogen, cryogenic oxygen and a hydrocarbon, and storable propellants. [ 3 ]
Propellant combinations used for liquid propellant rockets include:
Common monopropellants used for liquid rocket engines include:
Electrically powered reactive engines use a variety of usually ionized propellants, including atomic ions, plasma, electrons, or small droplets or solid particles as propellant.
If the acceleration is caused mainly by the Coulomb force (i.e. application of a static electric field in the direction of the acceleration) the device is considered electrostatic. The types of electrostatic drives and their propellants:
These are engines that use electromagnetic fields to generate a plasma which is used as the propellant. They use a nozzle to direct the energized propellant. The nozzle itself may be composed simply of a magnetic field. Low molecular weight gases (e.g. hydrogen, helium, ammonia) are preferred propellants for this kind of system. [ 6 ]
Electromagnetic thrusters use ions as the propellant, which are accelerated by the Lorentz force or by magnetic fields, either of which is generated by electricity:
Nuclear reactions may be used to produce the energy for the expulsion of the propellants. Many types of nuclear reactors have been used/proposed to produce electricity for electrical propulsion as outlined above. Nuclear pulse propulsion uses a series of nuclear explosions to create large amounts of energy to expel the products of the nuclear reaction as the propellant. Nuclear thermal rockets use the heat of a nuclear reaction to heat a propellant. Usually the propellant is hydrogen, because for a given temperature the exhaust velocity scales inversely with the square root of the propellant's molecular mass, so the lightest propellant (hydrogen) produces the greatest specific impulse .
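A rough illustration of this molecular-mass scaling, as a sketch under the ideal thermal-exhaust assumption v_e ∝ √(T/M); the comparison gases are illustrative choices:

```python
import math

# At a fixed chamber temperature, ideal thermal exhaust velocity scales
# as sqrt(T / M), so a lighter propellant yields a higher specific impulse.
def ve_ratio(m_light: float, m_heavy: float) -> float:
    return math.sqrt(m_heavy / m_light)

print(f"H2 vs steam: {ve_ratio(2.016, 18.015):.1f}x")     # ~3.0x exhaust velocity
print(f"H2 vs ammonia: {ve_ratio(2.016, 17.031):.1f}x")   # ~2.9x exhaust velocity
```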
A photonic reactive engine uses photons as the propellant and their discrete relativistic energy to produce thrust.
Compressed fluid or compressed gas propellants are pressurized physically, by a compressor, rather than by a chemical reaction. The pressures and energy densities that can be achieved, while insufficient for high-performance rocketry and firearms, are adequate for most applications, in which case compressed fluids offer a simpler, safer, and more practical source of propellant pressure.
A compressed fluid propellant may simply be a pressurized gas, or a substance which is a gas at atmospheric pressure, but stored under pressure as a liquid.
In applications in which a large quantity of propellant is used, such as pressure washing and airbrushing , air may be pressurized by a compressor and used immediately. Additionally, a hand pump to compress air can be used for its simplicity in low-tech applications such as atomizers , plant misters and water rockets . The simplest examples of such a system are squeeze bottles for such liquids as ketchup and shampoo.
However, compressed gases are impractical as stored propellants if they do not liquify inside the storage container, because very high pressures are required in order to store any significant quantity of gas, and high-pressure gas cylinders and pressure regulators are expensive and heavy.
Liquefied gas propellants are gases at atmospheric pressure, but become liquid at a modest pressure. This pressure is high enough to provide useful propulsion of the payload (e.g. aerosol paint, deodorant, lubricant), but is low enough to be stored in an inexpensive metal can, and to not pose a safety hazard in case the can is ruptured.
The mixture of liquid and gaseous propellant inside the can maintains a constant pressure, called the liquid's vapor pressure . As the payload is depleted, the propellant vaporizes to fill the internal volume of the can. Liquids are typically 500-1000x denser than their corresponding gases at atmospheric pressure; even at the higher pressure inside the can, only a small fraction of its volume needs to be propellant in order to eject the payload and replace it with vapor.
Vaporizing the liquid propellant to gas requires some energy, the enthalpy of vaporization , which cools the system. This is usually insignificant, although it can sometimes be an unwanted effect of heavy usage (as the system cools, the vapor pressure of the propellant drops). However, in the case of a freeze spray , this cooling contributes to the desired effect (although freeze sprays may also contain other components, such as chloroethane , with a lower vapor pressure but higher enthalpy of vaporization than the propellant).
Chlorofluorocarbons (CFCs) were once often used as propellants, [ 7 ] but since the Montreal Protocol came into force in 1989, they have been replaced in nearly every country due to the negative effects CFCs have on Earth's ozone layer . The most common replacements of CFCs are mixtures of volatile hydrocarbons , typically propane , n- butane and isobutane . [ 8 ] Dimethyl ether (DME) and methyl ethyl ether are also used. All these have the disadvantage of being flammable . Nitrous oxide and carbon dioxide are also used as propellants to deliver foodstuffs (for example, whipped cream and cooking spray ). Medicinal aerosols such as asthma inhalers use hydrofluoroalkanes (HFA): either HFA 134a (1,1,1,2,-tetrafluoroethane) or HFA 227 (1,1,1,2,3,3,3-heptafluoropropane) or combinations of the two. More recently, liquid hydrofluoroolefin (HFO) propellants have become more widely adopted in aerosol systems due to their relatively low vapor pressure, low global warming potential (GWP), and nonflammability. [ 9 ]
The practicality of liquefied gas propellants allows for a broad variety of payloads. Aerosol sprays , in which a liquid is ejected as a spray, include paints, lubricants, degreasers, and protective coatings; deodorants and other personal care products; and cooking oils. Some liquid payloads are not sprayed, owing to lower propellant pressure and/or a viscous payload, as with whipped cream and shaving cream or shaving gel. Low-power guns, such as BB guns , paintball guns, and airsoft guns, have solid projectile payloads. Uniquely, in the case of a gas duster ("canned air"), the propellant vapor itself, ejected at speed, is the payload. | https://en.wikipedia.org/wiki/Propellant |
The propensity theory of probability is a probability interpretation in which the probability is thought of as a physical propensity, disposition, or tendency of a given type of situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome. [ 1 ]
Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. Stable long-run frequencies are a manifestation of invariant single-case probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. These single-case probabilities are known as propensities or chances.
In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics , such as the probability of decay of a particular atom at a particular moment.
A propensity theory of probability was given by Charles Sanders Peirce . [ 2 ] [ 3 ] [ 4 ] [ 5 ]
A later propensity theory was proposed [ 6 ] by philosopher Karl Popper , who had only slight acquaintance with the writings of Charles S. Peirce , however. [ 2 ] [ 3 ] Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions G has propensity p of producing the outcome E means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which E occurred with limiting relative frequency p . Thus the propensity p for E to occur depends upon G: P r ( E , G ) = p {\displaystyle Pr(E,G)=p} . For Popper then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) imply something less than determinism and yet still causal dependence on the generating conditions.
A number of other philosophers, including David Miller and Donald A. Gillies , have proposed propensity theories somewhat similar to Popper's, in that propensities are defined in terms of either long-run or infinitely long-run relative frequencies.
Other propensity theorists ( e.g. Ronald Giere [ 7 ] ) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science.
Other theories have been offered by D. H. Mellor , [ 8 ] and Ian Hacking . [ 9 ]
Ballentine developed an axiomatic propensity theory [ 10 ] building on the work of Paul Humphreys . [ 11 ] This line of work shows that the causal nature of the conditioning event in a propensity conflicts with an axiom needed for Bayes' theorem .
What roles does physical probability play in science? What are its properties? One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the principal principle . [ 12 ] The principle states, roughly, that a rational agent who knows the objective chance of an outcome to be x , and who has no other (inadmissible) information bearing on that outcome, should set their credence in the outcome equal to x .
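In one common schematic rendering (the notation here is illustrative, not a quotation of Lewis), with Cr a rational credence function, ch the objective chance, and E any admissible background evidence:

```latex
% Principal Principle (schematic): credence conditional on known chance
\mathrm{Cr}\!\left(A \mid \mathrm{ch}(A) = x \wedge E\right) = x
```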
Thus, for example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct credence? According to the principal principle, the correct credence is 0.32. | https://en.wikipedia.org/wiki/Propensity_probability |
In organic chemistry , 1-propenyl (or simply propenyl ) has the formula −CH=CH−CH 3 and 2-propenyl ( isopropenyl ) has the formula CH 2 =C(CH 3 )−. These groups are found in many compounds. Propenyl compounds are isomeric with allyl compounds, which have the formula −CH 2 −CH=CH 2 .
Many phenylpropanoids and their derivatives feature derivatives of propenylbenzene:
Isopropenyl acetate is a 2-propenyl ester , synthesizable from ketene .
Several terpenes feature 2-propenyl substituents:
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Propenyl |
Proper length [ 1 ] or rest length [ 2 ] is the length of an object in the object's rest frame .
The measurement of lengths is more complicated in the theory of relativity than in classical mechanics . In classical mechanics, lengths are measured based on the assumption that the locations of all points involved are measured simultaneously. But in the theory of relativity, the notion of simultaneity is dependent on the observer.
A different term, proper distance , provides an invariant measure whose value is the same for all observers.
Proper distance is analogous to proper time . The difference is that the proper distance is defined between two spacelike-separated events (or along a spacelike path), while the proper time is defined between two timelike-separated events (or along a timelike path).
The proper length [ 1 ] or rest length [ 2 ] of an object is the length of the object measured by an observer who is at rest relative to it, by applying standard measuring rods on the object. The measurement of the object's endpoints doesn't have to be simultaneous, since the endpoints are constantly at rest at the same positions in the object's rest frame, so it is independent of Δ t . This length is thus given by: L 0 = Δ x . {\displaystyle L_{0}=\Delta x.}
However, in relatively moving frames the object's endpoints have to be measured simultaneously, since they are constantly changing their position. The resulting length is shorter than the rest length, and is given by the formula for length contraction (with γ being the Lorentz factor ): L = L 0 γ . {\displaystyle L={\frac {L_{0}}{\gamma }}.}
In comparison, the invariant proper distance between two arbitrary events happening at the endpoints of the same object is given by: Δ σ = Δ x 2 − c 2 Δ t 2 . {\displaystyle \Delta \sigma ={\sqrt {\Delta x^{2}-c^{2}\Delta t^{2}}}.}
So Δ σ depends on Δ t , whereas (as explained above) the object's rest length L 0 can be measured independently of Δ t . It follows that Δ σ and L 0 , measured at the endpoints of the same object, only agree with each other when the measurement events were simultaneous in the object's rest frame so that Δ t is zero. As explained by Fayngold: [ 1 ]
In special relativity , the proper distance between two spacelike-separated events is the distance between the two events, as measured in an inertial frame of reference in which the events are simultaneous. [ 3 ] [ 4 ] In such a specific frame, the distance is given by
Δ σ = Δ x 2 + Δ y 2 + Δ z 2 , {\displaystyle \Delta \sigma ={\sqrt {\Delta x^{2}+\Delta y^{2}+\Delta z^{2}}},}
where Δ x , Δ y , and Δ z are the differences in the spatial coordinates of the two events in that frame.
The definition can be given equivalently with respect to any inertial frame of reference (without requiring the events to be simultaneous in that frame) by
Δ σ = Δ x 2 + Δ y 2 + Δ z 2 − c 2 Δ t 2 , {\displaystyle \Delta \sigma ={\sqrt {\Delta x^{2}+\Delta y^{2}+\Delta z^{2}-c^{2}\Delta t^{2}}},}
where Δ t is the difference in the time coordinates of the two events, Δ x , Δ y , and Δ z are the differences in their spatial coordinates, and c is the speed of light.
The two formulae are equivalent because of the invariance of spacetime intervals , and since Δ t = 0 exactly when the events are simultaneous in the given frame.
Two events are spacelike-separated if and only if the above formula gives a real, non-zero value for Δ σ .
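Both formulae are straightforward to evaluate numerically. The sketch below (hypothetical helper functions, SI units) computes the contracted length L = L 0 / γ and the invariant proper distance of a spacelike pair of events:

```python
import math

C = 299_792_458.0  # m/s, speed of light

def proper_distance(dt, dx, dy, dz):
    """Invariant interval between two spacelike-separated events,
    evaluated in any inertial frame (t in seconds, x, y, z in metres)."""
    s2 = dx**2 + dy**2 + dz**2 - (C * dt)**2
    if s2 <= 0.0:
        raise ValueError("events are not spacelike-separated")
    return math.sqrt(s2)

def contracted_length(rest_length, v):
    """Length measured in a frame where the object moves at speed v:
    L = L0 / gamma (length contraction)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return rest_length / gamma

print(contracted_length(1.0, 0.8 * C))      # a 1 m rod at 0.8c measures 0.6 m
print(proper_distance(0.0, 3.0, 4.0, 0.0))  # simultaneous events: 5.0 m
```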
The above formula for the proper distance between two events assumes that the spacetime in which the two events occur is flat. Hence, the above formula cannot in general be used in general relativity , in which curved spacetimes are considered. It is, however, possible to define the proper distance along a path in any spacetime, curved or flat. In a flat spacetime, the proper distance between two events is the proper distance along a straight path between the two events. In a curved spacetime, there may be more than one straight path ( geodesic ) between two events, so the proper distance along a straight path between two events would not uniquely define the proper distance between the two events.
Along an arbitrary spacelike path P , the proper distance is given in tensor syntax by the line integral
L = c ∫ P − g μ ν d x μ d x ν , {\displaystyle L=c\int _{P}{\sqrt {-g_{\mu \nu }dx^{\mu }dx^{\nu }}},}
where g μν is the metric tensor of the spacetime and d x μ are the coordinate differentials along the path P .
In the equation above, the metric tensor is assumed to use the +−−− metric signature , and is assumed to be normalized to return a time instead of a distance. The − sign in the equation should be dropped with a metric tensor that instead uses the −+++ metric signature. Also, the c {\displaystyle c} should be dropped with a metric tensor that is normalized to use a distance, or that uses geometrized units . | https://en.wikipedia.org/wiki/Proper_length |
Proper motion is the astrometric measure of changes in the apparent places of stars or other celestial objects as they move relative to the center of mass of the Solar System . It is measured relative to the distant stars or a stable reference such as the International Celestial Reference Frame (ICRF). [ 1 ] Patterns in proper motion reveal larger structures like stellar streams , the general rotation of the Milky Way disk, and the random motions of stars in the Galactic halo . [ 2 ]
The components for proper motion in the equatorial coordinate system (of a given epoch , often J2000.0 ) are given in the direction of right ascension ( μ α ) and of declination ( μ δ ). Their combined value is computed as the total proper motion ( μ ). [ 3 ] [ 4 ] It has dimensions of angle per time , typically arcseconds per year or milliarcseconds per year.
Knowledge of the proper motion, distance, and radial velocity allows calculations of an object's motion from the Solar System's frame of reference and its motion from the galactic frame of reference – that is motion in respect to the Sun, and by coordinate transformation , that in respect to the Milky Way . [ 5 ]
Over the course of centuries, stars appear to maintain nearly fixed positions with respect to each other, so that they form the same constellations over historical time. As examples, both Ursa Major in the northern sky and Crux in the southern sky, look nearly the same now as they did hundreds of years ago. However, precise long-term observations show that such constellations change shape, albeit very slowly, and that each star has an independent motion .
This motion is caused by the movement of the stars relative to the Sun and Solar System . The Sun travels in a nearly circular orbit (the solar circle ) about the center of the galaxy at a speed of about 220 km/s at a radius of 8,000 parsecs (26,000 ly) from Sagittarius A* [ 6 ] [ 7 ] which can be taken as the rate of rotation of the Milky Way itself at this radius. [ 8 ] [ 9 ]
Any proper motion is a two-dimensional vector (as it excludes the component as to the direction of the line of sight) typically defined by its position angle and its magnitude . The first is the direction of the proper motion on the celestial sphere (with 0 degrees meaning the motion is north, 90 degrees meaning the motion is east, (left on most sky maps and space telescope images) and so on), and the second is its magnitude, typically expressed in arcseconds per year (symbols: arcsec/yr, as/yr, ″/yr, ″ yr −1 ) or milliarcseconds per year (symbols: mas/yr, mas yr −1 ).
Proper motion may alternatively be defined by the angular changes per year in the star's right ascension ( μ α ) and declination ( μ δ ) with respect to a defined epoch .
The components of proper motion by convention are arrived at as follows. Suppose an object moves from coordinates (α 1 , δ 1 ) to coordinates (α 2 , δ 2 ) in a time Δ t . The proper motions are given by: [ 10 ] μ α = α 2 − α 1 Δ t , {\displaystyle \mu _{\alpha }={\frac {\alpha _{2}-\alpha _{1}}{\Delta t}},} μ δ = δ 2 − δ 1 Δ t . {\displaystyle \mu _{\delta }={\frac {\delta _{2}-\delta _{1}}{\Delta t}}\ .} The magnitude of the proper motion μ is given by the Pythagorean theorem : [ 11 ] μ 2 = μ δ 2 + μ α 2 ⋅ cos 2 δ , {\displaystyle \mu ^{2}={\mu _{\delta }}^{2}+{\mu _{\alpha }}^{2}\cdot \cos ^{2}\delta \ ,} technically abbreviated: μ 2 = μ δ 2 + μ α ∗ 2 . {\displaystyle \mu ^{2}={\mu _{\delta }}^{2}+{\mu _{\alpha \ast }}^{2}\ .} where δ is the declination. The factor of cos 2 δ accounts for the convergence of the lines (hours) of right ascension towards the celestial poles; cos δ is zero for a hypothetical object fixed at a celestial pole. Without this factor, the angular rate of change in α would misleadingly overstate the east–west motion of objects at high declination, since the hour circles of right ascension crowd together towards the poles. The change μ α , which must be multiplied by cos δ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", and μ δ the "proper motion in declination". [ 12 ]
If the proper motion in right ascension has been converted by cos δ , the result is designated μ α* . For example, the proper motion results in right ascension in the Hipparcos Catalogue (HIP) have already been converted. [ 13 ] Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions.
The position angle θ is related to these components by: [ 3 ] [ 14 ] μ sin θ = μ α cos δ = μ α ∗ , {\displaystyle \mu \sin \theta =\mu _{\alpha }\cos \delta =\mu _{\alpha \ast }\ ,} μ cos θ = μ δ . {\displaystyle \mu \cos \theta =\mu _{\delta }\ .}
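A minimal numerical sketch of these relations (hypothetical function names; components in milliarcseconds per year, with μ α taken as the raw rate of change of right ascension, not yet scaled by cos δ):

```python
import math

def total_proper_motion(mu_alpha, mu_delta, delta_deg):
    """Total proper motion (mas/yr) and position angle (degrees) from
    the raw rates of change of right ascension and declination."""
    delta = math.radians(delta_deg)
    mu_alpha_star = mu_alpha * math.cos(delta)  # scale by cos(declination)
    mu = math.hypot(mu_alpha_star, mu_delta)    # Pythagorean combination
    theta = math.degrees(math.atan2(mu_alpha_star, mu_delta)) % 360.0
    return mu, theta

# A star at declination +60 deg, moving 100 mas/yr in RA and 50 mas/yr in Dec:
mu, theta = total_proper_motion(100.0, 50.0, 60.0)
print(f"mu = {mu:.1f} mas/yr, position angle = {theta:.1f} deg")  # 70.7, 45.0
```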
Motions in equatorial coordinates can be converted to motions in galactic coordinates . [ 15 ]
For most stars seen in the sky, the observed proper motions are small and unremarkable. Such stars are often either faint or are significantly distant, have changes of below 0.01″ per year, and do not appear to move appreciably over many millennia. A few do have significant motions, and are usually called high-proper motion stars. Two or more stars which are moving in similar directions, exhibit so-called shared or common proper motion (or cpm.), suggesting they may share similar motion in space (if the distances and radial velocities are also consistent) and thus be gravitationally linked as binary stars or star clusters .
Barnard's Star has the largest proper motion of all stars, moving at 10.3″ yr −1 . Large proper motion usually strongly indicates an object is close to the Sun. This is so for Barnard's Star, about 6 light-years away. After the Sun and the Alpha Centauri system, it is the nearest known star. Being a red dwarf with an apparent magnitude of 9.54, it is too faint to see without a telescope or powerful binoculars. Of the stars visible to the naked eye (conservatively limiting unaided visual magnitude to 6.0), 61 Cygni A (magnitude V= 5.20) has the highest proper motion at 5.281″ yr −1 , discounting Groombridge 1830 (magnitude V= 6.42), proper motion: 7.058″ yr −1 . [ 16 ]
A proper motion of 1 arcsec per year at a distance of 1 light-year corresponds to a relative transverse speed of 1.45 km/s. Barnard's Star's transverse speed is 90 km/s and its radial velocity is 111 km/s; since these two components are perpendicular to each other, they combine to give a true or "space" motion of 142 km/s. True or absolute motion is more difficult to measure than the proper motion, because the true transverse velocity involves the product of the proper motion times the distance. As shown by this formula, true velocity measurements depend on distance measurements, which are difficult in general.
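The figures in this paragraph can be checked with a few lines of arithmetic (a sketch; the 1.45 km/s conversion constant is the one stated above, and small differences from the quoted values reflect rounding):

```python
import math

KM_S_PER_ARCSEC_LY = 1.45  # km/s per (arcsec/yr) at 1 light-year, as stated above

def transverse_speed(mu_arcsec_per_yr, distance_ly):
    """Transverse speed (km/s) implied by a proper motion at a distance."""
    return KM_S_PER_ARCSEC_LY * mu_arcsec_per_yr * distance_ly

def space_speed(v_transverse, v_radial):
    """Total space velocity from the two perpendicular components."""
    return math.hypot(v_transverse, v_radial)

# Barnard's Star: about 10.3 arcsec/yr at roughly 6 light-years,
# with a radial velocity of about 111 km/s.
vt = transverse_speed(10.3, 6.0)
print(f"transverse ~ {vt:.0f} km/s")  # ~90 km/s
# ~143 km/s, matching the 142 km/s quoted above to within rounding:
print(f"space motion ~ {space_speed(vt, 111.0):.0f} km/s")
```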
In 1992 Rho Aquilae became the first star to have its Bayer designation invalidated by moving to a neighbouring constellation – it is now in Delphinus . [ 17 ]
Stars with large proper motions tend to be nearby; most stars are far enough away that their proper motions are very small, on the order of a few thousandths of an arcsecond per year. It is possible to construct nearly complete samples of high proper motion stars by comparing photographic sky survey images taken many years apart. The Palomar Sky Survey is one source of such images. In the past, searches for high proper motion objects were undertaken using blink comparators to examine the images by eye. More modern techniques such as image differencing can scan digitized images, or comparisons to star catalogs obtained by satellites. [ 18 ] As the selection biases of these surveys are well understood and quantifiable, they have been used to infer the approximate numbers of stars not yet observed directly, regardless of brightness. Studies of this kind show most of the nearest stars are intrinsically faint and angularly small, such as red dwarfs .
Measurement of the proper motions of a large sample of stars in a distant stellar system, like a globular cluster, can be used to compute the cluster's total mass via the Leonard-Merritt mass estimator . Coupled with measurements of the stars' radial velocities , proper motions can be used to compute the distance to the cluster.
Stellar proper motions have been used to infer the presence of a super-massive black hole at the center of the Milky Way. [ 19 ] This black hole, whose existence has since been confirmed, is called Sgr A* , and has a mass of 4.3 × 10 6 M ☉ (solar masses).
Proper motions of objects in galaxies in the Local Group can be used to estimate their distance. In 1999, the proper motion of water masers moving very rapidly around the center of NGC 4258 (M106) galaxy was measured via Very Long Baseline Interferometry . In combination with their radial motion this yielded an accurate distance to the galaxy of 7.2 ± 0.5 Mpc . [ 20 ] [ 21 ] In 2005, the first measurement was made of the proper motion of the Triangulum Galaxy M33, the third largest and only ordinary spiral galaxy in the Local Group, located 0.860 ± 0.028 Mpc beyond the Milky Way. [ 22 ] [ 23 ] The motion of the Andromeda Galaxy was measured in 2012, and an Andromeda–Milky Way collision is predicted in about 4.5 billion years. [ 24 ]
Proper motion was suspected by early astronomers (according to Macrobius , c. AD 400) but a proof was not provided until 1718 by Edmund Halley , who noticed that Sirius , Arcturus and Aldebaran were over half a degree away from the positions charted by the ancient Greek astronomer Hipparchus roughly 1850 years earlier. [ 25 ] [ 26 ]
This use of "proper" is somewhat dated English (though neither historic nor obsolete when used as a postpositive , as in "the city proper"), meaning "belonging to" or "own". "Improper motion" would refer to perceived motion that has nothing to do with an object's inherent course, such as that due to Earth's axial precession and the minor deviations, or nutations , well within the 26,000-year cycle. | https://en.wikipedia.org/wiki/Proper_motion |
Proper right and proper left are conceptual terms used to unambiguously convey relative direction when describing an image or other object. The "proper right" hand of a figure is the hand that would be regarded by that figure as its right hand. [ 1 ] In a frontal representation, that appears on the left as the viewer sees it, creating the potential for ambiguity if the hand is just described as the "right hand".
The terms are mainly used in discussing images of humans, whether in art history , medical contexts such as x-ray images, or elsewhere, but they can be used in describing any object that has an unambiguous front and back (for example furniture [ 2 ] ) or, [ 3 ] when describing things that move or change position, with reference to the original position. However a more restricted use may be preferred, and the internal instructions for cataloguing objects in the "Inventory of American Sculpture" at the Smithsonian American Art Museum say that "The terms 'proper right' and 'proper left' should be used when describing figures only". [ 4 ] In heraldry , right and left is always used in the meaning of proper right and proper left, as for the imaginary bearer of a coat of arms; to avoid confusion, the Latin terms dexter and sinister are often used. [ 5 ]
The alternative is to use language that makes it clear that the viewer's perspective is being used. The swords in the illustrations might be described as: "to the left as the viewer sees it", "at the viewer's left", and so on. However these formulations do not work for freestanding sculpture in the round, where the viewer might be at any position around the sculpture. A British 19th-century manual for military drill contrasts "proper left" with "present left" when discussing the orientation of formations performing intricate movements on a parade ground, "proper" meaning the orientation at the start of the drill. [ 6 ]
The terms are analogous to the nautical port and starboard , where "port" is to a watercraft as "proper left" is to a sculpture, and they are used for essentially the same reason. Their use obviates the need for potentially ambiguous language such as "my right", "your left", and so on, by expressing the direction in a manner that holds true regardless of the relative orientations of the object and observer. [ 7 ] Another example is stage right and left in the theatre, which uses the actor's orientation, "stage right" equating to the audience's "house left".
This is from the auction catalogue description of an African wood figure: [ 8 ]
There is extensive insect loss in the proper right leg, some at the proper right elbow, and at the fronts of both feet. There is a chip off the proper right breast, and the proper right leg was broken off and reglued.
Describing an Indian sculpture: [ 9 ]
The figure standing on the yakṣī's proper left, however, is not a mirror image of the other male ... | https://en.wikipedia.org/wiki/Proper_right_and_proper_left |
In relativity , proper time (from Latin, meaning own time ) along a timelike world line is defined as the time as measured by a clock following that line. The proper time interval between two events on a world line is the change in proper time, which is independent of coordinates, and is a Lorentz scalar . [ 1 ] The interval is the quantity of interest, since proper time itself is fixed only up to an arbitrary additive constant, namely the setting of the clock at some event along the world line.
The proper time interval between two events depends not only on the events, but also the world line connecting them, and hence on the motion of the clock between the events. It is expressed as an integral over the world line (analogous to arc length in Euclidean space ). An accelerated clock will measure a smaller elapsed time between two events than that measured by a non-accelerated ( inertial ) clock between the same two events. The twin paradox is an example of this effect. [ 2 ]
By convention, proper time is usually represented by the Greek letter τ ( tau ) to distinguish it from coordinate time represented by t . Coordinate time is the time between two events as measured by an observer using that observer's own method of assigning a time to an event. In the special case of an inertial observer in special relativity , the time is measured using the observer's clock and the observer's definition of simultaneity.
The concept of proper time was introduced by Hermann Minkowski in 1908, [ 3 ] and is an important feature of Minkowski diagrams .
The formal definition of proper time involves describing the path through spacetime that represents a clock, observer, or test particle, and the metric structure of that spacetime. Proper time is the pseudo-Riemannian arc length of world lines in four-dimensional spacetime. From the mathematical point of view, coordinate time is assumed to be predefined and an expression for proper time as a function of coordinate time is required. On the other hand, proper time is measured experimentally and coordinate time is calculated from the proper time of inertial clocks.
Proper time can only be defined for timelike paths through spacetime which allow for the construction of an accompanying set of physical rulers and clocks. The same formalism for spacelike paths leads to a measurement of proper distance rather than proper time. For lightlike paths, there exists no concept of proper time and it is undefined as the spacetime interval is zero. Instead, an arbitrary and physically irrelevant affine parameter unrelated to time must be introduced. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
With the timelike convention for the metric signature , the Minkowski metric is defined by η μ ν = ( 1 0 0 0 0 − 1 0 0 0 0 − 1 0 0 0 0 − 1 ) , {\displaystyle \eta _{\mu \nu }={\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}},} and the coordinates by ( x 0 , x 1 , x 2 , x 3 ) = ( c t , x , y , z ) {\displaystyle (x^{0},x^{1},x^{2},x^{3})=(ct,x,y,z)} for arbitrary Lorentz frames.
In any such frame an infinitesimal interval, here assumed timelike, between two events is expressed as d s 2 = c 2 d t 2 − d x 2 − d y 2 − d z 2 {\displaystyle ds^{2}=c^{2}dt^{2}-dx^{2}-dy^{2}-dz^{2}} (1)
and separates points on a trajectory of a particle (think of a clock). The same interval can be expressed in coordinates such that at each moment, the particle is at rest . Such a frame is called an instantaneous rest frame, denoted here by the coordinates ( c τ , x τ , y τ , z τ ) {\displaystyle (c\tau ,x_{\tau },y_{\tau },z_{\tau })} for each instant. Due to the invariance of the interval (instantaneous rest frames taken at different times are related by Lorentz transformations) one may write d s 2 = c 2 d τ 2 − d x τ 2 − d y τ 2 − d z τ 2 = c 2 d τ 2 , {\displaystyle ds^{2}=c^{2}d\tau ^{2}-dx_{\tau }^{2}-dy_{\tau }^{2}-dz_{\tau }^{2}=c^{2}d\tau ^{2},} since in the instantaneous rest frame, the particle or the frame itself is at rest, i.e., d x τ = d y τ = d z τ = 0 {\displaystyle dx_{\tau }=dy_{\tau }=dz_{\tau }=0} . Since the interval is assumed timelike (i.e., d s 2 > 0 {\displaystyle ds^{2}>0} ), taking the square root of the above yields [ 10 ] d s = c d τ , {\displaystyle ds=cd\tau ,} or d τ = d s c . {\displaystyle d\tau ={\frac {ds}{c}}.} Given this differential expression for τ , the proper time interval is defined as
Δ τ = ∫ P d τ = ∫ P d s c . {\displaystyle \Delta \tau =\int _{P}d\tau =\int _{P}{\frac {ds}{c}}.} (2)
Here P is the worldline from some initial event to some final event with the ordering of the events fixed by the requirement that the final event occurs later according to the clock than the initial event.
Using (1) and again the invariance of the interval, one may write [ 11 ]
Δ τ = ∫ P 1 c η μ ν d x μ d x ν = ∫ P d t 2 − d x 2 c 2 − d y 2 c 2 − d z 2 c 2 = ∫ a b 1 − 1 c 2 [ ( d x d t ) 2 + ( d y d t ) 2 + ( d z d t ) 2 ] d t = ∫ a b 1 − v ( t ) 2 c 2 d t = ∫ a b d t γ ( t ) , {\displaystyle {\begin{aligned}\Delta \tau &=\int _{P}{\frac {1}{c}}{\sqrt {\eta _{\mu \nu }dx^{\mu }dx^{\nu }}}\\&=\int _{P}{\sqrt {dt^{2}-{dx^{2} \over c^{2}}-{dy^{2} \over c^{2}}-{dz^{2} \over c^{2}}}}\\&=\int _{a}^{b}{\sqrt {1-{\frac {1}{c^{2}}}\left[\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}+\left({\frac {dz}{dt}}\right)^{2}\right]}}dt\\&=\int _{a}^{b}{\sqrt {1-{\frac {v(t)^{2}}{c^{2}}}}}dt\\&=\int _{a}^{b}{\frac {dt}{\gamma (t)}},\end{aligned}}} (3)
where ( x 0 , x 1 , x 2 , x 3 ) : [ a , b ] → P {\displaystyle (x^{0},x^{1},x^{2},x^{3}):[a,b]\rightarrow P} is an arbitrary bijective parametrization of the worldline P such that ( x 0 ( a ) , x 1 ( a ) , x 2 ( a ) , x 3 ( a ) ) and ( x 0 ( b ) , x 1 ( b ) , x 2 ( b ) , x 3 ( b ) ) {\displaystyle (x^{0}(a),x^{1}(a),x^{2}(a),x^{3}(a))\quad {\text{and}}\quad (x^{0}(b),x^{1}(b),x^{2}(b),x^{3}(b))} give the endpoints of P and a < b; v ( t ) is the coordinate speed at coordinate time t ; and x ( t ) , y ( t ) , and z ( t ) are space coordinates. The first expression is manifestly Lorentz invariant. They are all Lorentz invariant, since proper time and proper time intervals are coordinate-independent by definition.
If t , x , y , z , are parameterised by a parameter λ , this can be written as Δ τ = ∫ ( d t d λ ) 2 − 1 c 2 [ ( d x d λ ) 2 + ( d y d λ ) 2 + ( d z d λ ) 2 ] d λ . {\displaystyle \Delta \tau =\int {\sqrt {\left({\frac {dt}{d\lambda }}\right)^{2}-{\frac {1}{c^{2}}}\left[\left({\frac {dx}{d\lambda }}\right)^{2}+\left({\frac {dy}{d\lambda }}\right)^{2}+\left({\frac {dz}{d\lambda }}\right)^{2}\right]}}\,d\lambda .}
If the motion of the particle is constant, the expression simplifies to Δ τ = ( Δ t ) 2 − ( Δ x ) 2 c 2 − ( Δ y ) 2 c 2 − ( Δ z ) 2 c 2 , {\displaystyle \Delta \tau ={\sqrt {\left(\Delta t\right)^{2}-{\frac {\left(\Delta x\right)^{2}}{c^{2}}}-{\frac {\left(\Delta y\right)^{2}}{c^{2}}}-{\frac {\left(\Delta z\right)^{2}}{c^{2}}}}},} where Δ means the change in coordinates between the initial and final events. The definition in special relativity generalizes straightforwardly to general relativity as follows below.
Proper time is defined in general relativity as follows: Given a pseudo-Riemannian manifold with local coordinates x μ and equipped with a metric tensor g μν , the proper time interval Δ τ between two events along a timelike path P is given by the line integral [ 12 ] Δ τ = ∫ P 1 c g μ ν d x μ d x ν . {\displaystyle \Delta \tau =\int _{P}{\frac {1}{c}}{\sqrt {g_{\mu \nu }dx^{\mu }dx^{\nu }}}.} (4)
This expression is, as it should be, invariant under coordinate changes. It reduces (in appropriate coordinates) to the expression of special relativity in flat spacetime .
In the same way that coordinates can be chosen such that x 1 , x 2 , x 3 = const in special relativity, this can be done in general relativity too. Then, in these coordinates, [ 13 ] Δ τ = ∫ P d τ = ∫ P 1 c g 00 d x 0 . {\displaystyle \Delta \tau =\int _{P}d\tau =\int _{P}{\frac {1}{c}}{\sqrt {g_{00}}}dx^{0}.}
This expression generalizes definition (2) and can be taken as the definition. Then using invariance of the interval, equation (4) follows from it in the same way (3) follows from (2) , except that here arbitrary coordinate changes are allowed.
For a twin paradox scenario, let there be an observer A who moves between the A -coordinates (0,0,0,0) and (10 years, 0, 0, 0) inertially. This means that A stays at x = y = z = 0 {\displaystyle x=y=z=0} for 10 years of A -coordinate time. The proper time interval for A between the two events is then Δ τ A = ( 10 years ) 2 = 10 years . {\displaystyle \Delta \tau _{A}={\sqrt {(10{\text{ years}})^{2}}}=10{\text{ years}}.}
So being "at rest" in a special relativity coordinate system means that proper time and coordinate time are the same.
Let there now be another observer B who travels in the x direction from (0,0,0,0) for 5 years of A -coordinate time at 0.866 c to (5 years, 4.33 light-years, 0, 0). Once there, B accelerates, and travels in the other spatial direction for another 5 years of A -coordinate time to (10 years, 0, 0, 0). For each leg of the trip, the proper time interval can be calculated using A -coordinates, and is given by Δ τ l e g = ( 5 years ) 2 − ( 4.33 years ) 2 = 6.25 y e a r s 2 = 2.5 years . {\displaystyle \Delta \tau _{leg}={\sqrt {({\text{5 years}})^{2}-({\text{4.33 years}})^{2}}}={\sqrt {6.25\;\mathrm {years} ^{2}}}={\text{2.5 years}}.}
So the total proper time for observer B to go from (0,0,0,0) to (5 years, 4.33 light-years, 0, 0) and then to (10 years, 0, 0, 0) is Δ τ B = 2 Δ τ l e g = 5 years . {\displaystyle \Delta \tau _{B}=2\Delta \tau _{leg}={\text{5 years}}.}
Thus it is shown that the proper time equation incorporates the time dilation effect. In fact, for an object in a SR (special relativity) spacetime traveling with velocity v {\displaystyle v} for a time Δ T {\displaystyle \Delta T} , the proper time interval experienced is Δ τ = Δ T 2 − ( v x Δ T c ) 2 − ( v y Δ T c ) 2 − ( v z Δ T c ) 2 = Δ T 1 − v 2 c 2 , {\displaystyle \Delta \tau ={\sqrt {\Delta T^{2}-\left({\frac {v_{x}\Delta T}{c}}\right)^{2}-\left({\frac {v_{y}\Delta T}{c}}\right)^{2}-\left({\frac {v_{z}\Delta T}{c}}\right)^{2}}}=\Delta T{\sqrt {1-{\frac {v^{2}}{c^{2}}}}},} which is the SR time dilation formula.
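A quick numerical check of both observers' proper times, using the time dilation formula just derived (hypothetical helper name):

```python
import math

def proper_time(coordinate_time, v_over_c):
    """Proper time elapsed on a clock moving at constant speed v
    during the given span of coordinate time (SR time dilation)."""
    return coordinate_time * math.sqrt(1.0 - v_over_c ** 2)

tau_A = proper_time(10.0, 0.0)         # observer A: at rest for 10 years
tau_B = 2.0 * proper_time(5.0, 0.866)  # observer B: two 5-year legs at 0.866c

print(tau_A, tau_B)  # 10.0 and ~5.0 years, matching the calculation above
```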
An observer rotating around another inertial observer is in an accelerated frame of reference. For such an observer, the incremental ( d τ {\displaystyle d\tau } ) form of the proper time equation is needed, along with a parameterized description of the path being taken, as shown below.
Let there be an observer C on a disk rotating in the xy plane at a coordinate angular rate of ω {\displaystyle \omega } and who is at a distance of r from the center of the disk with the center of the disk at x = y = z = 0 . The path of observer C is given by ( T , r cos ( ω T ) , r sin ( ω T ) , 0 ) {\displaystyle (T,\,r\cos(\omega T),\,r\sin(\omega T),\,0)} , where T {\displaystyle T} is the current coordinate time. When r and ω {\displaystyle \omega } are constant, d x = − r ω sin ( ω T ) d T {\displaystyle dx=-r\omega \sin(\omega T)\,dT} and d y = r ω cos ( ω T ) d T {\displaystyle dy=r\omega \cos(\omega T)\,dT} . The incremental proper time formula then becomes d τ = d T 2 − ( r ω c ) 2 sin 2 ( ω T ) d T 2 − ( r ω c ) 2 cos 2 ( ω T ) d T 2 = d T 1 − ( r ω c ) 2 . {\displaystyle d\tau ={\sqrt {dT^{2}-\left({\frac {r\omega }{c}}\right)^{2}\sin ^{2}(\omega T)\;dT^{2}-\left({\frac {r\omega }{c}}\right)^{2}\cos ^{2}(\omega T)\;dT^{2}}}=dT{\sqrt {1-\left({\frac {r\omega }{c}}\right)^{2}}}.}
So for an observer rotating at a constant distance of r from a given point in spacetime at a constant angular rate of ω between coordinate times T 1 {\displaystyle T_{1}} and T 2 {\displaystyle T_{2}} , the proper time experienced will be ∫ T 1 T 2 d τ = ( T 2 − T 1 ) 1 − ( r ω c ) 2 = Δ T 1 − v 2 / c 2 , {\displaystyle \int _{T_{1}}^{T_{2}}d\tau =(T_{2}-T_{1}){\sqrt {1-\left({\frac {r\omega }{c}}\right)^{2}}}=\Delta T{\sqrt {1-v^{2}/c^{2}}},} as v = r ω {\displaystyle v=r\omega } for a rotating observer. This result is the same as for the linear motion example, and shows the general application of the integral form of the proper time formula.
The difference between SR and general relativity (GR) is that in GR one can use any metric which is a solution of the Einstein field equations , not just the Minkowski metric. Because inertial motion in curved spacetimes lacks the simple expression it has in SR, the line integral form of the proper time equation must always be used.
An appropriate coordinate conversion done against the Minkowski metric creates coordinates where an object on a rotating disk stays in the same spatial coordinate position. The new coordinates are r = x 2 + y 2 {\displaystyle r={\sqrt {x^{2}+y^{2}}}} and θ = arctan ( y x ) − ω t . {\displaystyle \theta =\arctan \left({\frac {y}{x}}\right)-\omega t.}
The t and z coordinates remain unchanged. In this new coordinate system, the incremental proper time equation is d τ = [ 1 − ( r ω c ) 2 ] d t 2 − d r 2 c 2 − r 2 d θ 2 c 2 − d z 2 c 2 − 2 r 2 ω d t d θ c 2 . {\displaystyle d\tau ={\sqrt {\left[1-\left({\frac {r\omega }{c}}\right)^{2}\right]dt^{2}-{\frac {dr^{2}}{c^{2}}}-{\frac {r^{2}\,d\theta ^{2}}{c^{2}}}-{\frac {dz^{2}}{c^{2}}}-2{\frac {r^{2}\omega \,dt\,d\theta }{c^{2}}}}}.}
With r , θ , and z being constant over time, this simplifies to d τ = d t 1 − ( r ω c ) 2 , {\displaystyle d\tau =dt{\sqrt {1-\left({\frac {r\omega }{c}}\right)^{2}}},} which is the same as in Example 2.
Now let there be an object off of the rotating disk and at inertial rest with respect to the center of the disk and at a distance of R from it. This object has a coordinate motion described by dθ = − ω dt , which describes the inertially at-rest object as counter-rotating in the view of the rotating observer. Now the proper time equation becomes d τ = [ 1 − ( R ω c ) 2 ] d t 2 − ( R ω c ) 2 d t 2 + 2 ( R ω c ) 2 d t 2 = d t . {\displaystyle d\tau ={\sqrt {\left[1-\left({\frac {R\omega }{c}}\right)^{2}\right]dt^{2}-\left({\frac {R\omega }{c}}\right)^{2}\,dt^{2}+2\left({\frac {R\omega }{c}}\right)^{2}\,dt^{2}}}=dt.}
So for the inertial at-rest observer, coordinate time and proper time are once again found to pass at the same rate, as expected and required for the internal self-consistency of relativity theory. [ 14 ]
The Schwarzschild solution has an incremental proper time equation of d τ = ( 1 − 2 m r ) d t 2 − 1 c 2 ( 1 − 2 m r ) − 1 d r 2 − r 2 c 2 d ϕ 2 − r 2 c 2 sin 2 ( ϕ ) d θ 2 , {\displaystyle d\tau ={\sqrt {\left(1-{\frac {2m}{r}}\right)dt^{2}-{\frac {1}{c^{2}}}\left(1-{\frac {2m}{r}}\right)^{-1}dr^{2}-{\frac {r^{2}}{c^{2}}}d\phi ^{2}-{\frac {r^{2}}{c^{2}}}\sin ^{2}(\phi )\,d\theta ^{2}}},} where m = G M / c 2 {\displaystyle m=GM/c^{2}} is the geometrized mass of the gravitating body ( G being the gravitational constant and M the body's mass), r is the radial coordinate, ϕ is the colatitude, θ is the longitude, and t is the coordinate time.
To demonstrate the use of the proper time relationship, several sub-examples involving the Earth will be used here.
For the Earth , M = 5.9742 × 10 24 kg , meaning that m = 4.4354 × 10 −3 m . When standing on the north pole, we can assume d r = d θ = d ϕ = 0 {\displaystyle dr=d\theta =d\phi =0} (meaning that we are neither moving up nor down, nor along the surface of the Earth). In this case, the Schwarzschild solution proper time equation becomes d τ = d t 1 − 2 m / r {\textstyle d\tau =dt\,{\sqrt {1-2m/r}}} . Then using the polar radius of the Earth as the radial coordinate (or r = 6,356,752 metres {\displaystyle r={\text{6,356,752 metres}}} ), we find that d τ = ( 1 − 1.3908 × 10 − 9 ) d t 2 = ( 1 − 6.9540 × 10 − 10 ) d t . {\displaystyle d\tau ={\sqrt {\left(1-1.3908\times 10^{-9}\right)\;dt^{2}}}=\left(1-6.9540\times 10^{-10}\right)\,dt.}
At the equator , the radius of the Earth is r = 6 378 137 m . In addition, the rotation of the Earth needs to be taken into account. This imparts on an observer an angular velocity of d θ / d t {\displaystyle d\theta /dt} of 2 π divided by the sidereal period of the Earth's rotation, 86162.4 seconds. So d θ = 7.2923 × 10 − 5 d t {\displaystyle d\theta =7.2923\times 10^{-5}\,dt} . The proper time equation then produces d τ = ( 1 − 1.3908 × 10 − 9 ) d t 2 − 2.4069 × 10 − 12 d t 2 = ( 1 − 6.9660 × 10 − 10 ) d t . {\displaystyle d\tau ={\sqrt {\left(1-1.3908\times 10^{-9}\right)dt^{2}-2.4069\times 10^{-12}\,dt^{2}}}=\left(1-6.9660\times 10^{-10}\right)\,dt.}
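Both figures can be reproduced numerically from the Schwarzschild expression with dr = dϕ = 0 (a sketch using the constants quoted above; the printed values agree with the quoted ones only to within the rounding of those constants):

```python
import math

C = 299_792_458.0    # m/s, speed of light
M_GEOM = 4.4354e-3   # m, the Earth's geometrized mass m = GM/c^2 (as above)

def clock_rate(r, omega=0.0):
    """d(tau)/dt for a clock at Schwarzschild radial coordinate r,
    circling the axis at angular rate omega in the equatorial plane
    (dr = dphi = 0, sin(phi) = 1)."""
    return math.sqrt(1.0 - 2.0 * M_GEOM / r - (r * omega / C) ** 2)

rate_pole = clock_rate(6_356_752.0)  # polar radius; no rotation term on the axis
rate_equator = clock_rate(6_378_137.0,
                          2.0 * math.pi / 86162.4)  # sidereal rotation, seconds

print(f"pole:    1 - dtau/dt = {1.0 - rate_pole:.4e}")     # ~7.0e-10
print(f"equator: 1 - dtau/dt = {1.0 - rate_equator:.4e}")  # ~7.0e-10
```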
From a non-relativistic point of view this should have been the same as the previous result. This example demonstrates how the proper time equation is used, even though the Earth rotates and hence is not spherically symmetric as assumed by the Schwarzschild solution. To describe the effects of rotation more accurately the Kerr metric may be used. | https://en.wikipedia.org/wiki/Proper_time |
The chemical elements can be broadly divided into metals , metalloids , and nonmetals according to their shared physical and chemical properties . All elemental metals have a shiny appearance (at least when freshly polished); are good conductors of heat and electricity; form alloys with other metallic elements; and have at least one basic oxide . Metalloids are metallic-looking, often brittle solids that are either semiconductors or exist in semiconducting forms, and have amphoteric or weakly acidic oxides . Typical elemental nonmetals have a dull, coloured or colourless appearance; are often brittle when solid; are poor conductors of heat and electricity; and have acidic oxides. Most or some elements in each category share a range of other properties; a few elements have properties that are either anomalous given their category, or otherwise extraordinary.
Elemental metals appear lustrous (beneath any patina ); form compounds ( alloys ) when combined with other elements; tend to lose or share electrons when they react with other substances; and each forms at least one predominantly basic oxide.
Most metals are silvery looking, high density, relatively soft and easily deformed solids with good electrical and thermal conductivity , closely packed structures , low ionisation energies and electronegativities , and are found naturally in combined states.
Some metals appear coloured ( Cu , Cs , Au ), have low densities (e.g. Be , Al ) or very high melting points (e.g. W , Nb ), are liquids at or near room temperature (e.g. Hg , Ga ), are brittle (e.g. Os , Bi ), are not easily machined (e.g. Ti , Re ), are noble (hard to oxidise , e.g. Au , Pt ), or have nonmetallic structures ( Mn and Ga are structurally analogous to, respectively, white P and I ).
Metals comprise the large majority of the elements, and can be subdivided into several different categories. From left to right in the periodic table, these categories include the highly reactive alkali metals ; the less-reactive alkaline earth metals , lanthanides , and radioactive actinides ; the archetypal transition metals ; and the physically and chemically weak post-transition metals . Specialized subcategories such as the refractory metals and the noble metals also exist.
Metalloids are metallic-looking, often brittle solids; tend to share electrons when they react with other substances; have weakly acidic or amphoteric oxides; and are usually found naturally in combined states.
Most are semiconductors and moderate thermal conductors, and have structures that are more open than those of most metals.
Some metalloids ( As , Sb ) conduct electricity like metals.
The metalloids, as the smallest major category of elements, are not subdivided further.
Nonmetallic elements often have open structures; tend to gain or share electrons when they react with other substances; and do not form distinctly basic oxides.
Most are gases at room temperature; have relatively low densities; are poor electrical and thermal conductors; have relatively high ionisation energies and electronegativities; form acidic oxides; and are found naturally in uncombined states in large amounts.
Some nonmetals ( black P , S , and Se ) are brittle solids at room temperature (although each of these also has malleable, pliable or ductile allotropes).
From left to right in the periodic table, the nonmetals can be divided into the reactive nonmetals and the noble gases. The reactive nonmetals near the metalloids show some incipient metallic character, such as the metallic appearance of graphite, black phosphorus, selenium and iodine. The noble gases are almost completely inert.
The characteristic properties of elemental metals and nonmetals are quite distinct, as shown in the table below. Metalloids, straddling the metal-nonmetal border , are mostly distinct from either, but in a few properties resemble one or the other, as shown in the shading of the metalloid column below and summarized in the small table at the top of this section.
Authors differ in where they divide metals from nonmetals and in whether they recognize an intermediate metalloid category. Some authors count metalloids as nonmetals with weakly nonmetallic properties. [ n 1 ] Others count some of the metalloids as post-transition metals . [ n 2 ]
There were exceptions... in the periodic table, anomalies too—some of them profound. Why, for example, was manganese such a bad conductor of electricity, when the elements on either side of it were reasonably good conductors? Why was strong magnetism confined to the iron metals? And yet these exceptions, I was somehow convinced, reflected special additional mechanisms at work...
Within each category, elements can be found with one or two properties very different from the expected norm, or that are otherwise notable.
Elements with one or more anomalous or otherwise notable properties include, among the metals: sodium , potassium , rubidium , caesium , barium , platinum , gold , manganese , iron , cobalt , nickel , gadolinium , terbium , dysprosium , holmium , erbium , thulium , iridium , mercury , lead , bismuth , uranium , and plutonium ; among the metalloids: boron , silicon , arsenic , and antimony ; and among the nonmetals: hydrogen , helium , carbon , phosphorus , and iodine . | https://en.wikipedia.org/wiki/Properties_of_metals,_metalloids_and_nonmetals |
Nonmetals show more variability in their properties than do metals. [ 1 ] Metalloids are included here since they behave predominately as chemically weak nonmetals.
Physically, they nearly all exist as diatomic or monatomic gases, or polyatomic solids having more substantial (open-packed) forms and relatively small atomic radii, unlike metals, which are nearly all solid and close-packed, and mostly have larger atomic radii. If solid, they have a submetallic appearance (with the exception of sulfur) and are brittle , as opposed to metals, which are lustrous , and generally ductile or malleable ; they usually have lower densities than metals; are mostly poorer conductors of heat and electricity ; and tend to have significantly lower melting points and boiling points than those of most metals.
Chemically, the nonmetals mostly have higher ionisation energies , higher electron affinities (nitrogen and the noble gases have negative electron affinities) and higher electronegativity values [ n 1 ] than metals, noting that, in general, the higher an element's ionisation energy, electron affinity, and electronegativity, the more nonmetallic that element is. [ 2 ] Nonmetals, including (to a limited extent) xenon and probably radon, usually exist as anions or oxyanions in aqueous solution; they generally form ionic or covalent compounds when combined with metals (unlike metals, which mostly form alloys with other metals); and have acidic oxides whereas the common oxides of nearly all metals are basic .
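As an illustration of that trend, the sketch below collects ionisation energy (IE), electron affinity (EA), and electronegativity (EN) values quoted in the element profiles that follow, and ranks the elements by electronegativity as a single rough proxy for nonmetallic character:

```python
# Ionisation energy (kJ/mol), electron affinity (kJ/mol), and Pauling
# electronegativity, as quoted in the element profiles below.
properties = {
    "fluorine": (1681.0, 328.0, 3.98),
    "nitrogen": (1402.3, -6.75, 3.04),
    "sulfur":   (999.6, 200.0, 2.58),
    "carbon":   (1086.5, 122.0, 2.55),
    "hydrogen": (1312.0, 73.0, 2.20),
    "silicon":  (786.5, 134.0, 1.90),  # a metalloid, included for contrast
}

# Higher values generally indicate more nonmetallic character.
for name, (ie, ea, en) in sorted(properties.items(), key=lambda kv: -kv[1][2]):
    print(f"{name:8s}  IE = {ie:7.1f} kJ/mol  EA = {ea:6.1f} kJ/mol  EN = {en:.2f}")
```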
Hydrogen is a colourless, odourless, and comparatively unreactive diatomic gas with a density of 8.988 × 10 −5 g/cm 3 and is about 14 times lighter than air. It condenses to a colourless liquid at −252.879 °C and freezes into an ice- or snow-like solid at −259.16 °C. The solid form has a hexagonal crystalline structure and is soft and easily crushed. Hydrogen is an insulator in all of its forms. It has a high ionisation energy (1312.0 kJ/mol), moderate electron affinity (73 kJ/mol), and moderate electronegativity (2.2). Hydrogen is a poor oxidising agent (H 2 + 2 e − → 2H – = –2.25 V at pH 0). Its chemistry, most of which is based around its tendency to acquire the electron configuration of the noble gas helium, is largely covalent in nature, noting it can form ionic hydrides with highly electropositive metals, and alloy-like hydrides with some transition metals. The common oxide of hydrogen ( H 2 O ) is a neutral oxide. [ n 2 ]
Boron is a lustrous, barely reactive solid with a density of 2.34 g/cm 3 (cf. aluminium 2.70), and is hard ( MH 9.3) and brittle. It melts at 2076 °C (cf. steel ~1370 °C) and boils at 3927 °C. Boron has a complex rhombohedral crystalline structure (CN 5+). It is a semiconductor with a band gap of about 1.56 eV. Boron has a moderate ionisation energy (800.6 kJ/mol), low electron affinity (27 kJ/mol), and moderate electronegativity (2.04). Being a metalloid, most of its chemistry is nonmetallic in nature. Boron is a poor oxidizing agent (B + 3 e − → BH 3 = –0.15 V at pH 0). While it bonds covalently in nearly all of its compounds, it can form intermetallic compounds and alloys with transition metals of the composition M n B, if n > 2. The common oxide of boron ( B 2 O 3 ) is weakly acidic.
Carbon (as graphite, its most thermodynamically stable form) is a lustrous and comparatively unreactive solid with a density of 2.267 g/cm 3 , and is soft (MH 0.5) and brittle. It sublimes to vapour at 3642 °C. Carbon has a hexagonal crystalline structure (CN 3). It is a semimetal in the direction of its planes, with an electrical conductivity exceeding that of some metals, and behaves as a semiconductor in the direction perpendicular to its planes. It has a high ionisation energy (1086.5 kJ/mol), moderate electron affinity (122 kJ/mol), and high electronegativity (2.55). Carbon is a poor oxidising agent (C + 4 e − → CH 4 = 0.13 V at pH 0). Its chemistry is largely covalent in nature, noting it can form salt-like carbides with highly electropositive metals. The common oxide of carbon ( CO 2 ) is a medium-strength acidic oxide.
Silicon is a metallic-looking relatively unreactive solid with a density of 2.3290 g/cm 3 , and is hard (MH 6.5) and brittle. It melts at 1414 °C (cf. steel ~1370 °C) and boils at 3265 °C. Silicon has a diamond cubic structure (CN 4). It is a semiconductor with a band gap of about 1.11 eV. [ 3 ] Silicon has a moderate ionisation energy (786.5 kJ/mol), moderate electron affinity (134 kJ/mol), and moderate electronegativity (1.9). It is a poor oxidising agent (Si + 4 e − → SiH 4 = –0.147 V at pH 0). As a metalloid the chemistry of silicon is largely covalent in nature, noting it can form alloys with metals such as iron and copper. The common oxide of silicon ( SiO 2 ) is weakly acidic.
Germanium is a shiny, mostly unreactive grey-white solid with a density of 5.323 g/cm 3 (about two-thirds that of iron), and is hard (MH 6.0) and brittle. It melts at 938.25 °C (cf. silver 961.78 °C) and boils at 2833 °C. Germanium has a diamond cubic structure (CN 4). It is a semiconductor with a band gap of about 0.67 eV. Germanium has a moderate ionisation energy (762 kJ/mol), moderate electron affinity (119 kJ/mol), and moderate electronegativity (2.01). It is a poor oxidising agent (Ge + 4 e → GeH 4 = –0.294 at pH 0). As a metalloid the chemistry of germanium is largely covalent in nature, noting it can form alloys with metals such as aluminium and gold. Most alloys of germanium with metals lack metallic or semimetallic conductivity. The common oxide of germanium ( GeO 2 ) is amphoteric.
Nitrogen is a colourless, odourless, and relatively inert diatomic gas with a density of 1.251 × 10 −3 g/cm 3 (marginally heavier than air). It condenses to a colourless liquid at −195.795 °C and freezes into an ice- or snow-like solid at −210.00 °C. The solid form (density 0.85 g/cm 3 ; cf. lithium 0.534) has a hexagonal crystalline structure and is soft and easily crushed. Nitrogen is an insulator in all of its forms. It has a high ionisation energy (1402.3 kJ/mol), low electron affinity (–6.75 kJ/mol), and high electronegativity (3.04). The latter property manifests in the capacity of nitrogen to form usually strong hydrogen bonds, and its preference for forming complexes with metals having low electronegativities, small cationic radii, and often high charges (+3 or more). Nitrogen is a poor oxidising agent (N 2 + 6 e − → 2NH 3 = −0.057 V at pH 0). Only when it is in a positive oxidation state, that is, in combination with oxygen or fluorine, are its compounds good oxidising agents, for example, 2NO 3 − → N 2 = 1.25 V. Its chemistry is largely covalent in nature; anion formation is energetically unfavourable owing to strong inter-electron repulsions associated with having three unpaired electrons in its outer valence shell, hence its negative electron affinity. The common oxide of nitrogen ( NO ) is weakly acidic. Many compounds of nitrogen are less stable than diatomic nitrogen, so nitrogen atoms in compounds seek to recombine if possible and release energy and nitrogen gas in the process, which can be leveraged for explosive purposes.
Phosphorus, in its most thermodynamically stable black form, is a lustrous and comparatively unreactive solid with a density of 2.69 g/cm 3 , and is soft (MH 2.0) and flaky. It sublimes at 620 °C. Black phosphorus has an orthorhombic crystalline structure (CN 3). It is a semiconductor with a band gap of 0.3 eV. It has a moderate ionisation energy (1011.8 kJ/mol), moderate electron affinity (72 kJ/mol), and moderate electronegativity (2.19). In comparison to nitrogen, phosphorus usually forms weak hydrogen bonds, and prefers to form complexes with metals having high electronegativities, large cationic radii, and often low charges (usually +1 or +2). Phosphorus is a poor oxidising agent (P 4 + 3 e − → PH 3 = −0.046 V at pH 0 for the white form, −0.088 V for the red). Its chemistry is largely covalent in nature, noting it can form salt-like phosphides with highly electropositive metals. Compared to nitrogen, electrons have more space on phosphorus, which lowers their mutual repulsion and results in anion formation requiring less energy. The common oxide of phosphorus ( P 2 O 5 ) is a medium-strength acidic oxide.
When assessing periodicity in the properties of the elements it needs to be borne in mind that the quoted properties of phosphorus tend to be those of its least stable white form rather than, as is the case with all other elements, the most stable form. White phosphorus is the most common, industrially important, and easily reproducible allotrope. For those reasons it is the standard state of the element. Paradoxically, it is also thermodynamically the least stable, as well as the most volatile and reactive form. It gradually changes to red phosphorus. This transformation is accelerated by light and heat, and samples of white phosphorus almost always contain some red phosphorus and, accordingly, appear yellow. For this reason, white phosphorus that is aged or otherwise impure is also called yellow phosphorus. When exposed to oxygen, white phosphorus glows in the dark with a very faint tinge of green and blue. It is highly flammable and pyrophoric (self-igniting) upon contact with air. White phosphorus has a density of 1.823 g/cm 3 , is soft (MH 0.5) as wax, pliable and can be cut with a knife. It melts at 44.15 °C and, if heated rapidly, boils at 280.5 °C; it otherwise remains solid and transforms to violet phosphorus at 550 °C. It has a body-centred cubic structure, analogous to that of manganese, with unit cell comprising 58 P 4 molecules. It is an insulator with a band gap of about 3.7 eV.
Arsenic is a grey, metallic looking solid which is stable in dry air but develops a golden bronze patina in moist air, which blackens on further exposure. It has a density of 5.727 g/cm 3 , and is brittle and moderately hard (MH 3.5; more than aluminium; less than iron). Arsenic sublimes at 615 °C. It has a rhombohedral polyatomic crystalline structure (CN 3). Arsenic is a semimetal, with an electrical conductivity of around 3.9 × 10 4 S•cm −1 and a band overlap of 0.5 eV. It has a moderate ionisation energy (947 kJ/mol), moderate electron affinity (79 kJ/mol), and moderate electronegativity (2.18). Arsenic is a poor oxidising agent (As + 3e → AsH 3 = –0.22 at pH 0). As a metalloid, its chemistry is largely covalent in nature, noting it can form brittle alloys with metals, and has an extensive organometallic chemistry. Most alloys of arsenic with metals lack metallic or semimetallic conductivity. The common oxide of arsenic ( As 2 O 3 ) is acidic but weakly amphoteric.
Antimony is a silver-white solid with a blue tint and a brilliant lustre. It is stable in air and moisture at room temperature. Antimony has a density of 6.697 g/cm 3 , and is moderately hard (MH 3.0; about the same as copper). It has a rhombohedral crystalline structure (CN 3). Antimony melts at 630.63 °C and boils at 1635 °C. It is a semimetal, with an electrical conductivity of around 3.1 × 10 4 S•cm −1 and a band overlap of 0.16 eV. Antimony has a moderate ionisation energy (834 kJ/mol), moderate electron affinity (101 kJ/mol), and moderate electronegativity (2.05). It is a poor oxidising agent (Sb + 3e → SbH 3 = –0.51 at pH 0). As a metalloid, its chemistry is largely covalent in nature, noting it can form alloys with one or more metals such as aluminium, iron, nickel , copper, zinc, tin, lead and bismuth, and has an extensive organometallic chemistry. Most alloys of antimony with metals have metallic or semimetallic conductivity. The common oxide of antimony ( Sb 2 O 3 ) is amphoteric.
In the United States alone, more than $10 billion is lost each year to corrosion...Much of this corrosion is the rusting of iron and steel...The oxidizing agent causing all of this corrosion is usually oxygen.
Oxygen is a colourless, odourless, and unpredictably reactive diatomic gas with a gaseous density of 1.429 × 10 −3 g/cm 3 (marginally heavier than air). It is generally unreactive at room temperature. Thus, sodium metal will "retain its metallic lustre for days in the presence of absolutely dry air and can even be melted (m.p. 97.82 °C) in the presence of dry oxygen without igniting". [ 5 ] On the other hand, oxygen can react with many inorganic and organic compounds either spontaneously or under the right conditions [ 6 ] (such as a flame or a spark). It condenses to a pale blue liquid at −182.962 °C and freezes into a light blue solid at −218.79 °C. The solid form (density 0.0763 g/cm 3 ) has a cubic crystalline structure and is soft and easily crushed. Oxygen is an insulator in all of its forms. It has a high ionisation energy (1313.9 kJ/mol), moderately high electron affinity (141 kJ/mol), and high electronegativity (3.44). Oxygen is a strong oxidising agent (O 2 + 4 e → 2H 2 O = 1.23 V at pH 0). Metal oxides are largely ionic in nature. [ 7 ]
Sulfur is a bright-yellow, moderately reactive [ 8 ] solid. It has a density of 2.07 g/cm 3 and is soft (MH 2.0) and brittle. It melts to a light yellow liquid at 95.3 °C and boils at 444.6 °C. Sulfur has an abundance on earth one-tenth that of oxygen. It has an orthorhombic polyatomic (CN 2) crystalline structure. Sulfur is an insulator with a band gap of 2.6 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. It has a moderate ionisation energy (999.6 kJ/mol), high electron affinity (200 kJ/mol), and high electronegativity (2.58). Sulfur is a poor oxidising agent (⅛ S 8 + 2 H + + 2 e − → H 2 S = 0.14 V at pH 0). The chemistry of sulfur is largely covalent in nature, noting it can form ionic sulfides with highly electropositive metals. The common oxide of sulfur (SO 3 ) is strongly acidic.
Selenium is a metallic-looking, moderately reactive [ 8 ] solid with a density of 4.81 g/cm 3 ; it is soft (MH 2.0) and brittle. It melts at 221 °C to a black liquid and boils at 685 °C to a dark yellow vapour. Selenium has a hexagonal polyatomic (CN 2) crystalline structure. It is a semiconductor with a band gap of 1.7 eV, and a photoconductor, meaning its electrical conductivity increases a million-fold when illuminated. Selenium has a moderate ionisation energy (941.0 kJ/mol), high electron affinity (195 kJ/mol), and high electronegativity (2.55). It is a poor oxidising agent (Se + 2 H + + 2 e − → H 2 Se = −0.082 V at pH 0). The chemistry of selenium is largely covalent in nature, noting it can form ionic selenides with highly electropositive metals. The common oxide of selenium (SeO 3 ) is strongly acidic.
Tellurium is a silvery-white, moderately reactive, [ 8 ] shiny solid that has a density of 6.24 g/cm 3 and is soft (MH 2.25) and brittle. It is the softest of the commonly recognised metalloids. Tellurium reacts with boiling water, or when freshly precipitated even at 50 °C, to give the dioxide and hydrogen: Te + 2 H 2 O → TeO 2 + 2 H 2 . It has a melting point of 450 °C and a boiling point of 988 °C. Tellurium has a polyatomic (CN 2) hexagonal crystalline structure. It is a semiconductor with a band gap of 0.32 to 0.38 eV. Tellurium has a moderate ionisation energy (869.3 kJ/mol), high electron affinity (190 kJ/mol), and moderate electronegativity (2.1). It is a poor oxidising agent (Te + 2 H + + 2 e − → H 2 Te = −0.45 V at pH 0). The chemistry of tellurium is largely covalent in nature, noting it has an extensive organometallic chemistry and that many tellurides can be regarded as metallic alloys. The common oxide of tellurium (TeO 2 ) is amphoteric.
Fluorine is an extremely toxic and reactive pale yellow diatomic gas that, with a gaseous density of 1.696 × 10 −3 g/cm 3 , is about 40% heavier than air. Its extreme reactivity is such that it was not isolated (via electrolysis) until 1886 and was not isolated chemically until 1986. Its occurrence in an uncombined state in nature was first reported in 2012, but is contentious. Fluorine condenses to a pale yellow liquid at −188.11 °C and freezes into a colourless solid [ 5 ] at −219.67 °C. The solid form (density 1.7 g/cm 3 ) has a cubic crystalline structure and is soft and easily crushed. Fluorine is an insulator in all of its forms. It has a high ionisation energy (1681 kJ/mol), high electron affinity (328 kJ/mol), and high electronegativity (3.98). Fluorine is a powerful oxidising agent (F 2 + 2 H + + 2 e − → 2HF = 2.87 V at pH 0); "even water, in the form of steam, will catch fire in an atmosphere of fluorine". [ 9 ] Metal fluorides are generally ionic in nature.
Chlorine is an irritating green-yellow diatomic gas that is extremely reactive, and has a gaseous density of 3.2 × 10 −3 g/cm 3 (about 2.5 times heavier than air). It condenses at −34.04 °C to an amber-coloured liquid and freezes at −101.5 °C into a yellow crystalline solid. The solid form (density 1.9 g/cm 3 ) has an orthorhombic crystalline structure and is soft and easily crushed. Chlorine is an insulator in all of its forms. It has a high ionisation energy (1251.2 kJ/mol), high electron affinity (349 kJ/mol; higher than fluorine), and high electronegativity (3.16). Chlorine is a strong oxidising agent (Cl 2 + 2 H + + 2 e − → 2HCl = 1.36 V at pH 0). Metal chlorides are largely ionic in nature. The common oxide of chlorine (Cl 2 O 7 ) is strongly acidic.
Bromine is a deep brown diatomic liquid that is quite reactive, and has a liquid density of 3.1028 g/cm 3 . It boils at 58.8 °C and solidifies at −7.3 °C to an orange crystalline solid (density 4.05 g/cm 3 ). It is the only element, apart from mercury, known to be a liquid at room temperature. The solid form, like chlorine, has an orthorhombic crystalline structure and is soft and easily crushed. Bromine is an insulator in all of its forms. It has a high ionisation energy (1139.9 kJ/mol), high electron affinity (324 kJ/mol), and high electronegativity (2.96). Bromine is a strong oxidising agent (Br 2 + 2 H + + 2 e − → 2HBr = 1.07 V at pH 0). Metal bromides are largely ionic in nature. The unstable common oxide of bromine (Br 2 O 5 ) is strongly acidic.
Iodine, the rarest of the nonmetallic halogens, is a metallic-looking solid that is moderately reactive, and has a density of 4.933 g/cm 3 . It melts at 113.7 °C to a brown liquid and boils at 184.3 °C to a violet-coloured vapour. It has an orthorhombic crystalline structure with a flaky habit. Iodine is a semiconductor in the direction of its planes, with a band gap of about 1.3 eV and a conductivity of 1.7 × 10 −8 S•cm −1 at room temperature. This is higher than selenium but lower than boron, the least electrically conducting of the recognised metalloids. Iodine is an insulator in the direction perpendicular to its planes. It has a high ionisation energy (1008.4 kJ/mol), high electron affinity (295 kJ/mol), and high electronegativity (2.66). Iodine is a moderately strong oxidising agent (I 2 + 2 e → 2I − = 0.53 V at pH 0). Metal iodides are predominantly ionic in nature. The only stable oxide of iodine (I 2 O 5 ) is strongly acidic.
Astatine is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Astatine is sometimes described as probably being a black solid (assuming it follows the trend of increasingly dark colours seen going down the halogen group), or as having a metallic appearance. Astatine is predicted to be a semiconductor, with a band gap of about 0.7 eV. It has a moderate ionisation energy (900 kJ/mol), high electron affinity (233 kJ/mol), and moderate electronegativity (2.2). Astatine is a moderately weak oxidising agent (At 2 + 2 e → 2At − = 0.3 V at pH 0).
Helium has a density of 1.785 × 10 −4 g/cm 3 (cf. air 1.225 × 10 −3 g/cm 3 ), liquefies at −268.928 °C, and cannot be solidified at normal pressure. It has the lowest boiling point of all of the elements. Liquid helium exhibits superfluidity and near-zero viscosity; its thermal conductivity is greater than that of any other known substance (more than 1,000 times that of copper). Helium can only be solidified at −272.20 °C under a pressure of 2.5 MPa. It has a very high ionisation energy (2372.3 kJ/mol), low electron affinity (estimated at −50 kJ/mol), and high electronegativity (4.16 χSpec). No normal compounds of helium have so far been synthesised.
Neon has a density of 9.002 × 10 −4 g/cm 3 , liquefies at −245.95 °C, and solidifies at −248.45 °C. It has the narrowest liquid range of any element and, in liquid form, has over 40 times the refrigerating capacity of liquid helium and three times that of liquid hydrogen. Neon has a very high ionisation energy (2080.7 kJ/mol), low electron affinity (estimated at −120 kJ/mol), and very high electronegativity (4.787 χSpec). It is the least reactive of the noble gases; no normal compounds of neon have so far been synthesised.
Argon has a density of 1.784 × 10 −3 g/cm 3 , liquefies at −185.848 °C, and solidifies at −189.34 °C. Although non-toxic, it is 38% denser than air and is therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because (like all the noble gases) it is colourless, odourless, and tasteless. Argon has a high ionisation energy (1520.6 kJ/mol), low electron affinity (estimated at −96 kJ/mol), and high electronegativity (3.242 χSpec). One interstitial compound of argon , Ar 1 C 60 , is a stable solid at room temperature.
Krypton has a density of 3.749 × 10 −3 g/cm 3 , liquefies at −153.415 °C, and solidifies at −157.37 °C. It has a high ionisation energy (1350.8 kJ/mol), low electron affinity (estimated at −60 kJ/mol), and high electronegativity (2.966 χSpec). Krypton can be reacted with fluorine to form the difluoride, KrF 2 . The reaction of KrF 2 with B(OTeF 5 ) 3 produces an unstable compound, Kr(OTeF 5 ) 2 , that contains a krypton–oxygen bond.
Xenon has a density of 5.894 × 10 −3 g/cm 3 , liquefies at −108.1 °C, and solidifies at −111.75 °C. It is non-toxic , and belongs to a select group of substances that penetrate the blood–brain barrier , causing mild to full surgical anesthesia when inhaled in high concentrations with oxygen. Xenon has a high ionisation energy (1170.4 kJ/mol), low electron affinity (estimated at −80 kJ/mol), and high electronegativity (2.582 χSpec). It forms a relatively large number of compounds , mostly containing fluorine or oxygen. An unusual ion containing xenon is the tetraxenonogold(II) cation, AuXe 2+ 4 , which contains Xe–Au bonds. This ion occurs in the compound AuXe 4 (Sb 2 F 11 ) 2 , and is remarkable in having direct chemical bonds between two notoriously unreactive atoms, xenon and gold , with xenon acting as a transition metal ligand. The compound Xe 2 Sb 4 F 21 contains a Xe–Xe bond, the longest element–element bond known (308.71 pm = 3.0871 Å ). The most common oxide of xenon ( XeO 3 ) is strongly acidic.
Radon, which is radioactive, has a density of 9.73 × 10 −3 g/cm 3 , liquefies at −61.7 °C, and solidifies at −71 °C. It has a high ionisation energy (1037 kJ/mol), low electron affinity (estimated at −70 kJ/mol), and a high electronegativity (2.60 χSpec). The only confirmed compounds of radon, which is the rarest of the naturally occurring noble gases, are the difluoride RnF 2 , and the trioxide, RnO 3 . It has been reported that radon is capable of forming a simple Rn 2+ cation in halogen fluoride solution, which is highly unusual behaviour for a nonmetal, and a noble gas at that. Radon trioxide (RnO 3 ) is expected to be acidic.
Oganesson, the heaviest element on the periodic table, has only recently been synthesized. Owing to its short half-life, its chemical properties have not yet been investigated. Due to the significant relativistic destabilisation of the 7p 3/2 orbitals, it is expected to be significantly reactive and behave more similarly to the group 14 elements, as it effectively has four valence electrons outside a pseudo-noble gas core. Its predicted melting and boiling points are 52±15 °C and 177±10 °C respectively, so that it is probably neither noble nor a gas; it is expected to have a density of about 6.6–7.4 g/cm 3 around room temperature. It is expected to have a barely positive electron affinity (estimated as 5 kJ/mol) and a moderate ionisation energy of about 860 kJ/mol, which is rather low for a nonmetal and close to those of tellurium and astatine. The oganesson fluorides OgF 2 and OgF 4 are expected to show significant ionic character, suggesting that oganesson may have at least incipient metallic properties. The oxides of oganesson, OgO and OgO 2 , are predicted to be amphoteric. | https://en.wikipedia.org/wiki/Properties_of_nonmetals_(and_metalloids)_by_group |
Water ( H 2 O ) is a polar inorganic compound that is, at room temperature, a tasteless and odorless liquid , nearly colorless apart from an inherent hint of blue . It is by far the most studied chemical compound [ 20 ] and is described as the "universal solvent " [ 21 ] and the "solvent of life". [ 22 ] It is the most abundant substance on the surface of Earth [ 23 ] and the only common substance to exist as a solid , liquid, and gas on Earth's surface. [ 24 ] It is also the third most abundant molecule in the universe (behind molecular hydrogen and carbon monoxide ). [ 23 ]
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows water to dissociate ions in salts and bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass , and a high heat capacity .
Water is amphoteric , meaning that it can exhibit properties of an acid or a base , depending on the pH of the solution that it is in; it readily produces both H + and OH − ions. [ c ] Related to its amphoteric character, it undergoes self-ionization . The product of the activities , or approximately, the concentrations of H + and OH − is a constant, so their respective concentrations are inversely proportional to each other. [ 25 ]
Water is the chemical substance with chemical formula H 2 O ; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom. [ 26 ] Water is a tasteless, odorless liquid at ambient temperature and pressure . Liquid water has weak absorption bands at wavelengths of around 750 nm which cause it to appear to have a blue color. [ 4 ] This can easily be observed in a water-filled bath or wash-basin whose lining is white. Large ice crystals, as in glaciers , also appear blue.
Under standard conditions , water is primarily a liquid, unlike other analogous hydrides of the oxygen family , which are generally gaseous. This unique property of water is due to hydrogen bonding . The molecules of water are constantly moving with respect to one another, and the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds (2 × 10 −13 seconds). [ 27 ] However, these bonds are strong enough to create many of the peculiar properties of water, some of which make it integral to life.
Within the Earth's atmosphere and surface, the liquid phase is the most common and is the form that is generally denoted by the word "water". The solid phase of water is known as ice and commonly takes the structure of hard, amalgamated crystals , such as ice cubes , or loosely accumulated granular crystals, like snow . Aside from common hexagonal crystalline ice , other crystalline and amorphous phases of ice are known. The gaseous phase of water is known as water vapor (or steam ). Visible steam and clouds are formed from minute droplets of water suspended in the air.
Water also forms a supercritical fluid . The critical temperature is 647 K and the critical pressure is 22.064 MPa . In nature, supercritical water occurs only rarely, in extremely hostile conditions. A likely example of naturally occurring supercritical water is in the hottest parts of deep water hydrothermal vents , in which water is heated to the critical temperature by volcanic plumes and the critical pressure is caused by the weight of the ocean at the extreme depths where the vents are located. This pressure is reached at a depth of about 2200 meters: much less than the mean depth of the ocean (3800 meters). [ 28 ]
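As a rough plausibility check on the quoted depth, the hydrostatic pressure P = ρgh can be computed directly; the seawater density used below (about 1025 kg/m³) is a typical literature value assumed here, not a figure from this article:

```python
# Hydrostatic pressure at the depth quoted for supercritical vents.
# Assumes a constant seawater density of ~1025 kg/m^3 (a typical value;
# real density varies with depth, salinity, and temperature).
rho = 1025.0      # kg/m^3, assumed seawater density
g = 9.81          # m/s^2, gravitational acceleration
depth = 2200.0    # m, depth quoted in the text

pressure = rho * g * depth          # Pa
print(f"{pressure / 1e6:.1f} MPa")  # ~22.1 MPa, close to the 22.064 MPa critical pressure
```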
Water has a very high specific heat capacity of 4184 J/(kg·K) at 20 °C (4182 J/(kg·K) at 25 °C)—the second-highest among all the heteroatomic species (after ammonia ), as well as a high heat of vaporization (40.65 kJ/mol or 2257 kJ/kg at the normal boiling point), both of which are a result of the extensive hydrogen bonding between its molecules. These unusual properties allow water to moderate Earth's climate by buffering large fluctuations in temperature. Most of the additional energy stored in the climate system since 1970 has accumulated in the oceans . [ 29 ]
The specific enthalpy of fusion (more commonly known as latent heat) of water is 333.55 kJ/kg at 0 °C: the same amount of energy is required to melt ice as to warm ice from −160 °C up to its melting point or to heat the same amount of water by about 80 °C. Of common substances, only that of ammonia is higher. This property confers resistance to melting on the ice of glaciers and drift ice . Before and since the advent of mechanical refrigeration , ice was and still is in common use for retarding food spoilage.
The specific heat capacity of ice at −10 °C is 2030 J/(kg·K) [ 30 ] and the heat capacity of steam at 100 °C is 2080 J/(kg·K). [ 31 ]
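The comparisons in the two preceding paragraphs can be checked against each other using only the figures quoted there, plus the molar mass of water (about 18.015 g/mol, a standard value assumed here); treating the heat capacities as constant over each temperature range is a simplification:

```python
# Consistency checks for the latent-heat comparison, using values from the text.
L_fusion = 333.55e3   # J/kg, specific enthalpy of fusion at 0 °C
c_ice = 2030.0        # J/(kg*K), quoted at -10 °C; treated as constant here
c_water = 4184.0      # J/(kg*K), quoted at 20 °C

print(c_ice * 160)    # warm ice from -160 °C to 0 °C: ~324,800 J/kg
print(c_water * 80)   # warm liquid water by 80 °C:    ~334,720 J/kg
print(L_fusion)       # melt ice at 0 °C:               333,550 J/kg
# All three agree to within a few per cent, as the text states.

# The molar and specific heats of vaporization quoted earlier also agree:
print(40.65e3 / 0.018015)  # ~2.256e6 J/kg, i.e. ~2257 kJ/kg
```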
The density of water is about 1 gram per cubic centimetre (62 lb/cu ft): this relationship was originally used to define the gram. [ 32 ] The density varies with temperature, but not linearly: as the temperature increases, the density rises to a peak at 3.98 °C (39.16 °F) and then decreases; [ 33 ] the initial increase is unusual because most liquids undergo thermal expansion so that the density only decreases as a function of temperature. The increase observed for water from 0 °C (32 °F) to 3.98 °C (39.16 °F) and for a few other liquids [ d ] is described as negative thermal expansion . Regular, hexagonal ice is also less dense than liquid water—upon freezing, the density of water decreases by about 9%. [ 36 ] [ e ]
These peculiar effects are due to the highly directional bonding of water molecules via the hydrogen bonds: ice and liquid water at low temperature have comparatively low-density, low-energy open lattice structures. The breaking of hydrogen bonds on melting with increasing temperature in the range 0–4 °C allows for a denser molecular packing in which some of the lattice cavities are filled by water molecules. [ 33 ] [ 37 ] Above 4 °C, however, thermal expansion becomes the dominant effect, [ 37 ] and water near the boiling point (100 °C) is about 4% less dense than water at 4 °C (39 °F). [ 36 ] [ f ]
Under increasing pressure, ice undergoes a number of transitions to other polymorphs with higher density than liquid water, such as ice II , ice III , high-density amorphous ice (HDA), and very-high-density amorphous ice (VHDA). [ 38 ] [ 39 ]
The unusual density curve and the lower density of ice than of water are essential for much of the life on earth: if water were most dense at the freezing point, then in winter the cooling at the surface would lead to convective mixing. Once 0 °C was reached, the water body would freeze from the bottom up, and all life in it would be killed. [ 36 ] Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer. [ 36 ] As it is, the inversion of the density curve leads to stable layering for surface temperatures below 4 °C; with the layer of ice that floats on top insulating the water below, [ 40 ] even Lake Baikal in central Siberia, for example, freezes only to about 1 m thickness in winter. In general, for deep enough lakes, the temperature at the bottom stays constant at about 4 °C (39 °F) throughout the year. [ 36 ]
The density of saltwater depends on the dissolved salt content as well as the temperature. Ice still floats in the oceans; otherwise, they would freeze from the bottom up. However, the salt content of oceans lowers the freezing point by about 1.9 °C [ 41 ] (due to freezing-point depression of a solvent containing a solute ) and lowers the temperature of the density maximum of water to the former freezing point at 0 °C. This is why, in ocean water, the downward convection of colder water is not blocked by an expansion of water as it becomes colder near the freezing point. The oceans' cold water near the freezing point continues to sink. For this reason, creatures that live at the bottom of cold oceans like the Arctic Ocean generally live in water 4 °C colder than creatures at the bottom of frozen-over fresh water lakes and rivers.
As the surface of saltwater begins to freeze (at −1.9 °C [ 41 ] for normal salinity seawater , 3.5%) the ice that forms is essentially salt-free, with about the same density as freshwater ice. This ice floats on the surface, and the salt that is "frozen out" adds to the salinity and density of the seawater just below it, in a process known as brine rejection . This denser saltwater sinks by convection and the replacing seawater is subject to the same process. This produces essentially freshwater ice at −1.9 °C [ 41 ] on the surface. The increased density of the seawater beneath the forming ice causes it to sink towards the bottom. On a large scale, the process of brine rejection and sinking cold salty water results in ocean currents forming to transport such water away from the Poles, leading to a global system of currents called the thermohaline circulation .
Water is miscible with many liquids, including ethanol in all proportions. Water and most oils are immiscible, usually forming layers ordered by increasing density from the top down. This can be predicted by comparing the polarity : water, being a relatively polar compound, will tend to be miscible with liquids of high polarity such as ethanol and acetone , whereas compounds with low polarity, such as hydrocarbons , will tend to be immiscible and poorly soluble.
As a gas, water vapor is completely miscible with air. On the other hand, the maximum water vapor pressure that is thermodynamically stable with the liquid (or solid) at a given temperature is relatively low compared with total atmospheric pressure. For example, if the vapor's partial pressure is 2% of atmospheric pressure and the air is cooled from 25 °C, starting at about 22 °C, water will start to condense, defining the dew point , and creating fog or dew . The reverse process accounts for the fog burning off in the morning. If the humidity is increased at room temperature, for example, by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the pressure for phase change and then condenses out as minute water droplets, commonly referred to as steam.
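A dew point like the one described above can be estimated with the Magnus approximation for the saturation vapour pressure; the coefficients below are one commonly used published set (an assumption, not taken from this article), so the sketch is illustrative rather than authoritative:

```python
import math

# Magnus approximation for the saturation vapour pressure over liquid water.
# The coefficients (6.112 hPa, 17.62, 243.12 °C) are one commonly used set,
# valid roughly from -45 °C to 60 °C; they are an assumption here.
def saturation_pressure_hpa(t_celsius: float) -> float:
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def dew_point(t_celsius: float, relative_humidity: float) -> float:
    """Temperature at which air with the given RH (0-1) becomes saturated."""
    e = relative_humidity * saturation_pressure_hpa(t_celsius)  # actual vapour pressure
    x = math.log(e / 6.112)
    return 243.12 * x / (17.62 - x)

print(dew_point(25.0, 0.50))  # ~13.9 °C for air at 25 °C and 50% relative humidity
```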
A gas is saturated, or at 100% relative humidity, when the vapor pressure of water in the air is in equilibrium with the vapor pressure due to (liquid) water; water (or ice, if cool enough) will fail to lose mass through evaporation when exposed to saturated air. Because the amount of water vapor in the air is small, relative humidity, the ratio of the partial pressure due to the water vapor to the saturated partial vapor pressure, is much more useful than the absolute amount. Water vapor pressure above 100% relative humidity is called supersaturation and can occur if the air is rapidly cooled, for example, by rising suddenly in an updraft. [ g ]
The compressibility of water is a function of pressure and temperature. At 0 °C, at the limit of zero pressure, the compressibility is 5.1 × 10 −10 Pa −1 . At the zero-pressure limit, the compressibility reaches a minimum of 4.4 × 10 −10 Pa −1 around 45 °C before increasing again with increasing temperature. As the pressure is increased, the compressibility decreases, being 3.9 × 10 −10 Pa −1 at 0 °C and 100 megapascals (1,000 bar). [ 42 ]
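For reference, the isothermal compressibility being quoted here is the fractional decrease in volume per unit increase in pressure at constant temperature:

```latex
\beta_T \;=\; -\,\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T},
\qquad
\beta_T\bigl(0\,^{\circ}\mathrm{C},\ P \to 0\bigr) \approx 5.1 \times 10^{-10}~\mathrm{Pa}^{-1}
```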
The bulk modulus of water is about 2.2 GPa. [ 43 ] The low compressibility of non-gases, and of water in particular, leads to their often being assumed incompressible. The low compressibility of water means that even in the deep oceans at 4 kilometres (2.5 mi) depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume. [ 43 ]
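The 1.8% figure follows directly from the bulk modulus, assuming (reasonably, at such small strains) that the modulus is constant over the pressure range:

```python
# Fractional volume change at deep-ocean pressure, from the bulk modulus.
# Assumes K is constant over the range, adequate at this small strain.
K = 2.2e9          # Pa, bulk modulus of water (from the text)
delta_P = 40e6     # Pa, pressure at ~4 km depth (from the text)

print(delta_P / K)  # ~0.018, i.e. the ~1.8% volume decrease quoted
```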
The bulk modulus of water ice ranges from 11.3 GPa at 0 K down to 8.6 GPa at 273 K. [ 44 ] The large change in the compressibility of ice as a function of temperature is the result of its relatively large thermal expansion coefficient compared to other common solids.
The temperature and pressure at which ordinary solid, liquid, and gaseous water coexist in equilibrium is a triple point of water. From 1954 to 2019, this point was used to define the base unit of temperature, the kelvin , [ 45 ] [ 46 ] but the kelvin is now defined using the Boltzmann constant rather than the triple point of water. [ 47 ]
Due to the existence of many polymorphs (forms) of ice, water has other triple points, which have either three polymorphs of ice or two polymorphs of ice and liquid in equilibrium. [ 46 ] Gustav Heinrich Johann Apollon Tammann in Göttingen produced data on several other triple points in the early 20th century. Kamb and others documented further triple points in the 1960s. [ 48 ] [ 49 ] [ 50 ]
The melting point of ice is 0 °C (32 °F; 273 K) at standard pressure; however, pure liquid water can be supercooled well below that temperature without freezing if the liquid is not mechanically disturbed. It can remain in a fluid state down to its homogeneous nucleation point of about 231 K (−42 °C; −44 °F). [ 52 ] The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, by 0.0073 °C (0.0131 °F)/atm [ h ] or about 0.5 °C (0.90 °F)/70 atm [ i ] [ 53 ] as the stabilization energy of hydrogen bonding is exceeded by intermolecular repulsion, but as ice transforms into its polymorphs (see crystalline states of ice ) above 209.9 MPa (2,072 atm), the melting point increases markedly with pressure , i.e., reaching 355 K (82 °C) at 2.216 GPa (21,870 atm) (triple point of Ice VII [ 54 ] ).
Pure water containing no exogenous ions is an excellent electronic insulator , but not even "deionized" water is completely free of ions. Water undergoes autoionization in the liquid state when two water molecules form one hydroxide anion ( OH − ) and one hydronium cation ( H 3 O + ). Because of autoionization, at ambient temperatures pure liquid water has an intrinsic charge carrier concentration similar to that of the semiconductor germanium, and three orders of magnitude greater than that of the semiconductor silicon; hence, based on charge carrier concentration, water cannot be considered a completely dielectric material or electrical insulator, but rather a limited conductor of ionic charge. [ 55 ]
Because water is such a good solvent, it almost always has some solute dissolved in it, often a salt . If water has even a tiny amount of such an impurity, then the ions can carry charges back and forth, allowing the water to conduct electricity far more readily.
It is known that the theoretical maximum electrical resistivity for water is approximately 18.2 MΩ·cm (182 kΩ ·m) at 25 °C. [ 56 ] This figure agrees well with what is typically seen on reverse osmosis , ultra-filtered and deionized ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid contaminant level exceeding even 100 parts per trillion (ppt) in otherwise ultra-pure water begins to noticeably lower its resistivity by up to several kΩ·m. [ citation needed ]
In pure water, sensitive equipment can detect a very slight electrical conductivity of 0.05501 ± 0.0001 μS / cm at 25.00 °C. [ 56 ] Water can also be electrolyzed into oxygen and hydrogen gases but in the absence of dissolved ions this is a very slow process, as very little current is conducted. In ice, the primary charge carriers are protons (see proton conductor ). [ 57 ] Ice was previously thought to have a small but measurable conductivity of 1 × 10 −10 S/cm, but this conductivity is now thought to be almost entirely from surface defects, and without those, ice is an insulator with an immeasurably small conductivity. [ 33 ]
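The conductivity quoted here and the maximum resistivity quoted above are reciprocals of one another, which makes for a quick consistency check:

```python
# The theoretical conductivity and resistivity of pure water at 25 °C
# should be reciprocals; both values are taken from the text.
sigma = 0.05501e-6          # S/cm, measured conductivity of pure water
rho = 1.0 / sigma           # ohm*cm, corresponding resistivity

print(rho / 1e6)            # ~18.18 Mohm*cm, matching the quoted 18.2 Mohm*cm
```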
An important feature of water is its polar nature. The water molecule has a bent geometry, with the two hydrogen atoms extending from the oxygen vertex. The oxygen atom also has two lone pairs of electrons. One effect usually ascribed to the lone pairs is that the H–O–H gas-phase bend angle is 104.48°, [ 58 ] which is smaller than the typical tetrahedral angle of 109.47°. The lone pairs are closer to the oxygen atom than the electrons sigma bonded to the hydrogens, so they require more space. The increased repulsion of the lone pairs forces the O–H bonds closer to each other. [ 59 ]
Another consequence of its structure is that water is a polar molecule . Due to the difference in electronegativity , a bond dipole moment points from each H to the O, making the oxygen partially negative and each hydrogen partially positive. A large molecular dipole points from a region between the two hydrogen atoms to the oxygen atom. The charge differences cause water molecules to aggregate (the relatively positive areas being attracted to the relatively negative areas). This attraction, hydrogen bonding , explains many of the properties of water, such as its solvent properties. [ 60 ]
Although hydrogen bonding is a relatively weak attraction compared to the covalent bonds within the water molecule itself, it is responsible for several of water's physical properties. These properties include its relatively high melting and boiling point temperatures: more energy is required to break the hydrogen bonds between water molecules. In contrast, hydrogen sulfide ( H 2 S ) has much weaker hydrogen bonding due to sulfur's lower electronegativity, and is a gas at room temperature despite having nearly twice the molar mass of water. The extra bonding between water molecules also gives liquid water a large specific heat capacity . This high heat capacity makes water a good heat storage medium (coolant) and heat shield.
Water molecules stay close to each other ( cohesion ), due to the collective action of hydrogen bonds between water molecules. These hydrogen bonds are constantly breaking, with new bonds being formed with different water molecules; but at any given time in a sample of liquid water, a large portion of the molecules are held together by such bonds. [ 61 ]
Water also has high adhesion properties because of its polar nature. On clean, smooth glass the water may form a thin film because the molecular forces between glass and water molecules (adhesive forces) are stronger than the cohesive forces. [ citation needed ] In biological cells and organelles , water is in contact with membrane and protein surfaces that are hydrophilic ; that is, surfaces that have a strong attraction to water. Irving Langmuir observed a strong repulsive force between hydrophilic surfaces. To dehydrate hydrophilic surfaces—to remove the strongly held layers of water of hydration—requires doing substantial work against these forces, called hydration forces. These forces are very large but decrease rapidly over a nanometer or less. [ 62 ] They are important in biology, particularly when cells are dehydrated by exposure to dry atmospheres or to extracellular freezing. [ 63 ]
Water has an unusually high surface tension of 71.99 mN/m at 25 °C [ 64 ] which is caused by the strength of the hydrogen bonding between water molecules. [ 65 ] This allows insects to walk on water. [ 65 ]
Because water has strong cohesive and adhesive forces, it exhibits capillary action. [ 66 ] Strong cohesion from hydrogen bonding and adhesion allows trees to transport water more than 100 m upward. [ 65 ]
Water is an excellent solvent due to its high dielectric constant. [ 67 ] Substances that mix well and dissolve in water are known as hydrophilic ("water-loving") substances, while those that do not mix well with water are known as hydrophobic ("water-fearing") substances. [ 68 ] The ability of a substance to dissolve in water is determined by whether or not the substance can match or better the strong attractive forces that water molecules generate between other water molecules. If a substance has properties that do not allow it to overcome these strong intermolecular forces, the molecules are precipitated out from the water. Contrary to the common misconception, water and hydrophobic substances do not "repel", and the hydration of a hydrophobic surface is energetically, but not entropically, favorable.
When an ionic or polar compound enters water, it is surrounded by water molecules ( hydration ). The relatively small size of water molecules (~3 angstroms) allows many water molecules to surround one molecule of solute . The partially negative dipole ends of the water are attracted to positively charged components of the solute, and vice versa for the positive dipole ends.
In general, ionic and polar substances such as acids , alcohols , and salts are relatively soluble in water, and nonpolar substances such as fats and oils are not. Nonpolar molecules stay together in water because it is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in van der Waals interactions with non-polar molecules.
An example of an ionic solute is table salt ; the sodium chloride, NaCl, separates into Na + cations and Cl − anions , each being surrounded by water molecules. The ions are then easily transported away from their crystalline lattice into solution. An example of a nonionic solute is table sugar . The water dipoles make hydrogen bonds with the polar regions of the sugar molecule ( OH groups ) and allow it to be carried away into solution.
The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers . [ 69 ] On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer . Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds. [ 70 ] Later in the same year, the discovery of the quantum tunneling of water molecules was reported. [ 71 ]
Water is relatively transparent to visible light , near ultraviolet light, and far-red light, but it absorbs most ultraviolet light , infrared light , and microwaves . Most photoreceptors and photosynthetic pigments utilize the portion of the light spectrum that is transmitted well through water. Microwave ovens take advantage of water's opacity to microwave radiation to heat the water inside of foods. Water's light blue color is caused by weak absorption in the red part of the visible spectrum . [ 4 ] [ 72 ]
A single water molecule can participate in a maximum of four hydrogen bonds because it can accept two bonds using the lone pairs on oxygen and donate two hydrogen atoms. Other molecules like hydrogen fluoride , ammonia, and methanol can also form hydrogen bonds. However, they do not show anomalous thermodynamic , kinetic , or structural properties like those observed in water because none of them can form four hydrogen bonds: either they cannot donate or accept hydrogen atoms, or there are steric effects in bulky residues. In water, intermolecular tetrahedral structures form due to the four hydrogen bonds, thereby forming an open structure and a three-dimensional bonding network, resulting in the anomalous decrease in density when cooled below 4 °C. This repeated, constantly reorganising unit defines a three-dimensional network extending throughout the liquid. This view is based upon neutron scattering studies and computer simulations, and it makes sense in the light of the unambiguously tetrahedral arrangement of water molecules in ice structures.
However, there is an alternative theory for the structure of water. In 2004, a controversial paper from Stockholm University suggested that water molecules in the liquid state typically bind not to four but only two others; thus forming chains and rings. The term "string theory of water" (which is not to be confused with the string theory of physics) was coined. These observations were based upon X-ray absorption spectroscopy that probed the local environment of individual oxygen atoms. [ 73 ]
The repulsive effects of the two lone pairs on the oxygen atom cause water to have a bent , not linear , molecular structure, [ 74 ] allowing it to be polar. The hydrogen–oxygen–hydrogen angle is 104.45°, which is less than the 109.47° for ideal sp 3 hybridization . The valence bond theory explanation is that the oxygen atom's lone pairs are physically larger and therefore take up more space than the oxygen atom's bonds to the hydrogen atoms. [ 75 ] The molecular orbital theory explanation ( Bent's rule ) is that lowering the energy of the oxygen atom's nonbonding hybrid orbitals (by assigning them more s character and less p character), and correspondingly raising the energy of the hybrid orbitals bonded to the hydrogen atoms (by assigning them more p character and less s character), lowers the energy of the occupied molecular orbitals overall. This is because the energy of the nonbonding hybrid orbitals contributes completely to the energy of the lone pairs, while the energy of the other two hybrid orbitals contributes only partially to the energy of the bonding orbitals (the remainder of the contribution coming from the hydrogen atoms' 1s orbitals).
In liquid water there is some self-ionization, giving hydronium ions and hydroxide ions: 2 H 2 O ⇌ H 3 O + + OH −
The equilibrium constant for this reaction, known as the ionic product of water, K w = [ H 3 O + ] [ O H − ] {\displaystyle K_{\rm {w}}=[{\rm {H_{3}O^{+}}}][{\rm {OH^{-}}}]} , has a value of about 10 −14 at 25 °C. At neutral pH , the concentration of the hydroxide ion ( OH − ) equals that of the (solvated) hydrogen ion ( H + ), with a value close to 10 −7 mol L −1 at 25 °C. [ 76 ] See data page for values at other temperatures.
The thermodynamic equilibrium constant is a quotient of thermodynamic activities of all products and reactants including water: K e q = a H 3 O + a O H − / a H 2 O 2 {\displaystyle K_{\rm {eq}}=a_{\rm {H_{3}O^{+}}}\,a_{\rm {OH^{-}}}/a_{\rm {H_{2}O}}^{2}}
However, for dilute solutions, the activity of a solute such as H 3 O + or OH − is approximated by its concentration, and the activity of the solvent H 2 O is approximated by 1, so that we obtain the simple ionic product K e q ≈ K w = [ H 3 O + ] [ O H − ] {\displaystyle K_{\rm {eq}}\approx K_{\rm {w}}=[{\rm {H_{3}O^{+}}}][{\rm {OH^{-}}}]} .
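It follows that in neutral water [ H 3 O + ] = [ OH − ] = √K w ; a one-line check, using the rounded value K w = 10 −14 at 25 °C:

```python
import math

K_w = 1e-14                   # ionic product of water at 25 °C (approximate)
h_conc = math.sqrt(K_w)       # [H3O+] = [OH-] in neutral water, mol/L
print(h_conc)                 # 1e-07 mol/L, as quoted above
print(-math.log10(h_conc))    # pH = 7.0
```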
The action of water on rock over long periods of time typically leads to weathering and water erosion , physical processes that convert solid rocks and minerals into soil and sediment, but under some conditions chemical reactions with water occur as well, resulting in metasomatism or mineral hydration , a type of chemical alteration of a rock which produces clay minerals . It also occurs when Portland cement hardens.
Water ice can form clathrate compounds , known as clathrate hydrates , with a variety of small molecules that can be embedded in its spacious crystal lattice. The most notable of these is methane clathrate , 4 CH 4 ·23H 2 O , naturally found in large quantities on the ocean floor.
Rain is generally mildly acidic, with a pH between 5.2 and 5.8, if it contains no acid stronger than the carbonic acid formed from dissolved carbon dioxide. [ 77 ] If high amounts of nitrogen and sulfur oxides are present in the air, they too will dissolve into the cloud and raindrops, producing acid rain .
Several isotopes of both hydrogen and oxygen exist, giving rise to several known isotopologues of water. Vienna Standard Mean Ocean Water is the current international standard for water isotopes. Naturally occurring water is almost completely composed of the neutron-less hydrogen isotope protium . Only 155 ppm include deuterium ( 2 H or D), a hydrogen isotope with one neutron, and fewer than 20 parts per quintillion include tritium ( 3 H or T), which has two neutrons. Oxygen also has three stable isotopes, with 16 O present in 99.76%, 17 O in 0.04%, and 18 O in 0.2% of water molecules. [ 78 ]
Deuterium oxide, D 2 O , is also known as heavy water because of its higher density. It is used in nuclear reactors as a neutron moderator . Tritium is radioactive , decaying with a half-life of 4500 days; THO exists in nature only in minute quantities, being produced primarily via cosmic ray-induced nuclear reactions in the atmosphere. Water with one protium and one deuterium atom, HDO, occurs naturally in ordinary water in low concentrations (~0.03%), and D 2 O in far lower amounts (0.000003%); any such molecules are temporary, as the atoms recombine.
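The quoted HDO and D 2 O abundances follow from simple combinatorics, under the idealising assumption that each of the two hydrogen sites is independently deuterium with probability 155 ppm (isotopic fractionation effects are ignored):

```python
# Expected isotopologue fractions if each hydrogen site independently
# has the 155 ppm chance of being deuterium (an idealisation).
d = 155e-6                      # deuterium fraction of hydrogen atoms

p_hdo = 2 * d * (1 - d)         # one D, one H (two possible arrangements)
p_d2o = d ** 2                  # both sites D

print(f"{p_hdo:.2%}")           # ~0.03%, as quoted for HDO
print(f"{p_d2o:.7%}")           # ~0.0000024%, close to the quoted 0.000003%
```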
The most notable physical differences between H 2 O and D 2 O , other than the simple difference in specific mass, involve properties that are affected by hydrogen bonding, such as freezing and boiling, and other kinetic effects. This is because the nucleus of deuterium is twice as heavy as protium, and this causes noticeable differences in bonding energies. The difference in boiling points allows the isotopologues to be separated. The self-diffusion coefficient of H 2 O at 25 °C is 23% higher than the value of D 2 O . [ 79 ] Because water molecules exchange hydrogen atoms with one another, hydrogen deuterium oxide (DOH) is much more common in low-purity heavy water than pure dideuterium monoxide D 2 O .
Consumption of pure isolated D 2 O may affect biochemical processes—ingestion of large amounts impairs kidney and central nervous system function. Small quantities can be consumed without any ill-effects; humans are generally unaware of taste differences, [ 80 ] but sometimes report a burning sensation [ 81 ] or sweet flavor. [ 82 ] Very large amounts of heavy water must be consumed for any toxicity to become apparent. Rats, however, are able to avoid heavy water by smell, and it is toxic to many animals. [ 83 ]
Light water refers to deuterium-depleted water (DDW), water in which the deuterium content has been reduced below the standard 155 ppm level.
Water is the most abundant substance on Earth's surface and also the third most abundant molecule in the universe, after H 2 and CO . [ 23 ] 0.23 ppm of the earth's mass is water and 97.39% of the global water volume of 1.38 × 10 9 km 3 is found in the oceans. [ 84 ]
Water is far more prevalent in the outer Solar System, beyond a point called the frost line , where the Sun's radiation is too weak to vaporize solid and liquid water (as well as other elements and chemical compounds with relatively low melting points, such as methane and ammonia ). In the inner Solar System, planets, asteroids, and moons formed almost entirely of metals and silicates. Water has since been delivered to the inner Solar System via an as-yet unknown mechanism, theorized to be the impacts of asteroids or comets carrying water from the outer Solar System, where bodies contain much more water ice. [ 85 ] The difference between planetary bodies located inside and outside the frost line can be stark. Earth's mass is 0.000023% water, while Tethys , a moon of Saturn, is almost entirely made of water. [ 86 ]
Water is amphoteric : it has the ability to act as either an acid or a base in chemical reactions. [ 87 ] According to the Brønsted-Lowry definition, an acid is a proton ( H + ) donor and a base is a proton acceptor. [ 88 ] When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid. [ 88 ] For instance, water receives an H + ion from HCl when hydrochloric acid is formed: HCl + H 2 O → H 3 O + + Cl −
In the reaction with ammonia , NH 3 , water donates a H + ion, and is thus acting as an acid: NH 3 + H 2 O ⇌ NH 4 + + OH −
Because the oxygen atom in water has two lone pairs , water often acts as a Lewis base , or electron-pair donor, in reactions with Lewis acids , although it can also react with Lewis bases, forming hydrogen bonds between the electron pair donors and the hydrogen atoms of water. HSAB theory describes water as both a weak hard acid and a weak hard base, meaning that it reacts preferentially with other hard species.
When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH; for baking soda, for example: HCO 3 − + H 2 O ⇌ H 2 CO 3 + OH −
Water's Lewis base character makes it a common ligand in transition metal complexes, examples of which range from metal aquo complexes such as Fe(H 2 O) 6 2+ to perrhenic acid , which contains two water molecules coordinated to a rhenium center. In solid hydrates , water can be either a ligand or simply lodged in the framework, or both. Thus, FeSO 4 ·7H 2 O consists of [Fe(H 2 O) 6 ] 2+ centers and one "lattice water". Water is typically a monodentate ligand, i.e., it forms only one bond with the central atom. [ 89 ]
As a hard base, water reacts readily with organic carbocations ; for example in a hydration reaction , a hydroxyl group ( OH − ) and an acidic proton are added to the two carbon atoms bonded together in the carbon-carbon double bond, resulting in an alcohol. When the addition of water to an organic molecule cleaves the molecule in two, hydrolysis is said to occur. Notable examples of hydrolysis are the saponification of fats and the digestion of proteins and polysaccharides . Water can also be a leaving group in S N 2 substitution and E2 elimination reactions; the latter is then known as a dehydration reaction .
Water contains hydrogen in the oxidation state +1 and oxygen in the oxidation state −2. [ 90 ] It oxidizes chemicals such as hydrides , alkali metals , and some alkaline earth metals . [ 91 ] [ 92 ] One example of an alkali metal reacting with water is: [ 93 ] 2 Na + 2 H 2 O → 2 NaOH + H 2
Some other reactive metals, such as aluminium and beryllium , are oxidized by water as well, but their oxides adhere to the metal and form a passive protective layer. [ 94 ] Note that the rusting of iron is a reaction between iron and oxygen [ 95 ] that is dissolved in water, not between iron and water.
Water can be oxidized to emit oxygen gas, but very few oxidants react with water even if their reduction potential is greater than the potential of O 2 /H 2 O . Almost all such reactions require a catalyst . [ 96 ] An example of the oxidation of water is: 2 F 2 + 2 H 2 O → 4 HF + O 2
Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current through it. [ 97 ] This process is called electrolysis. The cathode half reaction is: 2 H + + 2 e − → H 2
The anode half reaction is: 2 H 2 O → O 2 + 4 H + + 4 e −
The gases produced bubble to the surface, where they can be collected, or ignited with a flame above the water if desired. The required potential for the electrolysis of pure water is 1.23 V at 25 °C. [ 97 ] In practical electrolysis the operating potential is actually 1.48 V or higher.
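The 1.23 V minimum can be recovered thermodynamically from E = −ΔG/(nF); the standard Gibbs energy of formation of liquid water used below (−237.1 kJ/mol) is a standard literature value assumed here, not a figure from this article:

```python
# Minimum (reversible) cell potential for water electrolysis at 25 °C,
# from E = -dG / (n*F). dG_f(H2O, liquid) = -237.1 kJ/mol is an assumed
# literature value; n = 2 electrons are transferred per molecule of water.
F = 96485.0             # C/mol, Faraday constant
dG = -237.1e3           # J/mol, standard Gibbs energy of formation of liquid water
n = 2                   # electrons per H2O

E = -dG / (n * F)
print(f"{E:.2f} V")     # ~1.23 V, matching the quoted figure
```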
Henry Cavendish showed that water was composed of oxygen and hydrogen in 1781. [ 98 ] The first decomposition of water into hydrogen and oxygen, by electrolysis , was done in 1800 by English chemist William Nicholson and Anthony Carlisle . [ 98 ] [ 99 ] In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is composed of two parts hydrogen and one part oxygen. [ 100 ]
Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933. [ 101 ]
The properties of water have historically been used to define various temperature scales . Notably, the Kelvin , Celsius , Rankine , and Fahrenheit scales were, or currently are, defined by the freezing and boiling points of water. The less common scales of Delisle , Newton , Réaumur , and Rømer were defined similarly. The triple point of water is a more commonly used standard point today.
The accepted IUPAC name of water is oxidane or simply water , [ 102 ] or its equivalent in different languages, although there are other systematic names which can be used to describe the molecule. Oxidane is only intended to be used as the name of the mononuclear parent hydride used for naming derivatives of water by substituent nomenclature . [ 103 ] These derivatives commonly have other recommended names. For example, the name hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known as tetrahydropyran . [ 3 ] [ 104 ]
The simplest systematic name of water is hydrogen oxide . This is analogous to related compounds such as hydrogen peroxide , hydrogen sulfide , and deuterium oxide (heavy water). Using chemical nomenclature for type I ionic binary compounds , water would take the name hydrogen monoxide , [ 105 ] but this is not among the names published by the International Union of Pure and Applied Chemistry (IUPAC). [ 102 ] Another name is dihydrogen monoxide , which is a rarely used name of water, and mostly used in the dihydrogen monoxide parody .
Other systematic names for water include hydroxic acid , hydroxylic acid , and hydrogen hydroxide , using acid and base names. [ j ] None of these exotic names are used widely. The polarized form of the water molecule, H + OH − , is also called hydron hydroxide by IUPAC nomenclature. [ 106 ]
Water substance is a rare term used for H 2 O when one does not wish to specify the phase of matter (liquid water, water vapor , some form of ice , or a component in a mixture) though the term water is also used with this general meaning.
Oxygen dihydride is another way of referring to water, but modern usage often restricts the term " hydride " to ionic compounds (which water is not). | https://en.wikipedia.org/wiki/Properties_of_water |
In mathematics , a property is any characteristic that applies to a given set . [ 1 ] Rigorously, a property p defined for all elements of a set X is usually formalized as a function p : X → {true, false} that is true whenever the property holds; or, equivalently, as the subset of X for which p holds, i.e. the set { x | p ( x ) = true}; p is its indicator function . However, it may be objected that this rigorous definition captures merely the extension of a property, and says nothing about what causes the property to hold for exactly those values. [ citation needed ]
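The two formulations in this definition, a Boolean-valued function and the subset it determines, are interchangeable; a minimal sketch, using the hypothetical property "is even" on a small set:

```python
# A property as a Boolean-valued function on a set X, and its extension
# as the subset of X where it holds. "Is even" is a hypothetical example.
X = set(range(10))

def p(x: int) -> bool:               # the property p : X -> {true, false}
    return x % 2 == 0

extension = {x for x in X if p(x)}   # the set {x | p(x) = true}

def indicator(x: int) -> bool:       # indicator function of the extension
    return x in extension

assert all(p(x) == indicator(x) for x in X)  # the two views agree on X
print(sorted(extension))             # [0, 2, 4, 6, 8]
```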
Of objects :
For more examples, see Category:Algebraic properties of elements .
Of operations :
For more examples, see Category:Properties of binary operations .
This mathematics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Property_(mathematics) |
In geometric topology , the Property P conjecture is a statement about 3-manifolds obtained by Dehn surgery on a knot in the 3-sphere . A knot in the 3-sphere is said to have Property P if every 3-manifold obtained by performing (non-trivial) Dehn surgery on the knot is not simply-connected . [ 1 ] The conjecture states that all knots, except the unknot, have Property P.
Research on Property P was started by R. H. Bing , who popularized the name and conjecture.
This conjecture can be thought of as a first step to resolving the Poincaré conjecture , since the Lickorish–Wallace theorem says any closed, orientable 3-manifold results from Dehn surgery on a link. [ 2 ] If a knot K ⊂ S 3 {\displaystyle K\subset \mathbb {S} ^{3}} has Property P, then one cannot construct a counterexample to the Poincaré conjecture by surgery along K {\displaystyle K} .
A proof was announced in 2004, as the combined result of efforts of mathematicians working in several different fields.
Let [ l ] , [ m ] ∈ π 1 ( S 3 ∖ K ) {\displaystyle [l],[m]\in \pi _{1}(\mathbb {S} ^{3}\setminus K)} denote elements corresponding to a preferred longitude and meridian of a tubular neighborhood of K {\displaystyle K} .
K {\displaystyle K} has Property P if and only if its knot group is never trivialised by adjoining a relation of the form m = l a {\displaystyle m=l^{a}} for some 0 ≠ a ∈ Z {\displaystyle 0\neq a\in \mathbb {Z} } .
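This characterization can be read off from the Seifert–van Kampen theorem, which describes the effect of Dehn filling on the fundamental group; a sketch of the standard argument, using the usual presentation for surgery along slope p/q:

```latex
% Effect of p/q Dehn surgery on the fundamental group (Seifert-van Kampen):
\pi_1\bigl(S^3_K(p/q)\bigr) \;\cong\; \pi_1\bigl(S^3 \setminus K\bigr) \,\big/\, \langle\!\langle\, [m]^p [l]^q \,\rangle\!\rangle

% Abelianizing gives H_1 \cong \mathbb{Z}/|p| (the longitude [l] is
% null-homologous), so a simply connected result forces p = \pm 1,
% i.e. a relation m^{\pm 1} l^q = 1, equivalently m = l^a with a = \mp q;
% non-trivial surgery then corresponds to a \neq 0.
```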
This topology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Property_P_conjecture |
Property maintenance relates to the upkeep of a home , apartment , rental property or building, and may be a commercial venture through a property maintenance company, a duty of an employee of the company which owns the home, apartment or self-storage facility, or a pastime, for example day-to-day housekeeping or cleaning. [ 1 ] [ 2 ]
This economics -related article is a stub . You can help Wikipedia by expanding it .
This article about a civil engineering topic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Property_maintenance |
Prophase (from Ancient Greek προ- ( pro- ) ' before ' and φάσις (phásis) ' appearance ' ) is the first stage of cell division in both mitosis and meiosis . Beginning after interphase , DNA has already been replicated when the cell enters prophase. The main occurrences in prophase are the condensation of the chromatin reticulum and the disappearance of the nucleolus . [ 3 ]
Microscopy can be used to visualize condensed chromosomes as they move through meiosis and mitosis . [ 4 ]
Various DNA stains are used to treat cells such that condensing chromosomes can be visualized as they move through prophase. [ 4 ]
The giemsa G-banding technique is commonly used to identify mammalian chromosomes , but utilizing the technique on plant cells was originally difficult due to the high degree of chromosome compaction in plant cells. [ 5 ] [ 4 ] G-banding was fully realized for plant chromosomes in 1990. [ 6 ] During both meiotic and mitotic prophase, giemsa staining can be applied to cells to elicit G-banding in chromosomes . [ 2 ] Silver staining, a more modern technology, can be used in conjunction with giemsa staining to image the synaptonemal complex throughout the various stages of meiotic prophase. [ 7 ] Because chromosomes must be fixed to perform G-banding , it cannot be performed on living cells. [ 8 ]
Fluorescent stains such as DAPI can be used in both live plant and animal cells . These stains do not band chromosomes , but instead allow for DNA probing of specific regions and genes . Use of fluorescent microscopy has vastly improved spatial resolution . [ 9 ]
Prophase is the first stage of mitosis in animal cells , and the second stage of mitosis in plant cells . [ 10 ] At the start of prophase there are two identical copies of each chromosome in the cell due to replication in interphase . These copies are referred to as sister chromatids and are attached at a DNA element called the centromere . [ 11 ] The main events of prophase are: the condensation of chromosomes , the movement of the centrosomes , the formation of the mitotic spindle , and the beginning of the breakdown of the nucleoli. [ 3 ]
DNA that was replicated in interphase is condensed from DNA strands with lengths reaching 0.7 μm down to 0.2–0.3 μm. [ 3 ] This process employs the condensin complex. [ 11 ] Condensed chromosomes consist of two sister chromatids joined at the centromere . [ 12 ]
During prophase in animal cells , centrosomes move far enough apart to be resolved using a light microscope . [ 3 ] Microtubule activity in each centrosome is increased due to recruitment of γ-tubulin . Replicated centrosomes from interphase move apart towards opposite poles of the cell, powered by centrosome associated motor proteins . [ 13 ] Interdigitated interpolar microtubules from each centrosome interact with each other, helping to move the centrosomes to opposite poles. [ 13 ] [ 3 ]
Microtubules involved in the interphase scaffolding break down as the replicated centrosomes separate. [ 3 ] The movement of centrosomes to opposite poles is accompanied in animal cells by the organization of individual radial microtubule arrays (asters) by each centriole. [ 13 ] Interpolar microtubules from both centrosomes interact, joining the sets of microtubules and forming the basic structure of the mitotic spindle . [ 13 ] Plant cells do not have centrosomes and the chromosomes can nucleate microtubule assembly into the mitotic apparatus . [ 13 ] In plant cells , microtubules gather at opposite poles and begin to form the spindle apparatus at locations called foci. [ 10 ] The mitotic spindle is of great importance in the process of mitosis and will eventually segregate the sister chromatids in metaphase . [ 3 ]
The nucleoli begin to break down in prophase, resulting in the discontinuation of ribosome production. [ 3 ] This indicates a redirection of cellular energy from general cellular metabolism to cellular division . [ 3 ] The nuclear envelope stays intact during this process. [ 10 ]
Meiosis involves two rounds of chromosome segregation and thus undergoes prophase twice, resulting in prophase I and prophase II. [ 12 ] Prophase I is the most complex phase in all of meiosis because homologous chromosomes must pair and exchange genetic information . [ 3 ] : 98 Prophase II is very similar to mitotic prophase. [ 12 ]
Prophase I is divided into five phases: leptotene, zygotene, pachytene, diplotene, and diakinesis. In addition to the events that occur in mitotic prophase, several crucial events occur within these phases such as pairing of homologous chromosomes and the reciprocal exchange of genetic material between these homologous chromosomes . Prophase I occurs at different speeds dependent on species and sex . Many species arrest meiosis in diplotene of prophase I until ovulation . [ 3 ] : 98 In humans, decades can pass as oocytes remain arrested in prophase I only to quickly complete meiosis I prior to ovulation . [ 12 ]
In the first stage of prophase I, leptotene (from the Greek for "delicate"), chromosomes begin to condense. Each chromosome is in a diploid state and consists of two sister chromatids ; however, the chromatin of the sister chromatids is not yet condensed enough to be resolvable in microscopy . [ 3 ] : 98 Homologous regions within homologous chromosome pairs begin to associate with each other. [ 2 ]
In the second phase of prophase I, zygotene (from the Greek for "conjugation"), all maternally and paternally derived chromosomes have found their homologous partner. [ 3 ] : 98 The homologous pairs then undergo synapsis, a process by which the synaptonemal complex (a proteinaceous structure) aligns corresponding regions of genetic information on maternally and paternally derived non-sister chromatids of homologous chromosome pairs. [ 3 ] : 98 [ 12 ] The paired homologous chromosomes bound by the synaptonemal complex are referred to as bivalents or tetrads. [ 10 ] [ 3 ] : 98 Sex (X and Y) chromosomes do not fully synapse because only a small region of the chromosomes is homologous. [ 3 ] : 98
The nucleolus moves from a central to a peripheral position in the nucleus . [ 14 ]
The third phase of prophase I, pachytene (from the Greek for "thick"), begins at the completion of synapsis. [ 3 ] : 98 Chromatin has condensed enough that chromosomes can now be resolved in microscopy . [ 10 ] Structures called recombination nodules form on the synaptonemal complex of bivalents . These recombination nodules facilitate genetic exchange between the non-sister chromatids of the synaptonemal complex in an event known as crossing-over or genetic recombination. [ 3 ] : 98 Multiple recombination events can occur on each bivalent. In humans, an average of 2-3 events occur on each chromosome. [ 13 ] : 681
In the fourth phase of prophase I, diplotene (from the Greek for "twofold"), crossing-over is completed. [ 3 ] : 99 [ 10 ] Homologous chromosomes retain a full set of genetic information; however, the homologous chromosomes are now of mixed maternal and paternal descent. [ 3 ] : 99 Visible junctions called chiasmata hold the homologous chromosomes together at locations where recombination occurred as the synaptonemal complex dissolves. [ 12 ] [ 3 ] : 99 It is at this stage where meiotic arrest occurs in many species . [ 3 ] : 99
In the fifth and final phase of prophase I, diakinesis (from the Greek for "double movement"), full chromatin condensation has occurred and all four sister chromatids can be seen in bivalents with microscopy . The rest of the phase resembles the early stages of mitotic prometaphase , as the meiotic prophase ends with the spindle apparatus beginning to form, and the nuclear membrane beginning to break down. [ 10 ] [ 3 ] : 99
Prophase II of meiosis is very similar to prophase of mitosis . The most noticeable difference is that prophase II occurs with a haploid number of chromosomes as opposed to the diploid number in mitotic prophase. [ 12 ] [ 10 ] In both animal and plant cells chromosomes may de-condense during telophase I requiring them to re-condense in prophase II. [ 3 ] : 100 [ 10 ] If chromosomes do not need to re-condense, prophase II often proceeds very quickly as is seen in the model organism Arabidopsis . [ 10 ]
Female mammals and birds are born possessing all the oocytes needed for future ovulations, and these oocytes are arrested at the prophase I stage of meiosis . [ 15 ] In humans, as an example, oocytes are formed between three and four months of gestation within the fetus and are therefore present at birth. During this prophase I arrested stage ( dictyate ), which may last for decades, four copies of the genome are present in the oocytes. The adaptive significance of prophase I arrest is still not fully understood. However, it has been proposed that the arrest of oocytes at the four genome copy stage may provide the informational redundancy needed to repair damage in the DNA of the germline . [ 15 ] The repair process used appears to be homologous recombinational repair. [ 15 ] [ 16 ] Prophase-arrested oocytes have a high capability for efficient repair of DNA damage . [ 16 ] DNA repair capability appears to be a key quality control mechanism in the female germ line and a critical determinant of fertility . [ 16 ]
The most notable difference between prophase in plant cells and animal cells occurs because plant cells lack centrioles . The organization of the spindle apparatus is associated instead with foci at opposite poles of the cell or is mediated by chromosomes. Another notable difference is preprophase , an additional step in plant mitosis that results in formation of the preprophase band , a structure composed of microtubules . In plant mitosis, this band disappears during prophase. [ 10 ]
Prophase I in meiosis is the most complex iteration of prophase that occurs in both plant cells and animal cells . [ 3 ] To ensure that pairing of homologous chromosomes and recombination of genetic material occur properly, there are cellular checkpoints in place. The meiotic checkpoint network is a DNA damage response system that controls double strand break repair, chromatin structure, and the movement and pairing of chromosomes . [ 17 ] The system consists of multiple pathways (including the meiotic recombination checkpoint ) that prevent the cell from entering metaphase I with errors due to recombination. [ 18 ] | https://en.wikipedia.org/wiki/Prophase |
Propidium iodide (or PI ) is a fluorescent intercalating agent that can be used to stain cells and nucleic acids . PI binds to DNA by intercalating between the bases with little or no sequence preference. When in an aqueous solution, PI has a fluorescent excitation maximum of 493 nm (blue-green), and an emission maximum of 636 nm (red). After binding DNA, the quantum yield of PI is enhanced 20-30 fold, and the excitation/emission maximum of PI is shifted to 535 nm (green) / 617 nm (orange-red). [ 1 ] Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis , [ 2 ] or in microscopy to visualize the nucleus and other DNA-containing organelles. Propidium iodide is not membrane-permeable, making it useful to differentiate necrotic , apoptotic and healthy cells based on membrane integrity. [ 3 ] [ 4 ] PI also binds to RNA , necessitating treatment with nucleases to distinguish between RNA and DNA staining. [ 5 ] PI is widely used in fluorescence staining and visualization of the plant cell wall. [ 6 ] | https://en.wikipedia.org/wiki/Propidium_iodide |
Propidium monoazide ( PMA ) is a photoreactive DNA-binding dye that preferentially binds to dsDNA . It is used to detect viable microorganisms by qPCR . [ 1 ] Visible light (high power halogen lamps or specific LED devices [ 2 ] ) induces a photoreaction of the chemical that leads to a covalent bond between PMA and the dsDNA. The mechanism of DNA modification by PMA is described in a published protocol. [ 3 ] This process renders the DNA insoluble and results in its loss during subsequent genomic DNA extraction. [ 4 ] Theoretically, dead microorganisms lose the capability to maintain their membranes intact, which leaves the "naked" DNA in the cytosol ready to react with PMA. The DNA of living organisms is not exposed to PMA, as they have an intact cell membrane. After treatment with the chemical, only the DNA from living bacteria is usable in qPCR, so that only DNA from living organisms is amplified. This is helpful in determining which pathogens are active in specific samples. [ 5 ] The main use of PMA is in Viability PCR but the same principle can be applied in flow cytometry or fluorescence microscopy .
However, the ability of PMA to differentiate viable from non-viable cells varies between bacteria; for example, gram-positive and gram-negative cell membranes differ in their permeability to PMA. The application of PMA to mixed communities is therefore still limited.
PMA was developed at Biotium, Inc. [ 6 ] as an improvement on ethidium monoazide (EMA). PMA provides better discrimination between live and dead bacteria because it is excluded from live cells more efficiently than EMA. [ 7 ] | https://en.wikipedia.org/wiki/Propidium_monoazide |
Propionyl-CoA is a coenzyme A derivative of propionic acid . It contains 24 carbon atoms in total (without the coenzyme, the propionyl group is a 3-carbon structure), and its production and metabolic fate depend on which organism it is present in. [ 1 ] Several different pathways can lead to its production, such as through the catabolism of specific amino acids or the oxidation of odd-chain fatty acids . [ 2 ] It later can be broken down by propionyl-CoA carboxylase or through the methylcitrate cycle. [ 3 ] In different organisms, however, propionyl-CoA can be sequestered into controlled regions, to alleviate its potential toxicity through accumulation. [ 4 ] Genetic deficiencies regarding the production and breakdown of propionyl-CoA also have great clinical and human significance. [ 5 ]
There are several different pathways through which propionyl-CoA can be produced:
The metabolic (catabolic) fate of propionyl-CoA depends on the environment in which it is synthesized; propionyl-CoA in an anaerobic environment could have a different fate than in an aerobic organism . Which pathway is taken, catabolism by propionyl-CoA carboxylase or by methylcitrate synthase, also depends on the presence of various genes. [ 7 ]
In humans, propionyl-CoA (which can interact with oxaloacetate to form methylcitrate) can also be carboxylated to methylmalonyl-CoA by propionyl-CoA carboxylase (PCC). Methylmalonyl-CoA is later transformed to succinyl-CoA for further use in the tricarboxylic acid cycle . PCC catalyzes not only the carboxylation of propionyl-CoA to methylmalonyl-CoA but also acts on several different acyl-CoAs; nevertheless, its highest binding affinity is for propionyl-CoA. It was further shown that propionyl-CoA transformation is inhibited by the absence of several TCA markers, such as glutamate . The mechanism is shown by the figure to the left. [ 2 ]
In mammals, propionyl-CoA is converted to ( S )- methylmalonyl-CoA by propionyl-CoA carboxylase , a biotin -dependent enzyme also requiring bicarbonate and ATP .
This product is converted to ( R )-methylmalonyl-CoA by methylmalonyl-CoA racemase .
( R )-Methylmalonyl-CoA is converted to succinyl-CoA , an intermediate in the tricarboxylic acid cycle , by methylmalonyl-CoA mutase , an enzyme requiring cobalamin to catalyze the carbon-carbon bond migration.
The methylmalonyl-CoA mutase mechanism begins with the cleavage of the bond between the 5' CH 2 - of 5'-deoxyadenosyl and the cobalt, which is in its +3 oxidation state (III); this produces a 5'- deoxyadenosyl radical and cobalamin in the reduced Co(II) oxidation state.
Next, this radical abstracts a hydrogen atom from the methyl group of methylmalonyl-CoA, which generates a methylmalonyl-CoA radical. It is believed that this radical forms a carbon-cobalt bond to the coenzyme, which is then followed by the rearrangement of the substrate's carbon skeleton, thus producing a succinyl-CoA radical. This radical then goes on to abstract a hydrogen from the previously produced 5'-deoxyadenosine, again creating a deoxyadenosyl radical, which attacks the coenzyme to reform the initial complex.
A defect in methylmalonyl-CoA mutase enzyme results in methylmalonic aciduria , a dangerous disorder that causes a lowering of blood pH. [ 8 ]
Propionyl-CoA accumulation can prove toxic to different organisms. Several cycles have been proposed for how propionyl-CoA is transformed into pyruvate; one studied mechanism is the methylcitrate cycle . The initial reaction is beta-oxidation to form propionyl-CoA, which is further broken down by the cycle. This pathway involves enzymes related both to the methylcitrate cycle and to the citric acid cycle , all of which contribute to detoxifying the bacteria of harmful propionyl-CoA. It has also been attributed to the catabolism of fatty acids in mycobacteria. [ 3 ] The methylcitrate cycle requires methylcitrate synthase, encoded by the prpC gene; if the gene is not present, the cycle cannot occur and catabolism proceeds instead through propionyl-CoA carboxylase. [ 7 ] This mechanism is shown below to the left along with the participating reactants, products, intermediates, and enzymes.
The oxidation of propionyl-CoA to form pyruvate is influenced by its necessity in Mycobacterium tuberculosis . Accumulation of propionyl-CoA can lead to toxic effects. In Mycobacterium tuberculosis , it has been suggested that the metabolism of propionyl-CoA is involved in cell wall biogenesis . A lack of such catabolism would therefore increase the susceptibility of the cell to various toxins, particularly to macrophage antimicrobial mechanisms. Another hypothesis regarding the fate of propionyl-CoA in M. tuberculosis is that, since propionyl-CoA is produced by beta-oxidation of odd-chain fatty acids, the methylcitrate cycle is subsequently activated to negate any potential toxicity, acting as a buffering mechanism. [ 11 ]
Propionyl-CoA can have many adverse and toxic effects on different species, including bacteria . For example, inhibition of pyruvate dehydrogenase by an accumulation of propionyl-CoA in Rhodobacter sphaeroides can prove deadly. Furthermore, as with E. coli , an influx of propionyl-CoA in Mycobacterial species can result in toxicity if not dealt with immediately. This toxicity is caused by a pathway involving the lipids that form the bacterial cell wall . Using esterification of long-chain fatty acids, excess propionyl-CoA can be sequestered and stored in the lipid triacylglycerol (TAG), leading to regulation of elevated propionyl-CoA levels. Such methyl branching of the fatty acids causes them to act as sinks for accumulating propionyl-CoA. [ 4 ]
In an investigation performed by Luo et al., Escherichia coli strains were utilized to examine how the metabolism of propionyl-CoA could potentially lead to the production of 3-hydroxypropionic acid (3-HP). It was shown that a mutation in a key gene involved in the pathway, succinate CoA-transferase , led to a significant increase in 3-HP. [ 7 ] However, this is still a developing field and information on this topic is limited. [ 12 ]
Amino acid metabolism in plants has been deemed a controversial topic due to the lack of concrete evidence for any particular pathway. However, it has been suggested that enzymes related to the production and use of propionyl-CoA are involved. Associated with this is the metabolism of isobutyryl-CoA . These two molecules are deemed to be intermediates in valine metabolism. As propionate exists in the form of propionyl-CoA, it was discovered that propionyl-CoA is converted to β-hydroxypropionate through a peroxisomal enzymatic β-oxidation pathway. Nevertheless, in the plant Arabidopsis , key enzymes in the conversion of valine to propionyl-CoA were not observed. Through different experiments performed by Lucas et al., it has been suggested that in plants, through peroxisomal enzymes, propionyl-CoA (and isobutyryl-CoA ) are involved in the metabolism of many different substrates (currently being evaluated for identity), and not just valine . [ 13 ]
Propionyl-CoA production through the catabolism of fatty acids is also associated with thioesterification . In a study concerning Aspergillus nidulans , it was found that with the inhibition of a methylcitrate synthase gene, mcsA , of the pathway described above, production of distinct polyketides was inhibited as well. Therefore, the utilization of propionyl-CoA through the methylcitrate cycle decreases its concentration, while subsequently increasing the concentration of polyketides. A polyketide is a structure commonly found in fungi that is made of acetyl - and malonyl -CoAs, providing a product with alternating carbonyl groups and methylene groups . Polyketides and polyketide derivatives are often highly structurally complex, and several are highly toxic. This has led to research on limiting polyketide toxicity to crops in agriculture through phytopathogenic fungi . [ 14 ]
Propionyl-CoA is also a substrate for post-translational modification of proteins by reacting with lysine residues on proteins, a reaction called protein propionylation . [ 15 ] [ 16 ] Due to the structural similarity of acetyl-CoA and propionyl-CoA, propionylation reactions are thought to use many of the same enzymes used for protein acetylation. [ 16 ] Although the functional consequences of protein propionylation are not yet completely understood, in vitro propionylation of the propionyl-CoA synthetase enzyme controls its activity. [ 17 ]
Similar to how plant peroxisomal enzymes bind propionyl-CoA and isobutyryl-CoA, Gen5, an acetyltransferase in humans, binds to propionyl-CoA and butyryl-CoA . These specifically bind to the catalytic domain of Gen5L2 . This conserved acetyltransferase is responsible for the regulation of transcription by lysine acetylation of the histone N-terminal tails. Acetylation has a much higher reaction rate than propionylation or butyrylation . Because of the structure of propionyl-CoA, Gen5 distinguishes between different acyl-CoA molecules: it was found that the propyl group of butyryl-CoA cannot bind to the active site of Gen5 with the required stereospecificity because of its acyl chain, whereas the third carbon of propionyl-CoA can fit into the active site with the correct orientation. [ 18 ]
In the neonatal developmental stages, propionic acidemia , a medical condition defined by the lack of propionyl-CoA carboxylase, can cause impairment, mental disability, and numerous other issues. This is caused by an accumulation of propionyl-CoA, because it cannot be converted to methylmalonyl-CoA . Newborns are tested for elevated propionylcarnitine . Further ways of diagnosing this disease include urine testing. Treatments used to reverse and prevent recurring symptoms include supplements to decrease propionate production. [ 5 ] | https://en.wikipedia.org/wiki/Propionyl-CoA |
Propionyl chloride (also propanoyl chloride ) is the organic compound with the formula CH 3 CH 2 C(O)Cl. It is the acyl chloride derivative of propionic acid . It undergoes the characteristic reactions of acyl chlorides. [ 1 ] It is a colorless, corrosive, volatile liquid .
It is used as a reagent for organic synthesis . In derived chiral amides and esters, the methylene protons are diastereotopic. [ 2 ]
There have been efforts [ 3 ] to schedule propionyl chloride as a DEA List I chemical , as it can be used to synthesize fentanyl .
Propionyl chloride is industrially produced by chlorination of propionic acid with phosgene : [ 4 ]
CH 3 CH 2 CO 2 H + COCl 2 → CH 3 CH 2 COCl + CO 2 + HCl | https://en.wikipedia.org/wiki/Propionyl_chloride |
A proportion is a mathematical statement expressing equality of two ratios . [ 1 ] [ 2 ]
a : b = c : d {\displaystyle a:b=c:d}
a and d are called extremes , b and c are called means .
Proportion can be written as a b = c d {\displaystyle {\frac {a}{b}}={\frac {c}{d}}} , where ratios are expressed as fractions .
Such a proportion is known as geometrical proportion , [ 3 ] not to be confused with arithmetical proportion and harmonic proportion .
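As a brief worked example (with invented numbers, not taken from the source), an unknown term of a geometric proportion can be found by cross-multiplication, since a b = c d {\displaystyle {\frac {a}{b}}={\frac {c}{d}}} implies a d = b c {\displaystyle ad=bc} (the product of the extremes equals the product of the means):

```latex
% Solve 3 : 4 = x : 20 for the unknown mean x.
\frac{3}{4} = \frac{x}{20}
\quad\Longrightarrow\quad
3 \cdot 20 = 4x
\quad\Longrightarrow\quad
x = 15
```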
The Greek mathematician Eudoxus provided a definition for the meaning of the equality between two ratios. This definition of proportion forms the subject of Euclid's Book V, where we can read:
Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order.
Later, the realization that ratios are numbers made it possible to switch from solving proportions to solving equations , and from transformations of proportions to algebraic transformations.
An equation of the form a − b = c − d {\displaystyle a-b=c-d} is called arithmetic proportion or difference proportion . [ 5 ]
If the means of the geometric proportion are equal, and the rightmost extreme is equal to the difference between the leftmost extreme and a mean, then such a proportion is called harmonic : [ 6 ] a : b = b : ( a − b ) {\displaystyle a:b=b:(a-b)} . In this case the ratio a : b {\displaystyle a:b} is called the golden ratio . | https://en.wikipedia.org/wiki/Proportion_(mathematics) |
Proportional-fair scheduling is a compromise-based scheduling algorithm . It is based upon maintaining a balance between two competing interests: Trying to maximize the total throughput of the network (wired or not) while at the same time allowing all users at least a minimal level of service. This is done by assigning each data flow a data rate or a scheduling priority (depending on the implementation) that is inversely proportional to its anticipated resource consumption. [ 1 ] [ 2 ]
Proportionally fair scheduling can be achieved by means of weighted fair queuing (WFQ), by setting the scheduling weights for data flow i {\displaystyle i} to w i = 1 / c i {\displaystyle w_{i}=1/c_{i}} , where the cost c i {\displaystyle c_{i}} is the amount of consumed resources per data bit. For instance:
Another way to schedule data transfer that leads to similar results is through the use of prioritization coefficients. [ 3 ] Here we schedule the channel for the station that has the maximum of the priority function:
P = T α R β {\displaystyle P={\frac {T^{\alpha }}{R^{\beta }}}}
By adjusting α {\displaystyle \alpha } and β {\displaystyle \beta } in the formula above, we are able to adjust the balance between serving the best mobiles (the ones in the best channel conditions) more often and serving the costly mobiles often enough that they have an acceptable level of performance.
In the extreme case ( α = 0 {\displaystyle \alpha =0} and β = 1 {\displaystyle \beta =1} ) the scheduler acts in a "packet" round-robin fashion and serves all mobiles one after the other (but not equally often in time), with no regard for resource consumption, and such that each user gets the same amount of data. The ( α = 0 {\displaystyle \alpha =0} and β = 1 {\displaystyle \beta =1} ) scheduler could be called "maximum fairness scheduler" (to be used to provide equal throughput to voice users for example). If α = 1 {\displaystyle \alpha =1} and β = 0 {\displaystyle \beta =0} then the scheduler will always serve the mobile with the best channel conditions. This will maximize the throughput of the channel while stations with low T {\displaystyle T} are not served at all. The ( α = 1 {\displaystyle \alpha =1} and β = 0 {\displaystyle \beta =0} ) scheduler could be called "max rate" scheduler. [ 2 ] Using α ≈ 1 {\displaystyle \alpha \approx 1} and β ≈ 1 {\displaystyle \beta \approx 1} will yield the proportional fair scheduling algorithm used in 3G networks. [ 3 ] The ( α = 1 {\displaystyle \alpha =1} and β = 1 {\displaystyle \beta =1} ) scheduler could be implemented by providing the same amount of time and spectrum for each user, irrespective of the desired packet size, channel quality and data rate (MCS) used. The proportional fair ( α = 1 {\displaystyle \alpha =1} and β = 1 {\displaystyle \beta =1} ) scheduler could be called "equal effort scheduler" or "time/spectrum Round Robin scheduler".
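A minimal sketch of this priority rule in Python (illustrative only: the random channel model, the user count, and the averaging constant are assumptions, not from the source). Each slot, the scheduler serves the user maximizing T α / R β {\displaystyle T^{\alpha }/R^{\beta }} and updates the historical average rate R with an exponentially weighted moving average; the constant tc plays the role of the "memory constant" described below.

```python
import random

def proportional_fair_schedule(n_users=4, n_slots=1000, alpha=1.0, beta=1.0, tc=100.0):
    """Serve one user per slot, chosen by the priority P = T**alpha / R**beta."""
    R = [1e-6] * n_users      # EWMA of each user's served rate (seeded small, avoids 0-division)
    served = [0.0] * n_users  # total data delivered per user
    for _ in range(n_slots):
        # T: rate each user could achieve this slot (random stand-in for channel state;
        # higher-indexed users are given statistically better channels here)
        T = [random.uniform(0.1, 1.0 + i) for i in range(n_users)]
        k = max(range(n_users), key=lambda i: T[i] ** alpha / R[i] ** beta)
        served[k] += T[k]
        for i in range(n_users):
            # only the scheduled user contributes its rate this slot
            R[i] = (1 - 1 / tc) * R[i] + (1 / tc) * (T[i] if i == k else 0.0)
    return served

print(proportional_fair_schedule())
```

With alpha=0, beta=1 the same loop approximates the "maximum fairness" behaviour described above, and with alpha=1, beta=0 it reduces to the "max rate" scheduler.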
This technique can be further parametrized by using a "memory constant" that determines the period of time over which the station data rate used in calculating the priority function is averaged. A larger constant generally improves throughput at the expense of reduced short-term fairness. | https://en.wikipedia.org/wiki/Proportional-fair_scheduling |
Proportional control , in engineering and process control, is a type of linear feedback control system in which a correction is applied to the controlled variable, and the size of the correction is proportional to the difference between the desired value ( setpoint , SP) and the measured value ( process variable , PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor .
The proportional control concept is more complex than an on–off control system such as a bi-metallic domestic thermostat , but simpler than a proportional–integral–derivative (PID) control system used in something like an automobile cruise control . On–off control will work where the overall system has a relatively long response time, but can result in instability if the system being controlled has a rapid response time. Proportional control overcomes this by modulating the output to the controlling device, such as a control valve at a level which avoids instability, but applies correction as fast as practicable by applying the optimum quantity of proportional gain.
A drawback of proportional control is that it cannot eliminate the residual SP − PV error in processes with compensation e.g. temperature control, as it requires an error to generate a proportional output. To overcome this the PI controller was devised, which uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time to produce an "I" component for the controller output.
In the proportional control algorithm, the controller output is proportional to the error signal, which is the difference between the setpoint and the process variable. In other words, the output of a proportional controller is the multiplication product of the error signal and the proportional gain.
This can be mathematically expressed as
P o u t = K p e ( t ) + p 0 {\displaystyle P_{\mathrm {out} }=K_{p}\,e(t)+p_{0}}
where P o u t {\displaystyle P_{\mathrm {out} }} is the controller output, K p {\displaystyle K_{p}} is the proportional gain, e ( t ) {\displaystyle e(t)} is the instantaneous error (setpoint minus process variable), and p 0 {\displaystyle p_{0}} is the controller output when the error is zero.
Constraints: In a real plant, actuators have physical limitations that can be expressed as constraints on P o u t {\displaystyle P_{\mathrm {out} }} . For example, P o u t {\displaystyle P_{\mathrm {out} }} may be bounded between −1 and +1 if those are the maximum output limits.
Qualifications: It is preferable to express K p {\displaystyle K_{p}} as a unitless number. To do this, we can express e ( t ) {\displaystyle e(t)} as a ratio with the span of the instrument. This span is in the same units as error (e.g. C degrees) so the ratio has no units.
Proportional control dictates g c = k c {\displaystyle {\mathit {g_{c}=k_{c}}}} . From the block diagram shown, assume that r , the setpoint, is the flowrate into a tank and e is the error , the difference between setpoint and measured process output. g p {\displaystyle {\mathit {g_{p}}}} is the process transfer function; the input into the block is flow rate and the output is tank level.
The output as a function of the setpoint, r , is known as the closed-loop transfer function . g c l = g p g c 1 + g p g c , {\displaystyle {\mathit {g_{cl}}}={\frac {\mathit {g_{p}g_{c}}}{1+g_{p}g_{c}}},} If the poles of g c l , {\displaystyle {\mathit {g_{cl}}},} are stable, then the closed-loop system is stable.
For a first-order process, a general transfer function is g p = k p τ p s + 1 {\displaystyle g_{p}={\frac {k_{p}}{\tau _{p}s+1}}} . Combining this with the closed-loop transfer function above returns g C L = k p k c τ p s + 1 1 + k p k c τ p s + 1 {\displaystyle g_{CL}={\frac {\frac {k_{p}k_{c}}{\tau _{p}s+1}}{1+{\frac {k_{p}k_{c}}{\tau _{p}s+1}}}}} . Simplifying this equation results in g C L = k C L τ C L s + 1 {\displaystyle g_{CL}={\frac {k_{CL}}{\tau _{CL}s+1}}} where k C L = k p k c 1 + k p k c {\displaystyle k_{CL}={\frac {k_{p}k_{c}}{1+k_{p}k_{c}}}} and τ C L = τ p 1 + k p k c {\displaystyle \tau _{CL}={\frac {\tau _{p}}{1+k_{p}k_{c}}}} . For stability in this system, τ C L > 0 {\displaystyle \tau _{CL}>0} ; therefore, τ p {\displaystyle \tau _{p}} must be a positive number, and k p k c > − 1 {\displaystyle k_{p}k_{c}>-1} (standard practice is to make sure that k p k c > 0 {\displaystyle k_{p}k_{c}>0} ).
Introducing a step change to the system gives the output response of y ( s ) = g C L × Δ R s {\displaystyle y(s)=g_{CL}\times {\frac {\Delta R}{s}}} .
Using the final-value theorem,
lim t → ∞ y ( t ) = lim s ↘ 0 ( s × k C L τ C L s + 1 × Δ R s ) = k C L × Δ R = y ( t ) | t = ∞ {\displaystyle \lim _{t\to \infty }y(t)=\lim _{s\,\searrow \,0}\left(s\times {\frac {k_{CL}}{\tau _{CL}s+1}}\times {\frac {\Delta R}{s}}\right)=k_{CL}\times \Delta R=y(t)|_{t=\infty }}
which shows that there will always be an offset in the system.
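A small simulation makes the offset concrete. The sketch below (illustrative: the plant gain, time constant, and setpoint are made-up values, not from the source) Euler-integrates a first-order process under pure proportional control; the output settles at k C L Δ R {\displaystyle k_{CL}\Delta R} rather than at the setpoint.

```python
def simulate_p_control(kc=2.0, kp=1.0, tau=5.0, setpoint=1.0, dt=0.01, t_end=60.0):
    """Euler-integrate a first-order plant dy/dt = (kp*u - y)/tau under u = kc*e."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y              # error, SP - PV
        u = kc * e                    # pure proportional action
        y += dt * (kp * u - y) / tau  # first-order plant dynamics
    return y

final = simulate_p_control()
predicted = (1.0 * 2.0) / (1.0 + 1.0 * 2.0)   # k_CL * ΔR = kp*kc/(1 + kp*kc) * 1.0
print(f"simulated steady state: {final:.4f}  predicted: {predicted:.4f}")
```

With these numbers both values come out near 2/3, i.e. a permanent offset of about one third of the unit step.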
For an integrating process, a general transfer function is g p = 1 s ( s + 1 ) {\displaystyle g_{p}={\frac {1}{s(s+1)}}} , which, when combined with the closed-loop transfer function, becomes g C L = k c s ( s + 1 ) + k c {\displaystyle g_{CL}={\frac {k_{c}}{s(s+1)+k_{c}}}} .
Introducing a step change to the system gives the output response of y ( s ) = g C L × Δ R s {\displaystyle y(s)=g_{CL}\times {\frac {\Delta R}{s}}} .
Using the final-value theorem,
lim t → ∞ y ( t ) = lim s ↘ 0 ( s × k c s ( s + 1 ) + k c × Δ R s ) = Δ R = y ( t ) | t = ∞ {\displaystyle \lim _{t\to \infty }y(t)=\lim _{s\,\searrow \,0}\left(s\times {\frac {k_{c}}{s(s+1)+k_{c}}}\times {\frac {\Delta R}{s}}\right)=\Delta R=y(t)|_{t=\infty }}
meaning there is no offset in this system. This is the only process that will not have any offset when using a proportional controller. [ 1 ]
Offset error is the difference between the desired value and the actual value, SP − PV error. Over a range of operating conditions, proportional control alone is unable to eliminate offset error, as it requires an error to generate an output adjustment. [ 1 ] While a proportional controller may be tuned (via p0 adjustment, if possible) to eliminate offset error for expected conditions, when a disturbance (deviation from existing state or setpoint adjustment) occurs in the process, corrective control action, based purely on proportional control, will result in an offset error.
Consider an object suspended by a spring as a simple proportional control. The spring will attempt to maintain the object in a certain location despite disturbances that may temporarily displace it. Hooke's law tells us that the spring applies a corrective force that is proportional to the object's displacement. While this will tend to hold the object in a particular location, the absolute resting location of the object will vary if its mass is changed. This difference in resting location is the offset error.
The proportional band is the band of controller output over which the final control element (a control valve, for instance) will move from one extreme to another. Mathematically, it can be expressed as:
P B = 100 K p {\displaystyle PB={\frac {100}{K_{p}}}\ }
So if K p {\displaystyle K_{p}} , the proportional gain, is very high, the proportional band is very small, which means that the band of controller output over which the final control element will go from minimum to maximum (or vice versa) is very small. This is the case with on–off controllers, where K p {\displaystyle K_{p}} is very high and hence, for even a small error, the controller output is driven from one extreme to another.
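For instance (a worked example with an invented gain): a controller with K p = 4 {\displaystyle K_{p}=4} has

```latex
PB = \frac{100}{K_p} = \frac{100}{4} = 25\%
```

so the final control element moves through its entire range while the error traverses only a quarter of the instrument span.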
The clear advantage of proportional over on–off control can be demonstrated by car speed control. An analogy to on–off control is driving a car by applying either full power or no power and varying the duty cycle to control speed. The power would be on until the target speed is reached, and then the power would be removed, so the car reduces speed. When the speed falls below the target, with a certain hysteresis , full power would again be applied. It can be seen that this would obviously result in poor control and large variations in speed. The more powerful the engine, the greater the instability; the heavier the car, the greater the stability. Stability may be expressed as correlating to the power-to-weight ratio of the vehicle.
In proportional control, the power output is always proportional to the (actual versus target speed) error. If the car is at target speed and the speed increases slightly due to a falling gradient, the power is reduced slightly, or in proportion to the change in error, so that the car reduces speed gradually and reaches the new target point with very little, if any, "overshoot", which is much smoother control than on–off control. In practice, PID controllers are used for this and the large number of other control processes that require more responsive control than using proportional alone. | https://en.wikipedia.org/wiki/Proportional_control |
Reasoning based on relations of proportionality is one form of what in Piaget's theory of cognitive development is called "formal operational reasoning", which is acquired in the later stages of intellectual development. There are methods by which teachers can guide students in the correct application of proportional reasoning .
In mathematics and in physics, proportionality is a mathematical relation between two quantities; it can be expressed as an equality of two ratios: a b = c d {\displaystyle {\frac {a}{b}}={\frac {c}{d}}}
Functionally, proportionality can be a relationship between variables in a mathematical equation. For example, given the following equation for the force of gravity (according to Newton ): F = G m 1 m 2 r 2 {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}}
the force of gravity between two masses is directly proportional to the product of the two masses and inversely proportional to the square of the distance between the two masses.
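As a quick check of what this functional form implies (standard algebra, not from the source): doubling one mass doubles the force, while doubling the distance quarters it.

```latex
F' = G\,\frac{(2m_1)\,m_2}{r^2} = 2F,
\qquad
F'' = G\,\frac{m_1 m_2}{(2r)^2} = \frac{F}{4}.
```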
In Piaget's model of intellectual development, the fourth and final stage is the formal operational stage . In the classic book "The Growth of Logical Thinking from Childhood to Adolescence" by Jean Piaget and Bärbel Inhelder , formal operational reasoning takes many forms, including propositional reasoning, deductive logic, separation and control of variables, combinatorial reasoning, and proportional reasoning. Robert Karplus , a science educator in the 1960s and 1970s, investigated all these forms of reasoning in adolescents and adults. Mr. Tall–Mr. Short was one of his studies.
Comparable reasoning patterns exist for inverse proportion.
Someone with knowledge about the area of triangles might reason: "Initially the area of the water forming the triangle is 12 since 1 / 2 × 4 × 6 = 12. The amount of water doesn't change so the area won't change. So the answer is 3 because 1 / 2 × 3 × 8 = 12."
A correct multiplicative answer is relatively rare. By far the most common answer is something like: "2 units because the water level on the right side increased by two units so the water level on the left side must decrease by two units and 4 – 2 = 2." Less frequently the reason for two units is: "Before there is a total of 10 units because 4 + 6 = 10. The total number of units must stay the same so the answer is 2 because 2 + 8 = 10."
So again, individuals who are not at the formal operational level apply an additive strategy rather than a multiplicative strategy to solve an inverse proportion. And, like the direct proportion, this incorrect strategy appears to be logical to the individual and appears to give a reasonable answer. Students are very surprised when they actually carry out the experiment and tilt the triangle, finding the answer is 3 and not 2 as they so confidently predicted.
Let T be the height of Mr. Tall and S be the height of Mr. Short, then the correct multiplicative strategy can be expressed as T/S = 3/2; this is a constant ratio relation. The incorrect additive strategy can be expressed as T – S = 2; this is a constant difference relation. Here is the graph for these two equations. For the numeric values involved in the problem statement, these graphs are "similar" and it is easy to see why individuals consider their incorrect answers perfectly reasonable.
Now consider our inverse proportion using the "water triangle". Let L be the height of the water on the left side and R be the height of the water on the right side, then the correct multiplicative strategy can be expressed as L × R = 24; this is a constant product relation. The incorrect additive strategy can be expressed as L + R = 10; this is a constant sum relation. Here is the graph for these two equations. For the numeric values involved in the problem statement, these graphs are "similar" and it is easy to see why individuals consider their incorrect answers perfectly reasonable.
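The competing strategies can be checked numerically. A short sketch (the numbers are those of the two problems described above; the script itself is illustrative):

```python
# Mr. Tall / Mr. Short: the correct relation is the constant ratio T/S = 3/2.
S = 6                                          # Mr. Short's height in paper clips
print("constant ratio:      T =", S * 3 / 2)   # 9.0  (correct)
print("constant difference: T =", S + 2)       # 8    (common incorrect answer)

# Water triangle: the correct relation is the constant product L * R = 24.
R = 8                                          # right-side water level after tilting
print("constant product:    L =", 24 / R)      # 3.0  (correct)
print("constant sum:        L =", 10 - R)      # 2    (common incorrect answer)
```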
As any experienced teacher will attest [ citation needed ] , it is not sufficient to simply tell a student his/her answer is incorrect and then instruct the student to use the correct solution. The incorrect strategy has not been "unwired in the brain" and would re-emerge after the current lesson has been completed.
Also the additive strategies noted above cannot simply be labeled as "incorrect" since they do correctly match other real world situations. For example, consider the following problem:
On Independence Day this year Mr. Tall was 6 years old and Mr. Short was 4 years old. On a future Independence Day Mr. Short is 6 years old. How old will Mr. Tall be on that Independence Day?
Similarly the constant sum relation can be correct for some situations. Consider the following problem.
There are four beavers on the left side of a river and six beavers on the right side of the river. At a later time with the same group of beavers there are eight beavers on the right side of the river. How many beavers will there be on the left side?
So there are situations where the additive relations (constant difference and constant sum) are correct and other situations where the multiplicative relations (constant ratio and constant product) are correct.
It is critically important that students on their own recognize that their current mode of reasoning, say that it is additive, is inappropriate for a multiplicative problem they are trying to solve. Robert Karplus developed a model of learning he called the learning cycle that facilitates the acquisition of new reasoning skills.
Hands-on activities are extremely useful in the learning cycle. After making predictions about the height of Mr. Tall in paper clips, the measuring tools can be introduced and the students can test their strategies. For the student using a constant difference relation, actual measurement will show that Mr. Tall is actually nine paper clips high and this will set up some cognitive dissonance.
The same is true for the inverse relations. Here is a picture of two students working with the "water triangle". Given the problem noted above, most students predict the water level on the left side will drop to two units when the water triangle is tilted. When they carry out the experiment and see that the answer is 3 units, this establishes some cognitive dissonance. This is a prime time for the teacher to move the lesson into the second stage of the learning cycle.
It is important that the students not over apply the multiplicative strategies they learn. Therefore, some of the hands-on activities might not be based on a multiplicative relation. Here is a picture of two students working with an apparatus where the constant sum relation is correct.
It is not always possible or feasible to put carefully designed hands-on activities into the hands of students. Also, older audiences do not always react well to using hands-on experimentation. However, it is often possible to introduce cognitive dissonance through thought experiments .
In all the experiments noted above there are two variables whose values change based on a fixed relation. Consider the following problem that is similar to the Mr. Tall and Mr. Short problem.
Here is a photograph of a father and a daughter. In this picture the daughter is 4 cm high and the father is 6 cm high. They decided to enlarge the picture and in the bigger picture the daughter is 6 cm high. How high is the father in the larger picture?
A very common answer for an individual using an additive relation is 8 cm because the father is always 2 cm higher than his daughter. So now ask this student the following question:
Suppose they made a very small version of the original picture and in this small picture the father is 2 cm high. How high will the daughter be in this small picture?
The student quickly realizes that the strategy "the father is always 2 cm higher than his daughter" is not correct. This can also be achieved by exploring the other extreme where the original picture is blown up to poster size and the daughter is 100 cm high. How high will the father be in this poster? A student answering 102 cm realizes that the father and daughter are almost the same height which cannot be right. Once cognitive dissonance is present, the teacher can introduce the correct relation, constant ratio.
The student can also be encouraged to conduct their own thought experiments, such as "what if the height of the daughter doubles in an enlargement, what will happen to the height of the father?" Most students, including those still at the concrete operational stage, will quickly answer that the father's height must also double. The abstract thought experiment is: "Suppose that one of the variables is doubled in value, how will the other variable change?" If the answer is "double", then this may be a constant ratio problem. But if the answer is not double, such as for the age problem with Mr. Tall and Mr. Short given above, then it is not a constant ratio problem.
For inverse relations, such as the "water triangle", limiting cases can also introduce cognitive dissonance. For example:
Given the initial conditions with the water level on the left at 4 units and the water level on the right at 6 units, predict what is the water level on the left if the triangle is tilted until the water level on the right is 10 units.
Students will abandon the additive strategy at this point realizing that 0 cannot be the correct answer. A thought experiment can be performed for inverse relations. If one variable doubles in value, what happens to the other variable? If the answer is 1 / 2 then this might be a constant product relation (that is, an inverse proportion).
Plotting the values of variables can also be a valuable tool for identifying whether two variables are directly proportional or not. If they are directly proportional, then the values should be on a straight line and that line should intersect the origin.
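The same test is easy to run numerically: two data sequences are directly proportional exactly when their pointwise ratios are one constant, which is also the slope of the line through the origin. A small sketch (the function name and tolerance are assumptions):

```python
def proportionality_constant(xs, ys, tol=1e-9):
    """Return k if ys[i] = k * xs[i] for all i (within tol), else None."""
    ratios = [y / x for x, y in zip(xs, ys) if x != 0]
    if not ratios:
        return None
    k = ratios[0]
    return k if all(abs(r - k) <= tol for r in ratios) else None

print(proportionality_constant([1, 2, 3], [2.5, 5.0, 7.5]))  # 2.5  (directly proportional)
print(proportionality_constant([1, 2, 3], [3, 5, 7]))        # None (constant difference instead)
```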
The four functional relations noted above, constant sum, constant difference, constant product, and constant ratio, are based on the four arithmetic operations students are most familiar with, namely, addition, subtraction, multiplication and division. Most relations in the real world do not fall into one of these categories. However, if students learn simple techniques such as thought experiments and plotting graphs, they will be able to apply these techniques to more complex situations.
Again, consider Newton's equation for the force of gravity: F = G m 1 m 2 r 2 {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}}
If a student understands the functional relation between the variables, then he/she should be able to answer the following thought experiments.
What would happen to the force of gravitational attraction if:
Generally, thought experiments must be confirmed by experimental results. Many children and adults when asked to perform a thought experiment on the mass of an object and the velocity with which it falls to the earth might say that when the mass is doubled then the object will fall twice as fast. However, experimental results do not back up this "logical" thought experiment so it is always essential that theoretical results agree with experimental data. | https://en.wikipedia.org/wiki/Proportional_reasoning |
In mathematics , two sequences of numbers, often experimental data , are proportional or directly proportional if their corresponding elements have a constant ratio . The ratio is called coefficient of proportionality (or proportionality constant ) and its reciprocal is known as constant of normalization (or normalizing constant ). Two sequences are inversely proportional if corresponding elements have a constant product.
Two functions f ( x ) {\displaystyle f(x)} and g ( x ) {\displaystyle g(x)} are proportional if their ratio f ( x ) g ( x ) {\textstyle {\frac {f(x)}{g(x)}}} is a constant function .
If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion , e.g., a / b = x / y = ⋯ = k (for details see Ratio ).
Proportionality is closely related to linearity .
Given an independent variable x and a dependent variable y , y is directly proportional to x [ 1 ] if there is a positive constant k such that: y = k x {\displaystyle y=kx}
The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha ) or "~", with exception of Japanese texts, where "~" is reserved for intervals:
For x ≠ 0 {\displaystyle x\neq 0} the proportionality constant can be expressed as the ratio: k = y x {\displaystyle k={\frac {y}{x}}}
It is also called the constant of variation or constant of proportionality .
Given such a constant k , the proportionality relation ∝ with proportionality constant k between two sets A and B is the equivalence relation defined by { ( a , b ) ∈ A × B : a = k b } . {\displaystyle \{(a,b)\in A\times B:a=kb\}.}
A direct proportionality can also be viewed as a linear equation in two variables with a y -intercept of 0 and a slope of k > 0, which corresponds to linear growth .
Two variables are inversely proportional (also called varying inversely , in inverse variation , in inverse proportion ) [ 2 ] if each of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant. [ 3 ] It follows that the variable y is inversely proportional to the variable x if there exists a non-zero constant k such that y = k x {\displaystyle y={\frac {k}{x}}}
or equivalently, x y = k {\displaystyle xy=k} . Hence the constant " k " is the product of x and y .
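As a brief worked instance (with an invented constant k = 12 {\displaystyle k=12} ):

```latex
xy = 12
\quad\Longrightarrow\quad
y = \frac{12}{x}:
\qquad
x = 2 \Rightarrow y = 6,\quad
x = 3 \Rightarrow y = 4,\quad
x = 6 \Rightarrow y = 2.
```

Doubling x {\displaystyle x} from 3 to 6 halves y {\displaystyle y} , the signature of inverse proportion.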
The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola . The product of the x and y values of each point on the curve equals the constant of proportionality ( k ). Since neither x nor y can equal zero (because k is non-zero), the graph never crosses either axis.
Direct and inverse proportion contrast as follows: in direct proportion the variables increase or decrease together. With inverse proportion, an increase in one variable is associated with a decrease in the other. For instance, in travel, a constant speed dictates a direct proportion between distance and time travelled; in contrast, for a given distance (the constant), the time of travel is inversely proportional to speed: s × t = d .
The concepts of direct and inverse proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates ; the two coordinates correspond to the constant of direct proportionality that specifies a point as being on a particular ray and the constant of inverse proportionality that specifies a point as being on a particular hyperbola .
The Unicode characters for proportionality are the following: | https://en.wikipedia.org/wiki/Proportionality_(mathematics) |
A proportional–integral–derivative controller ( PID controller or three-term controller ) is a feedback -based control loop mechanism commonly used to manage machines and processes that require continuous control and automatic adjustment. It is typically used in industrial control systems and various other applications where constant control through modulation is necessary without human intervention. The PID controller automatically compares the desired target value ( setpoint or SP) with the actual value of the system ( process variable or PV). The difference between these two values is called the error value , denoted as e ( t ) {\displaystyle e(t)} .
It then applies corrective actions automatically to bring the PV to the same value as the SP using three methods: The proportional ( P ) component responds to the current error value by producing an output that is directly proportional to the magnitude of the error. This provides immediate correction based on how far the system is from the desired setpoint. The integral ( I ) component, in turn, considers the cumulative sum of past errors to address any residual steady-state errors that persist over time, eliminating lingering discrepancies. Lastly, the derivative ( D ) component predicts future error by assessing the rate of change of the error, which helps to mitigate overshoot and enhance system stability, particularly when the system undergoes rapid changes. The PID output signal can directly control actuators through voltage, current, or other modulation methods, depending on the application. The PID controller reduces the likelihood of human error and improves automation .
A common example is a vehicle’s cruise control system . For instance, when a vehicle encounters a hill, its speed will decrease if the engine power output is kept constant. The PID controller adjusts the engine's power output to restore the vehicle to its desired speed, doing so efficiently with minimal delay and overshoot.
The theoretical foundation of PID controllers dates back to the early 1920s with the development of automatic steering systems for ships. This concept was later adopted for automatic process control in manufacturing, first appearing in pneumatic actuators and evolving into electronic controllers. PID controllers are widely used in numerous applications requiring accurate, stable, and optimized automatic control , such as temperature regulation , motor speed control, and industrial process management.
The distinguishing feature of the PID controller is the ability to use the three control terms of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an error value e ( t ) {\displaystyle e(t)} as the difference between a desired setpoint SP = r ( t ) {\displaystyle {\text{SP}}=r(t)} and a measured process variable PV = y ( t ) {\displaystyle {\text{PV}}=y(t)} : e ( t ) = r ( t ) − y ( t ) {\displaystyle e(t)=r(t)-y(t)} , and applies a correction based on proportional , integral , and derivative terms. The controller attempts to minimize the error over time by adjustment of a control variable u ( t ) {\displaystyle u(t)} , such as the opening of a control valve , to a new value determined by a weighted sum of the control terms.
The PID controller directly generates a continuous control signal based on error, without discrete modulation.
In this model:
Tuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the physical system, external to the controller. These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by introducing a setpoint change and observing the system response. [ 2 ]
Control action – The mathematical model and practical loop above both use a direct control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. This is because the "error" term is not the deviation from the setpoint (actual-desired) but is in fact the correction needed (desired-actual). The system is called reverse acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop was 100–0% valve opening for 0–100% control output – meaning that the controller action has to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.
The overall control function is
u ( t ) = K p e ( t ) + K i ∫ 0 t e ( τ ) d τ + K d d e ( t ) d t {\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\frac {de(t)}{dt}}}
where K p {\displaystyle K_{\text{p}}} , K i {\displaystyle K_{\text{i}}} , and K d {\displaystyle K_{\text{d}}} , all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P , I , and D ).
In the standard form of the equation (see later in article), K i {\displaystyle K_{\text{i}}} and K d {\displaystyle K_{\text{d}}} are respectively replaced by K p / T i {\displaystyle K_{\text{p}}/T_{\text{i}}} and K p T d {\displaystyle K_{\text{p}}T_{\text{d}}} ; the advantage of this being that T i {\displaystyle T_{\text{i}}} and T d {\displaystyle T_{\text{d}}} have some understandable physical meaning, as they represent an integration time and a derivative time respectively. K p T d {\displaystyle K_{\text{p}}T_{\text{d}}} is the time constant with which the controller will attempt to approach the set point. K p / T i {\displaystyle K_{\text{p}}/T_{\text{i}}} determines how long the controller will tolerate the output being consistently above or below the set point. In this form the control function becomes
u ( t ) = K p ( e ( t ) + 1 T i ∫ 0 t e ( τ ) d τ + T d d e ( t ) d t ) {\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,d\tau +T_{\text{d}}{\frac {de(t)}{dt}}\right)}
Although a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value. [ citation needed ]
The use of the PID algorithm does not guarantee optimal control of the system or its control stability ( see § Limitations , below ). Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation is required to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation . But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.
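A minimal discrete-time PID loop in Python (an illustrative sketch of the control function above, not any particular industrial implementation; the gains, output limits, time step, and toy plant are placeholder values). It accumulates the integral term, differences the error for the derivative term, and clamps the output to actuator limits:

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*(integral of e) + Kd*(de/dt), clamped to limits."""

    def __init__(self, kp, ki, kd, dt, out_min=-5.0, out_max=5.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measurement):
        error = setpoint - measurement                    # e(t) = SP - PV
        self.integral += error * self.dt                  # accumulate the integral term
        derivative = (error - self.prev_error) / self.dt  # backward-difference de/dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, u))    # respect actuator limits

# Example: drive a toy first-order plant (dy/dt = u - y) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
y = 0.0
for _ in range(2000):
    u = pid.update(1.0, y)
    y += 0.05 * (u - y)
print(round(y, 3))  # settles at the setpoint; the integral term removes the offset
```

Unlike the pure proportional controller discussed earlier, the integral term lets the loop hold the setpoint with zero steady-state error.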
Continuous control, before PID controllers were fully understood and implemented, has one of its origins in the centrifugal governor , which uses rotating weights to control a process. This was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed. [ 3 ] [ 4 ]
With the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt 's self-designed " conical pendulum " governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. This was based on the millstone-gap control concept. [ 5 ]
Rotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper On Governors . He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. [ 6 ] [ 5 ] The problem was examined further in 1874 by Edward Routh , Charles Sturm , and in 1895, Adolf Hurwitz , all of whom contributed to the establishment of control stability criteria. [ 5 ] In subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs , who in 1872 theoretically analyzed Watt's conical pendulum governor.
About this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control . Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. [ 7 ] This development (named by Whitehead as "The Secret" to give no clue to its action) was around 1868. [ 8 ]
Another early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically based. [ 9 ]
It was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky . [ 10 ] Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman . He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; [ 11 ] this was then given a mathematical treatment by Minorsky. [ 5 ] His goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error ), which required adding the integral term. Finally, the derivative term was added to improve stability and control.
Trials were carried out on the USS New Mexico , with the controllers controlling the angular velocity (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve. [ 12 ]
The Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others [ who? ] in the 1930s. [ citation needed ]
The wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback . This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. [ 5 ] Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the "Stabilog" controller which gave both proportional and integral functions using feedback bellows. [ 5 ] The integral term was called Reset . [ 13 ] Later the derivative term was added by a further bellows and adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations . They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–1.0 bar) was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0–100%.
In the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10–50 mA and 4–20 mA current loop signals (the latter became the industry standard). Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.
Most modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers .
Electronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive , the power conditioning of a power supply , or even the movement-detection circuit of a modern seismometer . Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers. [ 14 ]
Consider a robotic arm [ 15 ] that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, and external forces on the arm such as a load to lift or work to be done on an external object.
The PID controller continuously adjusts the input current to achieve smooth motion.
By measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).
The obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the downward side, but a smaller force if the error is on the upward side. This is where the integral and derivative terms play their part.
An integral term increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure "I" controller could bring the error to zero, but it would be both weakly reacting at the start (because the action would be small at the beginning, depending on time to become significant) and more aggressive at the end (the action increases as long as the error is positive, even if the error is near zero).
Applying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid . If the amplitude of the oscillations increases with time, the system is unstable. If it decreases, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable .
A derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but rather the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).
In the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped . A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position. The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.
If a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.
In theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature , pressure , force , feed rate , [ 16 ] flow rate , chemical composition (component concentrations ), weight , position , speed , and practically every other variable for which a measurement exists.
The PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u ( t ) {\displaystyle u(t)} as the controller output, the final form of the PID algorithm is

u ( t ) = K p e ( t ) + K i ∫ 0 t e ( τ ) d τ + K d d e ( t ) d t {\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}}

where K p {\displaystyle K_{\text{p}}} , K i {\displaystyle K_{\text{i}}} , and K d {\displaystyle K_{\text{d}}} are the non-negative proportional, integral, and derivative gains, e ( t ) = S P − P V ( t ) {\displaystyle e(t)=\mathrm {SP} -\mathrm {PV} (t)} is the error between the setpoint and the process variable, and t {\displaystyle t} is the time.
Equivalently, the transfer function in the Laplace domain of the PID controller is

L ( s ) = K p + K i / s + K d s {\displaystyle L(s)=K_{\text{p}}+K_{\text{i}}/s+K_{\text{d}}s}

where s {\displaystyle s} is the complex angular frequency.
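The algorithm above maps directly onto a discrete-time loop. The following is a minimal sketch in Python; the helper name pid_step, the gains, the time step, and the toy first-order plant are illustrative assumptions rather than values from this article.

```python
def pid_step(error, state, Kp, Ki, Kd, dt):
    """One sample of u = Kp*e + Ki*integral(e) + Kd*de/dt, using a
    rectangular integral and a backward-difference derivative."""
    integral, prev_error = state
    integral += error * dt                    # accumulate the integral term
    derivative = (error - prev_error) / dt    # approximate de/dt
    output = Kp * error + Ki * integral + Kd * derivative
    return output, (integral, error)

# Drive a toy first-order plant (dPV/dt = u - PV) toward a setpoint of 1.0.
state, pv, setpoint, dt = (0.0, 0.0), 0.0, 1.0, 0.01
for _ in range(1000):
    u, state = pid_step(setpoint - pv, state, Kp=2.0, Ki=1.0, Kd=0.05, dt=dt)
    pv += (u - pv) * dt
```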
The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant K p , called the proportional gain constant.
The proportional term is given by

P o u t = K p e ( t ) {\displaystyle P_{\mathrm {out} }=K_{\text{p}}e(t)}
A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning ). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change. [ citation needed ]
The steady-state error is the difference between the desired final output and the actual one. [ 17 ] Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. [ a ] Steady-state error (SSE) is proportional to the process gain and inversely proportional to the proportional gain. SSE may be mitigated by adding a compensating bias term to both the setpoint and the output, or corrected dynamically by adding an integral term.
The contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the integral of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain ( K i ) and added to the controller output.
The integral term is given by

I o u t = K i ∫ 0 t e ( τ ) d τ {\displaystyle I_{\mathrm {out} }=K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau }
The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning ).
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain K d . The derivative gain K d determines the magnitude of the contribution of the derivative term to the overall control action.
The derivative term is given by

D o u t = K d d e ( t ) d t {\displaystyle D_{\mathrm {out} }=K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}}
Derivative action predicts system behavior and thus improves settling time and stability of the system. [ 18 ] [ 19 ] An ideal derivative is not causal , so implementations of PID controllers include additional low-pass filtering of the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers [ citation needed ] – because of its variable impact on system stability in real-world applications.
Tuning a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.
Even though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control . Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.
Some processes have a degree of nonlinearity , so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions).
If the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges , with or without oscillation , and is limited only by saturation or mechanical breakage. Instability is caused by excess gain, particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired. [ citation needed ]
Mathematically, the origins of instability can be seen in the Laplace domain . [ 20 ]
The closed-loop transfer function is

H ( s ) = K ( s ) G ( s ) 1 + K ( s ) G ( s ) {\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}}}
where K ( s ) {\displaystyle K(s)} is the PID transfer function, and G ( s ) {\displaystyle G(s)} is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some s {\displaystyle s} . [ 20 ] This happens in situations where K ( s ) G ( s ) = − 1 {\displaystyle K(s)G(s)=-1} . In other words, this happens when | K ( s ) G ( s ) | = 1 {\displaystyle |K(s)G(s)|=1} with a 180° phase shift. Stability is guaranteed when | K ( s ) G ( s ) | < 1 {\displaystyle |K(s)G(s)|<1} at frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion .
The optimal behavior on a process change or setpoint change varies depending on the application.
Two basic requirements are regulation (disturbance rejection – staying at a given setpoint) and command tracking (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time . Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.
There are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.
The choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters. [ citation needed ]
If the system must remain online, one tuning method is to first set K i {\displaystyle K_{i}} and K d {\displaystyle K_{d}} values to zero. Increase the K p {\displaystyle K_{p}} until the output of the loop oscillates; then set K p {\displaystyle K_{p}} to approximately half that value for a "quarter amplitude decay"-type response. Then increase K i {\displaystyle K_{i}} until any offset is corrected in sufficient time for the process, but not until too great a value causes instability. Finally, increase K d {\displaystyle K_{d}} , if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much K p {\displaystyle K_{p}} causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a K p {\displaystyle K_{p}} setting significantly less than half that of the K p {\displaystyle K_{p}} setting that was causing oscillation. [ citation needed ]
Another heuristic tuning method is known as the Ziegler–Nichols method , introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the K i {\displaystyle K_{i}} and K d {\displaystyle K_{d}} gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain K u {\displaystyle K_{u}} at which the output of the loop starts to oscillate constantly. K u {\displaystyle K_{u}} and the oscillation period T u {\displaystyle T_{u}} are used to set the gains as follows: for a classic PID controller, K p = 0.6 K u {\displaystyle K_{p}=0.6K_{u}} , K i = 1.2 K u / T u {\displaystyle K_{i}=1.2K_{u}/T_{u}} , and K d = 0.075 K u T u {\displaystyle K_{d}=0.075K_{u}T_{u}} (equivalently, T i = T u / 2 {\displaystyle T_{i}=T_{u}/2} and T d = T u / 8 {\displaystyle T_{d}=T_{u}/8} ).
The oscillation frequency is often measured instead of the period; since the two are reciprocals, substituting accordingly in the formulas yields the same result.
These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative time parameters T i {\displaystyle T_{i}} and T d {\displaystyle T_{d}} depend on the oscillation period T u {\displaystyle T_{u}} .
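As a sketch of how the rule of thumb above is applied in code (assuming the ideal parallel form and the classic coefficients just given):

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic Ziegler-Nichols PID gains from the ultimate gain Ku and the
    ultimate oscillation period Tu, for the ideal parallel form."""
    Kp = 0.6 * Ku
    Ki = 1.2 * Ku / Tu      # equals Kp / Ti with Ti = Tu / 2
    Kd = 0.075 * Ku * Tu    # equals Kp * Td with Td = Tu / 8
    return Kp, Ki, Kd

Kp, Ki, Kd = ziegler_nichols_pid(Ku=4.0, Tu=2.5)  # illustrative values
```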
The Cohen–Coon method, developed in 1953, is based on a first-order plus time delay model. Similar to the Ziegler–Nichols method , a set of tuning parameters were developed to yield a closed-loop response with a decay ratio of 1 4 {\displaystyle {\tfrac {1}{4}}} . Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
Published in 1984 by Karl Johan Åström and Tore Hägglund, [ 25 ] the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. The output is switched (as if by a relay , hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.
Specifically, the ultimate period T u {\displaystyle T_{u}} is assumed to be equal to the observed period, and the ultimate gain is computed as K u = 4 b / π a , {\displaystyle K_{u}=4b/\pi a,} where a is the amplitude of the process variable oscillation, and b is the amplitude of the control output change which caused it.
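A sketch of that arithmetic, assuming the oscillation amplitudes have already been extracted from the recorded waveforms (the function name is an illustrative assumption):

```python
import math

def relay_ultimate_parameters(a, b, observed_period):
    """Ultimate gain and period from a relay experiment.
    a is the amplitude of the process-variable oscillation, b the amplitude
    of the relay output change that caused it."""
    Ku = 4.0 * b / (math.pi * a)
    Tu = observed_period        # observed period taken as the ultimate period
    return Ku, Tu
```

The resulting pair ( K u , T u ) can then be fed into a tuning rule such as the Ziegler–Nichols formulas sketched earlier.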
There are numerous variants on the relay method. [ 26 ]
The transfer function for a first-order process with dead time is

y ( s ) = k p e − θ s τ p s + 1 u ( s ) {\displaystyle y(s)={\frac {k_{p}e^{-\theta s}}{\tau _{p}s+1}}u(s)}

where k p is the process gain, τ p is the time constant, θ is the dead time, and u ( s ) is a step change input. Converting this transfer function to the time domain results in

y ( t ) = k p Δ u ( 1 − e − ( t − θ ) / τ p ) {\displaystyle y(t)=k_{p}\,\Delta u\left(1-e^{-(t-\theta )/\tau _{p}}\right)} for t ≥ θ {\displaystyle t\geq \theta } ,
using the same parameters found above.
It is important when using this method to apply a large enough step-change input that the output can be measured; however, too large a step change can affect the process stability. Additionally, a larger step change makes it more likely that the observed output change is due to the step rather than to a disturbance (for best results, try to minimize disturbances when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain ( k p ) is equal to the change in output divided by the change in input. The dead time θ is the amount of time between when the step change occurred and when the output first changed. The time constant ( τ p ) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants. [ 27 ]
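A sketch of the 63.2% method applied to logged step-response data; the function name, the use of NumPy, and the 1% movement threshold for detecting the dead time are illustrative assumptions, and a rising response that settles by the end of the record is assumed:

```python
import numpy as np

def fit_fopdt(t, y, u_step, y0):
    """Estimate (kp, tau_p, theta) from a recorded step response.
    t, y   - time stamps and measured output after a step of size u_step
    y0     - steady-state output before the step"""
    y_inf = y[-1]
    kp = (y_inf - y0) / u_step                       # gain = d_output / d_input
    moved = np.abs(y - y0) > 0.01 * abs(y_inf - y0)  # first visible movement
    theta = t[np.argmax(moved)]                      # dead time
    y_632 = y0 + 0.632 * (y_inf - y0)
    tau_p = t[np.argmax(y >= y_632)] - theta         # time to 63.2% after theta
    return kp, tau_p, theta
```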
Most modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients. [ 28 ]
Other formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules. [ 29 ]
Advances in automated PID loop tuning software also deliver algorithms for tuning PID loops in a dynamic or non-steady state (NSS) scenario. The software models the dynamics of a process through a disturbance and calculates PID control parameters in response. [ 30 ]
While PID controllers are applicable to many control problems, and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide optimal control . The fundamental difficulty with PID control is that it is a feedback control system, with constant parameters, and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for an observer without a model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer.
PID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade off regulation against response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.
The most significant improvement is to incorporate feed-forward control with knowledge about the system, and using the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.
PID controllers work best when the loop to be controlled is linear and symmetric. Thus, their performance in non-linear and asymmetric systems is degraded.
A non-linear valve, for instance, in a flow control application, will result in variable loop sensitivity, requiring dampened action to prevent instability. One solution is the use of the valve's non-linear characteristic in the control algorithm to compensate for this.
An asymmetric application, for example, is temperature control in HVAC systems using only active heating (via a heating element), where there is only passive cooling available. When it is desired to lower the controlled temperature, the heating output is turned off, but there is no active cooling from the control output. Any overshoot of rising temperature can therefore only be corrected slowly; it cannot be forced downward by the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the set point. The inherent degradation of control quality in this application could be solved by application of active cooling.
A problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. [ 31 ] In some cases, the differential band can be turned off with little loss of control. This is equivalent to using the PID controller as a PI controller .
The basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.
One common problem resulting from the ideal PID implementations is integral windup . Following a large change in setpoint the integral term can accumulate an error larger than the maximal value for the regulation variable (windup), thus the system overshoots and continues to increase until this accumulated error is unwound. This problem can be addressed by disabling the integration until the process variable has entered the controllable region, by preventing the integral term from accumulating above or below predetermined bounds (clamping), or by back-calculating the integral term so that the controller output stays within feasible limits.
For example, a PID loop is used to control the temperature of an electric resistance furnace where the system has stabilized. Now when the door is opened and something cold is put into the furnace the temperature drops below the setpoint. The integral function of the controller tends to compensate for error by introducing another error in the positive direction. This overshoot can be avoided by freezing of the integral function after the opening of the door for the time the control loop typically needs to reheat the furnace.
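A minimal sketch of the clamping approach just described; the bounds and the helper name are illustrative assumptions:

```python
def pid_step_antiwindup(error, state, Kp, Ki, Kd, dt, i_min=-10.0, i_max=10.0):
    """PID step with the accumulated integral clamped to [i_min, i_max],
    preventing windup after large setpoint changes or disturbances."""
    integral, prev_error = state
    integral += error * dt
    integral = max(i_min, min(i_max, integral))   # anti-windup clamp
    derivative = (error - prev_error) / dt
    output = Kp * error + Ki * integral + Kd * derivative
    return output, (integral, error)
```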
A PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.
The controller output is given by

K p Δ + K i ∫ Δ d t {\displaystyle K_{\text{p}}\Delta +K_{\text{i}}\int \Delta \,\mathrm {d} t}
where Δ {\displaystyle \Delta } is the error or deviation of actual measured value ( PV ) from the setpoint ( SP ).
A PI controller can be modelled easily in software such as Simulink or Xcos using a "flow chart" box involving Laplace operators:

C = G ( 1 + τ s ) τ s {\displaystyle C={\frac {G(1+\tau s)}{\tau s}}}

where G = K p {\displaystyle G=K_{\text{p}}} is the proportional gain and G / τ = K i {\displaystyle G/\tau =K_{\text{i}}} is the integral gain.
Setting a value for G {\displaystyle G} is often a trade off between decreasing overshoot and increasing settling time.
The lack of derivative action can make the system steadier in the steady state in the case of noisy data, because derivative action is more sensitive to higher-frequency terms in the inputs.
Without derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.
Many PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.
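A sketch of such a deadband wrapped around the controller output; the threshold value is an illustrative assumption:

```python
def apply_deadband(new_output, last_sent_output, deadband=0.5):
    """Pass a new output through only when it leaves the deadband around the
    last sent value, reducing valve or actuator activations and wear."""
    if abs(new_output - last_sent_output) > deadband:
        return new_output        # change is large enough to act on
    return last_sent_output      # hold the previous output
```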
The proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate modifications such as ramping the setpoint gradually to its new value, computing the derivative from the process variable rather than from the error, or weighting the setpoint differently in the proportional and derivative terms (setpoint weighting).
The control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate for whatever difference or error remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.
For example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.
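A sketch of the velocity-loop example above, reusing the pid_step helper from the earlier sketch; the scaling constant Kff and all signal names are illustrative assumptions:

```python
def control_with_feedforward(vel_sp, vel_pv, accel_desired,
                             state, Kp, Ki, Kd, dt, Kff=0.8):
    """Combine an open-loop feed-forward term (scaled desired acceleration)
    with a closed-loop PID correction of the remaining velocity error."""
    u_fb, state = pid_step(vel_sp - vel_pv, state, Kp, Ki, Kd, dt)
    u_ff = Kff * accel_desired       # feed-forward: unaffected by feedback
    return u_ff + u_fb, state
```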
PID controllers are often implemented with a "bumpless" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. [ 33 ] A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.
In addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic , or computational verb logic. [ 34 ] [ 35 ] Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another method for improving the PID controller is to increase its degrees of freedom by using fractional orders ; the orders of the integrator and differentiator add flexibility to the controller. [ 36 ]
One distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the set point of the other. A PID controller acts as outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as inner loop controller, which reads the output of the outer loop controller as setpoint, usually controlling a more rapidly changing parameter such as flow rate or acceleration. It can be mathematically proven [ citation needed ] that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers. [ vague ]
For example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor. The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.
The proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system it controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response. [ 37 ] [ 38 ]
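A sketch of the bath example as two nested loops, reusing the pid_step helper from earlier; all gains and signal names are illustrative assumptions:

```python
def cascade_step(bath_sp, bath_pv, heater_pv, outer_state, inner_state, dt):
    """Outer loop: bath temperature error -> heater temperature setpoint.
    Inner loop: heater temperature error -> heater power command."""
    heater_sp, outer_state = pid_step(bath_sp - bath_pv, outer_state,
                                      Kp=5.0, Ki=0.02, Kd=0.0, dt=dt)
    power, inner_state = pid_step(heater_sp - heater_pv, inner_state,
                                  Kp=2.0, Ki=0.5, Kd=0.0, dt=dt)
    return power, outer_state, inner_state
```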
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form . In this form the K p {\displaystyle K_{p}} gain is applied to the I o u t {\displaystyle I_{\mathrm {out} }} and D o u t {\displaystyle D_{\mathrm {out} }} terms, yielding:

u ( t ) = K p ( e ( t ) + 1 T i ∫ 0 t e ( τ ) d τ + T d d e ( t ) d t ) {\displaystyle u(t)=K_{p}\left(e(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{d}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right)}

where T i {\displaystyle T_{i}} is the integral time and T d {\displaystyle T_{d}} is the derivative time.
In this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The proportional error term is the current error. The derivative component attempts to predict the error value at T d {\displaystyle T_{d}} seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in T i {\displaystyle T_{i}} seconds (or samples). The resulting compensated single error value is then scaled by the single gain K p {\displaystyle K_{p}} to compute the control variable.
In the parallel form, shown in the controller theory section,

u ( t ) = K p e ( t ) + K i ∫ 0 t e ( τ ) d τ + K d d e ( t ) d t {\displaystyle u(t)=K_{p}e(t)+K_{i}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{d}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}}
the gain parameters are related to the parameters of the standard form through K i = K p / T i {\displaystyle K_{i}=K_{p}/T_{i}} and K d = K p T d {\displaystyle K_{d}=K_{p}T_{d}} . This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors and is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.
In many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain K p {\displaystyle K_{p}} not as "output per degree", but rather in the reciprocal form of a proportional band 100 / K p {\displaystyle 100/K_{p}} , which is "degrees per full output": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.
In most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.
Most commercial control systems offer the option of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.
Basing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.
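A sketch of derivative-on-measurement, assuming the sign conventions used above (for a constant setpoint, de/dt = −dPV/dt, so the response to process disturbances is unchanged):

```python
def pid_step_deriv_on_pv(setpoint, pv, state, Kp, Ki, Kd, dt):
    """PID step whose derivative acts on the process variable (PV) rather
    than the error, eliminating derivative kick on setpoint steps."""
    integral, prev_pv = state
    error = setpoint - pv
    integral += error * dt
    derivative = -(pv - prev_pv) / dt    # equals de/dt when SP is constant
    output = Kp * error + Ki * integral + Kd * derivative
    return output, (integral, pv)
```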
King [ 39 ] describes an effective chart-based method.
Sometimes it is useful to write the PID regulator in Laplace transform form:

G ( s ) = K p + K i s + K d s = K d s 2 + K p s + K i s {\displaystyle G(s)=K_{p}+{\frac {K_{i}}{s}}+K_{d}s={\frac {K_{d}s^{2}+K_{p}s+K_{i}}{s}}}
Having the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.
Another representation of the PID controller is the series, or interacting form,

G ( s ) = K c ( τ i s + 1 ) ( τ d s + 1 ) τ i s {\displaystyle G(s)=K_{c}\,{\frac {(\tau _{i}s+1)(\tau _{d}s+1)}{\tau _{i}s}}}

where the parameters are related to the parameters of the standard form through

K p = K c ⋅ α {\displaystyle K_{p}=K_{c}\cdot \alpha } , T i = τ i ⋅ α {\displaystyle T_{i}=\tau _{i}\cdot \alpha } , and T d = τ d / α {\displaystyle T_{d}=\tau _{d}/\alpha }

with

α = 1 + τ d τ i {\displaystyle \alpha =1+{\frac {\tau _{d}}{\tau _{i}}}} .
This form essentially consists of a PD and PI controller in series. Because the integral is required to calculate the controller's bias, this form provides the ability to track an external bias value, which is needed for proper implementation of multi-controller advanced control schemes.
The analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be discretized . [ 40 ] Approximations for first-order derivatives are made by backward finite differences . u ( t ) {\displaystyle u(t)} and e ( t ) {\displaystyle e(t)} are discretized with a sampling period Δ t {\displaystyle \Delta t} , k is the sample index.
Differentiating both sides of PID equation using Newton's notation gives:
u ˙ ( t ) = K p e ˙ ( t ) + K i e ( t ) + K d e ¨ ( t ) {\displaystyle {\dot {u}}(t)=K_{p}{\dot {e}}(t)+K_{i}e(t)+K_{d}{\ddot {e}}(t)}
Derivative terms are approximated as

e ˙ ( t ) ≈ e [ k ] − e [ k − 1 ] Δ t , u ˙ ( t ) ≈ u [ k ] − u [ k − 1 ] Δ t . {\displaystyle {\dot {e}}(t)\approx {\frac {e[k]-e[k-1]}{\Delta t}},\qquad {\dot {u}}(t)\approx {\frac {u[k]-u[k-1]}{\Delta t}}.}

So,

u [ k ] − u [ k − 1 ] Δ t = K p e [ k ] − e [ k − 1 ] Δ t + K i e [ k ] + K d e ˙ [ k ] − e ˙ [ k − 1 ] Δ t . {\displaystyle {\frac {u[k]-u[k-1]}{\Delta t}}=K_{p}{\frac {e[k]-e[k-1]}{\Delta t}}+K_{i}e[k]+K_{d}{\frac {{\dot {e}}[k]-{\dot {e}}[k-1]}{\Delta t}}.}

Applying backward difference again gives,

e ˙ [ k ] − e ˙ [ k − 1 ] Δ t = e [ k ] − 2 e [ k − 1 ] + e [ k − 2 ] Δ t 2 . {\displaystyle {\frac {{\dot {e}}[k]-{\dot {e}}[k-1]}{\Delta t}}={\frac {e[k]-2e[k-1]+e[k-2]}{\Delta t^{2}}}.}

By simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in a MCU is finally obtained:

u [ k ] = u [ k − 1 ] + K p ( e [ k ] − e [ k − 1 ] ) + K i Δ t e [ k ] + K d Δ t ( e [ k ] − 2 e [ k − 1 ] + e [ k − 2 ] ) {\displaystyle u[k]=u[k-1]+K_{p}\left(e[k]-e[k-1]\right)+K_{i}\Delta t\,e[k]+{\frac {K_{d}}{\Delta t}}\left(e[k]-2e[k-1]+e[k-2]\right)}

or:

u [ k ] = u [ k − 1 ] + K p [ ( 1 + Δ t T i + T d Δ t ) e [ k ] − ( 1 + 2 T d Δ t ) e [ k − 1 ] + T d Δ t e [ k − 2 ] ] {\displaystyle u[k]=u[k-1]+K_{p}\left[\left(1+{\frac {\Delta t}{T_{i}}}+{\frac {T_{d}}{\Delta t}}\right)e[k]-\left(1+{\frac {2T_{d}}{\Delta t}}\right)e[k-1]+{\frac {T_{d}}{\Delta t}}e[k-2]\right]}
s.t. T i = K p / K i , T d = K d / K p {\displaystyle T_{i}=K_{p}/K_{i},T_{d}=K_{d}/K_{p}}
Note: This method in fact solves u ( t ) = K p e ( t ) + K i ∫ 0 t e ( τ ) d τ + K d d e ( t ) d t + u 0 {\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}+u_{0}} where u 0 {\displaystyle u_{0}} is a constant independent of t . This constant is useful for implementing start and stop control of the regulation loop. For instance, setting K p , K i and K d to 0 will keep u ( t ) constant. Likewise, when starting regulation on a system where the error is already close to 0 with u ( t ) non-null, it prevents the output from being driven to 0.
Here is a simple, explicit pseudocode implementation: [ citation needed ]
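A minimal version of such a loop (the variable names are illustrative; the stored previous_error and the wait(dt) call are discussed in the paragraphs that follow):

```
previous_error := 0
integral := 0

loop:
    error := setpoint - measured_value
    proportional := error
    integral := integral + error * dt
    derivative := (error - previous_error) / dt
    output := Kp * proportional + Ki * integral + Kd * derivative
    previous_error := error
    wait(dt)
    goto loop
```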
Below, pseudocode illustrates how to implement a PID controller treated as an IIR filter:
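A sketch of such a loop, matching the recursive form derived below; the coefficient names A0, A1, A2 and the initial output u0 are illustrative:

```
A0 := Kp + Ki * dt + Kd / dt
A1 := -Kp - 2 * Kd / dt
A2 := Kd / dt
error[0] := 0; error[1] := 0; error[2] := 0
output := u0

loop:
    error[2] := error[1]
    error[1] := error[0]
    error[0] := setpoint - measured_value
    output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
    wait(dt)
    goto loop
```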
The Z-transform of a PID can be written as ( Δ t {\displaystyle \Delta _{t}} is the sampling time):

C ( z ) = K p + K i Δ t 1 − z − 1 + K d Δ t ( 1 − z − 1 ) {\displaystyle C(z)=K_{p}+{\frac {K_{i}\Delta _{t}}{1-z^{-1}}}+{\frac {K_{d}}{\Delta _{t}}}\left(1-z^{-1}\right)}
and expressed in an IIR form (in agreement with the discrete implementation shown above):

C ( z ) = b 0 + b 1 z − 1 + b 2 z − 2 1 − z − 1 {\displaystyle C(z)={\frac {b_{0}+b_{1}z^{-1}+b_{2}z^{-2}}{1-z^{-1}}}}

with b 0 = K p + K i Δ t + K d / Δ t {\displaystyle b_{0}=K_{p}+K_{i}\Delta _{t}+K_{d}/\Delta _{t}} , b 1 = − K p − 2 K d / Δ t {\displaystyle b_{1}=-K_{p}-2K_{d}/\Delta _{t}} , and b 2 = K d / Δ t {\displaystyle b_{2}=K_{d}/\Delta _{t}} .
We can then deduce the recursive iteration often found in FPGA implementations: [ 41 ]

u [ k ] = u [ k − 1 ] + b 0 e [ k ] + b 1 e [ k − 1 ] + b 2 e [ k − 2 ] {\displaystyle u[k]=u[k-1]+b_{0}e[k]+b_{1}e[k-1]+b_{2}e[k-2]}
Here, K p is a dimensionless number, K i is expressed in s − 1 {\displaystyle s^{-1}} and K d is expressed in s. When performing regulation where the actuator and the measured value are not in the same unit (e.g. temperature regulation using a motor controlling a valve), K p , K i and K d may be corrected by a unit conversion factor. It can also be useful to use K i in its reciprocal form (integration time). The above implementation makes it possible to realize an I-only controller, which may be useful in some cases.
In the real world, this is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation, the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error. [ 42 ]
Note that for real code, the use of "wait(dt)" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any pre-emption delaying the algorithm.
A common issue when using K d {\displaystyle K_{d}} is the response to the derivative of a rising or falling edge of the setpoint as shown below:
A typical workaround is to filter the derivative action using a low-pass filter of time constant τ d / N {\displaystyle \tau _{d}/N} where 3 ≤ N ≤ 10 {\displaystyle 3\leq N\leq 10} , replacing the derivative term K d s {\displaystyle K_{d}s} with

K d s 1 + τ d s / N {\displaystyle {\frac {K_{d}s}{1+\tau _{d}s/N}}}
A variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative: | https://en.wikipedia.org/wiki/Proportional–integral–derivative_controller |
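A sketch of such a variant, in the style of the loops above, discretizing the low-pass filter with the standard exponential-smoothing factor alpha = dt / (taud/N + dt); the names are illustrative:

```
alpha := dt / (taud / N + dt)
previous_error := 0
integral := 0
d_filtered := 0

loop:
    error := setpoint - measured_value
    integral := integral + error * dt
    derivative := (error - previous_error) / dt
    d_filtered := d_filtered + alpha * (derivative - d_filtered)
    output := Kp * error + Ki * integral + Kd * d_filtered
    previous_error := error
    wait(dt)
    goto loop
```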
The proportionator is the most efficient unbiased stereological method used to estimate population size in samples.
A typical application is counting the number of cells in an organ . The proportionator is related to the optical fractionator and physical dissector methods that also estimate population. The optical and physical fractionators use a sampling method called systematic uniform random sampling , or SURS. Unlike these two methods the proportionator introduces sampling with probability proportional to size, or PPS. With SURS all sampling sites are equal. With PPS sites are not sampled with the same probability. The reason for using PPS is to improve the efficiency of the estimation process.
Efficiency is the notion of how much is gained by a given amount of work. A more efficient method provides better results for the same amount of work. The proportionator provides a better estimate, that is a more precise estimate, than either of these two methods: the optical fractionator and physical dissector . The PPS is implemented by assigning a value to a sampling site. This value is the characteristic of the sampling site. The proportionator becomes the optical fractionator if the characteristic is constant, i.e. the same, for all sampling sites. If there is no difference between sampling sites, then the proportionator behaves the same as the optical fractionator. In actual sampling, the characteristic varies across the tissue being studied. Information about the distribution of the characteristic is used to refine the sampling. The greater the variance of the characteristic, the greater the efficiency of the proportionator. The practical implication for the stereologist is simple: if ever more counting is needed to reach the CE required to publish, it may be more efficient to switch to the proportionator.
The proportionator is a patented process that is not generally available. The only current licensee for the patent is Visiopharm.
The proportionator is the de facto standard method used to count cells in large projects. The increased efficiency provided by the proportionator makes more work intensive methods such as the optical fractionator less attractive except in small projects.
A common misconception in the stereological literature is that design-based methodologies require that all objects of interest have the same probability of being selected. It is true that making such a design decision ensures an unbiased result, but it is not necessary. Nonuniform sampling is often used in stereological work. The point sampled intercept (PSI) method selects cells using a point probe. The result is a volume-weighted estimate of the size of the cells. This is not a biased result.
A sampling method known as probability proportional to size , or PPS, selects objects based on a characteristic that differs between objects. An excellent example of this is the selection of trees based on their diameter, or selecting a cell based on volume. The PSI selects cells with points. DeVries estimators select trees with lines. Sections select objects based on their height. These are examples of objects being selected in a varying probability by probes. In these examples the characteristic is a function of the objects themselves. That does not have to be the case.
The proportionator applies PPS to counting cells. The PPS is employed to gain efficiency in the sampling, and not to produce a weighted estimate, such as a volume-weighted estimate. The optical fractionator is the older standard for estimating the number of cells in an unbiased manner. The optical fractionator, like other sampling methods, has some statistical uncertainty. This uncertainty is due to the variance of the sampling even though the result is unbiased. The efficiency of the sampling can be determined by use of the coefficient of error, or CE. This value describes the variance of the sampling method. Often, biological sampling is done at a CE of 0.05.
The efficiency of a sampling method is the amount of work it takes to obtain a desired CE. A more efficient method is one that requires less work to obtain a desired CE. A method is less efficient if the same amount of work results in a larger CE.
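As a small illustration of the quantity involved, assuming the CE is taken as the standard error of repeated estimates divided by their mean (the sample values are invented):

```python
import statistics

def coefficient_of_error(estimates):
    """CE of repeated estimates: standard error of the mean / mean."""
    mean = statistics.mean(estimates)
    sem = statistics.stdev(estimates) / len(estimates) ** 0.5
    return sem / mean

ce = coefficient_of_error([102, 97, 110, 95, 104])  # about 0.026
```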
Suppose that every sample always gave the same result. There would be no difference between samples. This means that the variance in this case is 0. No more than 1 sample would be required to obtain a good result. (Understand that this might not be efficient if the sampling requires a great deal of work and there is no need for a CE this low.) If samples differ, then the variance is positive, and so is the CE.
The typical method of controlling the CE is to do more counting. The literature on the optical fractionator recommends methods of deciding where to increase the workload: more slices, or more optical dissectors. In keeping with this notion some amount of effort has been made to perform automatic image acquisition and counting to facilitate the process. The proportionator provides a superior result by avoiding more counting.
One of the earliest stereological methods that employed PPS was introduced by Walter Bitterlich in 1939 to improve the efficiency of fieldwork in the forest sciences . Bitterlich developed a sampling method that revolutionized the forest sciences. Up to this time the sampling quadrat method proposed by Pond and Clements in 1898 was still in use. Laying out sampling quadrats at each sampling site was a difficult process at times due to the physical obstructions of the natural world. Besides the physical issues it was also a costly procedure. It took a considerable amount of time to lay out a rectangle and to measure the trees included in the quadrat. Bitterlich realized that PPS could be used in the field. Bitterlich proposed the use of a sampling angle. All of the trees selected by a fixed angle from a sampling point would be counted. The quadrat, or plot as it was often called, was not required.
The quantity being estimated by the researchers was tree volume. The original sampling method was to choose a number of sampling points. The researcher traveled to each sampling point. A quadrat, rectangular sampling area, was laid out at each sampling point. Measurements of the trees in the quadrats was used to estimate tree volume. A typical measurement is basal area.
Bitterlich's method was to choose a number of sampling points. The researcher traveled to each sampling point just as in the quadrat method. At each sampling point the researcher used an angle gauge to see if a tree had a larger apparent angle than the gauge. If so, the tree was counted. No quadrat was laid out and no measurements were taken: the researcher simply counted and moved on. The result of this procedure was an estimate of tree volume.
Lou Grosenbaugh realized the importance of Bitterlich's work and wrote a number of articles describing the method. Soon a host of devices from angle gauge, to relascope, to sampling prism were developed. The Bitterlich method, employing PPS, and these devices profoundly increased the efficiency of fieldwork.
The proportionator reduces the workload by avoiding the expense of increased counting. The efficiency increase is attained by employing PPS. Efforts to automate the counting process attack the variance problem at the wrong level of sampling. The better solution is to reduce the workload before going to the counting step. The optimal situation is to have all samples providing identical counts. The next best situation is to reduce the difference between samples.
The proportionator adjusts the sampling scheme to select samples that are likely to provide estimates that have a smaller difference. Thus the variance of the estimator is addressed without changing the workload. That results in a gain in efficiency due to the reduction in variance for a given cost.
The main steps in sampling biological tissue are:
The typical attempt at increasing efficiency is the counting which occurs in step 6. The proportionator adjusts the sampling at step 5. This is accomplished by assigning a characteristic to each sampling site. Since each of the sampling sites is viewed, it is possible for automated systems to make a visual record of the site. The image collected at each site is used to determine a value for the site. The values for the sites are the characteristic. Recall that the characteristic may, but does not have to, be a function of the objects being counted. The potential sampling sites are then sampled based on the observed characteristic. Sites are chosen in a non-uniform manner, but still by an unbiased method. Not only is the result unbiased, but the result is not weighted by the characteristic. The end result is that the difference between samples is reduced. This reduces the variance. Therefore, the workload is reduced.
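A minimal sketch of the underlying idea: sites are drawn with probability proportional to their characteristic, and each observed count is divided by its selection probability, a standard unbiased PPS estimator, so the result is unweighted. All values and names here are illustrative assumptions.

```python
import random

def pps_estimate_total(characteristics, count_at_site, n_samples):
    """Estimate the total count over all sites using PPS sampling.
    Dividing each count by its selection probability keeps the estimate
    unbiased even though sites are not sampled uniformly."""
    total = sum(characteristics)
    probs = [c / total for c in characteristics]
    picks = random.choices(range(len(probs)), weights=probs, k=n_samples)
    return sum(count_at_site(i) / probs[i] for i in picks) / n_samples

# When counts correlate with the characteristic, count/prob varies little
# between sites, so the variance (and hence the CE) is small.
chars = [5.0, 1.0, 3.0, 7.0, 4.0]
counts = [10, 2, 6, 14, 8]
estimate = pps_estimate_total(chars, lambda i: counts[i], n_samples=3)
```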
Experimental evidence demonstrates that the proportionator significantly reduces the variance between samples, especially in situations where the tissue distribution is heterogeneous. This means that the situations where it is harder to reduce the variance, or improve the CE, are just the situations where the proportionator excels. Another way to look at this is that the proportionator is designed to take the CE reduction issue out of the hands of the researcher.
Suppose that the goal is to have a CE of 0.05. If the CE is larger than that value, then the only option available in the optical fractionator method is to increase the counting by either using more slices or more sampling sites on the slices. The proportionator is able to adjust the sampling to decrease the CE without increasing the counting. In fact, if the proportionator is able to reduce the CE below 0.05, then it is possible to reduce the counting workload and allow the CE to come up to the 0.05 requirement.
PPS revolutionized the forestry sciences. The application of PPS to cell counting makes larger scale research projects possible, while saving time and reducing expenses. | https://en.wikipedia.org/wiki/Proportionator |
A proposition is a central concept in the philosophy of language , semantics , logic , and related fields, often characterized as the primary bearer of truth or falsity . Propositions are also often characterized as the type of object that declarative sentences denote . For instance, the sentence "The sky is blue" denotes the proposition that the sky is blue. However, crucially, propositions are not themselves linguistic expressions . For instance, the English sentence "Snow is white" denotes the same proposition as the German sentence "Schnee ist weiß" even though the two sentences are not the same. Similarly, propositions can also be characterized as the objects of belief and other propositional attitudes . For instance if someone believes that the sky is blue, the object of their belief is the proposition that the sky is blue.
Formally, propositions are often modeled as functions which map a possible world to a truth value . For instance, the proposition that the sky is blue can be modeled as a function which would return the truth value T {\displaystyle T} if given the actual world as input, but would return F {\displaystyle F} if given some alternate world where the sky is green. However, a number of alternative formalizations have been proposed, notably the structured propositions view.
Propositions have played a large role throughout the history of logic , linguistics , philosophy of language , and related disciplines. Some researchers have doubted whether a consistent definition of propositionhood is possible, David Lewis even remarking that "the conception we associate with the word ‘proposition’ may be something of a jumble of conflicting desiderata". The term is often used broadly and has been used to refer to various related concepts.
In relation to the mind, propositions are discussed primarily as they fit into propositional attitudes . Propositional attitudes are simply attitudes characteristic of folk psychology (belief, desire, etc.) that one can take toward a proposition (e.g. 'it is raining,' 'snow is white,' etc.). In English, propositions usually follow folk psychological attitudes by a "that clause" (e.g. "Jane believes that it is raining"). In philosophy of mind and psychology , mental states are often taken to primarily consist in propositional attitudes. The propositions are usually said to be the "mental content" of the attitude. For example, if Jane has a mental state of believing that it is raining, her mental content is the proposition 'it is raining.' Furthermore, since such mental states are about something (namely, propositions), they are said to be intentional mental states.
Explaining the relation of propositions to the mind is especially difficult for non-mentalist views of propositions, such as those of the logical positivists and Russell described above, and Gottlob Frege 's view that propositions are Platonist entities, that is, existing in an abstract, non-physical realm. [ 1 ] So some recent views of propositions have taken them to be mental. Although propositions cannot be particular thoughts since those are not shareable, they could be types of cognitive events [ 2 ] or properties of thoughts (which could be the same across different thinkers). [ 3 ]
Philosophical debates surrounding propositions as they relate to propositional attitudes have also recently centered on whether they are internal or external to the agent, or whether they are mind-dependent or mind-independent entities. For more, see the entry on internalism and externalism in philosophy of mind.
In modern logic, propositions are standardly understood semantically as indicator functions that take a possible world and return a truth value. For example, the proposition that the sky is blue could be represented as a function f {\displaystyle f} such that f ( w ) = T {\displaystyle f(w)=T} for every world w , {\displaystyle w,} if any, where the sky is blue, and f ( v ) = F {\displaystyle f(v)=F} for every world v , {\displaystyle v,} if any, where it is not. A proposition can be modeled equivalently with the inverse image of T {\displaystyle T} under the indicator function, which is sometimes called the characteristic set of the proposition. For instance, if w {\displaystyle w} and w ′ {\displaystyle w'} are the only worlds in which the sky is blue, the proposition that the sky is blue could be modeled as the set { w , w ′ } {\displaystyle \{w,w'\}} . [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Numerous refinements and alternative notions of proposition-hood have been proposed including inquisitive propositions and structured propositions . [ 8 ] [ 5 ] Propositions are called structured propositions if they have constituents, in some broad sense. [ 9 ] [ 10 ] Assuming a structured view of propositions, one can distinguish between singular propositions (also Russellian propositions , named after Bertrand Russell ) which are about a particular individual, general propositions , which are not about any particular individual, and particularized propositions , which are about a particular individual but do not contain that individual as a constituent. [ 5 ]
Attempts to provide a workable definition of proposition include the following:
Two meaningful declarative sentences express the same proposition, if and only if they mean the same thing. [ citation needed ]
which defines proposition in terms of synonymity. For example, "Snow is white" (in English) and "Schnee ist weiß" (in German) are different sentences, but they say the same thing, so they express the same proposition. Another definition of proposition is:
Two meaningful declarative sentence-tokens express the same proposition, if and only if they mean the same thing. [ citation needed ]
The above definitions can result in two identical sentences/sentence-tokens appearing to have the same meaning, and thus expressing the same proposition and yet having different truth-values, as in "I am Spartacus" said by Spartacus and said by John Smith, and "It is Wednesday" said on a Wednesday and on a Thursday. These examples reflect the problem of ambiguity in common language, resulting in a mistaken equivalence of the statements. “I am Spartacus” spoken by Spartacus is the declaration that the individual speaking is called Spartacus and it is true. When spoken by John Smith, it is a declaration about a different speaker and it is false. The term “I” means different things, so “I am Spartacus” means different things.
A related problem is when identical sentences have the same truth-value, yet express different propositions. The sentence “I am a philosopher” could have been spoken by both Socrates and Plato. In both instances, the statement is true, but means something different.
These problems are addressed in predicate logic by using a variable for the problematic term, so that “X is a philosopher” can have Socrates or Plato substituted for X, illustrating that “Socrates is a philosopher” and “Plato is a philosopher” are different propositions. Similarly, “I am Spartacus” becomes “X is Spartacus”, where X is replaced with terms representing the individuals Spartacus and John Smith.
In other words, the example problems can be averted if sentences are formulated with precision such that their terms have unambiguous meanings.
A number of philosophers and linguists claim that all definitions of a proposition are too vague to be useful. For them, it is just a misleading concept that should be removed from philosophy and semantics . W. V. Quine , who granted the existence of sets in mathematics, [ 11 ] maintained that the indeterminacy of translation prevented any meaningful discussion of propositions, and that they should be discarded in favor of sentences. [ 12 ] P. F. Strawson , on the other hand, advocated for the use of the term " statement ".
In Aristotelian logic a proposition, specifically a categorical proposition , was defined as a particular kind of declarative sentence that affirms or denies a predicate of a subject , optionally with the help of a copula . An Aristotelian proposition may take the form of "All men are mortal" or "Socrates is a man." In the first example, the subject is "men", the predicate is "mortal" and the copula is "are", while in the second example, the subject is "Socrates", the predicate is "a man" and the copula is "is". [ 13 ]
Often, propositions are related to closed formulae (or logical sentence) to distinguish them from what is expressed by an open formula . In this sense, propositions are "statements" that are truth-bearers . This conception of a proposition was supported by the philosophical school of logical positivism .
Some philosophers argue that some (or all) kinds of speech or actions besides the declarative ones also have propositional content. For example, yes–no questions present propositions, being inquiries into their truth value. On the other hand, some signs can be declarative assertions of propositions, without forming a sentence or even being linguistic (e.g. traffic signs convey definite meaning which is either true or false).
Propositions are also spoken of as the content of beliefs and similar intentional attitudes , such as desires, preferences, and hopes. For example, "I desire that I have a new car ", or "I wonder whether it will snow " (or, whether it is the case that "it will snow"). Desire, belief, doubt, and so on, are thus called propositional attitudes when they take this sort of content. [ 9 ]
Bertrand Russell held that propositions were structured entities with objects and properties as constituents. One important difference between this view and Ludwig Wittgenstein 's view (according to which a proposition is the set of possible worlds /states of affairs in which it is true) is that on the Russellian account, two propositions that are true in all the same states of affairs can still be differentiated. For instance, the proposition "two plus two equals four" is distinct on a Russellian account from the proposition "three plus three equals six". If propositions are sets of possible worlds, however, then all mathematical truths (and all other necessary truths) are the same set (the set of all possible worlds). [ citation needed ] | https://en.wikipedia.org/wiki/Proposition |
In propositional logic , a propositional formula is a type of syntactic formula which is well formed . If the values of all variables in a propositional formula are given, it determines a unique truth value . A propositional formula may also be called a propositional expression , a sentence , [ 1 ] or a sentential formula .
A propositional formula is constructed from simple propositions , such as "five is greater than three" or propositional variables such as p and q , using connectives or logical operators such as NOT, AND, OR, or IMPLIES; for example: ( p AND NOT q ) IMPLIES ( p OR q ).
In mathematics , a propositional formula is often more briefly referred to as a " proposition ", but, more precisely, a propositional formula is not a proposition but a formal expression that denotes a proposition , a formal object under discussion, just like an expression such as " x + y " is not a value, but denotes a value. In some contexts, maintaining the distinction may be of importance.
For the purposes of the propositional calculus, propositions (utterances, sentences, assertions) are considered to be either simple or compound. [ 2 ] Compound propositions are considered to be linked by sentential connectives, some of the most common of which are "AND", "OR", "IF ... THEN ...", "NEITHER ... NOR ...", "... IS EQUIVALENT TO ..." . The linking semicolon ";" and the connective "BUT" are considered to be expressions of "AND". A sequence of discrete sentences is considered to be linked by "AND"s, and formal analysis applies a recursive "parenthesis rule" with respect to sequences of simple propositions (see more below about well-formed formulas).
Simple propositions are declarative in nature, that is, they make assertions about the condition or nature of a particular object of sensation e.g. "This cow is blue", "There's a coyote!" ("That coyote IS there , behind the rocks."). [ 3 ] Thus the simple "primitive" assertions must be about specific objects or specific states of mind. Each must have at least a subject (an immediate object of thought or observation), a verb (in the active voice and present tense preferred), and perhaps an adjective or adverb. "Dog!" probably implies "I see a dog" but should be rejected as too ambiguous.
For the purposes of the propositional calculus a compound proposition can usually be reworded into a series of simple sentences, although the result will probably sound stilted.
The predicate calculus goes a step further than the propositional calculus to an "analysis of the inner structure of propositions". [ 4 ] It breaks a simple sentence down into two parts (i) its subject (the object ( singular or plural) of discourse) and (ii) a predicate (a verb or possibly verb-clause that asserts a quality or attribute of the object(s)). The predicate calculus then generalizes the "subject|predicate" form (where | symbolizes concatenation (stringing together) of symbols) into a form with the following blank-subject structure " ___|predicate", and the predicate in turn generalized to all things with that property.
The generalization of "this pig" to a (potential) member of two classes "winged things" and "blue things" means that it has a truth-relationship with both of these classes. In other words, given a domain of discourse "winged things", p is either found to be a member of this domain or not. Thus there is a relationship W (wingedness) between p (pig) and { T, F }, W(p) evaluates to { T, F } where { T, F } is the set of the Boolean values "true" and "false". Likewise for B (blueness) and p (pig) and { T, F }: B(p) evaluates to { T, F }. So one now can analyze the connected assertions "B(p) AND W(p)" for its overall truth-value, i.e.:
In particular, simple sentences that employ notions of "all", "some", "a few", "one of", etc. called logical quantifiers are treated by the predicate calculus. Along with the new function symbolism "F(x)" two new symbols are introduced: ∀ (For all), and ∃ (There exists ..., At least one of ... exists, etc.). The predicate calculus, but not the propositional calculus, can establish the formal validity of the following statement:
Tarski asserts that the notion of IDENTITY (as distinguished from LOGICAL EQUIVALENCE) lies outside the propositional calculus; however, he notes that if a logic is to be of use for mathematics and the sciences it must contain a "theory" of IDENTITY. [ 5 ] Some authors refer to "predicate logic with identity" to emphasize this extension. See more about this below.
An algebra (and there are many different ones), loosely defined, is a method by which a collection of symbols called variables together with some other symbols such as parentheses (, ) and some sub-set of symbols such as *, +, ~, &, ∨, =, ≡, ∧, ¬ are manipulated within a system of rules. These symbols, and well-formed strings of them, are said to represent objects, but in a specific algebraic system these objects do not have meanings. Thus work inside the algebra becomes an exercise in obeying certain laws (rules) of the algebra's syntax (symbol-formation) rather than in semantics (meaning) of the symbols. The meanings are to be found outside the algebra.
For a well-formed sequence of symbols in the algebra —a formula— to have some usefulness outside the algebra the symbols are assigned meanings and eventually the variables are assigned values; then by a series of rules the formula is evaluated.
When the values are restricted to just two and applied to the notion of simple sentences (e.g. spoken utterances or written assertions) linked by propositional connectives this whole algebraic system of symbols and rules and evaluation-methods is usually called the propositional calculus or the sentential calculus.
While some of the familiar rules of arithmetic algebra continue to hold in the algebra of propositions (e.g. the commutative and associative laws for AND and OR), some do not (e.g. the distributive laws for AND, OR and NOT).
Analysis: In deductive reasoning , philosophers, rhetoricians and mathematicians reduce arguments to formulas and then study them (usually with truth tables ) for correctness (soundness). For example: Is the following argument sound?
Engineers analyze the logic circuits they have designed using synthesis techniques and then apply various reduction and minimization techniques to simplify their designs.
Synthesis: Engineers in particular synthesize propositional formulas (that eventually end up as circuits of symbols) from truth tables . For example, one might write down a truth table for how binary addition should behave given the addition of variables "b" and "a" and "carry_in" "ci", and the results "carry_out" "co" and "sum" Σ:
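A sketch of that synthesis step: the short Python program below prints the full-adder truth table, from whose 1-rows the minterm formulas for "sum" and "carry_out" can be read off:

from itertools import product

# Truth table for one-bit binary addition of a, b and carry_in ci.
print(" a  b ci | co sum")
for a, b, ci in product((0, 1), repeat=3):
    co, s = divmod(a + b + ci, 2)  # carry_out and sum bits
    print(f" {a}  {b}  {ci} |  {co}  {s}")

# Reading off the 1-rows gives sum = a XOR b XOR ci and
# co = (a & b) OR (a & ci) OR (b & ci), which would then be minimized.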
The simplest type of propositional formula is a propositional variable . Propositions that are simple ( atomic ), symbolic expressions are often denoted by variables named p , q , or P , Q , etc. A propositional variable is intended to represent an atomic proposition (assertion), such as "It is Saturday" = p (here the symbol = means " ... is assigned the variable named ...") or "I only go to the movies on Monday" = q .
Evaluation of a propositional formula begins with assignment of a truth value to each variable. Because each variable represents a simple sentence, the truth values are being applied to the "truth" or "falsity" of these simple sentences.
Truth values in rhetoric, philosophy and mathematics
The truth values are only two: { TRUTH "T", FALSITY "F" }. An empiricist puts all propositions into two broad classes: analytic —true no matter what (e.g. tautology ), and synthetic —derived from experience and thereby susceptible to confirmation by third parties (the verification theory of meaning). [ 6 ] Empiricists hold that, in general, to arrive at the truth-value of a synthetic proposition , meanings (pattern-matching templates) must first be applied to the words, and then these meaning-templates must be matched against whatever it is that is being asserted. For example, take my utterance "That cow is blue !": is this statement a TRUTH? Truly I said it. And maybe I am seeing a blue cow—unless I am lying my statement is a TRUTH relative to the object of my (perhaps flawed) perception. But is the blue cow "really there"? What do you see when you look out the same window? In order to proceed with a verification, you will need a prior notion (a template) of both "cow" and " blue ", and an ability to match the templates against the object of sensation (if indeed there is one). [ citation needed ]
Truth values in engineering
Engineers try to avoid notions of truth and falsity that bedevil philosophers, but in the final analysis engineers must trust their measuring instruments. In their quest for robustness , engineers prefer to pull known objects from a small library—objects that have well-defined, predictable behaviors even in large combinations, (hence their name for the propositional calculus: "combinatorial logic"). The fewest behaviors of a single object are two (e.g. { OFF, ON }, { open, shut }, { UP, DOWN } etc.), and these are put in correspondence with { 0, 1 }. Such elements are called digital ; those with a continuous range of behaviors are called analog . Whenever decisions must be made in an analog system, quite often an engineer will convert an analog behavior (the door is 45.32146% UP) to digital (e.g. DOWN=0 ) by use of a comparator . [ 7 ]
Thus an assignment of meaning of the variables and the two value-symbols { 0, 1 } comes from "outside" the formula that represents the behavior of the (usually) compound object. An example is a garage door with two "limit switches", one for UP labelled SW_U and one for DOWN labelled SW_D, and whatever else is in the door's circuitry. Inspection of the circuit (either the diagram or the actual objects themselves—door, switches, wires, circuit board, etc.) might reveal that, on the circuit board "node 22" goes to +0 volts when the contacts of switch "SW_D" are mechanically in contact ("closed") and the door is in the "down" position (95% down), and "node 29" goes to +0 volts when the door is 95% UP and the contacts of switch SW_U are in mechanical contact ("closed"). [ 8 ] The engineer must define the meanings of these voltages and all possible combinations (all 4 of them), including the "bad" ones (e.g. both nodes 22 and 29 at 0 volts, meaning that the door is open and closed at the same time). The circuit mindlessly responds to whatever voltages it experiences without any awareness of TRUTH or FALSEHOOD, RIGHT or WRONG, SAFE or DANGEROUS. [ citation needed ]
Arbitrary propositional formulas are built from propositional variables and other propositional formulas using propositional connectives . Examples of connectives include:
The following are the connectives common to rhetoric, philosophy and mathematics together with their truth tables . The symbols used will vary from author to author and between fields of endeavor. In general the abbreviations "T" and "F" stand for the evaluations TRUTH and FALSITY applied to the variables in the propositional formula (e.g. the assertion: "That cow is blue" will have the truth-value "T" for Truth or "F" for Falsity, as the case may be.).
The connectives go by a number of different word-usages, e.g. "a IMPLIES b" is also said "IF a THEN b". Some of these are shown in the table.
In general, the engineering connectives are just the same as the mathematics connectives excepting they tend to evaluate with "1" = "T" and "0" = "F". This is done for the purposes of analysis/minimization and synthesis of formulas by use of the notion of minterms and Karnaugh maps (see below). Engineers also use the words logical product from Boole 's notion (a*a = a) and logical sum from Jevons ' notion (a+a = a). [ 9 ]
The IF ... THEN ... ELSE ... connective appears as the simplest form of CASE operator of recursion theory and computation theory and is the connective responsible for conditional goto's (jumps, branches). From this one connective all other connectives can be constructed (see more below). Although " IF c THEN b ELSE a " sounds like an implication it is, in its most reduced form, a switch that makes a decision and offers as outcome only one of two alternatives "a" or "b" (hence the name switch statement in the C programming language). [ 10 ]
The following three propositions are equivalent (as indicated by the logical equivalence sign ≡ ): (IF c THEN b ELSE a) ≡ ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ).
Thus IF ... THEN ... ELSE—unlike implication—does not evaluate to an ambiguous "TRUTH" when the first proposition is false i.e. c = F in (c → b). For example, most people would reject the following compound proposition as a nonsensical non sequitur because the second sentence is not connected in meaning to the first. [ 11 ]
In recognition of this problem, the sign → of formal implication in the propositional calculus is called material implication to distinguish it from the everyday, intuitive implication. [ a ]
The use of the IF ... THEN ... ELSE construction avoids controversy because it offers a completely deterministic choice between two stated alternatives; it offers two "objects" (the two alternatives b and a), and it selects between them exhaustively and unambiguously. [ 13 ] In the truth table below, d1 is the formula: ( (IF c THEN b) AND (IF NOT-c THEN a) ). Its fully reduced form d2 is the formula: ( (c AND b) OR (NOT-c AND a). The two formulas are equivalent as shown by the columns "=d1" and "=d2". Electrical engineers call the fully reduced formula the AND-OR-SELECT operator. The CASE (or SWITCH) operator is an extension of the same idea to n possible, but mutually exclusive outcomes. Electrical engineers call the CASE operator a multiplexer .
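The claimed equivalence of d1 and d2 is easy to check exhaustively; a sketch in Python, reading IF x THEN y as material implication (~x ∨ y):

from itertools import product

# d1: (IF c THEN b) AND (IF NOT-c THEN a), i.e. (~c OR b) AND (c OR a)
d1 = lambda c, b, a: ((not c) or b) and (c or a)
# d2: the fully reduced AND-OR-SELECT form (c AND b) OR (NOT-c AND a)
d2 = lambda c, b, a: (c and b) or ((not c) and a)

# Equivalent iff they agree on all 8 assignments of truth values.
print(all(d1(c, b, a) == d2(c, b, a)
          for c, b, a in product((False, True), repeat=3)))  # True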
The first table of this section stars *** the entry logical equivalence to note the fact that " Logical equivalence " is not the same thing as "identity". For example, most would agree that the assertion "That cow is blue" is identical to the assertion "That cow is blue". On the other hand, logical equivalence sometimes appears in speech as in this example: " 'The sun is shining' means 'I'm biking' " Translated into a propositional formula the words become: "IF 'the sun is shining' THEN 'I'm biking', AND IF 'I'm biking' THEN 'the sun is shining'": [ 14 ]
Different authors use different signs for logical equivalence: ↔ (e.g. Suppes, Goodstein, Hamilton), ≡ (e.g. Robbin), ⇔ (e.g. Bender and Williamson). Typically identity is written as the equals sign =. One exception to this rule is found in Principia Mathematica . For more about the philosophy of the notion of IDENTITY see Leibniz's law .
As noted above, Tarski considers IDENTITY to lie outside the propositional calculus, but he asserts that without the notion, "logic" is insufficient for mathematics and the deductive sciences. In fact the sign comes into the propositional calculus when a formula is to be evaluated. [ 15 ]
In some systems there are no truth tables, but rather just formal axioms (e.g. strings of symbols from a set { ~, →, (, ), variables p 1 , p 2 , p 3 , ... }) and formula-formation rules (rules about how to make more symbol strings from previous strings by use of e.g. substitution and modus ponens ). The result of such a calculus will be another formula (i.e. a well-formed symbol string). Eventually, however, if one wants to use the calculus to study notions of validity and truth, one must add axioms that define the behavior of the symbols called "the truth values" {T, F} ( or {1, 0}, etc.) relative to the other symbols.
For example, Hamilton uses two symbols = and ≠ when he defines the notion of a valuation v of any well-formed formulas (wffs) A and B in his "formal statement calculus" L. A valuation v is a function from the wffs of his system L to the range (output) { T, F }, given that each variable p 1 , p 2 , p 3 in a wff is assigned an arbitrary truth value { T, F }.
The two definitions ( i ) and ( ii ) define the equivalent of the truth tables for the ~ (NOT) and → (IMPLICATION) connectives of his system. The first one derives F ≠ T and T ≠ F, in other words " v ( A ) does not mean v (~ A )". Definition ( ii ) specifies the third row in the truth table, and the other three rows then come from an application of definition ( i ). In particular ( ii ) assigns the value F (or a meaning of "F") to the entire expression. The definitions also serve as formation rules that allow substitution of a value previously derived into a formula:
Some formal systems specify these valuation axioms at the outset in the form of certain formulas such as the law of contradiction or laws of identity and nullity. The choice of which ones to use, together with laws such as commutation and distribution, is up to the system's designer as long as the set of axioms is complete (i.e. sufficient to form and to evaluate any well-formed formula created in the system).
As shown above, the CASE (IF c THEN b ELSE a ) connective is constructed either from the 2-argument connectives IF ... THEN ... and AND or from OR and AND and the 1-argument NOT. Connectives such as the n-argument AND (a & b & c & ... & n), OR (a ∨ b ∨ c ∨ ... ∨ n) are constructed from strings of two-argument AND and OR and written in abbreviated form without the parentheses. These, and other connectives as well, can then be used as building blocks for yet further connectives. Rhetoricians, philosophers, and mathematicians use truth tables and the various theorems to analyze and simplify their formulas.
Electrical engineers use drawn symbols and connect them with lines that stand for the mathematical acts of substitution and replacement. They then verify their drawings with truth tables and simplify the expressions as shown below by use of Karnaugh maps or the theorems. In this way engineers have created a host of "combinatorial logic" (i.e. connectives without feedback) such as "decoders", "encoders", "multifunction gates", "majority logic", "binary adders", "arithmetic logic units", etc.
A definition creates a new symbol and its behavior, often for the purposes of abbreviation. Once the definition is presented, either form of the equivalent symbol or formula can be used. The symbolism = Df below follows the convention of Reichenbach. [ 16 ] Some examples of convenient definitions drawn from the symbol set { ~, &, (, ) } and variables follow. Each definition produces a logically equivalent formula that can be used for substitution or replacement.
The definitions above for OR, IMPLICATION, XOR, and logical equivalence are actually schemas (or "schemata"), that is, they are models (demonstrations, examples) for a general formula format but shown (for illustrative purposes) with specific letters a, b, c for the variables, whereas any variable letters can go in their places as long as the letter substitutions follow the rule of substitution below.
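Such definitions can be checked mechanically by exhaustive evaluation. A Python sketch over the standard defining formulas from { ~, & } (treat the particular shapes as illustrative):

from itertools import product

# Each entry checks a defined connective against its { ~, & } definition.
checks = {
    "a OR b  =Df ~(~a & ~b)": lambda a, b: (a or b) == (not ((not a) and (not b))),
    "a -> b  =Df ~(a & ~b)": lambda a, b: ((not a) or b) == (not (a and (not b))),
    "a XOR b =Df ~(a & b) & ~(~a & ~b)":
        lambda a, b: (a != b) == ((not (a and b)) and (not ((not a) and (not b)))),
}
for name, ok in checks.items():
    assert all(ok(a, b) for a, b in product((False, True), repeat=2))
    print("verified:", name)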
Substitution : The variable or sub-formula to be substituted with another variable, constant, or sub-formula must be replaced in all instances throughout the overall formula.
Replacement : (i) the formula to be replaced must be within a tautology, i.e. logically equivalent ( connected by ≡ or ↔) to the formula that replaces it, and (ii) unlike substitution, it is permissible for the replacement to occur in only one place (i.e. for one formula).
The classical presentation of propositional logic (see Enderton 2002) uses the connectives ¬ , ∧ , ∨ , → , ↔ {\displaystyle \lnot ,\land ,\lor ,\to ,\leftrightarrow } . The set of formulas over a given set of propositional variables is inductively defined to be the smallest set of expressions such that: each propositional variable is a formula; if A is a formula, then ( ¬ A ) is a formula; and if A and B are formulas, then ( A ∧ B ), ( A ∨ B ), ( A → B ), and ( A ↔ B ) are formulas.
This inductive definition can be easily extended to cover additional connectives.
The inductive definition can also be rephrased in terms of a closure operation (Enderton 2002). Let V denote a set of propositional variables and let X V denote the set of all strings from an alphabet including symbols in V , left and right parentheses, and all the logical connectives under consideration. Each logical connective corresponds to a formula building operation, a function from X V to X V :
The set of formulas over V is defined to be the smallest subset of X V containing V and closed under all the formula building operations.
The following "laws" of the propositional calculus are used to "reduce" complex formulas. The "laws" can be verified easily with truth tables. For each law, the principal (outermost) connective is associated with logical equivalence ≡ or identity =. A complete analysis of all 2 n combinations of truth-values for its n distinct variables will result in a column of 1's (T's) underneath this connective. This finding makes each law, by definition, a tautology. And, for a given law, because its formula on the left and right are equivalent (or identical) they can be substituted for one another.
Enterprising readers might challenge themselves to invent an "axiomatic system" that uses the symbols { ∨, &, ~, (, ), variables a, b, c }, the formation rules specified above, and as few as possible of the laws listed below, and then derive as theorems the others as well as the truth-table valuations for ∨, &, and ~. One set attributed to Huntington (1904) (Suppes:204) uses eight of the laws defined below.
If used in an axiomatic system, the symbols 1 and 0 (or T and F) are considered to be well-formed formulas and thus obey all the same rules as the variables. Thus the laws listed below are actually axiom schemas , that is, they stand in place of an infinite number of instances. Thus ( x ∨ y ) ≡ ( y ∨ x ) might be used in one instance, ( p ∨ 0 ) ≡ ( 0 ∨ p ) and in another instance ( 1 ∨ q ) ≡ ( q ∨ 1 ), etc.
In general, to avoid confusion during analysis and evaluation of propositional formulas, one can make liberal use of parentheses. However, quite often authors leave them out. To parse a complicated formula one first needs to know the seniority, or rank, that each of the connectives (excepting *) has over the other connectives. To "well-form" a formula, start with the connective with the highest rank and add parentheses around its components, then move down in rank (paying close attention to the connective's scope over which it is working). From most- to least-senior, with the predicate signs ∀x and ∃x, the IDENTITY = and arithmetic signs added for completeness: [ b ]
Thus the formula can be parsed—but because NOT does not obey the distributive law, the parentheses around the inner formula (~c & ~d) are mandatory:
Both AND and OR obey the commutative law and associative law : ( a & b ) ≡ ( b & a ) and ( a ∨ b ) ≡ ( b ∨ a ); ( ( a & b ) & c ) ≡ ( a & ( b & c ) ) and ( ( a ∨ b ) ∨ c ) ≡ ( a ∨ ( b ∨ c ) ).
Omitting parentheses in strings of AND and OR : The connectives are considered to be unary (one-variable, e.g. NOT) and binary (i.e. two-variable AND, OR, IMPLIES). For example:
However, a truth-table demonstration shows that the form without the extra parentheses is perfectly adequate.
Omitting parentheses with regards to a single-variable NOT : While ~(a) where a is a single variable is perfectly clear, ~a is adequate and is the usual way this literal would appear. When the NOT is over a formula with more than one symbol, then the parentheses are mandatory, e.g. ~(a ∨ b).
OR distributes over AND and AND distributes over OR: ( a ∨ ( b & c ) ) ≡ ( ( a ∨ b ) & ( a ∨ c ) ) and ( a & ( b ∨ c ) ) ≡ ( ( a & b ) ∨ ( a & c ) ). NOT does not distribute over AND or OR. See below about De Morgan's law:
NOT, when distributed over OR or AND, does something peculiar (again, these can be verified with a truth-table): ~( a ∨ b ) ≡ ( ~a & ~b ) and ~( a & b ) ≡ ( ~a ∨ ~b ).
Absorption, in particular the first one, causes the "laws" of logic to differ from the "laws" of arithmetic: ( a & ( a ∨ b ) ) ≡ a and ( a ∨ ( a & b ) ) ≡ a.
The sign " = " (as distinguished from logical equivalence ≡, alternately ↔ or ⇔) symbolizes the assignment of value or meaning. Thus the string (a & ~(a)) symbolizes "0", i.e. it means the same thing as symbol "0" ". In some "systems" this will be an axiom (definition) perhaps shown as ( (a & ~(a)) = Df 0 ); in other systems, it may be derived in the truth table below:
A key property of formulas is that they can be uniquely parsed to determine the structure of the formula in terms of its propositional variables and logical connectives. When formulas are written in infix notation , as above, unique readability is ensured through an appropriate use of parentheses in the definition of formulas. Alternatively, formulas can be written in Polish notation or reverse Polish notation , eliminating the need for parentheses altogether.
The inductive definition of infix formulas in the previous section can be converted to a formal grammar in Backus-Naur form: formula ::= propositional variable | ( ¬ formula ) | ( formula ∧ formula ) | ( formula ∨ formula ) | ( formula → formula ) | ( formula ↔ formula ).
It can be shown that any expression matched by the grammar has a balanced number of left and right parentheses, and any nonempty initial segment of a formula has more left than right parentheses. [ 18 ] This fact can be used to give an algorithm for parsing formulas. For example, suppose that an expression x begins with ( ¬ {\displaystyle (\lnot } . Starting after the second symbol, match the shortest subexpression y of x that has balanced parentheses. If x is a formula, there is exactly one symbol left after this expression, this symbol is a closing parenthesis, and y itself is a formula. This idea can be used to generate a recursive descent parser for formulas.
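A sketch of such a recursive descent parser for fully parenthesized formulas (ASCII connective names stand in for the logical symbols; the encoding is illustrative):

# Recursive descent parser for fully parenthesized propositional formulas.
BINARY = {"&", "|", "->", "<->"}  # ASCII stand-ins for the binary connectives

def parse(tokens):
    """Parse a token list; return (syntax_tree, remaining_tokens)."""
    head, rest = tokens[0], tokens[1:]
    if head != "(":  # a propositional variable
        return head, rest
    if rest[0] == "~":  # ( ~ formula )
        sub, rest = parse(rest[1:])
        assert rest[0] == ")", "expected closing parenthesis"
        return ("~", sub), rest[1:]
    left, rest = parse(rest)  # ( formula connective formula )
    op, rest = rest[0], rest[1:]
    assert op in BINARY, "expected a binary connective"
    right, rest = parse(rest)
    assert rest[0] == ")", "expected closing parenthesis"
    return (op, left, right), rest[1:]

tree, leftover = parse("( ( ~ p ) & ( q | r ) )".split())
print(tree)  # ('&', ('~', 'p'), ('|', 'q', 'r'))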
Example of parenthesis counting :
This method locates as "1" the principal connective — the connective under which the overall evaluation of the formula occurs for the outer-most parentheses (which are often omitted). [ 19 ] It also locates the inner-most connective where one would begin evaluatation of the formula without the use of a truth table, e.g. at "level 6".
The notion of valid argument is usually applied to inferences in arguments, but arguments reduce to propositional formulas and can be evaluated the same as any other propositional formula. Here a valid inference means: "The formula that represents the inference evaluates to "truth" beneath its principal connective, no matter what truth-values are assigned to its variables", i.e. the formula is a tautology. [ 20 ] Quite possibly a formula will be well-formed but not valid. Another way of saying this is: "Being well-formed is necessary for a formula to be valid but it is not sufficient ." The only way to find out if it is both well-formed and valid is to submit it to verification with a truth table or by use of the "laws":
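A brute-force validity check is straightforward to sketch: evaluate the formula under all 2 n assignments and confirm a column of truths; here modus ponens, ( (p → q) & p ) → q, serves as the example:

from itertools import product

def implies(x, y):  # material implication: ~x OR y
    return (not x) or y

# The inference "from (p -> q) and p, conclude q" as a single formula.
def modus_ponens(p, q):
    return implies(implies(p, q) and p, q)

# A tautology: true under every assignment of truth values.
print(all(modus_ponens(p, q)
          for p, q in product((False, True), repeat=2)))  # True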
A set of logical connectives is called complete if every propositional formula is tautologically equivalent to a formula with just the connectives in that set. There are many complete sets of connectives, including { ∧ , ¬ } {\displaystyle \{\land ,\lnot \}} , { ∨ , ¬ } {\displaystyle \{\lor ,\lnot \}} , and { → , ¬ } {\displaystyle \{\to ,\lnot \}} . There are two binary connectives that are complete on their own, corresponding to NAND and NOR, respectively. [ 21 ] Some pairs are not complete, for example { ∧ , ∨ } {\displaystyle \{\land ,\lor \}} .
The binary connective corresponding to NAND is called the Sheffer stroke , and written with a vertical bar | or vertical arrow ↑. The completeness of this connective was noted in Principia Mathematica (1927:xvii). Since it is complete on its own, all other connectives can be expressed using only the stroke. For example, where the symbol " ≡ " represents logical equivalence :
In particular, the zero-ary connectives ⊤ {\displaystyle \top } (representing truth) and ⊥ {\displaystyle \bot } (representing falsity) can be expressed using the stroke:
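A sketch verifying a few standard stroke definitions by exhaustive evaluation (the defining formulas shown are the usual ones; treat them as illustrative):

from itertools import product

def nand(a, b):  # the Sheffer stroke a | b
    return not (a and b)

for a, b in product((False, True), repeat=2):
    assert (not a) == nand(a, a)                      # ~a ≡ a|a
    assert (a and b) == nand(nand(a, b), nand(a, b))  # a & b ≡ (a|b)|(a|b)
    assert (a or b) == nand(nand(a, a), nand(b, b))   # a ∨ b ≡ (a|a)|(b|b)
    top = nand(a, nand(a, a))                         # ⊤ ≡ a|(a|a): always true
    assert top and not nand(top, top)                 # ⊥ ≡ ⊤|⊤: always false
print("all stroke definitions verified")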
This connective together with { 0, 1 }, ( or { F, T } or { ⊥ {\displaystyle \bot } , ⊤ {\displaystyle \top } } ) forms a complete set. In the following the IF...THEN...ELSE relation (c, b, a) = d represents ( (c → b) & (~c → a) ) ≡ ( (c & b) ∨ (~c & a) ) = d
Example: The following shows how a theorem-based proof of "(c, b, 1) ≡ (c → b)" would proceed, below the proof is its truth-table verification. ( Note: (c → b) is defined to be (~c ∨ b) ):
In the following truth table the column labelled "taut" for tautology evaluates logical equivalence (symbolized here by ≡) between the two columns labelled d. Because all four rows under "taut" are 1's, the equivalence indeed represents a tautology.
An arbitrary propositional formula may have a very complicated structure. It is often convenient to work with formulas that have simpler forms, known as normal forms . Some common normal forms include conjunctive normal form and disjunctive normal form . Any propositional formula can be reduced to its conjunctive or disjunctive normal form.
Reduction to normal form is relatively simple once a truth table for the formula is prepared. But further attempts to minimize the number of literals (see below) require some tools: reduction by De Morgan's laws and truth tables can be unwieldy, but Karnaugh maps are very suitable for a small number of variables (5 or less). Some sophisticated tabular methods exist for more complex circuits with multiple outputs but these are beyond the scope of this article; for more see Quine–McCluskey algorithm .
In electrical engineering, a variable x or its negation ~(x) can be referred to as a literal . A string of literals connected by ANDs is called a term. A string of literals connected by OR is called an alterm. Typically the literal ~(x) is abbreviated ~x. Sometimes the &-symbol is omitted altogether in the manner of algebraic multiplication.
In the same way that a 2 n -row truth table displays the evaluation of a propositional formula for all 2 n possible values of its variables, n variables produces a 2 n -square Karnaugh map (even though we cannot draw it in its full-dimensional realization). For example, 3 variables produces 2 3 = 8 rows and 8 Karnaugh squares; 4 variables produces 16 truth-table rows and 16 squares and therefore 16 minterms . Each Karnaugh-map square and its corresponding truth-table evaluation represents one minterm.
Any propositional formula can be reduced to the "logical sum" (OR) of the active (i.e. "1"- or "T"-valued) minterms. When in this form the formula is said to be in disjunctive normal form . But even though it is in this form, it is not necessarily minimized with respect to either the number of terms or the number of literals.
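A sketch of reading the disjunctive normal form off a truth table: collect one minterm for each row where the formula evaluates to 1 and join them with OR:

from itertools import product

def dnf_from_truth_table(formula, names):
    """Return the logical sum (OR) of the formula's active minterms."""
    terms = []
    for values in product((False, True), repeat=len(names)):
        if formula(*values):
            literals = [n if v else "~" + n for n, v in zip(names, values)]
            terms.append("(" + " & ".join(literals) + ")")
    return " OR ".join(terms)

# Example: XOR of a and b as the sum of its two active minterms.
print(dnf_from_truth_table(lambda a, b: a != b, ["a", "b"]))
# (~a & b) OR (a & ~b)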
In the following table, observe the peculiar numbering of the rows: (0, 1, 3, 2, 6, 7, 5, 4, 0). The first column is the decimal equivalent of the binary equivalent of the digits "cba", in other words:
This numbering comes about because as one moves down the table from row to row only one variable at a time changes its value. Gray code is derived from this notion. This notion can be extended to three and four-dimensional hypercubes called Hasse diagrams where each corner's variables change only one at a time as one moves around the edges of the cube. Hasse diagrams (hypercubes) flattened into two dimensions are either Veitch diagrams or Karnaugh maps (these are virtually the same thing).
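The reflected Gray code behind this row numbering can be generated with the classic shift-and-XOR formula, as this sketch shows:

def gray(i):
    """The i-th reflected Gray code: adjacent codes differ in one bit."""
    return i ^ (i >> 1)

# For 3 variables this reproduces the Karnaugh row order noted above:
print([gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]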
When working with Karnaugh maps one must always keep in mind that the top edge "wraps around" to the bottom edge, and the left edge wraps around to the right edge—the Karnaugh diagram is really a three- or four- or n-dimensional flattened object.
Veitch improved the notion of Venn diagrams by converting the circles to abutting squares, and Karnaugh simplified the Veitch diagram by converting the minterms, written in their literal-form (e.g. ~abc~d) into numbers. [ 22 ] The method proceeds as follows:
Produce the formula's truth table. Number its rows using the binary-equivalents of the variables (usually just sequentially 0 through 2 n − 1) for n variables.
Example: ((c & d) ∨ (p & ~(c & (~d)))) = q in conjunctive normal form is: ( (c ∨ d ∨ p) & (c ∨ ~d ∨ p) & (~c ∨ d ∨ p) & (~c ∨ d ∨ ~p) ) = q.
However, this formula can be reduced both in the number of terms (from 4 to 3) and in the total count of its literals (12 to 6).
Use the values of the formula (e.g. "p") found by the truth-table method and place them into their respective (associated) Karnaugh squares (these are numbered per the Gray code convention). If values of "d" for "don't care" appear in the table, this adds flexibility during the reduction phase.
Minterms of adjacent (abutting) 1-squares (T-squares) can be reduced with respect to the number of their literals , and the number terms also will be reduced in the process. Two abutting squares (2 x 1 horizontal or 1 x 2 vertical, even the edges represent abutting squares) lose one literal, four squares in a 4 x 1 rectangle (horizontal or vertical) or 2 x 2 square (even the four corners represent abutting squares) lose two literals, eight squares in a rectangle lose 3 literals, etc. (One seeks out the largest square or rectangles and ignores the smaller squares or rectangles contained totally within it. ) This process continues until all abutting squares are accounted for, at which point the propositional formula is minimized.
For example, squares #3 and #7 abut; these two abutting squares can lose one literal (e.g. "p" from squares #3 and #7).
Example: The map method usually is done by inspection. The following example expands the algebraic method to show the "trick" behind the combining of terms on a Karnaugh map:
Observe that by the Idempotency law (A ∨ A) = A, we can create more terms. Then by association and distributive laws the variables to disappear can be paired, and then "disappeared" with the Law of contradiction (x & ~x)=0. The following uses brackets [ and ] only to keep track of the terms; they have no special significance:
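Such reductions can also be reproduced mechanically. Assuming the sympy library is available, its simplify_logic function minimizes the example formula from above (the printed form may differ in ordering, and may be even smaller than the three-term reduction mentioned earlier):

from sympy import symbols, Or, And, Not
from sympy.logic import simplify_logic

c, d, p = symbols("c d p")
q = Or(And(c, d), And(p, Not(And(c, Not(d)))))  # ((c & d) OR (p & ~(c & ~d)))
print(simplify_logic(q, form="dnf"))  # a minimal sum of products,
                                      # e.g. (c & d) | (p & ~c)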
Given the following examples-as-definitions, what does one make of the subsequent reasoning:
Then assign the variable "s" to the left-most sentence "This sentence is simple". Define "compound" c = "not simple" ~s, and assign c = ~s to "This sentence is compound"; assign "j" to "It [this sentence] is conjoined by AND". The second sentence can be expressed as:
If truth values are to be placed on the sentences c = ~s and j, then all are clearly FALSEHOODS: e.g. "This sentence is compound" is a FALSEHOOD (it is simple , by definition). So their conjunction (AND) is a falsehood. But when taken in its assembled form, the sentence is a TRUTH.
This is an example of the paradoxes that result from an impredicative definition —that is, when an object m has a property P, but the object m is defined in terms of property P. [ 23 ] The best advice for a rhetorician or one involved in deductive analysis is to avoid impredicative definitions but at the same time to be on the lookout for them because they can indeed create paradoxes. Engineers, on the other hand, put them to work in the form of propositional formulas with feedback.
The notion of a propositional formula appearing as one of its own variables requires a formation rule that allows the assignment of the formula to a variable. In general there is no stipulation (in either axiomatic or truth-table systems of objects and relations) that forbids this from happening. [ 24 ]
The simplest case occurs when an OR formula becomes one of its own inputs, e.g. p = q. Begin with (p ∨ s) = q, then let p = q. Observe that q's "definition" depends on itself "q" as well as on "s" and the OR connective; this definition of q is thus impredicative.
Either of two conditions can result: [ 25 ] oscillation or memory.
It helps to think of the formula as a black box . Without knowledge of what is going on "inside" the formula-"box" from the outside it would appear that the output is no longer a function of the inputs alone. That is, sometimes one looks at q and sees 0 and other times 1. To avoid this problem one has to know the state (condition) of the "hidden" variable p inside the box (i.e. the value of q fed back and assigned to p). When this is known the apparent inconsistency goes away.
To understand [predict] the behavior of formulas with feedback requires the more sophisticated analysis of sequential circuits . Propositional formulas with feedback lead, in their simplest form, to state machines; they also lead to memories in the form of Turing tapes and counter-machine counters. From combinations of these elements one can build any sort of bounded computational model (e.g. Turing machines , counter machines , register machines , Macintosh computers , etc.).
In the abstract (ideal) case the simplest oscillating formula is a NOT fed back to itself: ~(~(p=q)) = q. Analysis of an abstract (ideal) propositional formula in a truth-table reveals an inconsistency for both p=1 and p=0 cases: When p=1, q=0, this cannot be because p=q; ditto for when p=0 and q=1.
Oscillation with delay : If a delay [ 26 ] (ideal or non-ideal) is inserted in the abstract formula between p and q then p will oscillate between 1 and 0: 101010...101... ad infinitum . If either of the delay and NOT are not abstract (i.e. not ideal), the type of analysis to be used will be dependent upon the exact nature of the objects that make up the oscillator; such things fall outside mathematics and into engineering.
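Treating the delay as one time step, the oscillation is easy to simulate (an idealized sketch):

# A NOT gate fed back to itself through a one-step delay: q(t+1) = NOT q(t).
q = 0
trace = []
for _ in range(8):  # eight time steps of the idealized oscillator
    trace.append(q)
    q = int(not q)  # the delayed, inverted feedback
print(trace)  # [0, 1, 0, 1, 0, 1, 0, 1] ... ad infinitum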
Analysis requires a delay to be inserted and then the loop cut between the delay and the input "p". The delay must be viewed as a kind of proposition that has "qd" (q-delayed) as output for "q" as input. This new proposition adds another column to the truth table. The inconsistency is now between "qd" and "p" as shown in red; two stable states resulting:
Without delay, inconsistencies must be eliminated from a truth table analysis. With the notion of "delay", this condition presents itself as a momentary inconsistency between the fed-back output variable q and p = q delayed .
A truth table reveals the rows where inconsistencies occur between p = q delayed at the input and q at the output. After "breaking" the feed-back, [ 27 ] the truth table construction proceeds in the conventional manner. But afterwards, in every row the output q is compared to the now-independent input p and any inconsistencies between p and q are noted (i.e. p=0 together with q=1, or p=1 and q=0); when the "line" is "remade" both are rendered impossible by the Law of contradiction ~(p & ~p)). Rows revealing inconsistencies are either considered transient states or just eliminated as inconsistent and hence "impossible".
About the simplest memory results when the output of an OR feeds back to one of its inputs, in this case output "q" feeds back into "p". Given that the formula is first evaluated (initialized) with p=0 & q=0, it will "flip" once when "set" by s=1. Thereafter, output "q" will sustain "q" in the "flipped" condition (state q=1). This behavior, now time-dependent, is shown by the state diagram to the right of the once-flip.
The next simplest case is the "set-reset" flip-flop shown below the once-flip. Given that r=0 & s=0 and q=0 at the outset, it is "set" (s=1) in a manner similar to the once-flip. It however has a provision to "reset" q=0 when "r"=1. And additional complication occurs if both set=1 and reset=1. In this formula, the set=1 forces the output q=1 so when and if (s=0 & r=1) the flip-flop will be reset. Or, if (s=1 & r=0) the flip-flop will be set. In the abstract (ideal) instance in which s=1 ⇒ s=0 & r=1 ⇒ r=0 simultaneously, the formula q will be indeterminate (undecidable). Due to delays in "real" OR, AND and NOT the result will be unknown at the outset but thereafter predicable.
The formula known as "clocked flip-flop" memory ("c" is the "clock" and "d" is the "data") is given below. It works as follows: When c = 0 the data d (either 0 or 1) cannot "get through" to affect output q. When c = 1 the data d "gets through" and output q "follows" d's value. When c goes from 1 to 0 the last value of the data remains "trapped" at output "q". As long as c=0, d can change value without causing q to change.
The state diagram is similar in shape to the flip-flop's state diagram, but with different labelling on the transitions.
Bertrand Russell (1912:74) lists three laws of thought that derive from Aristotle : (1) The law of identity : "Whatever is, is.", (2) The law of noncontradiction : "Nothing can both be and not be", and (3) The law of excluded middle : "Everything must be or not be."
The use of the word "everything" in the law of excluded middle renders Russell's expression of this law open to debate. If restricted to an expression about BEING or QUALITY with reference to a finite collection of objects (a finite "universe of discourse") -- the members of which can be investigated one after another for the presence or absence of the assertion—then the law is considered intuitionistically appropriate. Thus an assertion such as: "This object must either BE or NOT BE (in the collection)", or "This object must either have this QUALITY or NOT have this QUALITY (relative to the objects in the collection)" is acceptable. See more at Venn diagram .
Although a propositional calculus originated with Aristotle, the notion of an algebra applied to propositions had to wait until the early 19th century. In an (adverse) reaction to the 2000-year tradition of Aristotle's syllogisms , John Locke 's Essay concerning human understanding (1690) used the word semiotics (theory of the use of symbols). By 1826 Richard Whately had critically analyzed the syllogistic logic with a sympathy toward Locke's semiotics. George Bentham 's work (1827) resulted in the notion of "quantification of the predicate" (1827) (nowadays symbolized as ∀ ≡ "for all"). A "row" instigated by William Hamilton over a priority dispute with Augustus De Morgan "inspired George Boole to write up his ideas on logic, and to publish them as MAL [Mathematical Analysis of Logic] in 1847" (Grattan-Guinness and Bornet 1997:xxviii).
About his contribution Grattan-Guinness and Bornet comment:
Gottlob Frege 's massive undertaking (1879) resulted in a formal calculus of propositions, but his symbolism is so daunting that it had little influence excepting on one person: Bertrand Russell . First as the student of Alfred North Whitehead he studied Frege's work and suggested a (famous and notorious) emendation with respect to it (1904) around the problem of an antinomy that he discovered in Frege's treatment ( cf Russell's paradox ). Russell's work led to a collaboration with Whitehead that, in 1910, produced the first volume of Principia Mathematica (PM). It is here that what we consider "modern" propositional logic first appeared. In particular, PM introduces NOT and OR and the assertion symbol ⊦ as primitives. In terms of these notions they define IMPLICATION → ( def. *1.01: ~p ∨ q ), then AND (def. *3.01: ~(~p ∨ ~q) ), then EQUIVALENCE p ←→ q (*4.01: (p → q) & ( q → p ) ).
Computation and switching logic : | https://en.wikipedia.org/wiki/Propositional_formula |
In propositional calculus , a propositional function or a predicate is a sentence expressed in a way that would assume the value of true or false , except that within the sentence there is a variable ( x ) that is not defined or specified (thus being a free variable ), which leaves the statement undetermined. The sentence may contain several such variables (e.g. n variables, in which case the function takes n arguments).
As a mathematical function , A ( x ) or A ( x 1 , x 2 , ..., x n ), the propositional function is abstracted from predicates or propositional forms. As an example, consider the predicate scheme, "x is hot". The substitution of any entity for x will produce a specific proposition that can be described as either true or false, even though " x is hot" on its own has no value as either a true or false statement. However, when a value is assigned to x , such as lava , the function then has the value true ; while if one assigns to x a value like ice , the function then has the value false .
Propositional functions are useful in set theory for the formation of sets . For example, in 1903 Bertrand Russell wrote in The Principles of Mathematics (page 106):
Later Russell examined the problem of whether propositional functions were predicative or not, and he proposed two theories to try to get at this question: the zig-zag theory and the ramified theory of types. [ 1 ]
A propositional function, or a predicate, in a variable x is an open formula p ( x ) involving x that becomes a proposition when one gives x a definite value from the set of values it can take.
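On this reading a propositional function is simply a map from values to truth values; a Python sketch (the predicate and its extension are illustrative):

# The open formula p(x): "x is hot", modeled as a function of x.
def p(x):
    return x in {"lava", "fire", "steam"}  # an illustrative extension

print(p("lava"))  # True: substituting lava yields a true proposition
print(p("ice"))   # False: substituting ice yields a false one

# A propositional function of two variables is a binary relation:
def less_than(x, y):
    return x < y
print(less_than(2, 3))  # True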
According to Clarence Lewis , "A proposition is any expression which is either true or false; a propositional function is an expression, containing one or more variables, which becomes a proposition when each of the variables is replaced by some one of its values from a discourse domain of individuals." [ 2 ] Lewis used the notion of propositional functions to introduce relations , for example, a propositional function of n variables is a relation of arity n . The case of n = 2 corresponds to binary relations , of which there are homogeneous relations (both variables from the same set) and heterogeneous relations . | https://en.wikipedia.org/wiki/Propositional_function |
In propositional calculus and proof complexity a propositional proof system ( pps ), also called a Cook–Reckhow propositional proof system , is a system for proving classical propositional tautologies.
Formally a pps is a polynomial-time function P whose range is the set of all propositional tautologies (denoted TAUT). [ 1 ] If A is a formula, then any x such that P ( x ) = A is called a P -proof of A . The condition defining pps can be broken up as follows: Efficiency ( P runs in polynomial time), Soundness (every string in the range of P is a tautology), and Completeness (every tautology A has at least one P -proof x with P ( x ) = A ).
In general, a proof system for a language L is a polynomial-time function whose range is L . Thus, a propositional proof system is a proof system for TAUT.
Sometimes the following alternative definition is considered: a pps is given as a proof-verification algorithm P ( A , x ) with two inputs. If P accepts the pair ( A , x ) we say that x is a P -proof of A . P is required to run in polynomial time, and moreover, it must hold that A has a P -proof if and only if it is a tautology.
If P 1 is a pps according to the first definition, then P 2 defined by P 2 ( A , x ) if and only if P 1 ( x ) = A is a pps according to the second definition. Conversely, if P 2 is a pps according to the second definition, then P 1 defined by
( P 1 takes pairs as input) is a pps according to the first definition, where ⊤ {\displaystyle \top } is a fixed tautology.
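As an illustration of the proof-verification definition, consider the toy "truth-table" proof system sketched below, in which a proof of A is simply A's full column of truth values (the formula encoding is an invented convenience). Verification runs in time polynomial in the length of ( A , x ); the price is that proofs are exponentially long in the number of variables:

from itertools import product

# A formula A is encoded as (variable_names, boolean_function); a proof x
# of A is A's truth-table column in a fixed row order. The verifier
# recomputes the column and accepts iff it matches and is all True.
def verify(formula, proof):
    names, func = formula
    rows = product((False, True), repeat=len(names))
    column = [func(*values) for values in rows]
    return proof == column and all(column)

excluded_middle = (["p"], lambda p: p or not p)
print(verify(excluded_middle, [True, True]))   # True: accepted
print(verify(excluded_middle, [True, False]))  # False: rejected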
One can view the second definition as a non-deterministic algorithm for solving membership in TAUT. This means that proving a superpolynomial proof size lower-bound for pps would rule out existence of a certain class of polynomial-time algorithms based on that pps.
As an example, exponential proof size lower-bounds in resolution for the pigeon hole principle imply that any algorithm based on resolution cannot decide TAUT or SAT efficiently and will fail on pigeon hole principle tautologies. This is significant because the class of algorithms based on resolution includes most of current propositional proof search algorithms and modern industrial SAT solvers.
Historically, Frege's propositional calculus was the first propositional proof system. The general definition of a propositional proof system is due to Stephen Cook and Robert A. Reckhow (1979). [ 1 ]
Propositional proof systems can be compared using the notion of p-simulation . A propositional proof system P p-simulates Q (written as P ≤ p Q ) when there is a polynomial-time function F such that P ( F ( x )) = Q ( x ) for every x . [ 1 ] That is, given a Q -proof x , we can find in polynomial time a P -proof of the same tautology. If P ≤ p Q and Q ≤ p P , the proof systems P and Q are p-equivalent . There is also a weaker notion of simulation: a pps P simulates or weakly p-simulates a pps Q if there is a polynomial p such that for every Q -proof x of a tautology A , there is a P -proof y of A such that the length of y , | y | is at most p (| x |). (Some authors use the words p-simulation and simulation interchangeably for either of these two concepts, usually the latter.)
A propositional proof system is called p-optimal if it p -simulates all other propositional proof systems, and it is optimal if it simulates all other pps. A propositional proof system P is polynomially bounded (also called super) if every tautology has a short (i.e., polynomial-size) P -proof.
If P is polynomially bounded and Q simulates P , then Q is also polynomially bounded.
The set of propositional tautologies, TAUT, is a coNP -complete set. A propositional proof system is a certificate-verifier for membership in TAUT. Existence of a polynomially bounded propositional proof system means that there is a verifier with polynomial-size certificates, i.e., TAUT is in NP . In fact these two statements are equivalent, i.e., there is a polynomially bounded propositional proof system if and only if the complexity classes NP and coNP are equal. [ 1 ]
Some equivalence classes of proof systems under simulation or p -simulation are closely related to theories of bounded arithmetic ; they are essentially "non-uniform" versions of the bounded arithmetic, in the same way that circuit classes are non-uniform versions of resource-based complexity classes. "Extended Frege" systems (allowing the introduction of new variables by definition) correspond in this way to polynomially-bounded systems, for example. Where the bounded arithmetic in turn corresponds to a circuit-based complexity class, there are often similarities between the theory of proof systems and the theory of the circuit families, such as matching lower bound results and separations. For example, just as counting cannot be done by an A C 0 {\displaystyle \mathbf {AC} ^{0}} circuit family of subexponential size, many tautologies relating to the pigeonhole principle cannot have subexponential proofs in a proof system based on bounded-depth formulas (and in particular, not by resolution-based systems, since they rely solely on depth 1 formulas).
Some examples of propositional proof systems studied are: propositional resolution , the sequent calculus , natural deduction , Frege and extended Frege systems, cutting planes , and the polynomial calculus . | https://en.wikipedia.org/wiki/Propositional_proof_system |
Proprietary firmware is any firmware that has had its use, private modification, copying , or republishing restricted by the producer. Proprietors may enforce restrictions by technical means, such as by restricting source code access, firmware replacement restrictions (by denying complete tooling that may be necessary in order to recompile and replace the firmware), or by legal means, such as through copyright and patents . Alternatives to proprietary firmware may be free (libre) or open-source .
Proprietary firmware (and especially the microcode) is much more difficult to avoid than proprietary software or even proprietary device drivers , because the firmware is usually very specific to the manufacturer of each device (often being unique for each model), and the programming documentation and complete specifications that would be necessary to create a replacement are often withheld by the hardware manufacturer. [ 1 ]
Many open-source operating systems reluctantly choose to include proprietary firmware files in their distributions simply to make their device drivers work, [ 2 ] because manufacturers try to save money by removing flash memory or EEPROM from their devices, requiring the operating system to upload the firmware each time the device is used. [ 3 ] However, in order to do so, the operating system still has to have distribution rights for this proprietary microcode. [ 3 ]
Proprietary firmware poses a significant security risk to the user because of the direct memory access (DMA) architecture of modern computers and the potential for DMA attacks . [ citation needed ] Theo de Raadt of OpenBSD suggests that wireless firmware is kept proprietary because of poor design quality and firmware defects. [ 4 ] [ 5 ] Mark Shuttleworth of Ubuntu suggests that "it's reasonable to assume that all firmware is a cesspool of insecurity courtesy of incompetence of the worst degree from manufacturers, and competence of the highest degree from a very wide range of such agencies". [ 6 ]
The security and reliability risks posed by proprietary microcode may be lower than those posed by proprietary device drivers , because the microcode in this context isn't linked against the operating system , and doesn't run on the host's main processor . [ 2 ]
Custom firmware , which is often free and open-source software , may still be available for certain products. It is especially popular in certain segments of hardware, such as gaming consoles , wireless routers and Android phones, that are capable of running complete general-purpose operating systems such as Linux , FreeBSD or NetBSD , which are often the systems the manufacturer used in the original proprietary firmware.
Another potential solution is going with open-source hardware , which goes a step further by also providing schematics for replicating the hardware itself. | https://en.wikipedia.org/wiki/Proprietary_firmware |
In telecommunications , a proprietary protocol is a communications protocol owned by a single organization or individual. [ 1 ]
Ownership by a single organization gives the owner the ability to place restrictions on the use of the protocol and to change the protocol unilaterally. Specifications for proprietary protocols may or may not be published, and implementations are not freely distributed . Proprietors may enforce restrictions through control of the intellectual property rights, for example through enforcement of patent rights, and by keeping the protocol specification a trade secret . Some proprietary protocols strictly limit the right to create an implementation; others are widely implemented by entities that do not control the intellectual property, but are subject to restrictions that the owner of the intellectual property may seek to impose.
The Skype protocol is a proprietary protocol. [ 2 ]
The Venturi Transport Protocol (VTP) is a patented proprietary protocol [ 3 ] that is designed to replace TCP transparently in order to overcome perceived inefficiencies related to wireless data transport.
Microsoft Exchange Server protocols are proprietary [ 4 ] open access protocols. The rights to develop and release protocols are held by Microsoft, but all technical details are free for access and implementation. [ 5 ]
Microsoft developed a proprietary extension to the Kerberos network authentication protocol for the Windows 2000 operating system . The extensions made the protocol incompatible with implementations supporting the original standards, and this has raised concerns that this, along with the licensing restrictions, effectively denies products unable to conform to the standard access to a Windows 2000 Server using Kerberos. [ 6 ]
The use of proprietary instant messaging protocols meant that instant messaging networks were incompatible and people were unable to reach friends on other networks. [ 7 ]
Reverse engineering is the process of retrieving a protocol’s details from a software implementation of the specification. Methods of reverse-engineering a protocol include packet sniffing and binary decompilation and disassembly .
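As a concrete illustration, the following is a minimal packet-sniffing sketch in Python using the third-party scapy library; the choice of library and the example port 5222 are assumptions for illustration, not details from this article.

```python
# Minimal sketch: capture application-layer payloads for offline protocol
# analysis. Requires scapy (pip install scapy) and raw-socket privileges.
from scapy.all import Raw, sniff  # third-party library, assumed installed

def show_payload(pkt):
    # Print the raw bytes of each captured segment as hex for inspection.
    if pkt.haslayer(Raw):
        print(pkt[Raw].load.hex())

# Port 5222 is a hypothetical example target, not taken from the article.
sniff(filter="tcp port 5222", prn=show_payload, count=10)
```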
There are legal precedents when the reverse-engineering is aimed at interoperability of protocols. [ 8 ] [ 9 ] [ 10 ] In the United States , the Digital Millennium Copyright Act grants a safe harbor to reverse engineer software for the purposes of interoperability with other software. [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Proprietary_protocol |
Propylene glycol dinitrate ( PGDN , 1,2-propylene glycol dinitrate , or 1,2-propanediol dinitrate ) is an organic chemical , an ester of nitric acid and propylene glycol . It is structurally similar to nitroglycerin , except that it has one fewer nitrate group. It is a characteristically and unpleasantly smelling [ 4 ] colorless liquid, which decomposes at 121 °C, below its boiling point. It is flammable and explosive . It is shock-sensitive and burns with a clean flame producing water vapor , carbon monoxide , and nitrogen gas.
The principal current use of propylene glycol dinitrate is as a propellant in Otto Fuel II , together with 2-nitrodiphenylamine and dibutyl sebacate . Otto Fuel II is used in some torpedoes as a propellant . [ 3 ] [ 5 ]
Nitrates of polyhydric alcohols , of which propylene glycol dinitrate is an example, have been used in medicine for the treatment of angina pectoris , and as explosives since the mid-nineteenth century.
PGDN affects blood pressure , causes respiratory toxicity, damages the liver and kidneys , distorts vision, causes methemoglobinuria , and can cause headache and lack of coordination. It may be absorbed through the skin. Its primary toxicity mechanism is methemoglobinemia . It may cause permanent nerve damage.
For occupational exposures, the National Institute for Occupational Safety and Health has set a recommended exposure limit of 0.05 ppm (0.3 mg/m 3 ) over an eight-hour workday, with a skin designation indicating the potential for absorption through the skin. [ 6 ]
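As a rough consistency check of the two quoted units, the standard conversion mg/m3 = ppm × M / 24.45 (at 25 °C and 1 atm) can be applied; the molar mass below is computed from the formula C3H6N2O6 and is an assumption, not a figure from the article.

```python
# ppm -> mg/m3 conversion at 25 C, 1 atm: mg/m3 = ppm * M / 24.45,
# where M is the molar mass in g/mol and 24.45 L/mol is the molar volume.
M = 3 * 12.011 + 6 * 1.008 + 2 * 14.007 + 6 * 15.999  # ~166.1 g/mol, C3H6N2O6
ppm = 0.05
print(round(ppm * M / 24.45, 2))  # ~0.34 mg/m3, consistent with the quoted 0.3
```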
In organic chemistry , a propynyl group is a three-carbon hydrocarbon substituent containing a carbon–carbon triple bond , formally derived from propyne . [ 1 ]
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Propynyl_group |
Propynylidyne is a chemical compound that has been identified in interstellar space .
Linear C 3 H ( l -C 3 H) has a dipole moment μ D = 3.551 Debye [ 1 ] and a 2 Π electronic ground state.
A rotational spectrum of the 2 Π electronic ground state of l -C 3 H can be made using the PGopher software (a Program for Simulating Rotational Structure, C. M. Western, University of Bristol, http://pgopher.chm.bris.ac.uk ) and molecular constants extracted from the literature. These constants include μ=3.551 Debye [ 1 ] and others provided by Yamamoto et al. 1990, [ 2 ] given in units of MHz: B=11189.052, D=0.0051365, A SO =432834.31, γ=-48.57, p=-7.0842, and q=-13.057. A selection rule of ΔJ=0,1 was applied, with S=0.5. The resulting simulation for the rotational spectrum of C 3 H at a temperature of 30 K agrees well with observations. [ 2 ] The simulated spectrum is shown in the figure at right with the approximate atmospheric transmission overplotted in blue. All of the strongest simulated lines with J < 8.5 are observed by Yamamoto et al. [ 2 ]
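For illustration, the following is a much-simplified sketch (not PGopher itself) that computes closed-shell linear-rotor line positions ν(J→J+1) = 2B(J+1) − 4D(J+1)³ from the B and D constants quoted above; it ignores the spin-orbit, Λ-doubling, and hyperfine terms that the full 2 Π treatment requires, so it only approximates the band positions.

```python
# Rigid-rotor plus centrifugal-distortion line positions for l-C3H,
# nu(J -> J+1) = 2B(J+1) - 4D(J+1)^3, constants from Yamamoto et al.
B = 11189.052   # rotational constant, MHz
D = 0.0051365   # centrifugal distortion constant, MHz

for J in range(9):  # lower-state rotational quantum number
    nu = 2 * B * (J + 1) - 4 * D * (J + 1) ** 3
    print(f"J = {J} -> {J + 1}: {nu:11.3f} MHz")
```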
Cyclic C 3 H ( c -C 3 H) has a dipole moment μ D = 2.4 Debye [ 3 ] in its electronic ground state.
The molecule C 3 H has been observed in cold, dense molecular clouds . The dominant formation and destruction mechanisms are presented below, for a typical cloud with temperature 10K. The relative contributions of each reaction have been calculated using rates and abundances from the UMIST database for astrochemistry. [ 3 ]
The C 3 H molecule provides the dominant pathway to the production of C 4 H + , and thereby all other C n H (n>3) molecules via the reactions:
These reactions produce the majority of C 4 H + , which is necessary for the production of higher-order carbon-chain molecules. Compared to the competing reaction, C 3 H 3 + + C → C 4 H 2 + + H, also shown right, the destruction of C 3 H provides a much faster pathway for hydrocarbon growth.
Other molecules in the C 3 H family, C 2 H and C 3 H 2 , do not significantly contribute to the production of carbon-chain molecules, rather forming endpoints in this process. The production of C 2 H and C 3 H 2 essentially inhibits larger carbon-chain molecule formation, since neither they nor the products of their destruction are recycled into the hydrocarbon chemistry.
The first confirmation of the existence of the interstellar molecule C 3 H was announced by W. M. Irvine et al. at the January 1985 meeting of the American Astronomical Society. [ 4 ] The group detected C 3 H in both the spectrum of the evolved carbon star IRC+10216 and in the molecular cloud TMC-1. These results were formally published in July of the same year by Thaddeus et al. [ 5 ] A 1987 paper by W. M. Irvine provides a comparison of detections for 39 molecules observed in cold (T k ≅10 K), dark clouds, with particular emphasis paid to tri-carbon species, including C 3 H. [ 6 ]
Later reports of astronomical detections of the C 3 H radical are given in chronological order below.
In 1987, Yamamoto et al. [ 7 ] report measurements of the rotational spectra of the cyclic C 3 H radical (c-C 3 H) in the laboratory and in interstellar space towards TMC-1. This publication marks the first terrestrial measurement of C 3 H. Yamamoto et al. precisely determine molecular constants and identify 49 lines in the c-C 3 H rotational spectrum. Both fine and hyperfine components are detected toward TMC-1, and the column density for the line of sight toward TMC-1 is estimated to be 6×10 12 cm −2 , which is comparable to that of the linear C 3 H radical (l-C 3 H).
M. L. Marconi, A. Korth, et al. [ 8 ] reported a likely detection of C 3 H within the ionopause of Comet Halley in 1989. Using the heavy ion analyzer (PICCA) on board the Giotto spacecraft, they determined that C 3 H was responsible for producing a peak at 37 amu detected within ~4500 km of the comet nucleus . Marconi et al. argue that a gas-phase progenitor molecule for C 3 H is unlikely to exist within the ionopause and suggest that desorption from circumnuclear CHON dust grains may have instead produced the observed C 3 H.
In 1990, Yamamoto et al. [ 2 ] detected C 3 H toward IRC + 10216 using the Nobeyama Radio Observatory's 45-m radio telescope. They determine an upper limit of 3×10 12 cm −2 for the column density of the ν 4 state. From additional laboratory measurements they determine an extremely low-lying vibrationally excited state of the C 3 H radical: ν 4 ( 2 Σ μ )=610197(1230) MHz, caused by the Renner-Teller effect in the ν 4 (CCH bending) state.
J. G. Mangum and A. Wootten [ 9 ] report new detections of c-C 3 H towards 13 of 19 observed Galactic molecular clouds. They measure the relative abundance of C 3 H to C 3 H 2 : N(c-C 3 H)/N(C 3 H 2 ) = (9.04±2.87)×10 −2 . This ratio does not change systematically for warmer sources, which they suggest provides evidence that the two ring molecules have a common precursor in C 3 H 3 + .
L.A. Nyman et al. [ 10 ] present a molecular line survey of the carbon star IRAS 15194-5115 using the 15m Swedish-ESO Submillimetre Telescope to probe the 3 and 1.3 mm bands. Comparing the molecular abundances with those of IRC + 10216, they find C 3 H to have similar abundances in both sources.
In 1993, M. Guelin et al. [ 11 ] map the emission from the 95 GHz and 98 GHz lines of the C 3 H radicals in IRC+10216. This reveals a shell-like distribution of the C 3 H emission and time-dependent chemistry. The close correspondence between the emission peaks of C 3 H and the species MgNC and C 4 H suggests a fast common formation mechanism, suggested to be desorption from dust grains.
Turner et al. [ 12 ] survey 10 hydrocarbon species, including l-C 3 H and c-C 3 H in three translucent clouds and TMC-1 and L183. Abundances are measured or estimated for each. The mean cyclic-to-linear abundance ratio for C 3 H is found to be 2.7, although a large variation in this ratio is observed from source to source.
In 2004, N. Kaifu et al. [ 13 ] completed the first spectral line survey toward TMC-1 in the frequency range 8.8-50.0 GHz with the 45-m radio telescope at Nobeyama Radio Observatory. They detected 414 lines of 38 molecular species including c-C 3 H and compiled spectral charts and improved molecular constants for several carbon-chain molecules.
Martin et al. [ 14 ] made the first spectral line survey towards an extragalactic source, targeting the starburst galaxy NGC 253 across the frequency range 129.1-175.2 GHz. Approximately 100 spectral features were identified as transitions from 25 different molecular species, including a tentative first extra-galactic detection of C 3 H.
A prosection is the dissection of a cadaver (human or animal) or part of a cadaver by an experienced anatomist in order to demonstrate anatomic structure to students. [ 1 ] In a dissection , students learn by doing; in a prosection, students learn by either observing a dissection being performed by an experienced anatomist or examining a specimen that has already been dissected by an experienced anatomist (etymology: Latin pro- "before" + sectio "a cutting"). [ 2 ]
A prosection may also refer to the dissected cadaver or cadaver part which is then reassembled and provided to students for review. [ 3 ]
Prosections are used primarily in the teaching of anatomy in disciplines as varied as human medicine , chiropractic , veterinary medicine , and physical therapy . [ 4 ] Prosections may also be used to teach surgical techniques (such as the suturing of skin), pathology , physiology , reproduction medicine and theriogenology , and other topics.
The use of the prosection teaching technique is somewhat controversial in medicine. In the teaching of veterinary medicine, the goal is to "create the best quality education ... while ensuring that animals are not used harmfully and that respect for animal life is engendered within the student." [ 5 ] Others have concluded that dissections and prosections have a negative impact on students' respect for patients and human life. [ 6 ] [ 7 ] Some scholars argue that while actual hands-on experience is essential, alternatives such as plastinated or freeze-dried cadavers are just as effective in the teaching of anatomy while dramatically reducing the number of cadavers or cadaver parts needed. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Other alternatives such as instructional videos, plastic models, and printed materials also exist. Some studies find them as effective as dissection or prosection, [ 12 ] [ 13 ] and some schools of human medicine in the UK have abandoned the use of cadavers entirely. [ 14 ] But others question the usefulness of these alternatives, arguing that dissection or prosection of cadavers is required for in-depth learning and teaches skills the alternatives cannot. [ 15 ] [ 16 ] [ 17 ] Some scholars and teachers go so far as to argue that cadavers and prosections are irreplaceable in the teaching of medicine. [ 18 ]
Whether prosections are as effective as dissections in the teaching of medicine is also an unsettled aspect of medical education. Some have concluded that prosections are equally effective. [ 19 ] [ 20 ] [ 21 ] [ 22 ] However, others argue that the use of prosections is not as effective, [ 4 ] and that dissections help students learn about "detached concern," better understand medical uncertainty, and allow teachers to raise moral issues about death and dying. [ 23 ]
Some academics conclude that the effectiveness of prosections versus dissection or other alternatives depends on the type of anatomy or the discipline being taught (e.g., anatomy versus pathology), that the teaching of anatomy is not yet sufficiently understood, and that existing studies are too narrow or limited to draw conclusions. [ 24 ] [ 25 ] [ 26 ] [ 27 ]
In mathematics , the trigonometric functions (also called circular functions , angle functions or goniometric functions ) [ 1 ] are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry , such as navigation , solid mechanics , celestial mechanics , geodesy , and many others. They are among the simplest periodic functions , and as such are also widely used for studying periodic phenomena through Fourier analysis .
The trigonometric functions most widely used in modern mathematics are the sine , the cosine , and the tangent functions. Their reciprocals are respectively the cosecant , the secant , and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function , and an analog among the hyperbolic functions .
The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles . To extend the sine and cosine functions to functions whose domain is the whole real line , geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations . This allows extending the domain of sine and cosine functions to the whole complex plane , and the domain of the other trigonometric functions to the complex plane with some isolated points removed.
Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are " sin " for sine, " cos " for cosine, " tan " or " tg " for tangent, " sec " for secant, " csc " or " cosec " for cosecant, and " cot " or " ctg " for cotangent. Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation , for example sin( x ) . Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ( x + y ) . {\displaystyle \sin(x+y).}
A positive integer appearing as a superscript after the symbol of the function denotes exponentiation , not function composition . For example sin 2 x {\displaystyle \sin ^{2}x} and sin 2 ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin x ) 2 , {\displaystyle (\sin x)^{2},} not sin ( sin x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).}
In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function , not the reciprocal . For example sin − 1 x {\displaystyle \sin ^{-1}x} and sin − 1 ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 x {\displaystyle \theta =\sin ^{-1}x} implies sin θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function , but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use.
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ . Thus these six ratios define six functions of θ , which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ , and adjacent represents the side between the angle θ and the right angle. [ 2 ] [ 3 ]
Various mnemonics can be used to remember these definitions.
In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or π / 2 radians . Therefore sin ( θ ) {\displaystyle \sin(\theta )} and cos ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table.
In geometric applications, the argument of a trigonometric function is generally the measure of an angle . For this purpose, any angular unit is convenient. One common unit is degrees , in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics ).
However, in calculus and mathematical analysis , the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers , rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function , via power series, [ 5 ] or as solutions to differential equations given particular initial values [ 6 ] ( see below ), without reference to any geometric notions. The other four trigonometric functions ( tan , cot , sec , csc ) can be defined as quotients and reciprocals of sin and cos , except where zero occurs in the denominator. It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. [ 5 ] Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. [ 7 ] Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures.
When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), [ 8 ] and a complete turn (360°) is an angle of 2 π (≈ 6.28) rad. [ 9 ] For real number x , the notation sin x , cos x , etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown ( sin x° , cos x° , etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180 x / π )°, so that, for example, sin π = sin 180° when we take x = π . In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π /180 ≈ 0.0175. [ 10 ]
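A small numerical illustration of this convention follows (standard-library Python; the specific values are arbitrary).

```python
import math

# math.sin interprets its argument in radians; the degree form must be
# converted first, mirroring the identity x = (180 x / pi) degrees.
def deg2rad(d):
    return d * math.pi / 180.0  # 1 degree = pi/180 ~ 0.0175 rad

print(math.sin(math.pi))          # sin of pi radians: 0.0 up to rounding
print(math.sin(deg2rad(180.0)))   # the same angle written as 180 degrees
print(deg2rad(1.0))               # ~0.01745, the constant mentioned above
```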
The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle , which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers.
Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x -axis ( counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A , is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y - and x -axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner.
The trigonometric functions cos and sin are defined, respectively, as the x - and y -coordinate values of point A . That is, cos θ = x A {\displaystyle \cos \theta =x_{\mathrm {A} }} and sin θ = y A . {\displaystyle \sin \theta =y_{\mathrm {A} }.}
In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse . And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity .
The other trigonometric functions can be found along the unit circle as tan θ = y B , cot θ = x C , sec θ = x E , and csc θ = y D . {\displaystyle \tan \theta =y_{\mathrm {B} },\quad \cot \theta =x_{\mathrm {C} },\quad \sec \theta =x_{\mathrm {E} },\quad \csc \theta =y_{\mathrm {D} }.}
By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is
Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A , B , C , D , and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities sin θ = sin ( θ + 2 k π ) and cos θ = cos ( θ + 2 k π ) {\displaystyle \sin \theta =\sin(\theta +2k\pi )\quad {\text{and}}\quad \cos \theta =\cos(\theta +2k\pi )}
hold for any angle θ and any integer k . The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . That is, the equalities tan θ = tan ( θ + k π ) and cot θ = cot ( θ + k π ) {\displaystyle \tan \theta =\tan(\theta +k\pi )\quad {\text{and}}\quad \cot \theta =\cot(\theta +k\pi )}
hold for any angle θ and any integer k .
The algebraic expressions for the most important angles are as follows:
Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. [ 13 ]
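A quick check of this mnemonic (standard-library Python):

```python
import math

# sin(0, 30, 45, 60, 90 degrees) = sqrt(0)/2, sqrt(1)/2, ..., sqrt(4)/2.
for n, deg in enumerate([0, 30, 45, 60, 90]):
    assert abs(math.sin(math.radians(deg)) - math.sqrt(n) / 2) < 1e-12
print("mnemonic verified")
```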
Such simple expressions generally do not exist for other angles which are rational multiples of a right angle.
The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees.
G. H. Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. [ 14 ] Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry.
Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include definitions by power series, as solutions of differential equations, by integration, by functional equations, and via the complex exponential function, each developed below.
Sine and cosine can be defined as the unique solution to the initial value problem [ 17 ] d d x sin x = cos x , d d x cos x = − sin x , sin 0 = 0 , cos 0 = 1. {\displaystyle {\frac {d}{dx}}\sin x=\cos x,\quad {\frac {d}{dx}}\cos x=-\sin x,\quad \sin 0=0,\quad \cos 0=1.}
Differentiating again, d 2 d x 2 sin x = d d x cos x = − sin x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos x = − d d x sin x = − cos x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation y ″ + y = 0. {\displaystyle y''+y=0.}
Sine is the unique solution with y (0) = 0 and y ′(0) = 1 ; cosine is the unique solution with y (0) = 1 and y ′(0) = 0 .
One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry.
Applying the quotient rule to the tangent tan x = sin x / cos x {\displaystyle \tan x=\sin x/\cos x} gives d d x tan x = cos 2 x + sin 2 x cos 2 x = 1 + tan 2 x , {\displaystyle {\frac {d}{dx}}\tan x={\frac {\cos ^{2}x+\sin ^{2}x}{\cos ^{2}x}}=1+\tan ^{2}x,}
so the tangent function satisfies the ordinary differential equation y ′ = 1 + y 2 . {\displaystyle y'=1+y^{2}.}
It is the unique solution with y (0) = 0 .
The basic trigonometric functions can be defined by the following power series expansions. [ 18 ] These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: sin x = x − x 3 3 ! + x 5 5 ! − ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n + 1 ( 2 n + 1 ) ! , cos x = 1 − x 2 2 ! + x 4 4 ! − ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n ( 2 n ) ! . {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)!}},\qquad \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n)!}}.}
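A direct evaluation of the truncated sine series, as a sketch (the 15-term cutoff is an arbitrary choice):

```python
import math

def sin_series(x, terms=15):
    # Sum of the Maclaurin series x - x^3/3! + x^5/5! - ... up to `terms` terms.
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # Next term: multiply by -x^2 / ((2k+2)(2k+3)).
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(sin_series(1.0), math.sin(1.0))  # agree to double precision
```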
The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane .
Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation.
Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions , that is functions that are holomorphic in the whole complex plane, except some isolated points called poles . Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer.
Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence . Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. [ 19 ]
More precisely, defining
one has the following series expansions: [ 20 ]
The following continued fractions are valid in the whole complex plane:
The last one was used in the historically first proof that π is irrational . [ 21 ]
There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: [ 22 ] π cot ( π x ) = lim N → ∞ ∑ n = − N N 1 x + n . {\displaystyle \pi \cot(\pi x)=\lim _{N\to \infty }\sum _{n=-N}^{N}{\frac {1}{x+n}}.}
This identity can be proved with the Herglotz trick. [ 23 ] Combining the (− n )th with the n th term leads to an absolutely convergent series: π cot ( π x ) = 1 x + ∑ n = 1 ∞ 2 x x 2 − n 2 . {\displaystyle \pi \cot(\pi x)={\frac {1}{x}}+\sum _{n=1}^{\infty }{\frac {2x}{x^{2}-n^{2}}}.}
Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions:
The following infinite product for the sine is due to Leonhard Euler , and is of great importance in complex analysis: [ 24 ] sin z = z ∏ n = 1 ∞ ( 1 − z 2 n 2 π 2 ) . {\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right).}
This may be obtained from the partial fraction decomposition of cot z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin z {\displaystyle \sin z} . [ 25 ] From this, it can be deduced also that cos z = ∏ n = 1 ∞ ( 1 − z 2 ( n − 1 / 2 ) 2 π 2 ) . {\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{(n-1/2)^{2}\pi ^{2}}}\right).}
Euler's formula relates sine and cosine to the exponential function : e i x = cos x + i sin x . {\displaystyle e^{ix}=\cos x+i\sin x.}
This formula is commonly considered for real values of x , but it remains true for all complex values.
Proof : Let f 1 ( x ) = cos x + i sin x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2 . The quotient rule implies thus that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1 , as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula.
One has e i x = cos x + i sin x and e − i x = cos x − i sin x . {\displaystyle e^{ix}=\cos x+i\sin x\quad {\text{and}}\quad e^{-ix}=\cos x-i\sin x.}
Solving this linear system in sine and cosine, one can express them in terms of the exponential function: sin x = e i x − e − i x 2 i , cos x = e i x + e − i x 2 . {\displaystyle \sin x={\frac {e^{ix}-e^{-ix}}{2i}},\qquad \cos x={\frac {e^{ix}+e^{-ix}}{2}}.}
When x is real, this may be rewritten as cos x = Re ( e i x ) , sin x = Im ( e i x ) . {\displaystyle \cos x=\operatorname {Re} \left(e^{ix}\right),\qquad \sin x=\operatorname {Im} \left(e^{ix}\right).}
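A numerical spot-check of these relations at a complex argument (standard-library Python; the test value is arbitrary):

```python
import cmath

z = 0.7 + 0.3j  # arbitrary complex test point
# Euler's formula e^{iz} = cos z + i sin z holds for complex z as well.
print(cmath.exp(1j * z))
print(cmath.cos(z) + 1j * cmath.sin(z))
# sin z recovered from the exponential expressions above.
print((cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j)
print(cmath.sin(z))
```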
Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result.
Euler's formula can also be used to define the basic trigonometric function directly, as follows, using the language of topological groups . [ 26 ] The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates.
For a nonzero real number a {\displaystyle a} (the base ), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees.
Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } .
Another way to define the trigonometric functions in analysis is using integration. [ 14 ] [ 27 ] For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} , which defines the inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 , {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}},} a definition that goes back to Karl Weierstrass . [ 28 ]
On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan t {\displaystyle \theta =\arctan t} . Thus we define the trigonometric functions by tan θ = t , cos θ = ( 1 + t 2 ) − 1 / 2 , sin θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan t {\displaystyle \theta =\arctan t} and the positive square root is taken.
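A sketch of this construction by numerical quadrature (midpoint rule; the grid size and test value are arbitrary choices):

```python
import math

def arctan_quad(t, n=200000):
    # theta(t) = integral of 1/(1+tau^2) from 0 to t, by the midpoint rule.
    h = t / n
    return sum(h / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n))

t = 1.2345
theta = arctan_quad(t)
print(theta, math.atan(t))                         # quadrature vs. library
print(t / math.sqrt(1 + t * t), math.sin(theta))   # sin theta = t(1+t^2)^(-1/2)
print(1 / math.sqrt(1 + t * t), math.cos(theta))   # cos theta = (1+t^2)^(-1/2)
```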
This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos θ {\displaystyle \cos \theta } and sin θ {\displaystyle \sin \theta } are extended continuously so that cos ( π / 2 ) = 0 , sin ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ( θ + π ) = − cos ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ( θ + π ) = − sin ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers.
To prove the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. First, arctan s + arctan t = arctan s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan s + arctan t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan s + arctan t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan t + π 2 = arctan ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} .
One can also define the trigonometric functions using various functional equations .
For example, [ 29 ] the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula
and the added condition
The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: sin z = sin x cosh y + i cos x sinh y , cos z = cos x cosh y − i sin x sinh y . {\displaystyle \sin z=\sin x\cosh y+i\cos x\sinh y,\qquad \cos z=\cos x\cosh y-i\sin x\sinh y.}
By taking advantage of domain coloring , it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two.
[Domain-coloring graphs are shown for sin z {\displaystyle \sin z\,} , cos z {\displaystyle \cos z\,} , tan z {\displaystyle \tan z\,} , cot z {\displaystyle \cot z\,} , sec z {\displaystyle \sec z\,} , and csc z {\displaystyle \csc z\,} .]
The sine and cosine functions are periodic , with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ( z + 2 π ) = sin ( z ) , cos ( z + 2 π ) = cos ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period.
The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ( z + π ) = − sin ( z ) , cos ( z + π ) = − cos ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ( z + π ) = tan ( z ) , cot ( z + π ) = cot ( z ) . {\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ( x + π / 2 ) = cos ( x ) , cos ( x + π / 2 ) = − sin ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles ).
The function sin ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros.
The tangent function tan ( z ) = sin ( z ) / cos ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → ( π / 2 ) − tan ( x ) = + ∞ , lim x → ( π / 2 ) + tan ( x ) = − ∞ . {\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .}
The cotangent function cot ( z ) = cos ( z ) / sin ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ( x ) = − ∞ , lim x → 0 + cot ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .}
Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities . These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π /2] , see Proofs of trigonometric identities ). For non-geometrical proofs using only tools of calculus , one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function.
The cosine and the secant are even functions ; the other trigonometric functions are odd functions . That is: sin ( − x ) = − sin x , cos ( − x ) = cos x , tan ( − x ) = − tan x , cot ( − x ) = − cot x , csc ( − x ) = − csc x , sec ( − x ) = sec x . {\displaystyle \sin(-x)=-\sin x,\quad \cos(-x)=\cos x,\quad \tan(-x)=-\tan x,\quad \cot(-x)=-\cot x,\quad \csc(-x)=-\csc x,\quad \sec(-x)=\sec x.}
All trigonometric functions are periodic functions of period 2 π . This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k , one has sin ( x + 2 k π ) = sin x , cos ( x + 2 k π ) = cos x , sec ( x + 2 k π ) = sec x , csc ( x + 2 k π ) = csc x , tan ( x + k π ) = tan x , cot ( x + k π ) = cot x . {\displaystyle \sin(x+2k\pi )=\sin x,\quad \cos(x+2k\pi )=\cos x,\quad \sec(x+2k\pi )=\sec x,\quad \csc(x+2k\pi )=\csc x,\quad \tan(x+k\pi )=\tan x,\quad \cot(x+k\pi )=\cot x.}
See Periodicity and asymptotes .
The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 x + cos 2 x = 1. {\displaystyle \sin ^{2}x+\cos ^{2}x=1.}
Dividing through by either cos 2 x {\displaystyle \cos ^{2}x} or sin 2 x {\displaystyle \sin ^{2}x} gives tan 2 x + 1 = sec 2 x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} and 1 + cot 2 x = csc 2 x . {\displaystyle 1+\cot ^{2}x=\csc ^{2}x.}
The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities ). One can also produce them algebraically using Euler's formula .
When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae .
These identities can be used to derive the product-to-sum identities .
By setting t = tan 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : sin θ = 2 t 1 + t 2 , cos θ = 1 − t 2 1 + t 2 , tan θ = 2 t 1 − t 2 . {\displaystyle \sin \theta ={\frac {2t}{1+t^{2}}},\qquad \cos \theta ={\frac {1-t^{2}}{1+t^{2}}},\qquad \tan \theta ={\frac {2t}{1-t^{2}}}.}
Together with d θ = 2 1 + t 2 d t , {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,}
this is the tangent half-angle substitution , which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions.
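A numerical check of the substitution at an arbitrary test angle (standard-library Python):

```python
import math

theta = 0.9                      # arbitrary test angle, radians
t = math.tan(theta / 2)
print(math.sin(theta), 2 * t / (1 + t * t))        # sin via t
print(math.cos(theta), (1 - t * t) / (1 + t * t))  # cos via t
print(math.tan(theta), 2 * t / (1 - t * t))        # tan via t
```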
The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule . The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration .
Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc x {\displaystyle \csc x} can also be written as − arsinh ( cot x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ( tan x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine .
Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule:
The trigonometric functions are periodic, and hence not injective , so strictly speaking, they do not have an inverse function . However, on each interval on which a trigonometric function is monotonic , one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions . To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values , is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function.
The notations sin −1 , cos −1 , etc. are often used for arcsin and arccos , etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with " arcsecond ".
Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms .
In this section A , B , C denote the three (interior) angles of a triangle, and a , b , c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve.
The law of sines states that for an arbitrary triangle with sides a , b , and c and angles opposite those sides A , B and C : sin A a = sin B b = sin C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle,
or, equivalently, a sin A = b sin B = c sin C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius .
It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation , a technique to determine unknown distances by measuring two angles and an accessible enclosed distance.
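A triangulation-style sketch follows: two angles and the measured side are known, and the law of sines yields the remaining sides (the numerical inputs are made up for illustration).

```python
import math

A, B = math.radians(50.0), math.radians(60.0)  # two measured angles
C = math.pi - A - B                            # angles of a triangle sum to pi
c = 10.0                                       # the accessible measured side
two_R = c / math.sin(C)                        # common ratio a/sin A = 2R
a, b = two_R * math.sin(A), two_R * math.sin(B)
print(a, b, two_R / 2)                         # unknown sides and circumradius
```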
The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem : c 2 = a 2 + b 2 − 2 a b cos C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos C = a 2 + b 2 − c 2 2 a b . {\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.}
In this formula the angle at C is opposite to the side c . This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem .
The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known.
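Both uses in one sketch: computing the third side from two sides and the included angle, then recovering that angle from the three sides (made-up inputs).

```python
import math

a, b = 3.0, 5.0
C = math.radians(40.0)                       # included angle
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))
C_back = math.acos((a * a + b * b - c * c) / (2 * a * b))
print(c, math.degrees(C_back))               # second value recovers 40.0
```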
The law of tangents says that: a − b a + b = tan ( 1 2 ( A − B ) ) tan ( 1 2 ( A + B ) ) . {\displaystyle {\frac {a-b}{a+b}}={\frac {\tan \left({\tfrac {1}{2}}(A-B)\right)}{\tan \left({\tfrac {1}{2}}(A+B)\right)}}.}
If s is the triangle's semiperimeter, ( a + b + c )/2, and r is the radius of the triangle's incircle , then rs is the triangle's area. Therefore Heron's formula implies that: r = ( s − a ) ( s − b ) ( s − c ) s . {\displaystyle r={\sqrt {\frac {(s-a)(s-b)(s-c)}{s}}}.}
The law of cotangents says that: [ 30 ] cot ( A 2 ) = s − a r . {\displaystyle \cot \left({\frac {A}{2}}\right)={\frac {s-a}{r}}.}
It follows that cot ( A / 2 ) s − a = cot ( B / 2 ) s − b = cot ( C / 2 ) s − c = 1 r . {\displaystyle {\frac {\cot(A/2)}{s-a}}={\frac {\cot(B/2)}{s-b}}={\frac {\cot(C/2)}{s-c}}={\frac {1}{r}}.}
The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion , which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion .
Trigonometric functions also prove to be useful in the study of general periodic functions . The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves . [ 31 ]
Under rather general conditions, a periodic function f ( x ) can be expressed as a sum of sine waves or cosine waves in a Fourier series . [ 32 ] Denoting the sine or cosine basis functions by φ k , the expansion of the periodic function f ( t ) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).}
For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.}
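Partial sums of this series converge to the square wave; a short sketch evaluating them at a fixed point (the evaluation point and term counts are arbitrary):

```python
import math

def square_partial(t, n_terms):
    # (4/pi) * sum over k of sin((2k-1)t)/(2k-1), for k = 1..n_terms.
    return (4.0 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, n_terms + 1)
    )

for n in (1, 5, 50, 500):
    print(n, square_partial(1.0, n))  # tends to 1, the wave's value on (0, pi)
```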
In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave are shown underneath.
While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy ( Aryabhatiya , Surya Siddhanta ), via translation from Sanskrit to Arabic and then from Arabic to Latin. [ 33 ] (See Aryabhata's sine table .)
All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines , used in solving triangles . [ 34 ] Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. [ 35 ] [ 36 ] Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 36 ] The trigonometric functions were later studied by mathematicians including Omar Khayyám , Bhāskara II , Nasir al-Din al-Tusi , Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus , and Rheticus' student Valentinus Otho .
Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series . [ 37 ] (See Madhava series and Madhava's sine table .)
The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. [ 38 ]
The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). [ 39 ]
The 17th century French mathematician Albert Girard made the first published use of the abbreviations sin , cos , and tan in his book Trigonométrie . [ 40 ]
In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x . [ 41 ] Though defined as ratios of sides of a right triangle , and thus appearing to be rational functions , Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series . He presented " Euler's formula ", as well as near-modern abbreviations ( sin. , cos. , tang. , cot. , sec. , and cosec. ). [ 33 ]
A few functions were common historically, but are now seldom used, such as the chord , versine (which appeared in the earliest tables [ 33 ] ), haversine , coversine , [ 42 ] half-tangent (tangent of half an angle), and exsecant . List of trigonometric identities shows more relations between these functions.
Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. [ 43 ] [ 44 ] [ 45 ] [ 46 ]
The word sine derives [ 47 ] from Latin sinus , meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga ", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib , meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin . [ 48 ] The choice was based on a misreading of the Arabic written form j-y-b ( جيب ), which itself originated as a transliteration from Sanskrit jīvā , which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". [ 49 ]
The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans —"cutting"—since the line cuts the circle. [ 50 ]
The prefix " co- " (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 51 ] [ 52 ] | https://en.wikipedia.org/wiki/Prosinus |