In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. == Elementary examples == === Trivial example === The system of one equation in one unknown 2 x = 4 {\displaystyle 2x=4} has the solution x = 2. 
{\displaystyle x=2.} However, most interesting linear systems have at least two equations. === Simple nontrivial example === The simplest kind of nontrivial linear system involves two equations and two variables: 2 x + 3 y = 6 4 x + 9 y = 15 . {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} : x = 3 − 3 2 y . {\displaystyle x=3-{\frac {3}{2}}y.} Now substitute this expression for x into the bottom equation: 4 ( 3 − 3 2 y ) + 9 y = 15. {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) == General form == A general system of m linear equations with n unknowns and coefficients can be written as { a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m , {\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. 
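The two-variable example above can be checked numerically. Assuming NumPy is available, a direct solver reproduces the result obtained by substitution:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   2x + 3y = 6
#   4x + 9y = 15
A = np.array([[2.0, 3.0],
              [4.0, 9.0]])
b = np.array([6.0, 15.0])

# np.linalg.solve carries out the elimination steps numerically
x, y = np.linalg.solve(A, b)
print(x, y)  # x = 1.5, y = 1.0, matching the substitution method
```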
=== Vector equation === One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. x 1 [ a 11 a 21 ⋮ a m 1 ] + x 2 [ a 12 a 22 ⋮ a m 2 ] + ⋯ + x n [ a 1 n a 2 n ⋮ a m n ] = [ b 1 b 2 ⋮ b m ] {\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side (RHS), and otherwise not guaranteed. === Matrix equation === The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] . 
{\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix. == Solution set == A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: The system has infinitely many solutions. The system has a unique solution. The system has no solution. === Geometric interpretation === For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact, the solution set is infinite and consists of the whole line passing through these points. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n. 
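The matrix form and the rank of its coefficient matrix can be explored numerically. A sketch using NumPy, applied to the introductory three-equation system:

```python
import numpy as np

# Coefficient matrix and right-hand side of the introductory system
#   3x + 2y -  z =  1
#   2x - 2y + 4z = -2
#   -x + y/2 - z =  0
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

# Full rank (3) means the columns are linearly independent,
# so every right-hand side b lies in their span
r = np.linalg.matrix_rank(A)
print(r)

# The unique solution is then found by solving Ax = b
x = np.linalg.solve(A, b)
print(x)  # the ordered triple (1, -2, -2) from the introduction
```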
=== General behavior === In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system. In general, a system with the same number of equations and unknowns has a single unique solution. In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations. The following pictures illustrate this trichotomy in the case of two variables: The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns. 
When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations 3 x + 2 y = 6 and 6 x + 4 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12} are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations x − 2 y = − 1 3 x + 5 y = 8 4 x + 3 y = 7 {\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}} are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. === Consistency === A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, that may always be rewritten as the statement 0 = 1. For example, the equations 3 x + 2 y = 6 and 3 x + 2 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12} are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. 
For example, the equations x + y = 1 2 x + y = 1 3 x + 2 y = 3 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}} are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. === Equivalence === Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. 
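The Rouché–Capelli rank test described above can be applied numerically: a system is inconsistent exactly when augmenting the coefficient matrix with the constant terms increases its rank. A sketch with NumPy, using the three pairwise-consistent but jointly inconsistent equations from this section:

```python
import numpy as np

def classify(A, b):
    """Rouché–Capelli test: compare the ranks of the coefficient
    matrix and the augmented matrix [A | b]."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_aug > r:
        return "inconsistent"
    return "unique" if r == A.shape[1] else "infinitely many"

# x + y = 1,  2x + y = 1,  3x + 2y = 3
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 2.0]])
b = np.array([1.0, 1.0, 3.0])
print(classify(A, b))          # inconsistent: rank of [A|b] exceeds rank of A

# Dropping any one equation leaves a consistent pair of lines
print(classify(A[:2], b[:2]))  # unique
```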
It follows that two linear systems are equivalent if and only if they have the same solution set. == Solving a linear system == There are several algorithms for solving a system of linear equations. === Describing the solution === When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}} The solution set to this system can be described by the following equations: x = − 7 z − 1 and y = 3 z + 2 . {\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}} Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. A solution set with two or more free variables describes a plane, or a higher-dimensional set. 
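The parametrization above can be verified directly: every choice of the free variable z yields a point satisfying both equations. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Parametrization of the solution set of
#   x + 3y - 2z = 5
#   3x + 5y + 6z = 7
# with z as the free variable: x = -7z - 1, y = 3z + 2
for z in np.linspace(-2.0, 2.0, 5):
    x, y = -7 * z - 1, 3 * z + 2
    # both original equations hold for every sampled z
    assert np.isclose(x + 3 * y - 2 * z, 5)
    assert np.isclose(3 * x + 5 * y + 6 * z, 7)
print("every sampled point on the line satisfies both equations")
```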
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: y = − 3 7 x + 11 7 and z = − 1 7 x − 1 7 . {\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}} Here x is the free variable, and y and z are dependent. === Elimination of variables === The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: In the first equation, solve for one of the variables in terms of the others. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown. Repeat steps 1 and 2 until the system is reduced to a single linear equation. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: { x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}} Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equations yields { y = 3 z + 2 y = 7 2 z + 1 {\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}} Since the left-hand sides of both of these equations equal y, we can equate their right-hand sides, obtaining: 3 z + 2 = 7 2 z + 1 ⇒ z = 2 {\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}} Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} . 
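The back-substitution steps above can be written out and checked against a direct solver. A sketch, assuming NumPy:

```python
import numpy as np

# The elimination reduced the system to z = 2; back-substitute upward.
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
b = np.array([5.0, 7.0, 8.0])

z = 2.0                  # from 3z + 2 = (7/2)z + 1
y = 3 * z + 2            # back-substitute into y = 3z + 2      -> 8
x = 5 + 2 * z - 3 * y    # back-substitute into x = 5 + 2z - 3y -> -15
print(x, y, z)

# A direct solver agrees with the hand elimination
assert np.allclose(np.linalg.solve(A, b), [x, y, z])
```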
=== Row reduction === In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] . {\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}} This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above: [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 0 1 2 ] ∼ [ 1 3 − 2 5 0 1 0 8 0 0 1 2 ] ∼ [ 1 3 0 9 0 1 0 8 0 0 1 2 ] ∼ [ 1 0 0 − 15 0 1 0 8 0 0 1 2 ] . {\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}} The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. 
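The three elementary row operations can be combined into a short reduction routine. A minimal sketch (with partial pivoting, not a production-quality implementation), assuming NumPy:

```python
import numpy as np

def rref(M):
    """Reduce an augmented matrix to reduced row echelon form
    using the three elementary row operations."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols - 1):
        # pick the largest available pivot in this column (partial pivoting)
        p = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if np.isclose(M[p, col], 0.0):
            continue
        M[[pivot_row, p]] = M[[p, pivot_row]]   # Type 1: swap two rows
        M[pivot_row] /= M[pivot_row, col]       # Type 2: scale to a leading 1
        for r in range(rows):                   # Type 3: add multiples of the
            if r != pivot_row:                  # pivot row to clear the column
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

aug = np.array([[1, 3, -2, 5],
                [3, 5,  6, 7],
                [2, 4,  3, 8]])
print(rref(aug))   # last column holds the solution (-15, 8, 2)
```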
A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. === Cramer's rule === Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}} is given by x = | 5 3 − 2 7 5 6 8 4 3 | | 1 3 − 2 3 5 6 2 4 3 | , y = | 1 5 − 2 3 7 6 2 8 3 | | 1 3 − 2 3 5 6 2 4 3 | , z = | 1 3 5 3 5 7 2 4 8 | | 1 3 − 2 3 5 6 2 4 3 | . {\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.} For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. 
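Cramer's rule translates directly into code: replace one column of the coefficient matrix with the constant terms and take the ratio of determinants. A sketch, assuming NumPy:

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: each unknown is a quotient of two determinants.
    Fine for tiny systems; numerically poor and slow for large ones."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the constant terms
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
b = np.array([5.0, 7.0, 8.0])
print(cramer(A, b))           # the same solution (-15, 8, 2) as row reduction
```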
=== Matrix solution === If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by x = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} } where A − 1 {\displaystyle A^{-1}} is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A + {\displaystyle A^{+}} , as follows: x = A + b + ( I − A + A ) w {\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} } where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to x = A − 1 b + ( I − A − 1 A ) w = A − 1 b + ( I − I ) w = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} } as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. 
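The pseudoinverse formula can be checked numerically: verify the consistency condition AA⁺b = b, then generate solutions for different free-parameter vectors w. A sketch, assuming NumPy (the matrix here is an illustrative underdetermined example):

```python
import numpy as np

# General solution via the Moore-Penrose pseudoinverse:
#   x = A+ b + (I - A+ A) w   for any free-parameter vector w
A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0]])   # 2 equations, 3 unknowns
b = np.array([5.0, 7.0])

A_pinv = np.linalg.pinv(A)

# Consistency condition: A A+ b must reproduce b
assert np.allclose(A @ A_pinv @ b, b)

# Different choices of w give different solutions of Ax = b
n = A.shape[1]
for w in (np.zeros(n), np.array([1.0, -2.0, 0.5])):
    x = A_pinv @ b + (np.eye(n) - A_pinv @ A) @ w
    assert np.allclose(A @ x, b)
print("both parameter choices satisfy Ax = b")
```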
In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation. === Other methods === While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. 
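One classical iterative scheme splits A into its diagonal and off-diagonal parts and repeatedly refines a guess. A minimal sketch (the matrix here is an illustrative diagonally dominant example, for which the iteration converges), assuming NumPy:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: split A into its diagonal D and remainder R,
    then iterate x_{k+1} = D^{-1}(b - R x_k) until the guesses stop moving."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant, so the iteration converges
A = np.array([[10.0, 1.0, 2.0],
              [ 1.0, 8.0, 1.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([13.0, 10.0, 12.0])
x = jacobi(A, b)
print(x)
assert np.allclose(A @ x, b)
```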
One example of an iterative method is the Jacobi method, where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})} When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. There is also a quantum algorithm for linear systems of equations. == Homogeneous systems == A system of linear equations is homogeneous if all of the constant terms are zero: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0. {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}} A homogeneous system is equivalent to a matrix equation of the form A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. === Homogeneous solution set === Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. 
This solution set has the following additional properties: If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. === Relation to nonhomogeneous systems === There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: A x = b and A x = 0 . {\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .} Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as { p + v : v is any solution to A x = 0 } . {\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.} Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. 
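The translation picture can be demonstrated numerically: compute a basis of the null space of A, find one particular solution p, and check that every translate of the null space by p solves Ax = b. A sketch, assuming NumPy (the matrix is the illustrative underdetermined example used earlier):

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0]])
b = np.array([5.0, 7.0])

# Null-space basis via the SVD: the rows of Vt beyond the rank span null(A)
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:]                       # one direction: one free variable
assert np.allclose(A @ null_basis.T, 0.0, atol=1e-10)

# One particular solution p (least squares returns one for consistent systems)
p = np.linalg.lstsq(A, b, rcond=None)[0]

# p plus any multiple of the null-space direction still solves Ax = b
for t in (-3.0, 0.0, 2.5):
    assert np.allclose(A @ (p + t * null_basis[0]), b)
print("the solution set of Ax = b is the null space translated by p")
```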
== See also == Arrangement of hyperplanes Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations Coates graph – A mathematical graph for solution of linear equations LAPACK – Software library for numerical linear algebra Linear equation over a ring Linear least squares – Least squares approximation of linear functions to data Matrix decomposition – Representation of a matrix as a product Matrix splitting – Representation of a matrix as a sum NAG Numerical Library – Software library of numerical-analysis algorithms Rybicki Press algorithm – An algorithm for inverting a matrix Simultaneous equations – Set of equations to be solved together == References == == Bibliography == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993 Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3 Whitelaw, T. A. 
(1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5 == Further reading == Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0. Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7. Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001. Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3. Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International. Leon, Steven J. (2006). Linear Algebra With Applications (7th ed.). Pearson Prentice Hall. Strang, Gilbert (2005). Linear Algebra and Its Applications. Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679. == External links == Media related to System of linear equations at Wikimedia Commons
In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that this is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation. Historically, homographies (and projective spaces) have been introduced to study perspective and projections in Euclidean geometry, and the term homography, which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity. The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field (incidence geometry, see also synthetic geometry); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations". For the sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field. Equivalently Pappus's hexagon theorem and Desargues's theorem are supposed to be true. 
A large part of the results remain true, or may be generalized to, projective geometries for which these theorems do not hold. == Geometric motivation == Historically, the concept of homography was introduced to understand, explain and study visual perspective, and, specifically, the difference in appearance of two plane objects viewed from different points of view. In three-dimensional Euclidean space, a central projection from a point O (the center) onto a plane P that does not contain O is the mapping that sends a point A to the intersection (if it exists) of the line OA and the plane P. The projection is not defined if the point A belongs to the plane passing through O and parallel to P. The notion of projective space was originally introduced by extending the Euclidean space, that is, by adding points at infinity to it, in order to define the projection for every point except O. Given another plane Q, which does not contain O, the restriction to Q of the above projection is called a perspectivity. With these definitions, a perspectivity is only a partial function, but it becomes a bijection if extended to projective spaces. Therefore, this notion is normally defined for projective spaces. The notion is also easily generalized to projective spaces of any dimension, over any field, in the following way: Given two projective spaces P and Q of dimension n, a perspectivity is a bijection from P to Q that may be obtained by embedding P and Q in a projective space R of dimension n + 1 and restricting to P a central projection onto Q. If f is a perspectivity from P to Q, and g a perspectivity from Q to P, with a different center, then g ∘ f is a homography from P to itself, which is called a central collineation, when the dimension of P is at least two. (See § Central collineations below and Perspectivity § Perspective collineations.) Originally, a homography was defined as the composition of a finite number of perspectivities. 
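The central projection described above is easy to make concrete. The following sketch (the helper name is ours, not the article's; a plane is represented by a normal vector n and an offset d, so P = {x : n·x = d}) computes the image of a point A from a center O, returning None exactly when A lies in the plane through O parallel to P, i.e. when the affine projection is undefined and the image is projectively a point at infinity:

```python
import numpy as np

def central_projection(O, A, n, d):
    """Project point A from center O onto the plane {x : n.x = d}.

    Returns None when the line OA is parallel to the plane, i.e. when
    A belongs to the plane through O parallel to P.
    """
    O, A, n = map(np.asarray, (O, A, n))
    denom = n @ (A - O)           # direction of OA against the plane normal
    if np.isclose(denom, 0.0):
        return None
    t = (d - n @ O) / denom       # solve n.(O + t(A - O)) = d for t
    return O + t * (A - O)

# Project from the origin onto the plane z = 1 (the classic pinhole model).
p = central_projection([0, 0, 0], [2.0, 4.0, 2.0], [0, 0, 1], 1.0)
print(p)  # [1. 2. 1.]
```

Points in the plane through O parallel to P (here, points with z = 0) have no affine image, matching the text's description of where the projection is undefined.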
It is a part of the fundamental theorem of projective geometry (see below) that this definition coincides with the more algebraic definition sketched in the introduction and detailed below. == Definition and expression in homogeneous coordinates == A projective space P(V) of dimension n over a field K may be defined as the set of the lines through the origin in a K-vector space V of dimension n + 1. If a basis of V has been fixed, a point of V may be represented by a point (x0, ..., xn) of Kn+1. A point of P(V), being a line in V, may thus be represented by the coordinates of any nonzero point of this line, which are thus called homogeneous coordinates of the projective point. Given two projective spaces P(V) and P(W) of the same dimension, a homography is a mapping from P(V) to P(W), which is induced by an isomorphism of vector spaces f : V → W. Such an isomorphism induces a bijection from P(V) to P(W), because of the linearity of f. Two such isomorphisms, f and g, define the same homography if and only if there is a nonzero element a of K such that g = af. This may be written in terms of homogeneous coordinates in the following way: A homography φ may be defined by a nonsingular (n+1) × (n+1) matrix [ai,j], called the matrix of the homography. This matrix is defined up to the multiplication by a nonzero element of K. The homogeneous coordinates [x0 : ... : xn] of a point and the coordinates [y0 : ... : yn] of its image by φ are related by y 0 = a 0 , 0 x 0 + ⋯ + a 0 , n x n ⋮ y n = a n , 0 x 0 + ⋯ + a n , n x n . 
{\displaystyle {\begin{aligned}y_{0}&=a_{0,0}x_{0}+\dots +a_{0,n}x_{n}\\&\vdots \\y_{n}&=a_{n,0}x_{0}+\dots +a_{n,n}x_{n}.\end{aligned}}} When the projective spaces are defined by adding points at infinity to affine spaces (projective completion), the preceding formulas become, in affine coordinates, y 1 = a 1 , 0 + a 1 , 1 x 1 + ⋯ + a 1 , n x n a 0 , 0 + a 0 , 1 x 1 + ⋯ + a 0 , n x n ⋮ y n = a n , 0 + a n , 1 x 1 + ⋯ + a n , n x n a 0 , 0 + a 0 , 1 x 1 + ⋯ + a 0 , n x n {\displaystyle {\begin{aligned}y_{1}&={\frac {a_{1,0}+a_{1,1}x_{1}+\dots +a_{1,n}x_{n}}{a_{0,0}+a_{0,1}x_{1}+\dots +a_{0,n}x_{n}}}\\&\vdots \\y_{n}&={\frac {a_{n,0}+a_{n,1}x_{1}+\dots +a_{n,n}x_{n}}{a_{0,0}+a_{0,1}x_{1}+\dots +a_{0,n}x_{n}}}\end{aligned}}} which generalizes the expression of the homographic function of the next section. This defines only a partial function between affine spaces, which is defined only outside the hyperplane where the denominator is zero. == Homographies of a projective line == The projective line over a field K may be identified with the union of K and a point, called the "point at infinity" and denoted by ∞ (see Projective line). With this representation of the projective line, the homographies are the mappings z ↦ a z + b c z + d , where a d − b c ≠ 0 , {\displaystyle z\mapsto {\frac {az+b}{cz+d}},{\text{ where }}ad-bc\neq 0,} which are called homographic functions or linear fractional transformations. In the case of the complex projective line, which can be identified with the Riemann sphere, the homographies are called Möbius transformations. These correspond precisely with those bijections of the Riemann sphere that preserve orientation and are conformal. In the study of collineations, the case of projective lines is special due to the small dimension. When the line is viewed as a projective space in isolation, any permutation of the points of a projective line is a collineation, since every set of points is collinear. 
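In coordinates, applying a homography is just a matrix–vector product followed by rescaling, since homogeneous coordinates are defined only up to a nonzero factor. A minimal sketch for the projective line (the helper name, the normalization convention, and the choice of letting the matrix act on column vectors are choices of this example, not the article's notation):

```python
import numpy as np

def apply_homography(M, x):
    """Apply the homography with matrix M to homogeneous coordinates x.

    The result is only defined up to scale; here it is normalized so
    that its last nonzero entry is 1.
    """
    y = np.asarray(M) @ np.asarray(x, dtype=float)
    nz = np.nonzero(~np.isclose(y, 0))[0]
    return y / y[nz[-1]]

# The homography z -> (2z + 1)/(z + 3) of the projective line (ad - bc = 5),
# written on homogeneous coordinates [z : 1].
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(apply_homography(M, [1.0, 1.0]))  # [0.75 1.  ]  i.e. z = 1 maps to 3/4
print(apply_homography(M, [1.0, 0.0]))  # [2. 1.]      the point at infinity maps to z = 2
```

The second call shows how homogeneous coordinates handle the point at infinity [1 : 0] with no special case, which is exactly what the affine formulas above cannot do where the denominator vanishes.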
However, if the projective line is embedded in a higher-dimensional projective space, the geometric structure of that space can be used to impose a geometric structure on the line. Thus, in synthetic geometry, the homographies and the collineations of the projective line that are considered are those obtained by restrictions to the line of collineations and homographies of spaces of higher dimension. This means that the fundamental theorem of projective geometry (see below) remains valid in the one-dimensional setting. A homography of a projective line may also be properly defined by insisting that the mapping preserves cross-ratios. == Projective frame and coordinates == A projective frame or projective basis of a projective space of dimension n is an ordered set of n + 2 points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension n has at most n + 1 vertices. Projective spaces over a commutative field K are considered in this section, although most results may be generalized to projective spaces over a division ring. Let P(V) be a projective space of dimension n, where V is a K-vector space of dimension n + 1, and p : V ∖ {0} → P(V) be the canonical projection that maps a nonzero vector to the vector line that contains it. For every frame of P(V), there exists a basis e0, ..., en of V such that the frame is (p(e0), ..., p(en), p(e0 + ... + en)), and this basis is unique up to the multiplication of all its elements by the same nonzero element of K. Conversely, if e0, ..., en is a basis of V, then (p(e0), ..., p(en), p(e0 + ... + en)) is a frame of P(V). It follows that, given two frames, there is exactly one homography mapping the first one onto the second one. In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult to prove in synthetic geometry (where projective spaces are defined through axioms). 
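The unique homography mapping a frame onto another can be computed explicitly, following the basis construction just described: rescale the first n + 1 frame points so that they sum to the last one, and invert the resulting matrix. A sketch for the projective plane (the function name and the particular frame are illustrative, not from the article):

```python
import numpy as np

def homography_to_canonical(frame):
    """Matrix of the unique homography sending a projective frame of P^2
    (four points, no three collinear, given as homogeneous coordinate
    vectors) to the canonical frame e0, e1, e2, e0 + e1 + e2."""
    f = [np.asarray(p, dtype=float) for p in frame]
    B = np.column_stack(f[:3])
    lam = np.linalg.solve(B, f[3])   # f3 = lam0*f0 + lam1*f1 + lam2*f2
    return np.linalg.inv(B * lam)    # rescale the basis columns, then invert

frame = ([1, 0, 1], [0, 1, 1], [1, 1, 0], [2, 3, 4])
H = homography_to_canonical(frame)
# Each frame point lands on the corresponding canonical frame point
# (up to scale, so the cross product of the two vectors vanishes):
for p, target in zip(frame, np.vstack([np.eye(3), [1, 1, 1]])):
    print(np.allclose(np.cross(H @ np.asarray(p, dtype=float), target), 0))  # True
```

Composing such matrices (one inverted) gives the unique homography between any two frames, which is the uniqueness statement of the preceding paragraph in computational form.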
This result is sometimes called the first fundamental theorem of projective geometry. Every frame (p(e0), ..., p(en), p(e0 + ... + en)) allows one to define projective coordinates, also known as homogeneous coordinates: every point may be written as p(v); the projective coordinates of p(v) on this frame are the coordinates of v on the basis (e0, ..., en). It is not difficult to verify that changing the ei and v, without changing the frame nor p(v), results in multiplying the projective coordinates by the same nonzero element of K. The projective space Pn(K) = P(Kn+1) has a canonical frame consisting of the image by p of the canonical basis of Kn+1 (consisting of the elements having only one nonzero entry, which is equal to 1), and (1, 1, ..., 1). On this basis, the homogeneous coordinates of p(v) are simply the entries (coefficients) of the tuple v. Given another projective space P(V) of the same dimension, and a frame F of it, there is one and only one homography h mapping F onto the canonical frame of Pn(K). The projective coordinates of a point a on the frame F are the homogeneous coordinates of h(a) on the canonical frame of Pn(K). == Central collineations == In the above sections, homographies have been defined through linear algebra. In synthetic geometry, they are traditionally defined as the composition of one or several special homographies called central collineations. It is a part of the fundamental theorem of projective geometry that the two definitions are equivalent. In a projective space, P, of dimension n ≥ 2, a collineation of P is a bijection from P onto P that maps lines onto lines. 
A central collineation (traditionally these were called perspectivities, but this term may be confusing, having another meaning; see Perspectivity) is a bijection α from P to P, such that there exists a hyperplane H (called the axis of α), which is fixed pointwise by α (that is, α(X) = X for all points X in H) and a point O (called the center of α), which is fixed linewise by α (any line through O is mapped to itself by α, but not necessarily pointwise). There are two types of central collineations. Elations are the central collineations in which the center is incident with the axis, and homologies are those in which the center is not incident with the axis. A central collineation is uniquely defined by its center, its axis, and the image α(A) of any given point A that differs from the center O and does not belong to the axis. (The image α(Q) of any other point Q is the intersection of the line defined by O and Q and the line passing through α(A) and the intersection with the axis of the line defined by A and Q.) A central collineation is a homography defined by an (n+1) × (n+1) matrix that has an eigenspace of dimension n. It is a homology if the matrix has another eigenvalue and is therefore diagonalizable. It is an elation if all the eigenvalues are equal and the matrix is not diagonalizable. The geometric view of a central collineation is easiest to see in a projective plane. Given a central collineation α, consider a line ℓ that does not pass through the center O, and its image under α, ℓ′ = α(ℓ). Setting R = ℓ ∩ ℓ′, the axis of α is some line M through R. The image A′ of any point A of ℓ under α is the intersection of OA with ℓ′. The image B′ of a point B that does not belong to ℓ may be constructed in the following way: let S = AB ∩ M, then B′ = SA′ ∩ OB. The composition of two central collineations, while still a homography, is in general not a central collineation. In fact, every homography is the composition of a finite number of central collineations. 
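The eigenvalue criterion distinguishing homologies from elations is easy to check numerically. In the sketch below (the helper name is ours), both example matrices have an eigenspace of dimension n = 2 for the eigenvalue 1, but the homology has a second eigenvalue (carried by the center) and is diagonalizable, while the elation has 1 as its only eigenvalue with a defective eigenspace:

```python
import numpy as np

def geometric_multiplicity(M, eig, tol=1e-9):
    """Dimension of the eigenspace of M for the eigenvalue eig."""
    return M.shape[0] - np.linalg.matrix_rank(M - eig * np.eye(M.shape[0]), tol=tol)

# A homology of the projective plane: eigenvalue 1 on a 2-dimensional
# eigenspace (the axis), plus the eigenvalue 2 (the center); diagonalizable.
homology = np.diag([2.0, 1.0, 1.0])

# An elation: 1 is a triple eigenvalue but its eigenspace has dimension
# only 2, so the matrix is not diagonalizable.
elation = np.array([[1.0, 1.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])

print(geometric_multiplicity(homology, 1.0))  # 2
print(geometric_multiplicity(elation, 1.0))   # 2
```

In both cases the fixed hyperplane (the axis) is the projectivization of the 2-dimensional eigenspace; for the elation the center lies on that axis, matching the incidence description in the text.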
In synthetic geometry, the property that every homography is the composition of a finite number of central collineations, which is a part of the fundamental theorem of projective geometry, is taken as the definition of homographies. == Fundamental theorem of projective geometry == There are collineations besides the homographies. In particular, any field automorphism σ of a field F induces a collineation of every projective space over F by applying σ to all homogeneous coordinates (over a projective frame) of a point. These collineations are called automorphic collineations. The fundamental theorem of projective geometry consists of the following three theorems. Given two projective frames of a projective space P, there is exactly one homography of P that maps the first frame onto the second one. If the dimension of a projective space P is at least two, every collineation of P is the composition of an automorphic collineation and a homography. In particular, over the reals, every collineation of a projective space of dimension at least two is a homography, as the identity is the only automorphism of the field of real numbers. Every homography is the composition of a finite number of perspectivities. In particular, if the dimension of the implied projective space is at least two, every homography is the composition of a finite number of central collineations. If projective spaces are defined by means of axioms (synthetic geometry), the third part is simply a definition. On the other hand, if projective spaces are defined by means of linear algebra, the first part is an easy corollary of the definitions. Therefore, the proof of the first part in synthetic geometry, and the proof of the third part in terms of linear algebra, are both fundamental steps of the proof of the equivalence of the two ways of defining projective spaces. == Homography groups == As every homography has an inverse mapping and the composition of two homographies is another homography, the homographies of a given projective space form a group. For example, the Möbius group is the homography group of any complex projective line. 
As all the projective spaces of the same dimension over the same field are isomorphic, the same is true for their homography groups. They are therefore considered as a single group acting on several spaces, and only the dimension and the field appear in the notation, not the specific projective space. Homography groups, also called projective linear groups, are denoted PGL(n + 1, F) when acting on a projective space of dimension n over a field F. The above definition of homographies shows that PGL(n + 1, F) may be identified with the quotient group GL(n + 1, F) / F×I, where GL(n + 1, F) is the general linear group of the invertible matrices, and F×I is the group of the products by a nonzero element of F of the identity matrix of size (n + 1) × (n + 1). When F is a Galois field GF(q), the homography group is written PGL(n, q). For example, PGL(2, 7) acts on the eight points of the projective line over the finite field GF(7), while PGL(2, 4), which is isomorphic to the alternating group A5, is the homography group of the projective line with five points. The homography group PGL(n + 1, F) is a subgroup of the collineation group PΓL(n + 1, F) of the collineations of a projective space of dimension n. When the points and lines of the projective space are viewed as a block design, whose blocks are the sets of points contained in a line, it is common to call the collineation group the automorphism group of the design. == Cross-ratio == The cross-ratio of four collinear points is an invariant under homographies that is fundamental for the study of the homographies of the lines. Three distinct points a, b and c on a projective line over a field F form a projective frame of this line. There is therefore a unique homography h of this line onto F ∪ {∞} that maps a to ∞, b to 0, and c to 1. Given a fourth point d on the same line, the cross-ratio of the four points a, b, c and d, denoted [a, b; c, d], is the element h(d) of F ∪ {∞}. 
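Solving for the homography h that sends a to ∞, b to 0 and c to 1 gives h(z) = ((z − b)(c − a))/((z − a)(c − b)), so the definition yields an explicit formula for the cross-ratio; exact rational arithmetic makes the invariance under homographies easy to check (the function name and sample points are ours):

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """[a, b; c, d]: the image of d under the unique homography taking
    a, b, c to infinity, 0, 1 respectively."""
    return Fraction(d - b, d - a) * Fraction(c - a, c - b)

print(cross_ratio(0, 1, 2, 1))  # 0    (d = b)
print(cross_ratio(0, 1, 2, 2))  # 1    (d = c)
print(cross_ratio(0, 1, 2, 3))  # 4/3

# Invariance under the homography z -> (2z + 1)/(z + 3)  (ad - bc = 5 != 0):
f = lambda z: Fraction(2 * z + 1, z + 3)
print(cross_ratio(f(0), f(1), f(2), f(3)) == cross_ratio(0, 1, 2, 3))  # True
```

The three special values 0, 1 and ∞ (the latter when d = a) are exactly the images of b, c and a prescribed by the defining frame.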
In other words, if d has homogeneous coordinates [k : 1] over the projective frame (a, b, c), then [a, b; c, d] = k. == Over a ring == Suppose A is a ring and U is its group of units. Homographies act on a projective line over A, written P(A), consisting of points U[a, b] with projective coordinates. The homographies on P(A) are described by matrix mappings U [ z , 1 ] ( a c b d ) = U [ z a + b , z c + d ] . {\displaystyle U[z,1]{\begin{pmatrix}a&c\\b&d\end{pmatrix}}=U[za+b,\ zc+d].} When A is a commutative ring, the homography may be written z ↦ z a + b z c + d , {\displaystyle z\mapsto {\frac {za+b}{zc+d}}\ ,} but otherwise the linear fractional transformation is seen as an equivalence: U [ z a + b , z c + d ] ∼ U [ ( z c + d ) − 1 ( z a + b ) , 1 ] . {\displaystyle U[za+b,\ zc+d]\thicksim U[(zc+d)^{-1}(za+b),\ 1].} The homography group of the ring of integers Z is the modular group PSL(2, Z). Ring homographies have been used in quaternion analysis, and with dual quaternions to facilitate screw theory. The conformal group of spacetime can be represented with homographies where A is the composition algebra of biquaternions. == Periodic homographies == The homography h = ( 1 1 0 1 ) {\displaystyle h={\begin{pmatrix}1&1\\0&1\end{pmatrix}}} is periodic when the ring is Z/nZ (the integers modulo n) since then h n = ( 1 n 0 1 ) = ( 1 0 0 1 ) . {\displaystyle h^{n}={\begin{pmatrix}1&n\\0&1\end{pmatrix}}={\begin{pmatrix}1&0\\0&1\end{pmatrix}}.} Arthur Cayley was interested in periodicity when he calculated iterates in 1879. In his review of a brute-force approach to periodicity of homographies, H. S. M. Coxeter gave this analysis: A real homography is involutory (of period 2) if and only if a + d = 0. If it is periodic with period n > 2, then it is elliptic, and no loss of generality occurs by assuming that ad − bc = 1. Since the characteristic roots are exp(±hπi/m), where (h, m) = 1, the trace is a + d = 2 cos(hπ/m). 
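Both facts, the period-n homography over Z/nZ and Coxeter's involution criterion a + d = 0, can be verified with a few lines of integer arithmetic (the helper name is ours):

```python
import numpy as np

def matrix_power_mod(M, k, n):
    """M**k with entries reduced modulo n."""
    R = np.eye(len(M), dtype=int)
    for _ in range(k):
        R = (R @ M) % n
    return R

h = np.array([[1, 1],
              [0, 1]])
for n in (2, 3, 7):
    # h has period n over Z/nZ: h**n is the identity matrix mod n.
    print(n, np.array_equal(matrix_power_mod(h, n, n), np.eye(2, dtype=int)))

# Involution criterion a + d = 0 over the reals: the square of the matrix
# is a scalar matrix, i.e. the identity homography (here ad - bc = -3).
m = np.array([[1, 2],
              [1, -1]])
print(m @ m)  # 3 times the identity matrix
```

Note that m² is not the identity matrix, only a scalar multiple of it; as a homography, which is defined only up to a scalar factor, it is the identity, which is exactly why a + d = 0 characterizes involutions.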
== See also == W-curve
Wikipedia/Projective_transformation
In mathematics, the direct method in the calculus of variations is a general method for constructing a proof of the existence of a minimizer for a given functional, introduced by Stanisław Zaremba and David Hilbert around 1900. The method relies on functional analysis and topology. As well as being used to prove the existence of a solution, direct methods may be used to compute the solution to desired accuracy. == The method == The calculus of variations deals with functionals J : V → R ¯ {\displaystyle J:V\to {\bar {\mathbb {R} }}} , where V {\displaystyle V} is some function space and R ¯ = R ∪ { ∞ } {\displaystyle {\bar {\mathbb {R} }}=\mathbb {R} \cup \{\infty \}} . The main interest of the subject is to find minimizers for such functionals, that is, functions v ∈ V {\displaystyle v\in V} such that J ( v ) ≤ J ( u ) {\displaystyle J(v)\leq J(u)} for all u ∈ V {\displaystyle u\in V} . The standard tool for obtaining necessary conditions for a function to be a minimizer is the Euler–Lagrange equation. But seeking a minimizer amongst functions satisfying these conditions may lead to false conclusions if the existence of a minimizer is not established beforehand. The functional J {\displaystyle J} must be bounded from below to have a minimizer. This means inf { J ( u ) | u ∈ V } > − ∞ . {\displaystyle \inf\{J(u)|u\in V\}>-\infty .\,} This condition is not enough to guarantee that a minimizer exists, but it shows the existence of a minimizing sequence, that is, a sequence ( u n ) {\displaystyle (u_{n})} in V {\displaystyle V} such that J ( u n ) → inf { J ( u ) | u ∈ V } . {\displaystyle J(u_{n})\to \inf\{J(u)|u\in V\}.} The direct method may be broken into the following steps: Take a minimizing sequence ( u n ) {\displaystyle (u_{n})} for J {\displaystyle J} . 
Show that ( u n ) {\displaystyle (u_{n})} admits some subsequence ( u n k ) {\displaystyle (u_{n_{k}})} that converges to some u 0 ∈ V {\displaystyle u_{0}\in V} with respect to a topology τ {\displaystyle \tau } on V {\displaystyle V} . Show that J {\displaystyle J} is sequentially lower semi-continuous with respect to the topology τ {\displaystyle \tau } . To see that this shows the existence of a minimizer, consider the following characterization of sequentially lower-semicontinuous functions. The function J {\displaystyle J} is sequentially lower-semicontinuous if lim inf n → ∞ J ( u n ) ≥ J ( u 0 ) {\displaystyle \liminf _{n\to \infty }J(u_{n})\geq J(u_{0})} for any convergent sequence u n → u 0 {\displaystyle u_{n}\to u_{0}} in V {\displaystyle V} . The conclusion follows from inf { J ( u ) | u ∈ V } = lim n → ∞ J ( u n ) = lim k → ∞ J ( u n k ) ≥ J ( u 0 ) ≥ inf { J ( u ) | u ∈ V } {\displaystyle \inf\{J(u)|u\in V\}=\lim _{n\to \infty }J(u_{n})=\lim _{k\to \infty }J(u_{n_{k}})\geq J(u_{0})\geq \inf\{J(u)|u\in V\}} , in other words J ( u 0 ) = inf { J ( u ) | u ∈ V } {\displaystyle J(u_{0})=\inf\{J(u)|u\in V\}} . == Details == === Banach spaces === The direct method may often be applied with success when the space V {\displaystyle V} is a subset of a separable reflexive Banach space W {\displaystyle W} . In this case the sequential Banach–Alaoglu theorem implies that any bounded sequence ( u n ) {\displaystyle (u_{n})} in V {\displaystyle V} has a subsequence that converges to some u 0 {\displaystyle u_{0}} in W {\displaystyle W} with respect to the weak topology. 
If V {\displaystyle V} is sequentially closed in W {\displaystyle W} , so that u 0 {\displaystyle u_{0}} is in V {\displaystyle V} , the direct method may be applied to a functional J : V → R ¯ {\displaystyle J:V\to {\bar {\mathbb {R} }}} by showing J {\displaystyle J} is bounded from below, any minimizing sequence for J {\displaystyle J} is bounded, and J {\displaystyle J} is weakly sequentially lower semi-continuous, i.e., for any weakly convergent sequence u n → u 0 {\displaystyle u_{n}\to u_{0}} it holds that lim inf n → ∞ J ( u n ) ≥ J ( u 0 ) {\displaystyle \liminf _{n\to \infty }J(u_{n})\geq J(u_{0})} . The second part is usually accomplished by showing that J {\displaystyle J} satisfies some growth condition. An example is J ( x ) ≥ α ‖ x ‖ q − β {\displaystyle J(x)\geq \alpha \lVert x\rVert ^{q}-\beta } for some α > 0 {\displaystyle \alpha >0} , q ≥ 1 {\displaystyle q\geq 1} and β ≥ 0 {\displaystyle \beta \geq 0} . A functional with this property is sometimes called coercive. Showing sequential lower semi-continuity is usually the most difficult part when applying the direct method. See below for some theorems for a general class of functionals. === Sobolev spaces === The typical functional in the calculus of variations is an integral of the form J ( u ) = ∫ Ω F ( x , u ( x ) , ∇ u ( x ) ) d x {\displaystyle J(u)=\int _{\Omega }F(x,u(x),\nabla u(x))dx} where Ω {\displaystyle \Omega } is a subset of R n {\displaystyle \mathbb {R} ^{n}} and F {\displaystyle F} is a real-valued function on Ω × R m × R m n {\displaystyle \Omega \times \mathbb {R} ^{m}\times \mathbb {R} ^{mn}} . The argument of J {\displaystyle J} is a differentiable function u : Ω → R m {\displaystyle u:\Omega \to \mathbb {R} ^{m}} , and its Jacobian ∇ u ( x ) {\displaystyle \nabla u(x)} is identified with an m n {\displaystyle mn} -vector. 
When deriving the Euler–Lagrange equation, the common approach is to assume Ω {\displaystyle \Omega } has a C 2 {\displaystyle C^{2}} boundary and let the domain of definition for J {\displaystyle J} be C 2 ( Ω , R m ) {\displaystyle C^{2}(\Omega ,\mathbb {R} ^{m})} . This space is a Banach space when endowed with the norm taking into account the supremum of the function and of its derivatives up to second order, but it is not reflexive. When applying the direct method, the functional is usually defined on a Sobolev space W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} with p > 1 {\displaystyle p>1} , which is a reflexive Banach space. The derivatives of u {\displaystyle u} in the formula for J {\displaystyle J} must then be taken as weak derivatives. Another common function space is W g 1 , p ( Ω , R m ) {\displaystyle W_{g}^{1,p}(\Omega ,\mathbb {R} ^{m})} , which is the affine subspace of W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} of functions whose trace is some fixed function g {\displaystyle g} in the image of the trace operator. This restriction allows finding minimizers of the functional J {\displaystyle J} that satisfy some desired boundary conditions. This is similar to solving the Euler–Lagrange equation with Dirichlet boundary conditions. Additionally, there are settings in which there are minimizers in W g 1 , p ( Ω , R m ) {\displaystyle W_{g}^{1,p}(\Omega ,\mathbb {R} ^{m})} but not in W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} . The idea of solving minimization problems while restricting the values on the boundary can be further generalized by considering function spaces where the trace is fixed only on a part of the boundary, and can be arbitrary on the rest. The next section presents theorems regarding weak sequential lower semi-continuity of functionals of the above type. 
== Sequential lower semi-continuity of integrals == As many functionals in the calculus of variations are of the form J ( u ) = ∫ Ω F ( x , u ( x ) , ∇ u ( x ) ) d x {\displaystyle J(u)=\int _{\Omega }F(x,u(x),\nabla u(x))dx} , where Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} is open, theorems characterizing functions F {\displaystyle F} for which J {\displaystyle J} is weakly sequentially lower-semicontinuous in W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} with p ≥ 1 {\displaystyle p\geq 1} are of great importance. In general one has the following: Assume that F {\displaystyle F} is a function that has the following properties: The function F {\displaystyle F} is a Carathéodory function. There exist a ∈ L q ( Ω , R m n ) {\displaystyle a\in L^{q}(\Omega ,\mathbb {R} ^{mn})} with Hölder conjugate q = p p − 1 {\displaystyle q={\tfrac {p}{p-1}}} and b ∈ L 1 ( Ω ) {\displaystyle b\in L^{1}(\Omega )} such that the following inequality holds true for almost every x ∈ Ω {\displaystyle x\in \Omega } and every ( y , A ) ∈ R m × R m n {\displaystyle (y,A)\in \mathbb {R} ^{m}\times \mathbb {R} ^{mn}} : F ( x , y , A ) ≥ ⟨ a ( x ) , A ⟩ + b ( x ) {\displaystyle F(x,y,A)\geq \langle a(x),A\rangle +b(x)} . Here, ⟨ a ( x ) , A ⟩ {\displaystyle \langle a(x),A\rangle } denotes the Frobenius inner product of a ( x ) {\displaystyle a(x)} and A {\displaystyle A} in R m n {\displaystyle \mathbb {R} ^{mn}} . If the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is convex for almost every x ∈ Ω {\displaystyle x\in \Omega } and every y ∈ R m {\displaystyle y\in \mathbb {R} ^{m}} , then J {\displaystyle J} is sequentially weakly lower semi-continuous. 
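The convexity assumption cannot simply be dropped, as the classical nonconvex integrand F(A) = (1 − A²)² shows. In the sketch below (an illustrative discretization, with names of our choosing), sawtooth functions with slopes ±1 have zero energy and converge uniformly to u₀ = 0, yet J(u₀) = 1, so weak sequential lower semi-continuity fails:

```python
import numpy as np

def J(u, x):
    """Discretization of J(u) = integral of (1 - u'(x)**2)**2 over [0, 1]."""
    du = np.diff(u) / np.diff(x)
    return np.sum((1.0 - du ** 2) ** 2 * np.diff(x))

def sawtooth(n):
    """Piecewise-linear u_n on [0, 1] with n teeth of slope +-1."""
    x = np.linspace(0.0, 1.0, 2 * n + 1)
    u = np.where(np.arange(2 * n + 1) % 2 == 1, 0.5 / n, 0.0)
    return u, x

for n in (1, 4, 16):
    u, x = sawtooth(n)
    print(n, float(np.max(u)), float(J(u, x)))   # max u_n -> 0 while J(u_n) = 0
print(float(J(np.zeros(5), np.linspace(0.0, 1.0, 5))))  # J(u0) = 1.0 > liminf J(u_n)
```

The oscillating slopes exploit the two wells of F at A = ±1; convexity of A ↦ F(x, y, A) is exactly what rules out this gain from oscillation.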
When n = 1 {\displaystyle n=1} or m = 1 {\displaystyle m=1} , the following converse-like theorem holds: Assume that F {\displaystyle F} is continuous and satisfies | F ( x , y , A ) | ≤ a ( x , | y | , | A | ) {\displaystyle |F(x,y,A)|\leq a(x,|y|,|A|)} for every ( x , y , A ) {\displaystyle (x,y,A)} , where a ( x , | y | , | A | ) {\displaystyle a(x,|y|,|A|)} is a fixed function that is increasing in | y | {\displaystyle |y|} and | A | {\displaystyle |A|} , and locally integrable in x {\displaystyle x} . If J {\displaystyle J} is sequentially weakly lower semi-continuous, then for any given ( x , y ) ∈ Ω × R m {\displaystyle (x,y)\in \Omega \times \mathbb {R} ^{m}} the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is convex. In conclusion, when m = 1 {\displaystyle m=1} or n = 1 {\displaystyle n=1} , the functional J {\displaystyle J} , assuming reasonable growth and boundedness on F {\displaystyle F} , is weakly sequentially lower semi-continuous if and only if the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is convex. However, there are many interesting cases where one cannot assume that F {\displaystyle F} is convex. The following theorem proves sequential lower semi-continuity using a weaker notion of convexity: Assume that F : Ω × R m × R m n → [ 0 , ∞ ) {\displaystyle F:\Omega \times \mathbb {R} ^{m}\times \mathbb {R} ^{mn}\to [0,\infty )} is a function that has the following properties: The function F {\displaystyle F} is a Carathéodory function. The function F {\displaystyle F} has p {\displaystyle p} -growth for some p > 1 {\displaystyle p>1} : There exists a constant C {\displaystyle C} such that for every y ∈ R m {\displaystyle y\in \mathbb {R} ^{m}} and for almost every x ∈ Ω {\displaystyle x\in \Omega } | F ( x , y , A ) | ≤ C ( 1 + | y | p + | A | p ) {\displaystyle |F(x,y,A)|\leq C(1+|y|^{p}+|A|^{p})} . 
For every y ∈ R m {\displaystyle y\in \mathbb {R} ^{m}} and for almost every x ∈ Ω {\displaystyle x\in \Omega } , the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is quasiconvex: there exists a cube D ⊆ R n {\displaystyle D\subseteq \mathbb {R} ^{n}} such that for every A ∈ R m n , φ ∈ W 0 1 , ∞ ( D , R m ) {\displaystyle A\in \mathbb {R} ^{mn},\varphi \in W_{0}^{1,\infty }(D,\mathbb {R} ^{m})} it holds: F ( x , y , A ) ≤ | D | − 1 ∫ D F ( x , y , A + ∇ φ ( z ) ) d z {\displaystyle F(x,y,A)\leq |D|^{-1}\int _{D}F(x,y,A+\nabla \varphi (z))dz} where | D | {\displaystyle |D|} is the volume of D {\displaystyle D} . Then J {\displaystyle J} is sequentially weakly lower semi-continuous in W 1 , p ( Ω , R m ) {\displaystyle W^{1,p}(\Omega ,\mathbb {R} ^{m})} . A converse-like theorem in this case is the following: Assume that F {\displaystyle F} is continuous and satisfies | F ( x , y , A ) | ≤ a ( x , | y | , | A | ) {\displaystyle |F(x,y,A)|\leq a(x,|y|,|A|)} for every ( x , y , A ) {\displaystyle (x,y,A)} , where a ( x , | y | , | A | ) {\displaystyle a(x,|y|,|A|)} is a fixed function that is increasing in | y | {\displaystyle |y|} and | A | {\displaystyle |A|} , and locally integrable in x {\displaystyle x} . If J {\displaystyle J} is sequentially weakly lower semi-continuous, then for any given ( x , y ) ∈ Ω × R m {\displaystyle (x,y)\in \Omega \times \mathbb {R} ^{m}} the function A ↦ F ( x , y , A ) {\displaystyle A\mapsto F(x,y,A)} is quasiconvex. The claim is true even when both m , n {\displaystyle m,n} are greater than 1 {\displaystyle 1} and coincides with the previous claim when m = 1 {\displaystyle m=1} or n = 1 {\displaystyle n=1} , since then quasiconvexity is equivalent to convexity.
Wikipedia/Direct_method_in_the_calculus_of_variations
François Viète (French: [fʁɑ̃swa vjɛt]; 1540 – 23 February 1603), known in Latin as Franciscus Vieta, was a French mathematician whose work on new algebra was an important step towards modern algebra, due to his innovative use of letters as parameters in equations. He was a lawyer by trade, and served as a privy councillor to both Henry III and Henry IV of France. == Biography == === Early life and education === Viète was born at Fontenay-le-Comte in present-day Vendée. His grandfather was a merchant from La Rochelle. His father, Etienne Viète, was an attorney in Fontenay-le-Comte and a notary in Le Busseau. His mother was the aunt of Barnabé Brisson, a magistrate and the first president of parliament during the ascendancy of the Catholic League of France. Viète went to a Franciscan school and in 1558 studied law at Poitiers, graduating as a Bachelor of Laws in 1559. A year later, he began his career as an attorney in his native town. From the outset, he was entrusted with some major cases, including the settlement of rent in Poitou for the widow of King Francis I of France and looking after the interests of Mary, Queen of Scots. === Serving Parthenay === In 1564, Viète entered the service of Antoinette d'Aubeterre, Lady Soubise, wife of Jean V de Parthenay-Soubise, one of the main Huguenot military leaders, and accompanied him to Lyon to collect documents about his heroic defence of that city against the troops of Jacques of Savoy, 2nd Duke of Nemours just the year before. The same year, at Parc-Soubise, in the commune of Mouchamps in present-day Vendée, Viète became the tutor of Catherine de Parthenay, Soubise's twelve-year-old daughter. He taught her science and mathematics and wrote for her numerous treatises on astronomy and trigonometry, some of which have survived. 
In these treatises, Viète used decimal numbers (twenty years before Stevin's paper) and he also noted the elliptic orbit of the planets, forty years before Kepler and twenty years before Giordano Bruno's death. Jean V de Parthenay presented him to King Charles IX of France. Viète wrote a genealogy of the Parthenay family and, following the death of Jean V de Parthenay-Soubise in 1566, his biography. In 1568, Antoinette, Lady Soubise, married her daughter Catherine to Baron Charles de Quellenec, and Viète went with Lady Soubise to La Rochelle, where he mixed with the highest Calvinist aristocracy, leaders like Coligny and Condé, and Queen Jeanne d'Albret of Navarre and her son, Henry of Navarre, the future Henry IV of France. In 1570, he refused to represent the Soubise ladies in their infamous lawsuit against the Baron De Quellenec, in which they claimed the Baron was unable (or unwilling) to provide an heir. === First steps in Paris === In 1571, he enrolled as an attorney in Paris, and continued to visit his student Catherine. He regularly lived in Fontenay-le-Comte, where he took on some municipal functions. He began publishing his Universalium inspectionum ad Canonem mathematicum liber singularis and wrote new mathematical research by night or during periods of leisure. He was known to dwell on any one question for up to three days, his elbow on the desk, feeding himself without changing position (according to his friend, Jacques de Thou). In 1572, Viète was in Paris during the St. Bartholomew's Day massacre. That night, Baron De Quellenec was killed after having tried to save Admiral Coligny the previous night. The same year, Viète met Françoise de Rohan, Lady of Garnache, and became her adviser against Jacques, Duke of Nemours. In 1573, he became a councillor of the Parlement of Rennes, at Rennes, and two years later, he obtained the agreement of Antoinette d'Aubeterre for the marriage of Catherine of Parthenay to Duke René de Rohan, Françoise's brother. 
In 1576, Henri, duc de Rohan, took him under his special protection, recommending him in 1580 as "maître des requêtes". In 1579, Viète finished the printing of his Universalium inspectionum (Mettayer publisher), published as an appendix to a book of two trigonometric tables (Canon mathematicus, seu ad triangula, the "canon" referred to by the title of his Universalium inspectionum, and Canonion triangulorum laterum rationalium). A year later, he was appointed maître des requêtes to the parliament of Paris, committed to serving the king. That same year, his success in the trial between the Duke of Nemours and Françoise de Rohan, to the benefit of the latter, earned him the resentment of the tenacious Catholic League. === Exile in Fontenay === Between 1583 and 1585, the League persuaded King Henry III to dismiss Viète from office, Viète having been accused of sympathy with the Protestant cause. Henry of Navarre, at Rohan's instigation, addressed two letters to King Henry III of France on March 3 and April 26, 1585, in an attempt to obtain Viète's restoration to his former office, but he failed. Viète retired to Fontenay and Beauvoir-sur-Mer, with François de Rohan. He spent four years devoted to mathematics, writing his New Algebra (1591). === Code-breaker to two kings === In 1589, Henry III took refuge in Blois. He commanded the royal officials to be at Tours before 15 April 1589. Viète was one of the first who came back to Tours. He deciphered the secret letters of the Catholic League and other enemies of the king. Later, he had arguments with the classical scholar Joseph Juste Scaliger. Viète triumphed against him in 1590. After the death of Henry III, Viète became a privy councillor to Henry of Navarre, now Henry IV of France. He was appreciated by the king, who admired his mathematical talents. Viète was given the position of councillor of the parlement at Tours. 
In 1590, Viète broke the key to a Spanish cipher, consisting of more than 500 characters, and this meant that all dispatches in that language which fell into the hands of the French could be easily read. Henry IV published a letter from Commander Moreo to the King of Spain. The contents of this letter, read by Viète, revealed that the head of the League in France, Charles, Duke of Mayenne, planned to become king in place of Henry IV. This publication led to the settlement of the Wars of Religion. The King of Spain accused Viète of having used magical powers. In 1593, Viète published his arguments against Scaliger. From 1594 on, he was charged exclusively with deciphering the enemy's secret codes. === Gregorian calendar === In 1582, Pope Gregory XIII published his bull Inter gravissimas and ordered Catholic kings to comply with the change from the Julian calendar, based on the calculations of the Calabrian doctor Aloysius Lilius, also known as Luigi Lilio or Luigi Giglio. His work was resumed, after his death, by the scientific adviser to the Pope, Christopher Clavius. Viète accused Clavius, in a series of pamphlets (1600), of introducing corrections and intermediate days in an arbitrary manner, and of misunderstanding the meaning of the works of his predecessor, particularly in the calculation of the lunar cycle. Viète gave a new timetable, which Clavius cleverly refuted, after Viète's death, in his Explicatio (1603). It is said that Viète was wrong. Without doubt, he believed himself to be a kind of "King of Times", as the historian of mathematics Dhombres claimed. It is true that Viète held Clavius in low esteem, as evidenced by De Thou: He said that Clavius was very clever to explain the principles of mathematics, that he heard with great clarity what the authors had invented, and wrote various treatises compiling what had been written before him without quoting its references. So his works gave a better order to what had been scattered and confused in earlier writings. 
=== The Adriaan van Roomen problem === In 1596, Scaliger resumed his attacks from the University of Leyden. Viète replied definitively the following year. In March that same year, Adriaan van Roomen sought the resolution, by any of Europe's top mathematicians, of a polynomial equation of degree 45. King Henri IV received a snub from the Dutch ambassador, who claimed that there was no mathematician in France: van Roomen had not asked any Frenchman to solve his problem. Viète came, saw the problem, and, after leaning on a window for a few minutes, solved it. It was the equation relating sin(x) and sin(x/45). He resolved this at once, and said he was able to give at the same time (actually the next day) the solutions to the other 22 problems to the ambassador. "Ut legit, ut solvit," he later said. Further, he sent a new problem back to Van Roomen: the resolution, by Euclidean tools (ruler and compass), of the lost answer to the problem first set by Apollonius of Perga. Van Roomen could not overcome that problem without resorting to a trick (see detail below). === Final years === In 1598, Viète was granted special leave. Henry IV, however, charged him to end the revolt of the Notaries, whom the King had ordered to pay back their fees. Sick and exhausted by work, he left the King's service in December 1602 and received 20,000 écus, which were found at his bedside after his death. A few weeks before his death, he wrote a final thesis on issues of cryptography, an essay that made obsolete all the encryption methods of the time. He died on 23 February 1603, as De Thou wrote, leaving two daughters, Jeanne, whose mother was Barbe Cottereau, and Suzanne, whose mother was Julienne Leclerc. Jeanne, the eldest, died in 1628, having married Jean Gabriau, a councillor of the parliament of Brittany. Suzanne died in January 1618 in Paris. The cause of Viète's death is unknown. 
Alexander Anderson, student of Viète and publisher of his scientific writings, speaks of a "praeceps et immaturum autoris fatum" (meeting an untimely end). == Work and thought == === New algebra === ==== Background ==== At the end of the 16th century, mathematics was placed under the dual aegis of Greek geometry and the Arabic procedures for resolution. At the time of Viète, algebra therefore oscillated between arithmetic, which gave the appearance of a list of rules; and geometry, which seemed more rigorous. Meanwhile, Italian mathematicians Luca Pacioli, Scipione del Ferro, Niccolò Fontana Tartaglia, Gerolamo Cardano, Lodovico Ferrari, and especially Raphael Bombelli (1560) all developed techniques for solving equations of the third degree, which heralded a new era. On the other hand, from the German school of Coss, the Welsh mathematician Robert Recorde (1550) and the Dutchman Simon Stevin (1581) brought an early algebraic notation: the use of decimals and exponents. However, complex numbers remained at best a philosophical way of thinking. Descartes, almost a century after their invention, used them as imaginary numbers. Only positive solutions were considered and using geometrical proof was common. The mathematician's task was in fact twofold. It was necessary to produce algebra in a more geometrical way (i.e. to give it a rigorous foundation), and it was also necessary to make geometry more algebraic, allowing for analytical calculation in the plane. Viète and Descartes solved this dual task in a double revolution. ==== Viète's symbolic algebra ==== Firstly, Viète gave algebra a foundation as strong as that of geometry. He then ended the algebra of procedures (al-Jabr and al-Muqabala), creating the first symbolic algebra, and claiming that with it, all problems could be solved (nullum non problema solvere). 
In his dedication of the Isagoge to Catherine de Parthenay, Viète wrote: "These things which are new are wont in the beginning to be set forth rudely and formlessly and must then be polished and perfected in succeeding centuries. Behold, the art which I present is new, but in truth so old, so spoiled and defiled by the barbarians, that I considered it necessary, in order to introduce an entirely new form into it, to think out and publish a new vocabulary, having gotten rid of all its pseudo-technical terms..." Viète did not know "multiplied" notation (given by William Oughtred in 1631) or the symbol of equality, =, an absence which is more striking because Robert Recorde had used the present symbol for this purpose since 1557, and Guilielmus Xylander had used parallel vertical lines since 1575. Note also the use of a 'u' like symbol with a number above it for an unknown to a given power by Rafael Bombelli in 1572. Viète had neither much time, nor students able to brilliantly illustrate his method. He took years in publishing his work (he was very meticulous), and most importantly, he made a very specific choice to separate the unknown variables, using consonants for parameters and vowels for unknowns. In this notation he perhaps followed some older contemporaries, such as Petrus Ramus, who designated the points in geometrical figures by vowels, making use of consonants, R, S, T, etc., only when these were exhausted. This choice proved unpopular with future mathematicians and Descartes, among others, preferred the first letters of the alphabet to designate the parameters and the latter for the unknowns. Viète also remained a prisoner of his time in several respects. First, he was heir of Ramus and did not address the lengths as numbers. His writing kept track of homogeneity, which did not simplify their reading. He failed to recognize the complex numbers of Bombelli and needed to double-check his algebraic answers through geometrical construction. 
Although he was fully aware that his new algebra was sufficient to give a solution, this concession tainted his reputation. However, Viète created many innovations: the binomial formula, which would be taken up by Pascal and Newton, and the formulas relating the coefficients of a polynomial to the sums and products of its roots, now called Vieta's formulas. ==== Geometric algebra ==== Viète was well skilled in most modern artifices, aiming at the simplification of equations by the substitution of new quantities having a certain connection with the primitive unknown quantities. Another of his works, Recensio canonica effectionum geometricarum, bears a modern stamp, being what was later called an algebraic geometry—a collection of precepts how to construct algebraic expressions with the use of ruler and compass only. While these writings were generally intelligible, and therefore of the greatest didactic importance, the principle of homogeneity, first enunciated by Viète, was so far in advance of his times that most readers seem to have passed it over. That principle had been made use of by the Greek authors of the classic age; but of later mathematicians only Hero, Diophantus, etc., ventured to regard lines and surfaces as mere numbers that could be joined to give a new number, their sum. The study of such sums, found in the works of Diophantus, may have prompted Viète to lay down the principle that quantities occurring in an equation ought to be homogeneous, all of them lines, or surfaces, or solids, or supersolids — an equation between mere numbers being inadmissible. During the centuries that have elapsed between Viète's day and the present, several changes of opinion have taken place on this subject. Modern mathematicians like to make homogeneous such equations as are not so from the beginning, in order to get values of a symmetrical shape. Viète himself did not see that far; nevertheless, he indirectly suggested the thought. 
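The coefficient-root relations credited to Viète above, now known as Vieta's formulas, are easy to check numerically. The sketch below is illustrative only (the sample polynomial is chosen arbitrarily, not taken from the text): for a monic quadratic x^2 + bx + c = 0 with roots r1 and r2, the formulas state that r1 + r2 = -b and r1 * r2 = c.

```python
import math

# Vieta's formulas for a monic quadratic x^2 + b*x + c = 0:
#   sum of roots:     r1 + r2 = -b
#   product of roots: r1 * r2 =  c
b, c = -5.0, 6.0  # sample equation x^2 - 5x + 6 = 0 (roots 3 and 2)

disc = math.sqrt(b * b - 4 * c)   # discriminant: 25 - 24 = 1
r1 = (-b + disc) / 2
r2 = (-b - disc) / 2

assert math.isclose(r1 + r2, -b)  # 3 + 2 == 5 == -b
assert math.isclose(r1 * r2, c)   # 3 * 2 == 6 ==  c
print(r1, r2)  # 3.0 2.0
```

The same identities hold for any monic polynomial: the coefficient of x^(n-1) is minus the sum of the roots, and the constant term is (±) their product, which is precisely the connection Viète observed between an equation's positive roots and its coefficients.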
He also conceived methods for the general resolution of equations of the second, third and fourth degrees different from those of Scipione del Ferro and Lodovico Ferrari, with which he had not been acquainted. He devised an approximate numerical solution of equations of the second and third degrees, wherein Leonardo of Pisa must have preceded him, but by a method which was completely lost. Above all, Viète was the first mathematician who introduced notations for the problem (and not just for the unknowns). As a result, his algebra was no longer limited to the statement of rules, but relied on an efficient computational algebra, in which the operations act on the letters and the results can be obtained at the end of the calculations by a simple replacement. This approach, which is the heart of the contemporary algebraic method, was a fundamental step in the development of mathematics. With this, Viète marked the end of medieval algebra (from Al-Khwarizmi to Stevin) and opened the modern period. === The logic of species === Being wealthy, Viète began to publish at his own expense, for a few friends and scholars in almost every country of Europe, the systematic presentation of his mathematical theory, which he called "species logistic" (from species: symbol) or the art of calculation on symbols (1591). He described in three stages how to proceed in solving a problem: As a first step, he summarized the problem in the form of an equation. Viète called this stage the Zetetic. It denotes the known quantities by consonants (B, D, etc.) and the unknown quantities by the vowels (A, E, etc.). In a second step, he made an analysis. He called this stage the Poristic. Here mathematicians must discuss the equation and solve it. It gives the characteristic of the problem, the porisma (corollary), from which we can move to the next step. 
In the last step, the exegetical analysis, he returned to the initial problem, which yields a solution through a geometrical or numerical construction based on the porisma. Among the problems addressed by Viète with this method is the complete resolution of the quadratic equations of the form X 2 + X b = c {\displaystyle X^{2}+Xb=c} and third-degree equations of the form X 3 + a X = b {\displaystyle X^{3}+aX=b} (Viète reduced them to quadratic equations). He knew the connection between the positive roots of an equation (which, in his day, were alone thought of as roots) and the coefficients of the different powers of the unknown quantity (see Viète's formulas and their application on quadratic equations). He discovered the formula for deriving the sine of a multiple angle, knowing that of the simple angle with due regard to the periodicity of sines. This formula must have been known to Viète in 1593. === Viète's formula === In 1593, based on geometrical considerations and through trigonometric calculations he had perfectly mastered, he discovered the first infinite product in the history of mathematics by giving an expression of π, now known as Viète's formula: π = 2 × 2 2 × 2 2 + 2 × 2 2 + 2 + 2 × 2 2 + 2 + 2 + 2 × ⋯ {\displaystyle \pi =2\times {\frac {2}{\sqrt {2}}}\times {\frac {2}{\sqrt {2+{\sqrt {2}}}}}\times {\frac {2}{\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}}\times {\frac {2}{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}}}}\times \cdots } He provided 10 decimal places of π by applying the Archimedes method to a polygon with 6 × 2^16 = 393,216 sides. === Adriaan van Roomen's challenge and the problem of Apollonius === This famous controversy is told by Tallemant des Réaux in these terms (46th story from the first volume of Les Historiettes. 
Mémoires pour servir à l’histoire du XVIIe siècle): "In the times of Henri the fourth, a Dutchman called Adrianus Romanus, a learned mathematician, but not so good as he believed, published a treatise in which he proposed a question to all the mathematicians of Europe, but did not ask any Frenchman. Shortly after, a state ambassador came to the King at Fontainebleau. The King took pleasure in showing him all the sights, and he said people there were excellent in every profession in his kingdom. 'But, Sire,' said the ambassador, 'you have no mathematician, according to Adrianus Romanus, who didn't mention any in his catalog.' 'Yes, we have,' said the King. 'I have an excellent man. Go and seek Monsieur Viette,' he ordered. Vieta, who was at Fontainebleau, came at once. The ambassador sent for the book from Adrianus Romanus and showed the proposal to Vieta, who had arrived in the gallery, and before the King came out, he had already written two solutions with a pencil. By the evening he had sent many other solutions to the ambassador." When, in 1595, Viète published his response to the problem set by Adriaan van Roomen, he proposed finding the resolution of the old problem of Apollonius, namely to find a circle tangent to three given circles. Van Roomen proposed a solution using a hyperbola, with which Viète did not agree, as he was hoping for a solution using Euclidean tools. Viète published his own solution in 1600 in his work Apollonius Gallus. In this paper, Viète made use of the center of similitude of two circles. His friend De Thou said that Adriaan van Roomen immediately left the University of Würzburg, saddled his horse and went to Fontenay-le-Comte, where Viète lived. According to De Thou, he stayed a month with him, and learned the methods of the new algebra. The two men became friends and Viète paid all van Roomen's expenses before his return to Würzburg. 
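As a numerical aside, Viète's infinite product for π quoted in the section above can be evaluated by iterating the nested radical. The sketch below is illustrative; the function name and the number of terms are the editor's choices, not anything from the source:

```python
import math

def viete_pi(n_terms: int) -> float:
    """Approximate pi with Viete's product:
    pi = 2 * (2/sqrt(2)) * (2/sqrt(2 + sqrt(2))) * ...
    Each successive factor adds one more nested sqrt(2).
    """
    product = 2.0
    nested = 0.0  # radical accumulator: sqrt(2), sqrt(2 + sqrt(2)), ...
    for _ in range(n_terms):
        nested = math.sqrt(2.0 + nested)
        product *= 2.0 / nested
    return product

print(viete_pi(20))  # already very close to math.pi
```

Convergence is fast, roughly two bits of accuracy per factor, so about twenty factors give a dozen correct decimal digits; Viète himself reached 10 correct decimals by Archimedean polygon methods.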
This resolution had an almost immediate impact in Europe, and Viète earned the admiration of many mathematicians over the centuries. Viète did not deal with the special cases (tangent circles, etc.), but recognized that the number of solutions depends on the relative position of the three circles, and outlined the ten resulting situations. Descartes completed (in 1643) the theorem of the three circles of Apollonius, leading to a quadratic equation in 87 terms, each of which is a product of six factors (which, with this method, makes the actual construction humanly impossible). === Religious and political beliefs === Viète was accused of Protestantism by the Catholic League, but he was not a Huguenot. His father was, according to Dhombres. Indifferent in religious matters, he did not adopt the Calvinist faith of Parthenay, nor that of his other protectors, the Rohan family. His call to the parliament of Rennes proved the opposite. At his reception as a member of the court of Brittany, on 6 April 1574, he read in public a statement of Catholic faith. Nevertheless, Viète defended and protected Protestants his whole life, and suffered, in turn, the wrath of the League. It seems that for him, the stability of the state was to be preserved and that, under this requirement, the King's religion did not matter. At that time, such people were called "Politiques". Furthermore, at his death, he did not want to confess his sins. A friend had to convince him that his own daughter would not find a husband were he to refuse the sacraments of the Catholic Church. Whether Viète was an atheist or not is a matter of debate. === Publications === Chronological list Between 1564 and 1568, Viète prepared for his student, Catherine de Parthenay, some textbooks of astronomy and trigonometry and a treatise that was never published: Harmonicon coeleste. 
In 1579, the trigonometric tables Canon mathematicus, seu ad triangula, published together with a table of rational-sided triangles, Canonion triangulorum laterum rationalium, and a book of trigonometry, Universalium inspectionum ad canonem mathematicum – which he published at his own expense and with great printing difficulties. This text contains many formulas on the sine and cosine and is unusual in using decimal numbers. The trigonometric tables here exceeded those of Regiomontanus (De triangulis omnimodis, 1533) and Rheticus (1543, annexed to De revolutionibus of Copernicus). In 1589, Deschiffrement d'une lettre escripte par le Commandeur Moreo au Roy d'Espaigne son maître. In 1590, Deschiffrement (decipherment of a letter written by Commander Moreo to the King of Spain, his master), Tours: Mettayer. In 1591: In artem analyticem isagoge (Introduction to the art of analysis), also known as Algebra Nova (New Algebra). Tours: Mettayer, in 9 folio; the first edition of the Isagoge. Zeteticorum libri quinque. Tours: Mettayer, in 24 folio; the five books of Zetetics, a collection of problems from Diophantus solved using the analytical art. Between 1591 and 1593, Effectionum geometricarum canonica recensio. Tours: Mettayer, in 7 folio. In 1593: Vietae Supplementum geometriae. Tours: Francisci, in 21 folio. Francisci Vietae Variorum de rebus mathematicis responsorum liber VIII. Tours: Mettayer, in 49 folio; about the challenges of Scaliger. Variorum de rebus mathematicis responsorum liber VIII; the "Eighth Book of Varied Responses", in which he discusses the problems of the trisection of the angle (which he acknowledges is bound to an equation of the third degree), of squaring the circle, of building the regular heptagon, etc. In 1594, Munimen adversus nova cyclometrica. Paris: Mettayer, in quarto, 8 folio; again, a response against Scaliger. 
In 1595, Ad problema quod omnibus mathematicis totius orbis construendum proposuit Adrianus Romanus, Francisci Vietae responsum. Paris: Mettayer, in quarto, 16 folio; about the Adriaan van Roomen problem. In 1600: De numerosa potestatum ad exegesim resolutione. Paris: Le Clerc, in 36 folio; work that provided the means for extracting roots and solutions of equations of degree at most 6. Francisci Vietae Apollonius Gallus. Paris: Le Clerc, in quarto, 13 folio; where he referred to himself as the French Apollonius. Between 1600 and 1602: Fontenaeensis libellorum supplicum in Regia magistri relatio Kalendarii vere Gregoriani ad ecclesiasticos doctores exhibita Pontifici Maximi Clementi VIII. Paris: Mettayer, in quarto, 40 folio. Francisci Vietae adversus Christophorum Clavium expostulatio. Paris: Mettayer, in quarto, 8 folio; his theses against Clavius. Posthumous publications 1612: Supplementum Apollonii Galli edited by Marin Ghetaldi. Supplementum Apollonii Redivivi sive analysis problematis bactenus desiderati ad Apollonii Pergaei doctrinam a Marino Ghetaldo Patritio Regusino hujusque non ita pridem institutam edited by Alexander Anderson. 1615: Ad Angularum Sectionem Analytica Theoremata F. Vieta primum excogitata at absque ulla demonstratione ad nos transmissa, iam tandem demonstrationibus confirmata edited by Alexander Anderson. Pro Zetetico Apolloniani problematis a se jam pridem edito in supplemento Apollonii Redivivi Zetetico Apolloniani problematis a se jam pridem edito; in qua ad ea quae obiter inibi perstrinxit Ghetaldus respondetur edited by Alexander Anderson Francisci Vietae Fontenaeensis, De aequationum — recognitione et emendatione tractatus duo per Alexandrum Andersonum edited by Alexander Anderson 1617: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper editae brevis diakrisis edited by Alexander Anderson 1619: Exercitationum Mathematicarum Decas Prima edited by Alexander Anderson 1631: In artem analyticem isagoge. 
Eiusdem ad logisticem speciosam notae priores, nunc primum in lucem editae. Paris: Baudry, in 12 folio; the second edition of the Isagoge, including the posthumously published Ad logisticem speciosam notae priores. == Reception and influence == During the ascendancy of the Catholic League, Viète's secretary was Nathaniel Tarporley, perhaps one of the more interesting and enigmatic mathematicians of 16th-century England. When he returned to London, Tarporley became one of the trusted friends of Thomas Harriot. Apart from Catherine de Parthenay, Viète's other notable students were the French mathematician Jacques Aleaume, from Orleans; Marino Ghetaldi of Ragusa; Jean de Beaugrand; and the Scottish mathematician Alexander Anderson. They illustrated his theories by publishing his works and continuing his methods. At his death, his heirs gave his manuscripts to Pierre Aleaume. We give here the most important posthumous editions: In 1612: Supplementum Apollonii Galli of Marino Ghetaldi. From 1615 to 1619: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper, by Alexander Anderson; Francisci Vietae Fontenaeensis ab aequationum recognitione et emendatione Tractatus duo per Alexandrum Andersonum. Paris, Laquehay, 1615, in 4, 135 p. The death of Alexander Anderson unfortunately halted the publication. In 1630, an Introduction en l'art analytic ou nouvelle algèbre (Introduction to the analytic art, or new algebra), translated into French with commentary by the mathematician J. L. Sieur de Vaulezard. Paris, Jacquin. The Five Books of François Viette's Zetetic (Les cinq livres des zététiques de François Viette), put into French with expanded commentary by the mathematician J. L. Sieur de Vaulezard. Paris, Jacquin, p. 219. The same year, there appeared an Isagoge by Antoine Vasset (a pseudonym of Claude Hardy), and the following year, a translation into Latin by Beaugrand, which Descartes would have received. 
In 1648, the corpus of his mathematical works was printed by Frans van Schooten, professor at Leiden University (Elzevier presses). He was assisted by Jacques Golius and Mersenne. The English mathematicians Thomas Harriot and Isaac Newton, the Dutch physicist Willebrord Snellius, and the French mathematicians Pierre de Fermat and Blaise Pascal all used Viète's symbolism. About 1770, the Italian mathematician Targioni Tozzetti found Viète's Harmonicon coeleste in Florence. Viète had written in it: Describat Planeta Ellipsim ad motum anomaliae ad Terram. (That shows he adopted Copernicus's system and understood before Kepler the elliptic form of the orbits of the planets.) In 1841, the French mathematician Michel Chasles was one of the first to reevaluate his role in the development of modern algebra. In 1847, a letter from François Arago, perpetual secretary of the Academy of Sciences (Paris), announced his intention to write a biography of François Viète. Between 1880 and 1890, the polytechnician Frédéric Ritter, based in Fontenay-le-Comte, was the first translator of the works of François Viète and, with Benjamin Fillon, his first contemporary biographer. === Descartes' views on Viète === Thirty-four years after the death of Viète, the philosopher René Descartes published his method and a book of geometry that changed the landscape of algebra and built on Viète's work, applying it to geometry by removing its requirement of homogeneity. Descartes, accused by Jean Baptiste Chauveau, a former classmate from La Flèche, explained in a letter to Mersenne (February 1639) that he had never read those works. Descartes accepted Viète's view that mathematics should stress the self-evidence of its results, which Descartes implemented by translating symbolic algebra into geometric reasoning. Descartes adopted the term mathesis universalis, which he called an "already venerable term with a received usage", and which originated in van Roomen's book Mathesis Universalis. 
"I have no knowledge of this surveyor, and I wonder what he meant by saying that we studied Viète's work together in Paris, because it is a book whose cover I cannot remember having seen while I was in France." Elsewhere, Descartes said that Viète's notations were confusing and used unnecessary geometric justifications. In some letters, he showed that he understood the program of the Artem Analyticem Isagoge; in others, he shamelessly caricatured Viète's proposals. One of his biographers, Charles Adam, noted this contradiction: "These words are surprising, by the way, for he (Descartes) had just said a few lines earlier that he had tried to put in his geometry only what he believed "was known neither by Vieta nor by anyone else". So he was informed of what Viète knew; and he must have read his works previously." Current research has not shown the extent of the direct influence of the works of Viète on Descartes. This influence could have been formed through the works of Adriaan van Roomen or Jacques Aleaume at the Hague, or through the book by Jean de Beaugrand. In his letters to Mersenne, Descartes consciously minimized the originality and depth of the work of his predecessors. "I began," he says, "where Vieta finished". His views prevailed in the 17th century, and mathematicians gained a clear algebraic language free of the requirements of homogeneity. Many contemporary studies have restored the work of Parthenay's mathematician, showing that he had the double merit of introducing the first elements of literal calculation and of building a first axiomatic framework for algebra. Although Viète was not the first to propose notation of unknown quantities by letters - Jordanus Nemorarius had done this in the past - it would be simplistic to reduce his innovations to that discovery alone; rather, he should be placed at the junction of the algebraic transformations made during the late sixteenth and early seventeenth centuries. 
== See also == Vieta's formulas Michael Stifel Rafael Bombelli == Notes == == Bibliography == == Attribution == This article incorporates text from a publication now in the public domain: Cantor, Moritz (1911). "Vieta, François". In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 28 (11th ed.). Cambridge University Press. pp. 57–58. == External links == Literature by and about François Viète in the German National Library catalogue François Viète at Library of Congress O'Connor, John J.; Robertson, Edmund F., "François Viète", MacTutor History of Mathematics Archive, University of St Andrews New Algebra (1591) online Francois Viète: Father of Modern Algebraic Notation The Lawyer and the Gambler About Tarporley Site de Jean-Paul Guichard (in French) L'algèbre nouvelle (in French) "About the Harmonicon" (PDF). Archived from the original (PDF) on 2011-08-07. Retrieved 2009-06-18. (200 KB). (in French)
Wikipedia/New_Algebra
Algebra can essentially be considered as doing computations similar to those of arithmetic but with non-numerical mathematical objects. However, until the 19th century, algebra consisted essentially of the theory of equations. For example, the fundamental theorem of algebra belongs to the theory of equations and is not, nowadays, considered as belonging to algebra (in fact, every proof must use the completeness of the real numbers, which is not an algebraic property). This article describes the history of the theory of equations, referred to in this article as "algebra", from the origins to the emergence of algebra as a separate area of mathematics. == Etymology == The word "algebra" is derived from the Arabic word الجبر al-jabr, and this comes from the treatise written in the year 830 by the medieval Persian mathematician, Al-Khwārizmī, whose Arabic title, Kitāb al-muḫtaṣar fī ḥisāb al-ğabr wa-l-muqābala, can be translated as The Compendious Book on Calculation by Completion and Balancing. The treatise provided for the systematic solution of linear and quadratic equations. According to one history, "[i]t is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the previous translation. The word 'al-jabr' presumably meant something like 'restoration' or 'completion' and seems to refer to the transposition of subtracted terms to the other side of an equation; the word 'muqabalah' is said to refer to 'reduction' or 'balancing'—that is, the cancellation of like terms on opposite sides of the equation. Arabic influence in Spain long after the time of al-Khwarizmi is found in Don Quixote, where the word 'algebrista' is used for a bone-setter, that is, a 'restorer'." 
The term is used by al-Khwarizmi to describe the operations that he introduced, "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. == Stages of algebra == === Algebraic expression === Algebra did not always make use of the symbolism that is now ubiquitous in mathematics; instead, it went through three distinct stages. The stages in the development of symbolic algebra are approximately as follows: Rhetorical algebra, in which equations are written in full sentences. For example, the rhetorical form of x + 1 = 2 {\displaystyle x+1=2} is "The thing plus one equals two" or possibly "The thing plus 1 equals 2". Rhetorical algebra was first developed by the ancient Babylonians and remained dominant up to the 16th century. Syncopated algebra, in which some symbolism is used, but which does not contain all of the characteristics of symbolic algebra. For instance, there may be a restriction that subtraction may be used only once within one side of an equation, which is not the case with symbolic algebra. Syncopated algebraic expression first appeared in Diophantus' Arithmetica (3rd century AD), followed by Brahmagupta's Brahma Sphuta Siddhanta (7th century). Symbolic algebra, in which full symbolism is used. Early steps toward this can be seen in the work of several Islamic mathematicians such as Ibn al-Banna (13th–14th centuries) and al-Qalasadi (15th century), although fully symbolic algebra was developed by François Viète (16th century). Later, René Descartes (17th century) introduced the modern notation (for example, the use of x—see below) and showed that the problems occurring in geometry can be expressed and solved in terms of algebra (Cartesian geometry). As important as the use or lack of symbolism in algebra was the degree of the equations that were addressed. 
Quadratic equations played an important role in early algebra; and throughout most of history, until the early modern period, all quadratic equations were classified as belonging to one of three categories. x 2 + p x = q {\displaystyle x^{2}+px=q} x 2 = p x + q {\displaystyle x^{2}=px+q} x 2 + q = p x {\displaystyle x^{2}+q=px} where p {\displaystyle p} and q {\displaystyle q} are positive. This trichotomy comes about because quadratic equations of the form x 2 + p x + q = 0 {\displaystyle x^{2}+px+q=0} , with p {\displaystyle p} and q {\displaystyle q} positive, have no positive roots. In between the rhetorical and syncopated stages of symbolic algebra, a geometric constructive algebra was developed by classical Greek and Vedic Indian mathematicians in which algebraic equations were solved through geometry. For instance, an equation of the form x 2 = A {\displaystyle x^{2}=A} was solved by finding the side of a square of area A . {\displaystyle A.} === Conceptual stages === In addition to the three stages of expressing algebraic ideas, some authors recognized four conceptual stages in the development of algebra that occurred alongside the changes in expression. These four stages were as follows: Geometric stage, where the concepts of algebra are largely geometric. This dates back to the Babylonians and continued with the Greeks, and was later revived by Omar Khayyám. Static equation-solving stage, where the objective is to find numbers satisfying certain relationships. The move away from the geometric stage dates back to Diophantus and Brahmagupta, but algebra did not decisively move to the static equation-solving stage until Al-Khwarizmi introduced generalized algorithmic processes for solving algebraic problems. Dynamic function stage, where motion is an underlying idea. The idea of a function began emerging with Sharaf al-Dīn al-Tūsī, but algebra did not decisively move to the dynamic function stage until Gottfried Leibniz. 
Abstract stage, where mathematical structure plays a central role. Abstract algebra is largely a product of the 19th and 20th centuries. == Babylon == The origins of algebra can be traced to the ancient Babylonians, who developed a positional number system that greatly aided them in solving their rhetorical algebraic equations. The Babylonians were not interested in exact solutions, but rather approximations, and so they would commonly use linear interpolation to approximate intermediate values. One of the most famous tablets is the Plimpton 322 tablet, created around 1900–1600 BC, which gives a table of Pythagorean triples and represents some of the most advanced mathematics prior to Greek mathematics. Babylonian algebra was much more advanced than the Egyptian algebra of the time; whereas the Egyptians were mainly concerned with linear equations the Babylonians were more concerned with quadratic and cubic equations. The Babylonians had developed flexible algebraic operations with which they were able to add equals to equals and multiply both sides of an equation by like quantities so as to eliminate fractions and factors. They were familiar with many simple forms of factoring, three-term quadratic equations with positive roots, and many cubic equations, although it is not known if they were able to reduce the general cubic equation. == Ancient Egypt == Ancient Egyptian algebra dealt mainly with linear equations while the Babylonians found these equations too elementary, and developed mathematics to a higher level than the Egyptians. The Rhind Papyrus, also known as the Ahmes Papyrus, is an ancient Egyptian papyrus written c. 1650 BC by Ahmes, who transcribed it from an earlier work that he dated to between 2000 and 1800 BC. It is the most extensive ancient Egyptian mathematical document known to historians. 
The Rhind Papyrus contains problems where linear equations of the form x + a x = b {\displaystyle x+ax=b} and x + a x + b x = c {\displaystyle x+ax+bx=c} are solved, where a , b , {\displaystyle a,b,} and c {\displaystyle c} are known and x , {\displaystyle x,} which is referred to as "aha" or heap, is the unknown. The solutions were possibly, but not likely, arrived at by using the "method of false position", or regula falsi, where first a specific value is substituted into the left hand side of the equation, then the required arithmetic calculations are done, thirdly the result is compared to the right hand side of the equation, and finally the correct answer is found through the use of proportions. In some of the problems the author "checks" his solution, thereby writing one of the earliest known simple proofs. == Greek mathematics == It is sometimes alleged that the Greeks had no algebra, but this is disputed. By the time of Plato, Greek mathematics had undergone a drastic change. The Greeks created a geometric algebra where terms were represented by sides of geometric objects, usually lines, that had letters associated with them, and with this new form of algebra they were able to find solutions to equations by using a process that they invented, known as "the application of areas". "The application of areas" is only a part of geometric algebra and it is thoroughly covered in Euclid's Elements. An example of geometric algebra would be solving the linear equation a x = b c . {\displaystyle ax=bc.} The ancient Greeks would solve this equation by looking at it as an equality of areas rather than as an equality between the ratios a : b {\displaystyle a:b} and c : x . 
{\displaystyle c:x.} The Greeks would construct a rectangle with sides of length b {\displaystyle b} and c , {\displaystyle c,} then extend a side of the rectangle to length a , {\displaystyle a,} and finally they would complete the extended rectangle so as to find the side of the rectangle that is the solution. === Bloom of Thymaridas === Iamblichus in Introductio arithmetica says that Thymaridas (c. 400 BC – c. 350 BC) worked with simultaneous linear equations. In particular, he created the then famous rule that was known as the "bloom of Thymaridas" or as the "flower of Thymaridas", which states that: If the sum of n {\displaystyle n} quantities be given, and also the sum of every pair containing a particular quantity, then this particular quantity is equal to 1 / ( n − 2 ) {\displaystyle 1/(n-2)} of the difference between the sums of these pairs and the first given sum. or using modern notation, the solution of the following system of n {\displaystyle n} linear equations in n {\displaystyle n} unknowns, x + x 1 + x 2 + ⋯ + x n − 1 = s {\displaystyle x+x_{1}+x_{2}+\cdots +x_{n-1}=s} x + x 1 = m 1 {\displaystyle x+x_{1}=m_{1}} x + x 2 = m 2 {\displaystyle x+x_{2}=m_{2}} ⋮ {\displaystyle \vdots } x + x n − 1 = m n − 1 {\displaystyle x+x_{n-1}=m_{n-1}} is, x = ( m 1 + m 2 + . . . + m n − 1 ) − s n − 2 = ( ∑ i = 1 n − 1 m i ) − s n − 2 . {\displaystyle x={\cfrac {(m_{1}+m_{2}+...+m_{n-1})-s}{n-2}}={\cfrac {(\sum _{i=1}^{n-1}m_{i})-s}{n-2}}.} Iamblichus goes on to describe how some systems of linear equations that are not in this form can be placed into this form. === Euclid of Alexandria === Euclid (Greek: Εὐκλείδης) was a Greek mathematician who flourished in Alexandria, Egypt, almost certainly during the reign of Ptolemy I (323–283 BC). Neither the year nor place of his birth has been established, nor the circumstances of his death. Euclid is regarded as the "father of geometry". His Elements is the most successful textbook in the history of mathematics.
Although he is one of the most famous mathematicians in history there are no new discoveries attributed to him; rather he is remembered for his great explanatory skills. The Elements is not, as is sometimes thought, a collection of all Greek mathematical knowledge to its date; rather, it is an elementary introduction to it. ==== Elements ==== The geometric work of the Greeks, typified in Euclid's Elements, provided the framework for generalizing formulae beyond the solution of particular problems into more general systems of stating and solving equations. Book II of the Elements contains fourteen propositions, which in Euclid's time were extremely significant for doing geometric algebra. These propositions and their results are the geometric equivalents of our modern symbolic algebra and trigonometry. Today, using modern symbolic algebra, we let symbols represent known and unknown magnitudes (i.e. numbers) and then apply algebraic operations on them, while in Euclid's time magnitudes were viewed as line segments and then results were deduced using the axioms or theorems of geometry. Many basic laws of addition and multiplication are included or proved geometrically in the Elements. For instance, proposition 1 of Book II states: If there be two straight lines, and one of them be cut into any number of segments whatever, the rectangle contained by the two straight lines is equal to the rectangles contained by the uncut straight line and each of the segments. But this is nothing more than the geometric version of the (left) distributive law, a ( b + c + d ) = a b + a c + a d {\displaystyle a(b+c+d)=ab+ac+ad} ; and in Books V and VII of the Elements the commutative and associative laws for multiplication are demonstrated. Many basic equations were also proved geometrically. 
For instance, proposition 5 in Book II proves that a 2 − b 2 = ( a + b ) ( a − b ) , {\displaystyle a^{2}-b^{2}=(a+b)(a-b),} and proposition 4 in Book II proves that ( a + b ) 2 = a 2 + 2 a b + b 2 . {\displaystyle (a+b)^{2}=a^{2}+2ab+b^{2}.} Furthermore, there are also geometric solutions given to many equations. For instance, proposition 6 of Book II gives the solution to the quadratic equation a x + x 2 = b 2 , {\displaystyle ax+x^{2}=b^{2},} and proposition 11 of Book II gives a solution to a x + x 2 = a 2 . {\displaystyle ax+x^{2}=a^{2}.} ==== Data ==== Data is a work written by Euclid for use at the schools of Alexandria and it was meant to be used as a companion volume to the first six books of the Elements. The book contains some fifteen definitions and ninety-five statements, of which there are about two dozen statements that serve as algebraic rules or formulas. Some of these statements are geometric equivalents to solutions of quadratic equations. For instance, Data contains the solutions to the equations d x 2 − a d x + b 2 c = 0 {\displaystyle dx^{2}-adx+b^{2}c=0} and the familiar Babylonian equation x y = a 2 , x ± y = b . {\displaystyle xy=a^{2},x\pm y=b.} === Conic sections === A conic section is a curve that results from the intersection of a cone with a plane. There are three primary types of conic sections: ellipses (including circles), parabolas, and hyperbolas. The conic sections are reputed to have been discovered by Menaechmus (c. 380 BC – c. 320 BC) and since dealing with conic sections is equivalent to dealing with their respective equations, they played geometric roles equivalent to cubic equations and other higher order equations. Menaechmus knew that in a parabola, the equation y 2 = l x {\displaystyle y^{2}=lx} holds, where l {\displaystyle l} is a constant called the latus rectum, although he was not aware of the fact that any equation in two unknowns determines a curve. 
He apparently derived these properties of conic sections and others as well. Using this information it was now possible to find a solution to the problem of the duplication of the cube by solving for the points at which two parabolas intersect, a solution equivalent to solving a cubic equation. We are informed by Eutocius that the method he used to solve the cubic equation was due to Dionysodorus (250 BC – 190 BC). Dionysodorus solved the cubic by means of the intersection of a rectangular hyperbola and a parabola. This was related to a problem in Archimedes' On the Sphere and Cylinder. Conic sections would be studied and used for thousands of years by Greek, and later Islamic and European, mathematicians. In particular Apollonius of Perga's famous Conics deals with conic sections, among other topics. == China == Chinese mathematics dates to at least 300 BC with the Zhoubi Suanjing, generally considered to be one of the oldest Chinese mathematical documents. === Nine Chapters on the Mathematical Art === Chiu-chang suan-shu or The Nine Chapters on the Mathematical Art, written around 250 BC, is one of the most influential of all Chinese math books and it is composed of some 246 problems. Chapter eight deals with solving determinate and indeterminate simultaneous linear equations using positive and negative numbers, with one problem dealing with solving four equations in five unknowns. === Sea-Mirror of the Circle Measurements === Ts'e-yuan hai-ching, or Sea-Mirror of the Circle Measurements, is a collection of some 170 problems written by Li Zhi (or Li Ye) (1192 – 1279 AD). He used fan fa, or Horner's method, to solve equations of degree as high as six, although he did not describe his method of solving equations. === Mathematical Treatise in Nine Sections === Shu-shu chiu-chang, or Mathematical Treatise in Nine Sections, was written by the wealthy governor and minister Ch'in Chiu-shao (c. 1202 – c. 1261). 
With the introduction of a method for solving simultaneous congruences, now called the Chinese remainder theorem, it marks the high point in Chinese indeterminate analysis. === Magic squares === The earliest known magic squares appeared in China. In Nine Chapters the author solves a system of simultaneous linear equations by placing the coefficients and constant terms of the linear equations into a magic square (i.e. a matrix) and performing column reducing operations on the magic square. The earliest known magic squares of order greater than three are attributed to Yang Hui (fl. c. 1261 – 1275), who worked with magic squares of order as high as ten. === Precious Mirror of the Four Elements === Ssy-yüan yü-chien《四元玉鑒》, or Precious Mirror of the Four Elements, was written by Chu Shih-chieh in 1303 and it marks the peak in the development of Chinese algebra. The four elements, called heaven, earth, man and matter, represented the four unknown quantities in his algebraic equations. The Ssy-yüan yü-chien deals with simultaneous equations and with equations of degrees as high as fourteen. The author uses the method of fan fa, today called Horner's method, to solve these equations. The Precious Mirror opens with a diagram of the arithmetic triangle (Pascal's triangle) using a round zero symbol, but Chu Shih-chieh denies credit for it. A similar triangle appears in Yang Hui's work, but without the zero symbol. There are many summation equations given without proof in the Precious mirror. A few of the summations are: 1 2 + 2 2 + 3 2 + ⋯ + n 2 = n ( n + 1 ) ( 2 n + 1 ) 3 ! {\displaystyle 1^{2}+2^{2}+3^{2}+\cdots +n^{2}={n(n+1)(2n+1) \over 3!}} 1 + 8 + 30 + 80 + ⋯ + n 2 ( n + 1 ) ( n + 2 ) 3 ! = n ( n + 1 ) ( n + 2 ) ( n + 3 ) ( 4 n + 1 ) 5 ! {\displaystyle 1+8+30+80+\cdots +{n^{2}(n+1)(n+2) \over 3!}={n(n+1)(n+2)(n+3)(4n+1) \over 5!}} == Diophantus == Diophantus was a Hellenistic mathematician who lived c. 
250 AD, but the uncertainty of this date is so great that it may be off by more than a century. He is known for having written Arithmetica, a treatise that originally comprised thirteen books but of which only the first six have survived. Arithmetica is the earliest extant work that solves arithmetic problems by algebra. Diophantus, however, did not invent the method of algebra, which existed before him; algebra was practiced and diffused orally by practitioners, with Diophantus picking up techniques to solve problems in arithmetic. In modern algebra a polynomial is a linear combination of a variable x that is built from exponentiation, scalar multiplication, addition, and subtraction. The algebra of Diophantus, similar to medieval Arabic algebra, is an aggregation of objects of different types with no operations present. For example, in Diophantus the polynomial "6 4′ inverse Powers, 25 Powers lacking 9 units", which in modern notation is 6 1 4 x − 1 + 25 x 2 − 9 {\displaystyle 6{\tfrac {1}{4}}x^{-1}+25x^{2}-9} is a collection of 6 1 4 {\displaystyle 6{\tfrac {1}{4}}} objects of one kind with 25 objects of a second kind, which lack 9 objects of a third kind, with no operation present. Like medieval Arabic algebra, Diophantus uses three stages to solve a problem by algebra: 1) an unknown is named and an equation is set up; 2) the equation is simplified to a standard form (al-jabr and al-muqābala in Arabic); 3) the simplified equation is solved. In the extant parts of Arithmetica, Diophantus does not give a classification of equations into six types in the manner of Al-Khwarizmi. He does say that he would give solutions of three-term equations later, so this part of the work is possibly just lost. In Arithmetica, Diophantus is the first to use symbols for unknown numbers as well as abbreviations for powers of numbers, relationships, and operations; thus he used what is now known as syncopated algebra.
The main difference between Diophantine syncopated algebra and modern algebraic notation is that the former lacked special symbols for operations, relations, and exponentials. So, for example, what we would write as x 3 − 2 x 2 + 10 x − 1 = 5 , {\displaystyle x^{3}-2x^{2}+10x-1=5,} which can be rewritten as ( x 3 1 + x 10 ) − ( x 2 2 + x 0 1 ) = x 0 5 , {\displaystyle \left({x^{3}}1+{x}10\right)-\left({x^{2}}2+{x^{0}}1\right)={x^{0}}5,} would be written in Diophantus's syncopated notation as K υ α ¯ ζ ι ¯ ⋔ Δ υ β ¯ M α ¯ {\displaystyle \mathrm {K} ^{\upsilon }{\overline {\alpha }}\;\zeta {\overline {\iota }}\;\,\pitchfork \;\,\Delta ^{\upsilon }{\overline {\beta }}\;\mathrm {M} {\overline {\alpha }}\,\;} ἴ σ M ε ¯ {\displaystyle \sigma \;\,\mathrm {M} {\overline {\varepsilon }}} where Κ υ denotes the cube of the unknown, ζ the unknown itself, Δ υ its square, Μ the units, ⋔ the subtraction of every term that follows, ἴσ equality, and an overline marks a numeral coefficient. Unlike in modern notation, the coefficients come after the variables, and addition is represented by the juxtaposition of terms. A literal symbol-for-symbol translation of Diophantus's syncopated equation into a modern symbolic equation would be the following: x 3 1 x 10 − x 2 2 x 0 1 = x 0 5 {\displaystyle {x^{3}}1{x}10-{x^{2}}2{x^{0}}1={x^{0}}5} where, to clarify, if modern parentheses and plus signs are used then the above equation can be rewritten as: ( x 3 1 + x 10 ) − ( x 2 2 + x 0 1 ) = x 0 5 {\displaystyle \left({x^{3}}1+{x}10\right)-\left({x^{2}}2+{x^{0}}1\right)={x^{0}}5} However, the distinction between "rhetorical algebra", "syncopated algebra" and "symbolic algebra" is considered outdated by Jeffrey Oaks and Jean Christianidis. The problems were solved on a dust-board using some notation, while in books the solutions were written in "rhetorical style". Arithmetica also makes use of certain algebraic identities. == India == Indian mathematicians were active in studying number systems. The earliest known Indian mathematical documents are dated to around the middle of the first millennium BC (around the 6th century BC).
The recurring themes in Indian mathematics are, among others, determinate and indeterminate linear and quadratic equations, simple mensuration, and Pythagorean triples. === Aryabhata === Aryabhata (476–550) was an Indian mathematician who authored Aryabhatiya. In it he gave the rules, 1 2 + 2 2 + ⋯ + n 2 = n ( n + 1 ) ( 2 n + 1 ) 6 {\displaystyle 1^{2}+2^{2}+\cdots +n^{2}={n(n+1)(2n+1) \over 6}} and 1 3 + 2 3 + ⋯ + n 3 = ( 1 + 2 + ⋯ + n ) 2 {\displaystyle 1^{3}+2^{3}+\cdots +n^{3}=(1+2+\cdots +n)^{2}} === Brahma Sphuta Siddhanta === Brahmagupta (fl. 628) was an Indian mathematician who authored Brahma Sphuta Siddhanta. In his work Brahmagupta solves the general quadratic equation for both positive and negative roots. In indeterminate analysis Brahmagupta gives the Pythagorean triads m , 1 2 ( m 2 n − n ) , {\displaystyle m,{\frac {1}{2}}\left({m^{2} \over n}-n\right),} 1 2 ( m 2 n + n ) , {\displaystyle {\frac {1}{2}}\left({m^{2} \over n}+n\right),} but this is a modified form of an old Babylonian rule that Brahmagupta may have been familiar with. He was the first to give a general solution to the linear Diophantine equation a x + b y = c , {\displaystyle ax+by=c,} where a , b , {\displaystyle a,b,} and c {\displaystyle c} are integers. Unlike Diophantus who only gave one solution to an indeterminate equation, Brahmagupta gave all integer solutions; but that Brahmagupta used some of the same examples as Diophantus has led some historians to consider the possibility of a Greek influence on Brahmagupta's work, or at least a common Babylonian source. Like the algebra of Diophantus, the algebra of Brahmagupta was syncopated. Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend, and division by placing the divisor below the dividend, similar to our modern notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms. 
The extent of Greek influence on this syncopation, if any, is not known and it is possible that both Greek and Indian syncopation may be derived from a common Babylonian source. === Bhāskara II === Bhāskara II (1114 – c. 1185) was the leading mathematician of the 12th century. In Algebra, he gave the general solution of Pell's equation. He is the author of Lilavati and Vija-Ganita, which contain problems dealing with determinate and indeterminate linear and quadratic equations and Pythagorean triples, though he fails to distinguish between exact and approximate statements. Many of the problems in Lilavati and Vija-Ganita are derived from other Hindu sources, and so Bhaskara is at his best in dealing with indeterminate analysis. Bhaskara uses the initial symbols of the names for colors as the symbols of unknown variables. So, for example, what we would write today as ( − x − 1 ) + ( 2 x − 8 ) = x − 9 {\displaystyle (-x-1)+(2x-8)=x-9} Bhaskara would have written as ya 1 ru 1 . ya 2 ru 8 . Sum ya 1 ru 9 where ya indicates the first syllable of the word for black, and ru is taken from the word species. The dots over the numbers indicate subtraction. == Islamic world == The first century of the Islamic Arab Empire saw almost no scientific or mathematical achievements since the Arabs, with their newly conquered empire, had not yet gained any intellectual drive, and research in other parts of the world had faded. In the second half of the 8th century, Islam had a cultural awakening, and research in mathematics and the sciences increased. The Muslim Abbasid caliph al-Mamun (809–833) is said to have had a dream in which Aristotle appeared to him, and as a consequence al-Mamun ordered that Arabic translations be made of as many Greek works as possible, including Ptolemy's Almagest and Euclid's Elements. Greek works would be given to the Muslims by the Byzantine Empire in exchange for treaties, as the two empires held an uneasy peace.
Many of these Greek works were translated by Thabit ibn Qurra (826–901), who translated books written by Euclid, Archimedes, Apollonius, Ptolemy, and Eutocius. Arabic mathematicians established algebra as an independent discipline, and gave it the name "algebra" (al-jabr). They were the first to teach algebra in an elementary form and for its own sake. There are three theories about the origins of Arabic Algebra. The first emphasizes Hindu influence, the second emphasizes Mesopotamian or Persian-Syriac influence and the third emphasizes Greek influence. Many scholars believe that it is the result of a combination of all three sources. Throughout their time in power, the Arabs used a fully rhetorical algebra, where often even the numbers were spelled out in words. The Arabs would eventually replace spelled out numbers (e.g. twenty-two) with Arabic numerals (e.g. 22), but the Arabs did not adopt or develop a syncopated or symbolic algebra until the work of Ibn al-Banna, who developed a symbolic algebra in the 13th century, followed by Abū al-Hasan ibn Alī al-Qalasādī in the 15th century. === Al-jabr wa'l muqabalah === The Muslim Persian mathematician Muhammad ibn Mūsā al-Khwārizmī, described as the father or founder of algebra, was a faculty member of the "House of Wisdom" (Bait al-Hikma) in Baghdad, which was established by Al-Mamun. Al-Khwarizmi, who died around 850 AD, wrote more than half a dozen mathematical and astronomical works. One of al-Khwarizmi's most famous books is entitled Al-jabr wa'l muqabalah or The Compendious Book on Calculation by Completion and Balancing, and it gives an exhaustive account of solving polynomials up to the second degree. The book also introduced the fundamental concept of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr. 
The name "algebra" comes from the "al-jabr" in the title of his book. R. Rashed and Angela Armstrong write: "Al-Khwarizmi's text can be seen to be distinct not only from the Babylonian tablets, but also from Diophantus' Arithmetica. It no longer concerns a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study. On the other hand, the idea of an equation for its own sake appears from the beginning and, one could say, in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems." Al-Jabr is divided into six chapters, each of which deals with a different type of formula. The first chapter of Al-Jabr deals with equations whose squares equal their roots ( a x 2 = b x ) , {\displaystyle \left(ax^{2}=bx\right),} the second chapter deals with squares equal to number ( a x 2 = c ) , {\displaystyle \left(ax^{2}=c\right),} the third chapter deals with roots equal to a number ( b x = c ) , {\displaystyle \left(bx=c\right),} the fourth chapter deals with squares and roots equal a number ( a x 2 + b x = c ) , {\displaystyle \left(ax^{2}+bx=c\right),} the fifth chapter deals with squares and number equal roots ( a x 2 + c = b x ) , {\displaystyle \left(ax^{2}+c=bx\right),} and the sixth and final chapter deals with roots and number equal to squares ( b x + c = a x 2 ) . {\displaystyle \left(bx+c=ax^{2}\right).} In Al-Jabr, al-Khwarizmi uses geometric proofs; he does not recognize the root x = 0 , {\displaystyle x=0,} and he only deals with positive roots. He also recognizes that the discriminant must be positive and described the method of completing the square, though he does not justify the procedure. The Greek influence is shown by Al-Jabr's geometric foundations and by one problem taken from Heron.
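The completion-of-the-square recipe mentioned above, for the fourth-chapter type x 2 + p x = q {\displaystyle x^{2}+px=q} with positive p and q, can be sketched in modern terms. This is a modern paraphrase, not al-Khwarizmi's rhetorical procedure, and the function name is illustrative:

```python
import math

def complete_the_square(p, q):
    """Positive root of x^2 + p*x = q (p, q > 0) by completing
    the square: x = sqrt(q + (p/2)^2) - p/2."""
    half = p / 2
    return math.sqrt(q + half * half) - half

# Al-Khwarizmi's classic example: one square and ten roots equal 39
x = complete_the_square(10, 39)   # x = 3, since 3**2 + 10*3 == 39
```

Only the positive root is returned, matching al-Khwarizmi's restriction to positive roots.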
He makes use of lettered diagrams but all of the coefficients in all of his equations are specific numbers since he had no way of expressing with parameters what he could express geometrically, although generality of method is intended. Al-Khwarizmi most likely did not know of Diophantus's Arithmetica, which became known to the Arabs sometime before the 10th century. And even though al-Khwarizmi most likely knew of Brahmagupta's work, Al-Jabr is fully rhetorical with the numbers even being spelled out in words. So, for example, what we would write as x 2 + 10 x = 39 {\displaystyle x^{2}+10x=39} Diophantus would have written as Δ Υ α ¯ ς ι ¯ {\displaystyle \Delta ^{\Upsilon }{\overline {\alpha }}\varsigma {\overline {\iota }}\,\;} ἴ σ M λ θ ¯ {\displaystyle \sigma \;\,\mathrm {M} \lambda {\overline {\theta }}} And al-Khwarizmi would have written as One square and ten roots of the same amount to thirty-nine dirhems; that is to say, what must be the square which, when increased by ten of its own roots, amounts to thirty-nine? === Logical Necessities in Mixed Equations === 'Abd al-Hamīd ibn Turk authored a manuscript entitled Logical Necessities in Mixed Equations, which is very similar to al-Khwarizmi's Al-Jabr and was published at around the same time as, or even possibly earlier than, Al-Jabr. The manuscript gives exactly the same geometric demonstration as is found in Al-Jabr, and in one case the same example as found in Al-Jabr, and even goes beyond Al-Jabr by giving a geometric proof that if the discriminant is negative then the quadratic equation has no solution. The similarity between these two works has led some historians to conclude that Arabic algebra may have been well developed by the time of al-Khwarizmi and 'Abd al-Hamid. === Abu Kamil and al-Karaji === Arabic mathematicians treated irrational numbers as algebraic objects. The Egyptian mathematician Abū Kāmil Shujā ibn Aslam (c.
850–930) was the first to accept irrational numbers in the form of a square root or fourth root as solutions to quadratic equations or as coefficients in an equation. He was also the first to solve three non-linear simultaneous equations with three unknown variables. Al-Karaji (953–1029), also known as Al-Karkhi, was the successor of Abū al-Wafā' al-Būzjānī (940–998) and he discovered the first numerical solution to equations of the form a x 2 n + b x n = c . {\displaystyle ax^{2n}+bx^{n}=c.} Al-Karaji only considered positive roots. He is also regarded as the first person to free algebra from geometrical operations and replace them with the type of arithmetic operations which are at the core of algebra today. His work on algebra and polynomials gave the rules for arithmetic operations to manipulate polynomials. The historian of mathematics F. Woepcke, in Extrait du Fakhri, traité d'Algèbre par Abou Bekr Mohammed Ben Alhacan Alkarkhi (Paris, 1853), praised Al-Karaji for being "the first who introduced the theory of algebraic calculus". Stemming from this, Al-Karaji investigated binomial coefficients and Pascal's triangle. === Omar Khayyám, Sharaf al-Dīn al-Tusi, and al-Kashi === Omar Khayyám (c. 1050 – 1123) wrote a book on Algebra that went beyond Al-Jabr to include equations of the third degree. Omar Khayyám provided both arithmetic and geometric solutions for quadratic equations, but he only gave geometric solutions for general cubic equations since he mistakenly believed that arithmetic solutions were impossible. His method of solving cubic equations by using intersecting conics had been used by Menaechmus, Archimedes, and Ibn al-Haytham (Alhazen), but Omar Khayyám generalized the method to cover all cubic equations with positive roots. He only considered positive roots and he did not go past the third degree. He also saw a strong relationship between geometry and algebra. 
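Khayyám's solution of cubics by intersecting conics can be illustrated numerically. For a cubic of the form x 3 + p x = q {\displaystyle x^{3}+px=q} with positive p and q, equating the parabola y = x 2 {\displaystyle y=x^{2}} with the hyperbola y = q / x − p {\displaystyle y=q/x-p} reproduces the cubic; the particular pair of curves and the bisection search below are modern illustrative devices, not Khayyám's construction:

```python
def khayyam_style_cubic(p, q, tol=1e-12):
    """Positive root of x^3 + p*x = q for positive p, q.
    Geometrically, this is the x-coordinate where the parabola
    y = x**2 meets the hyperbola y = q/x - p, since equating the
    two gives x**3 + p*x = q.  Found here by bisection."""
    f = lambda x: x ** 3 + p * x - q
    lo, hi = 0.0, 1.0 + q          # f(lo) = -q < 0 and f(hi) > 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: x^3 + 6x = 20 has the positive root x = 2
root = khayyam_style_cubic(6, 20)
```

As the text notes, this form always has a positive root, which is why Khayyám's restriction to positive quantities was workable here.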
In the 12th century, Sharaf al-Dīn al-Tūsī (1135–1213) wrote the Al-Mu'adalat (Treatise on Equations), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the "Ruffini-Horner method" to numerically approximate the root of a cubic equation. He also developed the concepts of the maxima and minima of curves in order to solve cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation and used an early version of Cardano's formula to find algebraic solutions to certain types of cubic equations. Some scholars, such as Roshdi Rashed, argue that Sharaf al-Din discovered the derivative of cubic polynomials and realized its significance, while other scholars connect his solution to the ideas of Euclid and Archimedes. Sharaf al-Din also developed the concept of a function. In his analysis of the equation x 3 + d = b x 2 {\displaystyle x^{3}+d=bx^{2}} for example, he begins by changing the equation's form to x 2 ( b − x ) = d {\displaystyle x^{2}(b-x)=d} . He then states that the question of whether the equation has a solution depends on whether or not the "function" on the left side reaches the value d {\displaystyle d} . To determine this, he finds a maximum value for the function. He proves that the maximum value occurs when x = 2 b 3 {\displaystyle \textstyle x={\frac {2b}{3}}} , which gives the functional value 4 b 3 27 {\displaystyle \textstyle {\frac {4b^{3}}{27}}} . 
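In modern notation, the maximum Sharaf al-Din identifies can be checked numerically. A minimal Python sketch, with an illustrative value of b that is not from the historical text:

```python
# A modern restatement of Sharaf al-Din's analysis of x^3 + d = b x^2,
# rewritten as f(x) = x^2 (b - x) = d.  The value of b is illustrative,
# not taken from the historical text.
b = 6.0

def f(x):
    return x**2 * (b - x)

x_peak = 2 * b / 3            # claimed location of the maximum
f_peak = 4 * b**3 / 27        # claimed maximum value, 4b^3/27

assert abs(f(x_peak) - f_peak) < 1e-12
# f really peaks there: nearby points are smaller
assert f(x_peak - 1e-4) < f_peak and f(x_peak + 1e-4) < f_peak

def count_roots(d, n=20000):
    """Count sign changes of f(x) - d on (0, b) by a crude scan."""
    vals = [f(b * i / n) - d for i in range(1, n)]
    return sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)

assert count_roots(f_peak + 1) == 0   # d above the maximum: no positive solution
assert count_roots(f_peak - 1) == 2   # d below it: two positive solutions
```

The scan is only an illustration of the claim, not a proof; Sharaf al-Din's own argument was geometric.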
Sharaf al-Din then states that if this value is less than d {\displaystyle d} , there are no positive solutions; if it is equal to d {\displaystyle d} , then there is one solution at x = 2 b 3 {\displaystyle \textstyle x={\frac {2b}{3}}} ; and if it is greater than d {\displaystyle d} , then there are two solutions, one between 0 {\displaystyle 0} and 2 b 3 {\displaystyle \textstyle {\frac {2b}{3}}} and one between 2 b 3 {\displaystyle \textstyle {\frac {2b}{3}}} and b {\displaystyle b} . In the early 15th century, Jamshīd al-Kāshī developed an early form of Newton's method to numerically solve the equation x P − N = 0 {\displaystyle x^{P}-N=0} to find roots of N {\displaystyle N} . Al-Kāshī also developed decimal fractions and claimed to have discovered them himself. However, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. === Al-Hassār, Ibn al-Banna, and al-Qalasadi === Al-Hassār, a mathematician from Morocco specializing in Islamic inheritance jurisprudence during the 12th century, developed the modern symbolic mathematical notation for fractions, where the numerator and denominator are separated by a horizontal bar. This same fractional notation appeared soon after in the work of Fibonacci in the 13th century. Abū al-Hasan ibn Alī al-Qalasādī (1412–1486) was the last major medieval Arab algebraist, who made the first attempt at creating an algebraic notation since Ibn al-Banna two centuries earlier, who was himself the first to make such an attempt since Diophantus and Brahmagupta in ancient times. The syncopated notations of his predecessors, however, lacked symbols for mathematical operations. Al-Qalasadi "took the first steps toward the introduction of algebraic symbolism by using letters in place of numbers" and by "using short Arabic words, or just their initial letters, as mathematical symbols." 
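In modern terms, al-Kāshī's procedure for x^P − N = 0 is essentially Newton's iteration applied to finding the P-th root of N. A minimal sketch, with a starting guess and stopping tolerance that are modern choices rather than anything from al-Kāshī:

```python
def pth_root(N, P, tol=1e-12):
    """Positive solution of x**P = N by Newton's iteration."""
    x = max(1.0, float(N))                 # crude but safe starting guess
    while True:
        # Newton step for f(x) = x**P - N, with f'(x) = P * x**(P-1)
        x_new = x - (x**P - N) / (P * x ** (P - 1))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new

assert abs(pth_root(2, 2) - 2 ** 0.5) < 1e-9     # square root of 2
assert abs(pth_root(243, 5) - 3.0) < 1e-9        # fifth root of 243
```

For convex f of this shape the iteration converges from any starting point above the root, which is why the deliberately large initial guess is safe here.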
== Europe and the Mediterranean region == Just as the death of Hypatia signals the close of the Library of Alexandria as a mathematical center, so does the death of Boethius signal the end of mathematics in the Western Roman Empire. Although there was some work being done at Athens, it came to a close when in 529 the Byzantine emperor Justinian closed the pagan philosophical schools. The year 529 is now taken to be the beginning of the medieval period. Scholars fled the West towards the more hospitable East, particularly towards Persia, where they found haven under King Chosroes and established what might be termed an "Athenian Academy in Exile". Under a treaty with Justinian, Chosroes would eventually return the scholars to the Eastern Empire. During the Dark Ages, European mathematics was at its nadir with mathematical research consisting mainly of commentaries on ancient treatises; and most of this research was centered in the Byzantine Empire. The end of the medieval period is set as the fall of Constantinople to the Turks in 1453. === Late Middle Ages === The 12th century saw a flood of translations from Arabic into Latin and by the 13th century, European mathematics was beginning to rival the mathematics of other lands. In the 13th century, the solution of a cubic equation by Fibonacci is representative of the beginning of a revival in European algebra. As the Islamic world was declining after the 15th century, the European world was ascending. And it is here that algebra was further developed. == Symbolic algebra == Modern notation for arithmetic operations was introduced between the end of the 15th century and the beginning of the 16th century by Johannes Widmann and Michael Stifel. At the end of 16th century, François Viète introduced symbols, now called variables, for representing indeterminate or unknown numbers. This created a new algebra consisting of computing with symbolic expressions as if they were numbers. 
Another key event in the further development of algebra was the general algebraic solution of the cubic and quartic equations, developed in the mid-16th century. The idea of a determinant was developed by Japanese mathematician Kowa Seki in the 17th century, followed by Gottfried Leibniz ten years later, for the purpose of solving systems of simultaneous linear equations using matrices. Gabriel Cramer also did some work on matrices and determinants in the 18th century. === The symbol x === By tradition, the first unknown variable in an algebraic problem is nowadays represented by the symbol x {\displaystyle {\mathit {x}}} and if there is a second or a third unknown, then these are labeled y {\displaystyle {\mathit {y}}} and z {\displaystyle {\mathit {z}}} respectively. Algebraic x {\displaystyle x} is conventionally printed in italic type to distinguish it from the sign of multiplication. Mathematical historians generally agree that the use of x {\displaystyle x} in algebra was introduced by René Descartes and was first published in his treatise La Géométrie (1637). In that work, he used letters from the beginning of the alphabet ( a , b , c , … ) {\displaystyle (a,b,c,\ldots )} for known quantities, and letters from the end of the alphabet ( z , y , x , … ) {\displaystyle (z,y,x,\ldots )} for unknowns. It has been suggested that he later settled on x {\displaystyle x} (in place of z {\displaystyle z} ) for the first unknown because of its relatively greater abundance in the French and Latin typographical fonts of the time. Three alternative theories of the origin of algebraic x {\displaystyle x} were suggested in the 19th century: (1) a symbol used by German algebraists and thought to be derived from a cursive letter r , {\displaystyle r,} mistaken for x {\displaystyle x} ; (2) the numeral 1 with oblique strikethrough; and (3) an Arabic/Spanish source (see below). 
But the Swiss-American historian of mathematics Florian Cajori examined these and found all three lacking in concrete evidence; Cajori credited Descartes as the originator, and described his x , y , {\displaystyle x,y,} and z {\displaystyle z} as "free from tradition[,] and their choice purely arbitrary." Nevertheless, the Hispano-Arabic hypothesis continues to have a presence in popular culture today. It is the claim that algebraic x {\displaystyle x} is the abbreviation of a supposed loanword from Arabic in Old Spanish. The theory originated in 1884 with the German orientalist Paul de Lagarde, shortly after he published his edition of a 1505 Spanish/Arabic bilingual glossary in which Spanish cosa ("thing") was paired with its Arabic equivalent, شىء (shayʔ), transcribed as xei. (The "sh" sound in Old Spanish was routinely spelled x . {\displaystyle x.} ) Evidently Lagarde was aware that Arab mathematicians, in the "rhetorical" stage of algebra's development, often used that word to represent the unknown quantity. He surmised that "nothing could be more natural" ("Nichts war also natürlicher...") than for the initial of the Arabic word—romanized as the Old Spanish x {\displaystyle x} —to be adopted for use in algebra. A later reader reinterpreted Lagarde's conjecture as having "proven" the point. Lagarde was unaware that early Spanish mathematicians used, not a transcription of the Arabic word, but rather its translation in their own language, "cosa". There is no instance of xei or similar forms in several compiled historical vocabularies of Spanish. === Gottfried Leibniz === Although the mathematical notion of function was implicit in trigonometric and logarithmic tables, which existed in his day, Gottfried Leibniz was the first, in 1692 and 1694, to employ it explicitly, to denote any of several geometric concepts derived from a curve, such as abscissa, ordinate, tangent, chord, and the perpendicular. 
In the 18th century, "function" lost these geometrical associations. Leibniz realized that the coefficients of a system of linear equations could be arranged into an array, now called a matrix, which can be manipulated to find the solution of the system, if any. This method was later called Gaussian elimination. Leibniz also discovered Boolean algebra and symbolic logic, also relevant to algebra. === Abstract algebra === The ability to do algebra is a skill cultivated in mathematics education. As explained by Andrew Warwick, Cambridge University students in the early 19th century practiced "mixed mathematics", doing exercises based on physical variables such as space, time, and weight. Over time the association of variables with physical quantities faded away as mathematical technique grew. Eventually mathematics was concerned completely with abstract polynomials, complex numbers, hypercomplex numbers and other concepts. Application to physical situations was then called applied mathematics or mathematical physics, and the field of mathematics expanded to include abstract algebra. For instance, the issue of constructible numbers showed some mathematical limitations, and the field of Galois theory was developed. == Father of algebra == The title of "the father of algebra" is frequently credited to the Persian mathematician Al-Khwarizmi, supported by historians of mathematics, such as Carl Benjamin Boyer, Solomon Gandz and Bartel Leendert van der Waerden. However, the point is debatable and the title is sometimes credited to the Hellenistic mathematician Diophantus. Those who support Diophantus point to the algebra found in Al-Jabr being more elementary than the algebra found in Arithmetica, and Arithmetica being syncopated while Al-Jabr is fully rhetorical. However, the mathematics historian Kurt Vogel argues against Diophantus holding this title, as his mathematics was not much more algebraic than that of the ancient Babylonians. 
Those who support Al-Khwarizmi point to the fact that he gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots, and was the first to teach algebra in an elementary form and for its own sake, whereas Diophantus was primarily concerned with the theory of numbers. Al-Khwarizmi also introduced the fundamental concepts of "reduction" and "balancing" (to which he originally applied the term al-jabr), referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. Other supporters of Al-Khwarizmi point to his algebra no longer being concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." They also point to his treatment of an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems". Victor J. Katz regards Al-Jabr as the first true algebra text that is still extant. According to Jeffrey Oaks and Jean Christianidis, neither Diophantus nor Al-Khwarizmi should be called "father of algebra". Pre-modern algebra was developed and used by merchants and surveyors as part of what Jens Høyrup called a "subscientific" tradition. Diophantus used this method of algebra in his book, in particular for indeterminate problems, while Al-Khwarizmi wrote one of the first books in Arabic about this method. == See also == Al-Mansur – 2nd Abbasid caliph (r. 
754–775) Timeline of algebra – Notable events in the history of algebra
== References ==
== Sources ==
Alcalá, Pedro de (1505), De lingua arabica, Granada. Edition by Paul de Lagarde, Göttingen: Arnold Hoyer, 1883
Alonso, Martín (1986), Diccionario del español medieval, Salamanca: Universidad Pontificia de Salamanca
Aurel, Marco (1552), Libro primero de arithmetica algebratica, Valencia: Joan de Mey
Bashmakova, I.; Smirnova, G. (2000), The Beginnings and Evolution of Algebra, Dolciani Mathematical Expositions, vol. 23, translated by Abe Shenitzer, The Mathematical Association of America
Boyer, Carl B. (1991), A History of Mathematics (2nd ed.), John Wiley & Sons, Inc., ISBN 978-0-471-54397-8
Burton, David M. (1995), Burton's History of Mathematics: An Introduction (3rd ed.), Dubuque: Wm. C. Brown
Burton, David M. (1997), The History of Mathematics: An Introduction (3rd ed.), The McGraw-Hill Companies, Inc., ISBN 978-0-07-009465-9
Cajori, Florian (1919), "How x Came to Stand for Unknown Quantity", School Science and Mathematics, 19 (8): 698–699, doi:10.1111/j.1949-8594.1919.tb07713.x
Cajori, Florian (1928), A History of Mathematical Notations, Chicago: Open Court Publishing, ISBN 9780486161167
Cooke, Roger (1997), The History of Mathematics: A Brief Course, Wiley-Interscience, ISBN 978-0-471-18082-1
Derbyshire, John (2006), Unknown Quantity: A Real And Imaginary History of Algebra, Washington, DC: Joseph Henry Press, ISBN 978-0-309-09657-7
Descartes, René (1637), La Géométrie, Leyde: Ian Maire. Online 2008 ed. by L. Hermann, Project Gutenberg
Descartes, René (1925), The Geometry of René Descartes, Chicago: Open Court, ISBN 9781602066922
Díez, Juan (1556), Sumario compendioso de las quentas de plata y oro que en los reynos del Piru son necessarias a los mercaderes: y todo genero de tratantes, con algunas reglas tocantes al arithmetica, Mexico City
Eneström, Gustaf (1905), "Kleine Mitteilungen", Bibliotheca Mathematica, Ser. 3, 6
Flegg, Graham (1983), Numbers: Their History and Meaning, Dover Publications, ISBN 978-0-486-42165-0
Heath, Thomas Little (1981a), A History of Greek Mathematics, Volume I, Dover Publications, ISBN 978-0-486-24073-2
Heath, Thomas Little (1981b), A History of Greek Mathematics, Volume II, Dover Publications, ISBN 978-0-486-24074-9
Jacob, Georg (1903), "Oriental Elements of Culture in the Occident", Annual Report of the Board of Regents of the Smithsonian Institution [...] for the Year Ending June 30, 1902: 509–529
Kasten, Lloyd A.; Cody, Florian J. (2001), Tentative Dictionary of Medieval Spanish (2nd ed.), New York: Hispanic Seminary of Medieval Studies
Katz, Victor J.; Parshall, Karen Hunger (2014), Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, Princeton, NJ: Princeton University Press, ISBN 978-1-400-85052-5
Lagarde, Paul de (1884), "Woher stammt das x der Mathematiker?", Mittheilungen, vol. 1, Goettingen: Dieterichsche Sortimentsbuchhandlung, pp. 134–137
Nunes, Pedro (1567), Libro de algebra en arithmetica y geometria, Antwerp: Arnoldo Birckman
Oelschläger, Victor R. B. (1940), A Medieval Spanish Word-List, Madison: University of Wisconsin Press
Ortega, Juan de (1552), Tractado subtilissimo de arismetica y geometria, Granada: René Rabut
Pérez de Moya, Juan (1562), Aritmética práctica y especulativa, Salamanca: Mathias Gast
Puig, Andrés (1672), Arithmetica especulativa y practica; y arte de algebra, Barcelona: Antonio Lacavalleria
Rider, Robin E. (1982), A Bibliography of Early Modern Algebra, 1500–1800, Berkeley: Berkeley Papers in History of Science
Sesiano, Jacques (1999), An Introduction to the History of Algebra: Solving Equations from Mesopotamian Times to the Renaissance, Providence, RI: American Mathematical Society, ISBN 9780821844731
Stillwell, John (2004), Mathematics and its History (2nd ed.), Springer Science + Business Media Inc., ISBN 978-0-387-95336-6
Swetz, Frank J. (2013), The European Mathematical Awakening: A Journey Through the History of Mathematics, 1000–1800 (2nd ed.), Mineola, NY: Dover Publications, ISBN 9780486498058
Tropfke, Johannes (1902), Geschichte der Elementar-Mathematik in systematischer Darstellung, vol. 1, Leipzig: Von Veit & Comp.
== External links ==
"Commentary by Islam's Sheikh Zakariyya al-Ansari on Ibn al-Hā’im's Poem on the Science of Algebra and Balancing Called the Creator's Epiphany in Explaining the Cogent", featuring the basic concepts of algebra dating back to the 15th century, from the World Digital Library.
Wikipedia/Rhetorical_algebra
George Peacock FRS (9 April 1791 – 8 November 1858) was an English mathematician and Anglican cleric. He founded what has been called the British algebra of logic. == Early life == Peacock was born on 9 April 1791 at Thornton Hall, Denton, near Darlington, County Durham. His father, Thomas Peacock, was a priest of the Church of England, incumbent and for 50 years curate of the parish of Denton, where he also kept a school. In early life, Peacock did not show any precocity of genius. He was more remarkable for daring feats of climbing than for any special attachment to study. Initially, he received his elementary education from his father and then at Sedbergh School, and at 17 years of age, he was sent to Richmond School under James Tate, a graduate of Cambridge University. At this school he distinguished himself greatly both in classics and in the rather elementary mathematics then required for entrance at Cambridge. In 1809 he became a student of Trinity College, Cambridge. In 1812 Peacock took the rank of Second Wrangler, and the second Smith's prize, the senior wrangler being John Herschel. Two years later, he became a candidate for a fellowship in his college and won it immediately, partly by means of his extensive and accurate knowledge of the classics. A fellowship then meant about £200 a year, tenable for seven years provided the Fellow did not marry meanwhile, and capable of being extended after the seven years provided the Fellow took clerical orders, which Peacock did in 1819. == Mathematical career == The year after taking a Fellowship, Peacock was appointed a tutor and lecturer of his college, which position he continued to hold for many years. Peacock, in common with many other students of his own standing, was profoundly impressed with the need of reforming Cambridge's position ignoring the differential notation for calculus, and while still an undergraduate formed a league with Babbage and Herschel to adopt measures to bring it about. 
In 1815 they formed what they called the Analytical Society, the object of which was stated to be to advocate the d'ism of the Continent versus the dot-age of the university. The first movement on the part of the Analytical Society was to translate from the French the smaller work of Lacroix on the differential and integral calculus; it was published in 1816. At that time the French language had the best manuals, as well as the greatest works on mathematics. Peacock followed up the translation with a volume containing a copious Collection of Examples of the Application of the Differential and Integral Calculus, which was published in 1820. The sale of both books was rapid, and contributed materially to further the object of the Society. At that time, high wranglers of one year became the examiners of the mathematical tripos three or four years afterwards. Peacock was appointed an examiner in 1817, and he did not fail to make use of the position as a powerful lever to advance the cause of reform. In his questions set for the examination the differential notation was for the first time officially employed in Cambridge. The innovation did not escape censure, but he wrote to a friend as follows: "I assure you that I shall never cease to exert myself to the utmost in the cause of reform, and that I will never decline any office which may increase my power to effect it. I am nearly certain of being nominated to the office of Moderator in the year 1818-1819, and as I am an examiner in virtue of my office, for the next year I shall pursue a course even more decided than hitherto, since I shall feel that men have been prepared for the change, and will then be enabled to have acquired a better system by the publication of improved elementary books. I have considerable influence as a lecturer, and I will not neglect it. 
It is by silent perseverance only, that we can hope to reduce the many-headed monster of prejudice and make the University answer her character as the loving mother of good learning and science." These few sentences give an insight into the character of Peacock: he was an ardent reformer and a few years brought success to the cause of the Analytical Society. Another reform at which Peacock labored was the teaching of algebra. In 1830 he published A Treatise on Algebra which had for its object the placing of algebra on a true scientific basis, adequate for the development which it had received at the hands of the Continental mathematicians. To elevate astronomical science the Astronomical Society of London was founded, and the three reformers Peacock, Babbage and Herschel were again prime movers in the undertaking. Peacock was one of the most zealous promoters of an astronomical observatory at Cambridge, and one of the founders of the Philosophical Society of Cambridge. In 1831 the British Association for the Advancement of Science (prototype of the American, French and Australasian Associations) held its first meeting in the ancient city of York. One of the first resolutions adopted was to procure reports on the state and progress of particular sciences, to be drawn up from time to time by competent persons for the information of the annual meetings, and the first to be placed on the list was a report on the progress of mathematical science. Whewell, the mathematician and philosopher, was a vice-president of the meeting: he was instructed to select the reporter. He first asked William Rowan Hamilton, who declined; he then asked Peacock, who accepted. Peacock had his report ready for the third meeting of the Association, which was held in Cambridge in 1833; although limited to Algebra, Trigonometry, and the Arithmetic of Sines, it is one of the best of the long series of valuable reports which have been prepared for and printed by the Association. 
In 1837 Peacock was appointed Lowndean Professor of Astronomy in the University of Cambridge, the chair afterwards occupied by Adams, the co-discoverer of Neptune, and later occupied by Robert Ball, celebrated for his Theory of Screws. An object of reform was the statutes of the university; he worked hard at it and was made a member of a commission appointed by the Government for the purpose. He was elected a Fellow of the Royal Society in January 1818. In 1842, Peacock was elected as a member of the American Philosophical Society. == Clerical career == He was ordained as a deacon in 1819, a priest in 1822 and appointed vicar of Wymeswold in Leicestershire in 1826 (until 1835). In 1839 he was appointed Dean of Ely cathedral, Cambridgeshire, a position he held for the rest of his life, some 20 years. Together with the architect George Gilbert Scott he undertook a major restoration of the cathedral building. This included the installation of the boarded ceiling. He had earlier written a textbook on algebra, A Treatise on Algebra (1830); while holding this position, a second edition appeared in two volumes, the one called Arithmetical Algebra (1842) and the other On Symbolical Algebra and its Applications to the Geometry of Position (1845). == Symbolical algebra == Peacock's main contribution to mathematical analysis is his attempt to place algebra on a strictly logical basis. He founded what has been called the British algebra of logic, to which Gregory, De Morgan and Boole belonged. His answer to Maseres and Frend was that the science of algebra consisted of two parts—arithmetical algebra and symbolical algebra—and that they erred in restricting the science to the arithmetical part. 
His view of arithmetical algebra is as follows: "In arithmetical algebra we consider symbols as representing numbers, and the operations to which they are submitted as included in the same definitions as in common arithmetic; the signs + {\displaystyle +} and − {\displaystyle -} denote the operations of addition and subtraction in their ordinary meaning only, and those operations are considered as impossible in all cases where the symbols subjected to them possess values which would render them so in case they were replaced by digital numbers; thus in expressions such as a + b {\displaystyle a+b} we must suppose a {\displaystyle a} and b {\displaystyle b} to be quantities of the same kind; in others, like a − b {\displaystyle a-b} , we must suppose a {\displaystyle a} greater than b {\displaystyle b} and therefore homogeneous with it; in products and quotients, like a b {\displaystyle ab} and a b {\displaystyle {\frac {a}{b}}} we must suppose the multiplier and divisor to be abstract numbers; all results whatsoever, including negative quantities, which are not strictly deducible as legitimate conclusions from the definitions of the several operations must be rejected as impossible, or as foreign to the science." Peacock's principle may be stated thus: the elementary symbol of arithmetical algebra denotes a digital, i.e., an integer number; and every combination of elementary symbols must reduce to a digital number, otherwise it is impossible or foreign to the science. If a {\displaystyle a} and b {\displaystyle b} are numbers, then a + b {\displaystyle a+b} is always a number; but a − b {\displaystyle a-b} is a number only when b {\displaystyle b} is less than a {\displaystyle a} . Again, under the same conditions, a b {\displaystyle ab} is always a number, but a b {\displaystyle {\frac {a}{b}}} is really a number only when b {\displaystyle b} is an exact divisor of a {\displaystyle a} . 
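Peacock's restrictions can be mimicked in code: a toy model (the function names are ours, not Peacock's) in which symbols stand only for nonnegative integers and any operation whose result would not be such a number is rejected as "impossible":

```python
# A toy model of Peacock's "arithmetical algebra": symbols stand only for
# nonnegative integers, and any operation whose result would not be such a
# number is rejected as "impossible".  Function names are ours, not Peacock's.
def arith_sub(a, b):
    if b > a:
        raise ValueError("impossible in arithmetical algebra: subtrahend exceeds minuend")
    return a - b

def arith_div(a, b):
    if b == 0 or a % b != 0:
        raise ValueError("impossible in arithmetical algebra: not an exact divisor")
    return a // b

assert arith_sub(7, 3) == 4      # b less than a: a legitimate result
assert arith_div(12, 4) == 3     # b an exact divisor of a: legitimate
try:
    arith_sub(3, 7)              # rejected, exactly as the passage requires
    assert False
except ValueError:
    pass
```
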
Hence the following dilemma: Either a b {\displaystyle {\frac {a}{b}}} must be held to be an impossible expression in general, or else the meaning of the fundamental symbol of algebra must be extended so as to include rational fractions. If the former horn of the dilemma is chosen, arithmetical algebra becomes a mere shadow; if the latter horn is chosen, the operations of algebra cannot be defined on the supposition that the elementary symbol is an integer number. Peacock attempts to get out of the difficulty by supposing that a symbol which is used as a multiplier is always an integer number, but that a symbol in the place of the multiplicand may be a fraction. For instance, in a b {\displaystyle ab} , a {\displaystyle a} can denote only an integer number, but b {\displaystyle b} may denote a rational fraction. Now there is no more fundamental principle in arithmetical algebra than that a b = b a {\displaystyle ab=ba} ; which would be illegitimate on Peacock's principle. One of the earliest English writers on arithmetic is Robert Recorde, who dedicated his work to King Edward VI. The author gives his treatise the form of a dialogue between master and scholar. The scholar battles long over this difficulty—that multiplying a thing could make it less. The master attempts to explain the anomaly by reference to proportion; that the product due to a fraction bears the same proportion to the thing multiplied that the fraction bears to unity. But the scholar is not satisfied and the master goes on to say: "If I multiply by more than one, the thing is increased; if I take it but once, it is not changed, and if I take it less than once, it cannot be so much as it was before. Then seeing that a fraction is less than one, if I multiply by a fraction, it follows that I do take it less than once." Whereupon the scholar replies, "Sir, I do thank you much for this reason, – and I trust that I do perceive the thing." 
The fact is that even in arithmetic the two processes of multiplication and division are generalized into a common multiplication; and the difficulty consists in passing from the original idea of multiplication to the generalized idea of a tensor, which idea includes compressing the magnitude as well as stretching it. Let m {\displaystyle m} denote an integer number; the next step is to gain the idea of the reciprocal of m {\displaystyle m} , not as 1 m {\displaystyle {\frac {1}{m}}} but simply as / m {\displaystyle /m} . When m {\displaystyle m} and / n {\displaystyle /n} are compounded we get the idea of a rational fraction; for in general m / n {\displaystyle m/n} will not reduce to a number nor to the reciprocal of a number. Suppose, however, that we pass over this objection; how does Peacock lay the foundation for general algebra? He calls it symbolical algebra, and he passes from arithmetical algebra to symbolical algebra in the following manner: "Symbolical algebra adopts the rules of arithmetical algebra but removes altogether their restrictions; thus symbolical subtraction differs from the same operation in arithmetical algebra in being possible for all relations of value of the symbols or expressions employed. 
All the results of arithmetical algebra which are deduced by the application of its rules, and which are general in form though particular in value, are results likewise of symbolical algebra where they are general in value as well as in form; thus the product of a m {\displaystyle a^{m}} and a n {\displaystyle a^{n}} which is a m + n {\displaystyle a^{m+n}} when m {\displaystyle m} and n {\displaystyle n} are whole numbers and therefore general in form though particular in value, will be their product likewise when m {\displaystyle m} and n {\displaystyle n} are general in value as well as in form; the series for ( a + b ) n {\displaystyle (a+b)^{n}} determined by the principles of arithmetical algebra when n {\displaystyle n} is any whole number, if it be exhibited in a general form, without reference to a final term, may be shown upon the same principle to be the equivalent series for ( a + b ) n {\displaystyle (a+b)^{n}} when n {\displaystyle n} is general both in form and value." The principle here indicated by means of examples was named by Peacock the "principle of the permanence of equivalent forms," and at page 59 of the Symbolical Algebra it is thus enunciated: "Whatever algebraic forms are equivalent when the symbols are general in form, but specific in value, will be equivalent likewise when the symbols are general in value as well as in form." For example, let a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} denote any integer numbers, but subject to the restrictions that b {\displaystyle b} is less than a {\displaystyle a} , and d {\displaystyle d} less than c {\displaystyle c} ; it may then be shown arithmetically that ( a − b ) ( c − d ) = a c + b d − a d − b c {\displaystyle (a-b)(c-d)=ac+bd-ad-bc} . 
Peacock's principle says that the form on the left side is equivalent to the form on the right side, not only when the said restrictions of being less are removed, but when a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} denote the most general algebraic symbol. It means that a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} may be rational fractions, or surds, or imaginary quantities, or indeed operators such as d d x {\displaystyle {\frac {d}{dx}}} . The equivalence is not established by means of the nature of the quantity denoted; the equivalence is assumed to be true, and then it is attempted to find the different interpretations which may be put on the symbol. It is not difficult to see that the problem before us involves the fundamental problem of a rational logic or theory of knowledge; namely, how are we able to ascend from particular truths to more general truths. If a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} denote integer numbers, of which b {\displaystyle b} is less than a {\displaystyle a} and d {\displaystyle d} less than c {\displaystyle c} , then ( a − b ) ( c − d ) = a c + b d − a d − b c {\displaystyle (a-b)(c-d)=ac+bd-ad-bc} . It is first seen that the above restrictions may be removed, and still the above equation holds. But the antecedent is still too narrow; the true scientific problem consists in specifying the meaning of the symbols, which, and only which, will admit of the forms being equal. It is not to find "some meanings", but the "most general meaning", which allows the equivalence to be true. Let us examine some other cases; we shall find that Peacock's principle is not a solution of the difficulty; the great logical process of generalization cannot be reduced to any such easy and arbitrary procedure. 
When a {\displaystyle a} , m {\displaystyle m} , n {\displaystyle n} denote integer numbers, it can be shown that a m a n = a m + n {\displaystyle a^{m}a^{n}=a^{m+n}} . According to Peacock the form on the left is always to be equal to the form on the right, and the meanings of a {\displaystyle a} , m {\displaystyle m} , n {\displaystyle n} are to be found by interpretation. Suppose that a {\displaystyle a} takes the form of the incommensurate quantity e {\displaystyle e} , the base of the natural system of logarithms. A number is a degraded form of a complex quantity p + q − 1 {\displaystyle p+q^{\sqrt {-1}}} and a complex quantity is a degraded form of a quaternion; consequently one meaning which may be assigned to m {\displaystyle m} and n {\displaystyle n} is that of quaternion. Peacock's principle would lead us to suppose that e m e n = e m + n {\displaystyle e^{m}e^{n}=e^{m+n}} , m {\displaystyle m} and n {\displaystyle n} denoting quaternions; but that is just what William Rowan Hamilton, the inventor of the quaternion generalization, denies. There are reasons for believing that he was mistaken, and that the forms remain equivalent even under that extreme generalization of m {\displaystyle m} and n {\displaystyle n} ; but the point is this: it is not a question of conventional definition and formal truth; it is a question of objective definition and real truth. Let the symbols have the prescribed meaning, does or does not the equivalence still hold? And if it does not hold, what is the higher or more complex form which the equivalence assumes? Or does such equivalence form even exist? == Private life == Politically, George Peacock was a Whig. He married Frances Elizabeth, the daughter of William Selwyn. They had no children. His last public act was to attend a meeting of the university reform commission. He died in Ely on 8 November 1858, in the 68th year of his age, and was buried in Ely cemetery. == Bibliography == A Treatise on Algebra (J. & J. J. 
Deighton, 1830). A Treatise on Algebra (2nd ed., Scripta Mathematica, 1842–1845). Vol. 1: Arithmetical Algebra (1842). Vol. 2: On Symbolical Algebra and its Applications to the Geometry of Position (1845) Life of Thomas Young: M.D., F.R.S., &c.; and One of the Eight Foreign Associates of the National Institute of France (John Murray, 1855). == References == == Sources == Macfarlane, Alexander (2009) [1916]. Lectures on Ten British Mathematicians of the Nineteenth Century. Mathematical monographs. Vol. 17. Cornell University Library. ISBN 978-1-112-28306-2. (complete text Archived 29 July 2017 at the Wayback Machine at Project Gutenberg) == External links == O'Connor, John J.; Robertson, Edmund F., "George Peacock", MacTutor History of Mathematics Archive, University of St Andrews Biography of Peacock
Wikipedia/Arithmetical_algebra
George Peacock FRS (9 April 1791 – 8 November 1858) was an English mathematician and Anglican cleric. He founded what has been called the British algebra of logic. == Early life == Peacock was born on 9 April 1791 at Thornton Hall, Denton, near Darlington, County Durham. His father, Thomas Peacock, was a priest of the Church of England, incumbent and for 50 years curate of the parish of Denton, where he also kept a school. In early life, Peacock did not show any precocity of genius. He was more remarkable for daring feats of climbing than for any special attachment to study. Initially, he received his elementary education from his father and then at Sedbergh School, and at 17 years of age, he was sent to Richmond School under James Tate, a graduate of Cambridge University. At this school he distinguished himself greatly both in classics and in the rather elementary mathematics then required for entrance at Cambridge. In 1809 he became a student of Trinity College, Cambridge. In 1812 Peacock took the rank of Second Wrangler, and the second Smith's prize, the senior wrangler being John Herschel. Two years later, he became a candidate for a fellowship in his college and won it immediately, partly by means of his extensive and accurate knowledge of the classics. A fellowship then meant about £200 a year, tenable for seven years provided the Fellow did not marry meanwhile, and capable of being extended after the seven years provided the Fellow took clerical orders, which Peacock did in 1819. == Mathematical career == The year after taking a Fellowship, Peacock was appointed a tutor and lecturer of his college, which position he continued to hold for many years. Peacock, in common with many other students of his own standing, was profoundly impressed with the need to reform Cambridge's neglect of the differential notation for calculus, and while still an undergraduate formed a league with Babbage and Herschel to adopt measures to bring it about.
In 1815 they formed what they called the Analytical Society, the object of which was stated to be to advocate the d'ism of the Continent versus the dot-age of the university. The first movement on the part of the Analytical Society was to translate from the French the smaller work of Lacroix on the differential and integral calculus; it was published in 1816. At that time the French language had the best manuals, as well as the greatest works on mathematics. Peacock followed up the translation with a volume containing a copious Collection of Examples of the Application of the Differential and Integral Calculus, which was published in 1820. The sale of both books was rapid, and contributed materially to further the object of the Society. In those days, high wranglers of one year became the examiners of the mathematical tripos three or four years afterwards. Peacock was appointed an examiner in 1817, and he did not fail to make use of the position as a powerful lever to advance the cause of reform. In his questions set for the examination the differential notation was for the first time officially employed in Cambridge. The innovation did not escape censure, but he wrote to a friend as follows: "I assure you that I shall never cease to exert myself to the utmost in the cause of reform, and that I will never decline any office which may increase my power to effect it. I am nearly certain of being nominated to the office of Moderator in the year 1818-1819, and as I am an examiner in virtue of my office, for the next year I shall pursue a course even more decided than hitherto, since I shall feel that men have been prepared for the change, and will then be enabled to have acquired a better system by the publication of improved elementary books. I have considerable influence as a lecturer, and I will not neglect it.
It is by silent perseverance only, that we can hope to reduce the many-headed monster of prejudice and make the University answer her character as the loving mother of good learning and science." These few sentences give an insight into the character of Peacock: he was an ardent reformer and a few years brought success to the cause of the Analytical Society. Another reform at which Peacock labored was the teaching of algebra. In 1830 he published A Treatise on Algebra which had for its object the placing of algebra on a true scientific basis, adequate for the development which it had received at the hands of the Continental mathematicians. To elevate astronomical science the Astronomical Society of London was founded, and the three reformers Peacock, Babbage and Herschel were again prime movers in the undertaking. Peacock was one of the most zealous promoters of an astronomical observatory at Cambridge, and one of the founders of the Philosophical Society of Cambridge. In 1831 the British Association for the Advancement of Science (prototype of the American, French and Australasian Associations) held its first meeting in the ancient city of York. One of the first resolutions adopted was to procure reports on the state and progress of particular sciences, to be drawn up from time to time by competent persons for the information of the annual meetings, and the first to be placed on the list was a report on the progress of mathematical science. Whewell, the mathematician and philosopher, was a vice-president of the meeting: he was instructed to select the reporter. He first asked William Rowan Hamilton, who declined; he then asked Peacock, who accepted. Peacock had his report ready for the third meeting of the Association, which was held in Cambridge in 1833; although limited to Algebra, Trigonometry, and the Arithmetic of Sines, it is one of the best of the long series of valuable reports which have been prepared for and printed by the Association. 
In 1837 Peacock was appointed Lowndean Professor of Astronomy in the University of Cambridge, the chair afterwards occupied by Adams, the co-discoverer of Neptune, and later occupied by Robert Ball, celebrated for his Theory of Screws. An object of reform was the statutes of the university; he worked hard at it and was made a member of a commission appointed by the Government for the purpose. He was elected a Fellow of the Royal Society in January 1818. In 1842, Peacock was elected as a member of the American Philosophical Society. == Clerical career == He was ordained as a deacon in 1819, a priest in 1822 and appointed vicar of Wymeswold in Leicestershire in 1826 (until 1835). In 1839 he was appointed Dean of Ely cathedral, Cambridgeshire, a position he held for the rest of his life, some 20 years. Together with the architect George Gilbert Scott he undertook a major restoration of the cathedral building. This included the installation of the boarded ceiling. While holding this position he wrote a text book on algebra, A Treatise on Algebra (1830). Later, a second edition appeared in two volumes, the one called Arithmetical Algebra (1842) and the other On Symbolical Algebra and its Applications to the Geometry of Position (1845). == Symbolical algebra == Peacock's main contribution to mathematical analysis is his attempt to place algebra on a strictly logical basis. He founded what has been called the British algebra of logic; to which Gregory, De Morgan and Boole belonged. His answer to Maseres and Frend was that the science of algebra consisted of two parts—arithmetical algebra and symbolical algebra—and that they erred in restricting the science to the arithmetical part. 
His view of arithmetical algebra is as follows: "In arithmetical algebra we consider symbols as representing numbers, and the operations to which they are submitted as included in the same definitions as in common arithmetic; the signs + {\displaystyle +} and − {\displaystyle -} denote the operations of addition and subtraction in their ordinary meaning only, and those operations are considered as impossible in all cases where the symbols subjected to them possess values which would render them so in case they were replaced by digital numbers; thus in expressions such as a + b {\displaystyle a+b} we must suppose a {\displaystyle a} and b {\displaystyle b} to be quantities of the same kind; in others, like a − b {\displaystyle a-b} , we must suppose a {\displaystyle a} greater than b {\displaystyle b} and therefore homogeneous with it; in products and quotients, like a b {\displaystyle ab} and a b {\displaystyle {\frac {a}{b}}} we must suppose the multiplier and divisor to be abstract numbers; all results whatsoever, including negative quantities, which are not strictly deducible as legitimate conclusions from the definitions of the several operations must be rejected as impossible, or as foreign to the science." Peacock's principle may be stated thus: the elementary symbol of arithmetical algebra denotes a digital, i.e., an integer number; and every combination of elementary symbols must reduce to a digital number, otherwise it is impossible or foreign to the science. If a {\displaystyle a} and b {\displaystyle b} are numbers, then a + b {\displaystyle a+b} is always a number; but a − b {\displaystyle a-b} is a number only when b {\displaystyle b} is less than a {\displaystyle a} . Again, under the same conditions, a b {\displaystyle ab} is always a number, but a b {\displaystyle {\frac {a}{b}}} is really a number only when b {\displaystyle b} is an exact divisor of a {\displaystyle a} . 
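Peacock's restriction, that every combination of symbols must reduce to a digital number or be rejected as impossible, can be modelled as partial operations. A minimal Python sketch (the function names and error wording are illustrative, not Peacock's):

```python
# Arithmetical algebra as partial operations on non-negative integers:
# a - b and a / b are "impossible" outside Peacock's restrictions.
def arith_sub(a, b):
    if b > a:
        raise ValueError("a - b is foreign to the science unless b <= a")
    return a - b

def arith_div(a, b):
    if a % b != 0:
        raise ValueError("a / b is foreign to the science unless b divides a")
    return a // b

assert arith_sub(7, 3) == 4
assert arith_div(12, 4) == 3

# The restricted cases are rejected rather than extended to new number kinds.
for bad in (lambda: arith_sub(3, 7), lambda: arith_div(7, 3)):
    try:
        bad()
        raise AssertionError("expected the operation to be rejected")
    except ValueError:
        pass
```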
Hence the following dilemma: Either a b {\displaystyle {\frac {a}{b}}} must be held to be an impossible expression in general, or else the meaning of the fundamental symbol of algebra must be extended so as to include rational fractions. If the former horn of the dilemma is chosen, arithmetical algebra becomes a mere shadow; if the latter horn is chosen, the operations of algebra cannot be defined on the supposition that the elementary symbol is an integer number. Peacock attempts to get out of the difficulty by supposing that a symbol which is used as a multiplier is always an integer number, but that a symbol in the place of the multiplicand may be a fraction. For instance, in a b {\displaystyle ab} , a {\displaystyle a} can denote only an integer number, but b {\displaystyle b} may denote a rational fraction. Now there is no more fundamental principle in arithmetical algebra than that a b = b a {\displaystyle ab=ba} ; which would be illegitimate on Peacock's principle. One of the earliest English writers on arithmetic is Robert Recorde, who dedicated his work to King Edward VI. The author gives his treatise the form of a dialogue between master and scholar. The scholar battles long over this difficulty—that multiplying a thing could make it less. The master attempts to explain the anomaly by reference to proportion; that the product due to a fraction bears the same proportion to the thing multiplied that the fraction bears to unity. But the scholar is not satisfied and the master goes on to say: "If I multiply by more than one, the thing is increased; if I take it but once, it is not changed, and if I take it less than once, it cannot be so much as it was before. Then seeing that a fraction is less than one, if I multiply by a fraction, it follows that I do take it less than once." Whereupon the scholar replies, "Sir, I do thank you much for this reason, – and I trust that I do perceive the thing." 
The fact is that even in arithmetic the two processes of multiplication and division are generalized into a common multiplication; and the difficulty consists in passing from the original idea of multiplication to the generalized idea of a tensor, which idea includes compressing the magnitude as well as stretching it. Let m {\displaystyle m} denote an integer number; the next step is to gain the idea of the reciprocal of m {\displaystyle m} , not as 1 m {\displaystyle {\frac {1}{m}}} but simply as / m {\displaystyle /m} . When m {\displaystyle m} and / n {\displaystyle /n} are compounded we get the idea of a rational fraction; for in general m / n {\displaystyle m/n} will not reduce to a number nor to the reciprocal of a number. Suppose, however, that we pass over this objection; how does Peacock lay the foundation for general algebra? He calls it symbolical algebra, and he passes from arithmetical algebra to symbolical algebra in the following manner: "Symbolical algebra adopts the rules of arithmetical algebra but removes altogether their restrictions; thus symbolical subtraction differs from the same operation in arithmetical algebra in being possible for all relations of value of the symbols or expressions employed. 
All the results of arithmetical algebra which are deduced by the application of its rules, and which are general in form though particular in value, are results likewise of symbolical algebra where they are general in value as well as in form; thus the product of a m {\displaystyle a^{m}} and a n {\displaystyle a^{n}} which is a m + n {\displaystyle a^{m+n}} when m {\displaystyle m} and n {\displaystyle n} are whole numbers and therefore general in form though particular in value, will be their product likewise when m {\displaystyle m} and n {\displaystyle n} are general in value as well as in form; the series for ( a + b ) n {\displaystyle (a+b)^{n}} determined by the principles of arithmetical algebra when n {\displaystyle n} is any whole number, if it be exhibited in a general form, without reference to a final term, may be shown upon the same principle to be the equivalent series for ( a + b ) n {\displaystyle (a+b)^{n}} when n {\displaystyle n} is general both in form and value." The principle here indicated by means of examples was named by Peacock the "principle of the permanence of equivalent forms," and at page 59 of the Symbolical Algebra it is thus enunciated: "Whatever algebraic forms are equivalent when the symbols are general in form, but specific in value, will be equivalent likewise when the symbols are general in value as well as in form." For example, let a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} denote any integer numbers, but subject to the restrictions that b {\displaystyle b} is less than a {\displaystyle a} , and d {\displaystyle d} less than c {\displaystyle c} ; it may then be shown arithmetically that ( a − b ) ( c − d ) = a c + b d − a d − b c {\displaystyle (a-b)(c-d)=ac+bd-ad-bc} .
Peacock's principle says that the form on the left side is equivalent to the form on the right side, not only when the said restrictions of being less are removed, but when a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} denote the most general algebraic symbol. It means that a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} may be rational fractions, or surds, or imaginary quantities, or indeed operators such as d d x {\displaystyle {\frac {d}{dx}}} . The equivalence is not established by means of the nature of the quantity denoted; the equivalence is assumed to be true, and then it is attempted to find the different interpretations which may be put on the symbol. It is not difficult to see that the problem before us involves the fundamental problem of a rational logic or theory of knowledge; namely, how are we able to ascend from particular truths to more general truths. If a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} , d {\displaystyle d} denote integer numbers, of which b {\displaystyle b} is less than a {\displaystyle a} and d {\displaystyle d} less than c {\displaystyle c} , then ( a − b ) ( c − d ) = a c + b d − a d − b c {\displaystyle (a-b)(c-d)=ac+bd-ad-bc} . It is first seen that the above restrictions may be removed, and still the above equation holds. But the antecedent is still too narrow; the true scientific problem consists in specifying the meaning of the symbols, which, and only which, will admit of the forms being equal. It is not to find "some meanings", but the "most general meaning", which allows the equivalence to be true. Let us examine some other cases; we shall find that Peacock's principle is not a solution of the difficulty; the great logical process of generalization cannot be reduced to any such easy and arbitrary procedure. 
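The permanence claim for this particular identity is easy to check numerically. A small Python sketch verifying (a − b)(c − d) = ac + bd − ad − bc first under Peacock's arithmetical restrictions and then for negative, rational and complex values (the sample values are arbitrary):

```python
from fractions import Fraction

# Check the identity for values well outside the original restrictions
# (b < a, d < c, all positive integers).
def forms_agree(a, b, c, d):
    return (a - b) * (c - d) == a*c + b*d - a*d - b*c

samples = [
    (7, 3, 9, 4),                                            # the arithmetical case
    (3, 7, 4, 9),                                            # restrictions removed
    (Fraction(1, 2), Fraction(5, 3), Fraction(-2, 7), 4),    # rational fractions
    (1 + 2j, -3j, 2 - 1j, 2),                                # "imaginary quantities"
]
assert all(forms_agree(a, b, c, d) for a, b, c, d in samples)
```

Such checks confirm instances of the equivalence but, as the text argues, they do not settle the logical question of which meanings of the symbols preserve it.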
When a {\displaystyle a} , m {\displaystyle m} , n {\displaystyle n} denote integer numbers, it can be shown that a m a n = a m + n {\displaystyle a^{m}a^{n}=a^{m+n}} . According to Peacock the form on the left is always to be equal to the form on the right, and the meanings of a {\displaystyle a} , m {\displaystyle m} , n {\displaystyle n} are to be found by interpretation. Suppose that a {\displaystyle a} takes the form of the incommensurate quantity e {\displaystyle e} , the base of the natural system of logarithms. A number is a degraded form of a complex quantity p + q − 1 {\displaystyle p+q{\sqrt {-1}}} and a complex quantity is a degraded form of a quaternion; consequently one meaning which may be assigned to m {\displaystyle m} and n {\displaystyle n} is that of quaternion. Peacock's principle would lead us to suppose that e m e n = e m + n {\displaystyle e^{m}e^{n}=e^{m+n}} , m {\displaystyle m} and n {\displaystyle n} denoting quaternions; but that is just what William Rowan Hamilton, the inventor of the quaternion generalization, denies. There are reasons for believing that he was mistaken, and that the forms remain equivalent even under that extreme generalization of m {\displaystyle m} and n {\displaystyle n} ; but the point is this: it is not a question of conventional definition and formal truth; it is a question of objective definition and real truth. Let the symbols have the prescribed meaning: does or does not the equivalence still hold? And if it does not hold, what is the higher or more complex form which the equivalence assumes? Or does such an equivalent form even exist? == Private life == Politically, George Peacock was a Whig. He married Frances Elizabeth, the daughter of William Selwyn. They had no children. His last public act was to attend a meeting of the university reform commission. He died in Ely on 8 November 1858, in the 68th year of his age, and was buried in Ely cemetery. == Bibliography == A Treatise on Algebra (J. & J. J.
Deighton, 1830). A Treatise on Algebra (2nd ed., Scripta Mathematica, 1842–1845). Vol. 1: Arithmetical Algebra (1842). Vol. 2: On Symbolical Algebra and its Applications to the Geometry of Position (1845) Life of Thomas Young: M.D., F.R.S., &c.; and One of the Eight Foreign Associates of the National Institute of France (John Murray, 1855). == References == == Sources == Macfarlane, Alexander (2009) [1916]. Lectures on Ten British Mathematicians of the Nineteenth Century. Mathematical monographs. Vol. 17. Cornell University Library. ISBN 978-1-112-28306-2. (complete text Archived 29 July 2017 at the Wayback Machine at Project Gutenberg) == External links == O'Connor, John J.; Robertson, Edmund F., "George Peacock", MacTutor History of Mathematics Archive, University of St Andrews Biography of Peacock
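Hamilton's point discussed in the symbolical-algebra section above, that e^m e^n = e^(m+n) fails when m and n are non-commuting quaternions, can be checked numerically. A hypothetical Python sketch using the closed form exp(w + v) = e^w (cos|v| + (v/|v|) sin|v|) for a quaternion with scalar part w and vector part v (helper names are illustrative):

```python
import math

# Quaternions represented as 4-tuples (w, x, y, z).
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qexp(q):
    # exp(w + v) = e^w (cos|v| + (v/|v|) sin|v|)
    w, x, y, z = q
    vnorm = math.sqrt(x*x + y*y + z*z)
    s = math.sin(vnorm) / vnorm if vnorm > 1e-12 else 1.0
    ew = math.exp(w)
    return (ew * math.cos(vnorm), ew * s * x, ew * s * y, ew * s * z)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def close(p, q, tol=1e-9):
    return all(abs(a - b) < tol for a, b in zip(p, q))

m = (0.0, 1.0, 0.0, 0.0)   # pure quaternion along i
n = (0.0, 0.0, 1.0, 0.0)   # pure quaternion along j; m and n do not commute
assert not close(qmul(qexp(m), qexp(n)), qexp(qadd(m, n)))

# When the vector parts are parallel, m and n commute and the law is restored.
n2 = (0.0, 2.0, 0.0, 0.0)
assert close(qmul(qexp(m), qexp(n2)), qexp(qadd(m, n2)))
```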
Wikipedia/Symbolical_algebra
Moderne Algebra is a two-volume German textbook on graduate abstract algebra by Bartel Leendert van der Waerden (1930, 1931), originally based on lectures given by Emil Artin in 1926 and by Emmy Noether (1929) from 1924 to 1928. The English translation of 1949–1950 had the title Modern algebra, though a later, extensively revised edition in 1970 had the title Algebra. The book was one of the first textbooks to use an abstract axiomatic approach to groups, rings, and fields, and was by far the most successful, becoming the standard reference for graduate algebra for several decades. It "had a tremendous impact, and is widely considered to be the major text on algebra in the twentieth century." In 1975 van der Waerden described the sources he drew upon to write the book. In 1997 Saunders Mac Lane recollected the book's influence: Upon its publication it was soon clear that this was the way that algebra should be presented. Its simple but austere style set the pattern for mathematical texts in other subjects, from Banach algebras to topological group theory. [Van der Waerden's] two volumes on modern algebra ... dramatically changed the way algebra is now taught by providing a decisive example of a clear and perspicacious presentation. It is, in my view, the most influential text of algebra of the twentieth century. == Publication history == Moderne Algebra has a rather confusing publication history, because it went through many different editions, several of which were extensively rewritten with chapters and major topics added, deleted, or rearranged. In addition the new editions of first and second volumes were issued almost independently and at different times, and the numbering of the English editions does not correspond to the numbering of the German editions. In 1955 the title was changed from "Moderne Algebra" to "Algebra" following a suggestion of Brandt, with the result that the two volumes of the third German edition do not even have the same title. 
For volume 1, the first German edition was published in 1930, the second in 1937 (with the axiom of choice removed), the third in 1951 (with the axiom of choice reinstated, and with more on valuations). The fourth edition appeared in 1955 (with the title changed to Algebra), the fifth in 1960, the sixth in 1964, the seventh in 1966, the eighth in 1971, the ninth in 1993. For volume 2, the first edition was published in 1931, the second in 1940, the third in 1955 (with the title changed to Algebra), the fourth in 1959 (extensively rewritten, with elimination theory replaced by algebraic functions of 1 variable), the fifth in 1967, and the sixth in 1993. The German editions were all published by Springer. The first English edition was published in 1949–1950 and was a translation of the second German edition. There was a second edition in 1953, and a third edition under the new title Algebra in 1970 translated from the 7th German edition of volume 1 and the 5th German edition of volume 2. The three English editions were originally published by Ungar, though the 3rd English edition was later reprinted by Springer. van der Waerden, Bartel Leendert (1930). Moderne Algebra. Teil I. Die Grundlehren der mathematischen Wissenschaften (in German). Vol. 33. Berlin, New York: Springer-Verlag. ISBN 978-3-540-56799-8. MR 0009016. {{cite book}}: ISBN / Date incompatibility (help) van der Waerden, Bartel Leendert (1931). Moderne Algebra. Teil II. Die Grundlehren der mathematischen Wissenschaften (in German). Vol. 34. Springer-Verlag. ISBN 978-3-540-56801-8. MR 0002841. {{cite book}}: ISBN / Date incompatibility (help) van der Waerden, Bartel Leendert (1949). Modern Algebra. Vol I. Translated by Blum, Fred. New York, N. Y.: Frederick Ungar Publishing Co. MR 0029363. van der Waerden, Bartel Leendert (1950). Modern Algebra. Vol II. Translated by Blum, Fred. New York, N. Y.: Frederick Ungar Publishing Co. van der Waerden, Bartel Leendert (1970). Algebra. Vol 1. 
Translated by Blum, Fred; Schulenberger, John R. New York, N. Y.: Frederick Ungar Publishing Co. ISBN 978-0-387-40624-4. MR 0263582. Reprinted by Springer. van der Waerden, Bartel Leendert (1970). Algebra. Vol. 2. Translated by Schulenberger, John R. (3rd ed.). New York, N. Y.: Frederick Ungar Publishing Co. ISBN 9780387406251. MR 0263583. Reprinted by Springer. There were also Russian editions published in 1976 and 1979, and Japanese editions published in 1959 and 1967–1971. == References == Hahn, H.; Taussky, Olga (1932), "Literaturberichte: Moderne Algebra", Monatshefte für Mathematik und Physik, 39 (1): A11 – A12, doi:10.1007/BF01699101, ISSN 0026-9255, S2CID 197662655 Hofreiter (1936), "Literaturberichte: Moderne Algebra", Monatshefte für Mathematik und Physik, 45 (1): A25 – A26, doi:10.1007/BF01708057, ISSN 0026-9255, S2CID 124462638 Hofreiter (1941), "Literaturberichte: Moderne Algebra", Monatshefte für Mathematik und Physik, 49 (1): A21, doi:10.1007/BF01707351, ISSN 0026-9255 Ore, Oystein (1931), "Recent Publications: Reviews: Moderne Algebra", The American Mathematical Monthly, 38 (4): 226, doi:10.2307/2301771, ISSN 0002-9890, JSTOR 2301771 Ore, Oystein (1932), "Book Review: Moderne Algebra", Bulletin of the American Mathematical Society, 38 (5): 327–329, doi:10.1090/S0002-9904-1932-05385-X, ISSN 0002-9904 Ore, Oystein (1938), "Book Review: Moderne Algebra", Bulletin of the American Mathematical Society, 44 (5): 320–321, doi:10.1090/S0002-9904-1938-06737-7, ISSN 0002-9904 Noether, Emmy (1929), "Hyperkomplexe Größen und Darstellungstheorie", Mathematische Zeitschrift, 30: 641–692, doi:10.1007/BF01187794, ISSN 0025-5874, S2CID 120464373 Schlote, K.-H. (2005), "Chapter 70 B. L. van der Waerden, Moderne Algebra", in Grattan-Guinness, Ivor (ed.), Landmark writings in western mathematics 1640--1940, Elsevier B. V., Amsterdam, pp. 
901–916, ISBN 978-0-444-50871-3, MR 2169816 Taussky, Olga (1933), "Literaturberichte: Moderne Algebra", Monatshefte für Mathematik und Physik, 40 (1): A3 – A4, doi:10.1007/BF01708891, ISSN 0026-9255, S2CID 124038027
Wikipedia/Moderne_Algebra
In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring", or, typically, "Noetherian integral domain". In the case of a single equation, the problem splits into two parts. First, the ideal membership problem, which consists, given a non-homogeneous equation a 1 x 1 + ⋯ + a k x k = b {\displaystyle a_{1}x_{1}+\cdots +a_{k}x_{k}=b} with a 1 , … , a k {\displaystyle a_{1},\ldots ,a_{k}} and b in a given ring R, in deciding whether it has a solution with x 1 , … , x k {\displaystyle x_{1},\ldots ,x_{k}} in R, and, if one exists, in providing it. This amounts to deciding whether b belongs to the ideal generated by the ai. The simplest instance of this problem is, for k = 1 and b = 1, to decide whether a1 is a unit in R. The syzygy problem consists, given k elements a 1 , … , a k {\displaystyle a_{1},\ldots ,a_{k}} in R, in providing a system of generators of the module of the syzygies of ( a 1 , … , a k ) , {\displaystyle (a_{1},\ldots ,a_{k}),} that is, a system of generators of the submodule of those elements ( x 1 , … , x k ) {\displaystyle (x_{1},\ldots ,x_{k})} in Rk that are solutions of the homogeneous equation a 1 x 1 + ⋯ + a k x k = 0. {\displaystyle a_{1}x_{1}+\cdots +a_{k}x_{k}=0.} The simplest case, when k = 1, amounts to finding a system of generators of the annihilator of a1. Given a solution of the ideal membership problem, one obtains all the solutions by adding to it the elements of the module of syzygies. In other words, all the solutions are provided by the solution of these two partial problems. In the case of several equations, the same decomposition into subproblems occurs. The first problem becomes the submodule membership problem. The second one is also called the syzygy problem.
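For polynomial rings over the rationals, the ideal membership problem can be decided with a Gröbner basis. A short sketch using SymPy (the example ideal and polynomials are arbitrary choices, not from the source):

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')

# b is in the ideal (a1, a2) over Q[x, y] iff its remainder on division
# by a Groebner basis of the ideal is zero.
a1, a2 = x - y**2, y**3 - 1
G = groebner([a1, a2], x, y, order='lex')

def in_ideal(f):
    _, r = reduced(f, list(G), x, y, order='lex')
    return r == 0

# A visible combination of a1 and a2 is certainly a member; the quotients
# returned by `reduced` then solve the non-homogeneous equation.
assert in_ideal(x*(y**3 - 1) + y*(x - y**2))
assert not in_ideal(x + 1)
```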
A ring such that there are algorithms for the arithmetic operations (addition, subtraction, multiplication) and for the above problems may be called a computable ring, or effective ring. One may also say that linear algebra on the ring is effective. The article considers the main rings for which linear algebra is effective. == Generalities == To be able to solve the syzygy problem, it is necessary that the module of syzygies be finitely generated, because it is impossible to output an infinite list. Therefore, the problems considered here make sense only for a Noetherian ring, or at least a coherent ring. In fact, this article is restricted to Noetherian integral domains because of the following result. Given a Noetherian integral domain, if there are algorithms to solve the ideal membership problem and the syzygy problem for a single equation, then one may deduce from them algorithms for the similar problems concerning systems of equations. This theorem is useful to prove the existence of algorithms. However, in practice, the algorithms for the systems are designed directly. A field is an effective ring as soon as one has algorithms for addition, subtraction, multiplication, and computation of multiplicative inverses. In fact, solving the submodule membership problem is what is commonly called solving the system, and solving the syzygy problem is the computation of the null space of the matrix of a system of linear equations. The basic algorithm for both problems is Gaussian elimination. === Properties of effective rings === Let R be an effective commutative ring. There is an algorithm for testing if an element a is a zero divisor: this amounts to solving the linear equation ax = 0. There is an algorithm for testing if an element a is a unit, and if it is, computing its inverse: this amounts to solving the linear equation ax = 1.
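As a concrete instance of these two tests, in the effective ring Z/nZ a zero divisor is detected by solving ax = 0 and a unit by solving ax = 1. A minimal Python sketch (the helper names are illustrative):

```python
from math import gcd

# In Z/nZ, a nonzero a is a zero divisor iff gcd(a, n) > 1; then
# x = n // gcd(a, n) is a nonzero solution of a*x = 0 (mod n).
def is_zero_divisor(a, n):
    a %= n
    return a != 0 and gcd(a, n) != 1

# a is a unit iff gcd(a, n) == 1; pow(a, -1, n) solves a*x = 1 (mod n).
def inverse_or_none(a, n):
    return pow(a, -1, n) if gcd(a % n, n) == 1 else None

n = 12
assert is_zero_divisor(8, n)            # 8 * 3 = 24 = 0 (mod 12)
assert inverse_or_none(5, n) == 5       # 5 * 5 = 25 = 1 (mod 12)
assert inverse_or_none(8, n) is None    # a zero divisor has no inverse
```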
Given an ideal I generated by a1, ..., ak, there is an algorithm for testing if two elements of R have the same image in R/I: testing the equality of the images of a and b amounts to solving the equation a = b + a1 z1 + ⋯ + ak zk; linear algebra is effective over R/I: for solving a linear system over R/I, it suffices to write it over R and to add to one side of the ith equation a1 zi,1 + ⋯ + ak zi, k (for i = 1, ...), where the zi, j are new unknowns. Linear algebra is effective on the polynomial ring R [ x 1 , … , x n ] {\displaystyle R[x_{1},\ldots ,x_{n}]} if and only if one has an algorithm that computes an upper bound of the degree of the polynomials that may occur when solving linear systems of equations: if one has solving algorithms, their outputs give the degrees. Conversely, if one knows an upper bound of the degrees occurring in a solution, one may write the unknown polynomials as polynomials with unknown coefficients. Then, as two polynomials are equal if and only if their coefficients are equal, the equations of the problem become linear equations in the coefficients, which can be solved over an effective ring. == Over the integers or a principal ideal domain == There are algorithms to solve all the problems addressed in this article over the integers. In other words, linear algebra is effective over the integers; see Linear Diophantine system for details. More generally, linear algebra is effective on a principal ideal domain if there are algorithms for addition, subtraction and multiplication, together with algorithms for solving equations of the form ax = b, that is, testing whether a is a divisor of b and, if this is the case, computing the quotient b/a, and for computing Bézout's identity, that is, given a and b, computing s and t such that as + bt is a greatest common divisor of a and b. It is useful to extend to the general case the notion of a unimodular matrix by calling unimodular a square matrix whose determinant is a unit. 
This means that the determinant is invertible, and implies that the unimodular matrices are exactly the invertible matrices all of whose inverse's entries belong to the domain. The above two algorithms imply that given a and b in the principal ideal domain, there is an algorithm computing a unimodular matrix [ s t u v ] {\displaystyle {\begin{bmatrix}s&t\\u&v\end{bmatrix}}} such that [ s t u v ] [ a b ] = [ gcd ( a , b ) 0 ] . {\displaystyle {\begin{bmatrix}s&t\\u&v\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}={\begin{bmatrix}\gcd(a,b)\\0\end{bmatrix}}.} (This algorithm is obtained by taking for s and t the coefficients of Bézout's identity, and for u and v the quotients of −b and a by as + bt; this choice implies that the determinant of the square matrix is 1.) Having such an algorithm, the Smith normal form of a matrix may be computed exactly as in the integer case, and this suffices to apply the method described in Linear Diophantine system to obtain an algorithm for solving every linear system. The main case where this is commonly used is the case of linear systems over the ring of univariate polynomials over a field. In this case, the extended Euclidean algorithm may be used for computing the above unimodular matrix; see Polynomial greatest common divisor § Bézout's identity and extended GCD algorithm for details. == Over polynomial rings over a field == Linear algebra is effective on a polynomial ring k [ x 1 , … , x n ] {\displaystyle k[x_{1},\ldots ,x_{n}]} over a field k. This was first proved in 1926 by Grete Hermann. The algorithms resulting from Hermann's results are only of historical interest, as their computational complexity is too high to allow practical computer computation. Proofs that linear algebra is effective on polynomial rings, and the corresponding computer implementations, are presently all based on Gröbner basis theory. == References == == External links == Cox, David A.; Little, John; O'Shea, Donal (1997). 
Ideals, Varieties, and Algorithms (second ed.). Springer-Verlag. ISBN 0-387-94680-2. Zbl 0861.13012. Aschenbrenner, Matthias (2004). "Ideal membership in polynomial rings over the integers" (PDF). J. Amer. Math. Soc. 17 (2). AMS: 407–441. doi:10.1090/S0894-0347-04-00451-5. S2CID 8176473. Retrieved 23 October 2013.
Wikipedia/Linear_equation_over_a_ring
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations. Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations",: x  even though it is a comparatively small field. 
Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms.: ix  Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. Numerical linear algebra's central concern with developing algorithms that do not introduce errors when applied to real data on a finite precision computer is often achieved by iterative methods rather than direct ones. == History == Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations. The first serious attempt to minimize computer error in the application of algorithms to real data is John von Neumann and Herman Goldstine's work in 1947. The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems. == Matrix decompositions == === Partitioned matrices === For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. 
For example, when solving the linear system x = A − 1 b {\displaystyle x=A^{-1}b} , rather than understanding x as the product of A − 1 {\displaystyle A^{-1}} with b, it is helpful to think of x as the vector of coefficients in the linear expansion of b in the basis formed by the columns of A.: 8  Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix A, and another over the rows of A. For example, for matrices A m × n {\displaystyle A^{m\times n}} and vectors x n × 1 {\displaystyle x^{n\times 1}} and y m × 1 {\displaystyle y^{m\times 1}} , we could use the column partitioning perspective to compute y := Ax + y as a sequence of column updates y := xj aj + y for j = 1, …, n, where aj denotes the jth column of A. === Singular value decomposition === The singular value decomposition of a matrix A m × n {\displaystyle A^{m\times n}} is A = U Σ V ∗ {\displaystyle A=U\Sigma V^{\ast }} where U and V are unitary, and Σ {\displaystyle \Sigma } is diagonal. The diagonal entries of Σ {\displaystyle \Sigma } are called the singular values of A. Because singular values are the square roots of the eigenvalues of A A ∗ {\displaystyle AA^{\ast }} , there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods;: 36  perhaps the most common method involves Householder procedures.: 253  === QR factorization === The QR factorization of a matrix A m × n {\displaystyle A^{m\times n}} is a matrix Q m × m {\displaystyle Q^{m\times m}} and a matrix R m × n {\displaystyle R^{m\times n}} so that A = QR, where Q is orthogonal and R is upper triangular.: 50 : 223  The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation. 
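As an illustration, the classical Gram–Schmidt process can be sketched in a few lines of NumPy. This is the textbook form, not a production implementation; classical Gram–Schmidt is known to be numerically less stable than the modified variant or Householder reflections:

```python
import numpy as np

def classical_gram_schmidt(A):
    """Reduced QR factorization A = Q R via classical Gram-Schmidt.

    A is m x n with linearly independent columns; Q is m x n with
    orthonormal columns and R is n x n upper triangular.
    """
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]  # projection coefficient
            v -= R[i, j] * Q[:, i]       # subtract the projection
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```

Applying it to any matrix with independent columns reproduces A as Q @ R, with Q's columns orthonormal and R upper triangular.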
The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm). === LU factorization === An LU factorization of a matrix A consists of a lower triangular matrix L and an upper triangular matrix U so that A = LU. The matrix U is found by an upper triangularization procedure which involves left-multiplying A by a series of matrices M 1 , … , M n − 1 {\displaystyle M_{1},\ldots ,M_{n-1}} to form the product M n − 1 ⋯ M 1 A = U {\displaystyle M_{n-1}\cdots M_{1}A=U} , so that equivalently L = M 1 − 1 ⋯ M n − 1 − 1 {\displaystyle L=M_{1}^{-1}\cdots M_{n-1}^{-1}} .: 147 : 96  === Eigenvalue decomposition === The eigenvalue decomposition of a matrix A m × m {\displaystyle A^{m\times m}} is A = X Λ X − 1 {\displaystyle A=X\Lambda X^{-1}} , where the columns of X are the eigenvectors of A, and Λ {\displaystyle \Lambda } is a diagonal matrix the diagonal entries of which are the corresponding eigenvalues of A.: 33  There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.: 192  == Algorithms == === Gaussian elimination === From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix A into its LU factorization, which Gaussian elimination accomplishes by left-multiplying A by a succession of matrices L m − 1 ⋯ L 2 L 1 A = U {\displaystyle L_{m-1}\cdots L_{2}L_{1}A=U} until U is upper triangular and L is lower triangular, where L ≡ L 1 − 1 L 2 − 1 ⋯ L m − 1 − 1 {\displaystyle L\equiv L_{1}^{-1}L_{2}^{-1}\cdots L_{m-1}^{-1}} .: 148  Naive programs for Gaussian elimination are notoriously highly unstable, and produce huge errors when applied to matrices with many significant digits. 
The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.: 151  === Solutions of linear systems === Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. In order to solve the linear system x = A − 1 b {\displaystyle x=A^{-1}b} , the traditional algebraic approach is to understand x as the product of A − 1 {\displaystyle A^{-1}} with b. Numerical linear algebra instead interprets x as the vector of coefficients of the linear expansion of b in the basis formed by the columns of A.: 8  Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and the vectors x and b, which may make one factorization much easier to obtain than others. If A = QR is a QR factorization of A, then equivalently R x = Q ∗ b {\displaystyle Rx=Q^{\ast }b} . This is as easy to compute as a matrix factorization.: 54  If A = X Λ X − 1 {\displaystyle A=X\Lambda X^{-1}} is an eigendecomposition of A, and we wish to solve b = Ax, then with b ′ = X − 1 b {\displaystyle b'=X^{-1}b} and x ′ = X − 1 x {\displaystyle x'=X^{-1}x} , we have b ′ = Λ x ′ {\displaystyle b'=\Lambda x'} .: 33  This is closely related to the solution of a linear system using the singular value decomposition, because the singular values of a matrix are the square roots of the eigenvalues of its Gram matrix A ∗ A {\displaystyle A^{*}A} (and, for a normal matrix, the absolute values of its eigenvalues). And if A = LU is an LU factorization of A, then Ax = b can be solved using the triangular systems Ly = b and Ux = y.: 147 : 99  === Least squares optimisation === Matrix decompositions suggest a number of ways to solve the linear system r = b − Ax where we seek to minimize r, as in the regression problem. 
The QR algorithm solves this problem by computing the reduced QR factorization of A and rearranging to obtain R ^ x = Q ^ ∗ b {\displaystyle {\widehat {R}}x={\widehat {Q}}^{\ast }b} . This upper triangular system can then be solved for x. The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD decomposition A = U ^ Σ ^ V ∗ {\displaystyle A={\widehat {U}}{\widehat {\Sigma }}V^{\ast }} and then computing the vector U ^ ∗ b {\displaystyle {\widehat {U}}^{\ast }b} , we reduce the least squares problem to a simple diagonal system.: 84  The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram–Schmidt algorithm and Householder methods. == Conditioning and stability == Suppose that a problem is a function f : X → Y {\displaystyle f:X\to Y} , where X is a normed vector space of data and Y is a normed vector space of solutions. For some data point x ∈ X {\displaystyle x\in X} , the problem is said to be ill-conditioned if a small perturbation in x produces a large change in the value of f(x). We can quantify this by defining a condition number which represents how well-conditioned a problem is, defined as κ ^ = lim δ → 0 sup ‖ δ x ‖ ≤ δ ‖ δ f ‖ ‖ δ x ‖ . {\displaystyle {\widehat {\kappa }}=\lim _{\delta \to 0}\sup _{\|\delta x\|\leq \delta }{\frac {\|\delta f\|}{\|\delta x\|}}.} Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results. 
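This loss of accuracy is easy to reproduce. The sketch below uses NumPy and the Hilbert matrix, a standard example of an ill-conditioned matrix; the magnitudes quoted in the comments are indicative only:

```python
import numpy as np

# The Hilbert matrix H[i, j] = 1 / (i + j + 1) is a classical example
# of an ill-conditioned matrix.
n = 8
hilbert = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

# Condition numbers: the identity is perfectly conditioned, hilbert is not.
print(np.linalg.cond(np.eye(n)))  # 1.0
print(np.linalg.cond(hilbert))    # roughly 1e10

# Solve H x = b where the exact solution is known to be all ones.
x_true = np.ones(n)
b = hilbert @ x_true
x_computed = np.linalg.solve(hilbert, b)

# Although the residual is tiny, the error in x is amplified by the
# conditioning of the problem, on the order of cond(H) times machine epsilon.
print(np.linalg.norm(x_computed - x_true))
```

The computed solution loses many digits of accuracy even though the solver itself is backward stable, illustrating that conditioning is a property of the problem rather than of the algorithm.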
Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of Householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods like using the singular value decomposition. Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable classical Gram–Schmidt process, which can easily be changed to produce the stable modified Gram–Schmidt.: 140  Another classical finding in numerical linear algebra is that Gaussian elimination is unstable without pivoting, but becomes stable with its introduction. == Iterative methods == There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary m × m {\displaystyle m\times m} matrix require O ( m 3 ) {\displaystyle O(m^{3})} time, which is a surprisingly high floor given that matrices contain only m 2 {\displaystyle m^{2}} numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even when those steps are redundant given a highly structured matrix. 
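The simplest iterative eigenvalue method is power iteration, which shows the pattern shared by the more sophisticated methods: repeatedly apply the matrix and renormalize, so that the iterate converges toward the dominant eigenvector. A minimal sketch, assuming A has a unique eigenvalue of largest magnitude:

```python
import numpy as np

def power_iteration(A, num_steps=500):
    """Estimate the dominant eigenvalue and eigenvector of A.

    Converges when A has a unique eigenvalue of largest magnitude and
    the starting vector has a component along its eigenvector.
    """
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_steps):
        w = A @ v
        v = w / np.linalg.norm(w)  # renormalize at each step
    eigval = v @ A @ v             # Rayleigh quotient estimate
    return eigval, v
```

For a matrix with eigenvalues 3, 1, and 0.5 the iterate converges to the eigenvector for 3; the convergence rate is governed by the ratio of the two largest eigenvalue magnitudes.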
The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method. If A is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If A is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if A is non-symmetric, then we can use Arnoldi iteration. == Software == Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK. More libraries can be found on the List of numerical libraries. == References == == Further reading == Dongarra, Jack; Hammarling, Sven (1990). "Evolution of Numerical Software for Dense Linear Algebra". In Cox, M. G.; Hammarling, S. (eds.). Reliable Numerical Computation. Oxford: Clarendon Press. pp. 297–327. ISBN 0-19-853564-3. Claude Brezinski, Gérard Meurant and Michela Redivo-Zaglia (2022): A Journey through the History of Numerical Linear Algebra, SIAM, ISBN 978-1-61197-722-6. Demmel, J. W. (1997): Applied Numerical Linear Algebra, SIAM. Ciarlet, P. 
G., Miara, B., & Thomas, J. M. (1989): Introduction to Numerical Linear Algebra and Optimization, Cambridge Univ. Press. Trefethen, Lloyd; Bau III, David (1997): Numerical Linear Algebra (1st ed.), SIAM, ISBN 978-0-89871-361-9. Golub, Gene H.; Van Loan, Charles F. (1996): Matrix Computations (3rd ed.), The Johns Hopkins University Press. ISBN 0-8018-5413-X. G. W. Stewart (1998): Matrix Algorithms Vol I: Basic Decompositions, SIAM, ISBN 0-89871-414-1. G. W. Stewart (2001): Matrix Algorithms Vol II: Eigensystems, SIAM, ISBN 0-89871-503-2. Varga, Richard S. (2000): Matrix Iterative Analysis, Springer. Yousef Saad (2003) : Iterative Methods for Sparse Linear Systems, 2nd Ed., SIAM, ISBN 978-0-89871534-7. Raf Vandebril, Marc Van Barel, and Nicola Mastronardi (2008): Matrix Computations and Semiseparable Matrices, Volume 1: Linear systems, Johns Hopkins Univ. Press, ISBN 978-0-8018-8714-7. Raf Vandebril, Marc Van Barel, and Nicola Mastronardi (2008): Matrix Computations and Semiseparable Matrices, Volume 2: Eigenvalue and Singular Value Methods, Johns Hopkins Univ. Press, ISBN 978-0-8018-9052-9. Higham, N. J. (2002): Accuracy and Stability of Numerical Algorithms, SIAM. Higham, N. J. (2008): Functions of Matrices: Theory and Computation, SIAM. David S. Watkins (2008): The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM. Liesen, J., and Strakos, Z. (2012): Krylov Subspace Methods: Principles and Analysis, Oxford Univ. Press. == External links == Freely available software for numerical algebra on the web, composed by Jack Dongarra and Hatem Ltaief, University of Tennessee NAG Library of numerical linear algebra routines Numerical Linear Algebra Group on Twitter (Research group in the United Kingdom) siagla on Twitter (Activity group about numerical linear algebra in the Society for Industrial and Applied Mathematics) The GAMM Activity Group on Applied and Numerical Linear Algebra
Wikipedia/Numerical_solution_of_linear_systems
In mathematics, the solution set of a system of equations or inequality is the set of all its solutions, that is, the values that satisfy all equations and inequalities. Also, the solution set or the truth set of a statement or a predicate is the set of all values that satisfy it. If there is no solution, the solution set is the empty set. == Examples == The solution set of the single equation x = 0 {\displaystyle x=0} is the singleton set { 0 } {\displaystyle \{0\}} . Since there do not exist numbers x {\displaystyle x} and y {\displaystyle y} making the two equations { x + 2 y = 3 , x + 2 y = − 3 {\displaystyle {\begin{cases}x+2y=3,&\\x+2y=-3\end{cases}}} simultaneously true, the solution set of this system is the empty set ∅ {\displaystyle \emptyset } . The solution set of a constrained optimization problem is its feasible region. The truth set of the predicate P ( n ) : n is even {\displaystyle P(n):n\mathrm {\ is\ even} } is { 2 , 4 , 6 , 8 , … } {\displaystyle \{2,4,6,8,\ldots \}} . == Remarks == In algebraic geometry, solution sets are called algebraic sets if there are no inequalities. Over the reals, and with inequalities, they are called semialgebraic sets. == Other meanings == More generally, the solution set to an arbitrary collection E of relations (Ei) (i varying in some index set I) for a collection of unknowns ( x j ) j ∈ J {\displaystyle {(x_{j})}_{j\in J}} , supposed to take values in respective spaces ( X j ) j ∈ J {\displaystyle {(X_{j})}_{j\in J}} , is the set S of all solutions to the relations E, where a solution x ( k ) {\displaystyle x^{(k)}} is a family of values ( x j ( k ) ) j ∈ J ∈ ∏ j ∈ J X j {\textstyle {\left(x_{j}^{(k)}\right)}_{j\in J}\in \prod _{j\in J}X_{j}} such that substituting ( x j ) j ∈ J {\displaystyle {\left(x_{j}\right)}_{j\in J}} by x ( k ) {\displaystyle x^{(k)}} in the collection E makes all relations "true". 
(Instead of relations depending on unknowns, one should speak more correctly of predicates, the collection E is their logical conjunction, and the solution set is the inverse image of the boolean value true by the associated boolean-valued function.) The above meaning is a special case of this one, if the set of polynomials fi is interpreted as the set of equations fi(x) = 0. === Examples === The solution set for E = { x+y = 0 } with respect to ( x , y ) ∈ R 2 {\displaystyle (x,y)\in \mathbb {R} ^{2}} is S = { (a,−a) : a ∈ R }. The solution set for E = { x+y = 0 } with respect to x ∈ R {\displaystyle x\in \mathbb {R} } is S = { −y }. (Here, y is not "declared" as an unknown, and thus to be seen as a parameter on which the equation, and therefore the solution set, depends.) The solution set for E = { x ≤ 4 } {\displaystyle E=\{{\sqrt {x}}\leq 4\}} with respect to x ∈ R {\displaystyle x\in \mathbb {R} } is the interval S = [0,16] (since x {\displaystyle {\sqrt {x}}} is undefined for negative values of x). The solution set for E = { e i x = 1 } {\displaystyle E=\{e^{ix}=1\}} with respect to x ∈ C {\displaystyle x\in \mathbb {C} } is S = 2πZ (see Euler's identity). == See also == Equation solving Extraneous and missing solutions == References ==
Wikipedia/Solution_set
In mathematics, the Coates graph or Coates flow graph, named after C.L. Coates, is a graph associated with Coates' method for the solution of a system of linear equations. The Coates graph Gc(A) associated with an n × n matrix A is an n-node, weighted, labeled, directed graph. The nodes, labeled 1 through n, are each associated with the corresponding row/column of A. If entry aji ≠ 0 then there is a directed edge from node i to node j with weight aji. In other words, the Coates graph for matrix A is the one whose adjacency matrix is the transpose of A. == See also == Flow graph (mathematics) Mason graph == References ==
Wikipedia/Coates_graph
In mathematics, the linear span (also called the linear hull or just span) of a set S {\displaystyle S} of elements of a vector space V {\displaystyle V} is the smallest linear subspace of V {\displaystyle V} that contains S . {\displaystyle S.} It is the set of all finite linear combinations of the elements of S, and the intersection of all linear subspaces that contain S . {\displaystyle S.} It is often denoted span(S) or ⟨ S ⟩ . {\displaystyle \langle S\rangle .} For example, in geometry, two linearly independent vectors span a plane. To express that a vector space V is a linear span of a subset S, one commonly uses one of the following phrases: S spans V; S is a spanning set of V; V is spanned or generated by S; S is a generator set or a generating set of V. Spans can be generalized to many mathematical structures, in which case, the smallest substructure containing S {\displaystyle S} is generally called the substructure generated by S . {\displaystyle S.} == Definition == Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. It is thus the smallest (for set inclusion) subspace containing S. It is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W. It follows from this definition that the span of S is the set of all finite linear combinations of elements (vectors) of S, and can be defined as such. That is, span ⁡ ( S ) = { λ 1 v 1 + λ 2 v 2 + ⋯ + λ n v n ∣ n ∈ N , v 1 , . . . v n ∈ S , λ 1 , . . . 
λ n ∈ K } {\displaystyle \operatorname {span} (S)={\biggl \{}\lambda _{1}\mathbf {v} _{1}+\lambda _{2}\mathbf {v} _{2}+\cdots +\lambda _{n}\mathbf {v} _{n}\mid n\in \mathbb {N} ,\;\mathbf {v} _{1},...\mathbf {v} _{n}\in S,\;\lambda _{1},...\lambda _{n}\in K{\biggr \}}} When S is empty, the only possibility is n = 0, and the previous expression for span ⁡ ( S ) {\displaystyle \operatorname {span} (S)} reduces to the empty sum. The standard convention for the empty sum implies thus span ( ∅ ) = { 0 } , {\displaystyle {\text{span}}(\emptyset )=\{\mathbf {0} \},} a property that is immediate with the other definitions. However, many introductory textbooks simply include this fact as part of the definition. When S = { v 1 , … , v n } {\displaystyle S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}\}} is finite, one has span ⁡ ( S ) = { λ 1 v 1 + λ 2 v 2 + ⋯ + λ n v n ∣ λ 1 , . . . λ n ∈ K } {\displaystyle \operatorname {span} (S)=\{\lambda _{1}\mathbf {v} _{1}+\lambda _{2}\mathbf {v} _{2}+\cdots +\lambda _{n}\mathbf {v} _{n}\mid \lambda _{1},...\lambda _{n}\in K\}} == Examples == The real vector space R 3 {\displaystyle \mathbb {R} ^{3}} has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of R 3 {\displaystyle \mathbb {R} ^{3}} . Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1⁄2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent. The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of R 3 {\displaystyle \mathbb {R} ^{3}} , since its span is the space of all vectors in R 3 {\displaystyle \mathbb {R} ^{3}} whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not R 3 . 
{\displaystyle \mathbb {R} ^{3}.} It can be identified with R 2 {\displaystyle \mathbb {R} ^{2}} by removing the third component, which is equal to zero. The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of every subspace of R 3 {\displaystyle \mathbb {R} ^{3}} , and {(0, 0, 0)} is the intersection of all of these subspaces. The set of monomials xn, where n is a non-negative integer, spans the space of polynomials. == Theorems == === Equivalence of definitions === The set of all linear combinations of a subset S of V, a vector space over K, is the smallest linear subspace of V containing S. Proof. We first prove that span S is a subspace of V. Since S is a subset of V, we only need to prove the existence of a zero vector 0 in span S, that span S is closed under addition, and that span S is closed under scalar multiplication. Letting S = { v 1 , v 2 , … , v n } {\displaystyle S=\{\mathbf {v} _{1},\mathbf {v} _{2},\ldots ,\mathbf {v} _{n}\}} , it is trivial that the zero vector of V exists in span S, since 0 = 0 v 1 + 0 v 2 + ⋯ + 0 v n {\displaystyle \mathbf {0} =0\mathbf {v} _{1}+0\mathbf {v} _{2}+\cdots +0\mathbf {v} _{n}} . 
Adding together two linear combinations of S also produces a linear combination of S: ( λ 1 v 1 + ⋯ + λ n v n ) + ( μ 1 v 1 + ⋯ + μ n v n ) = ( λ 1 + μ 1 ) v 1 + ⋯ + ( λ n + μ n ) v n {\displaystyle (\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n})+(\mu _{1}\mathbf {v} _{1}+\cdots +\mu _{n}\mathbf {v} _{n})=(\lambda _{1}+\mu _{1})\mathbf {v} _{1}+\cdots +(\lambda _{n}+\mu _{n})\mathbf {v} _{n}} , where all λ i , μ i ∈ K {\displaystyle \lambda _{i},\mu _{i}\in K} , and multiplying a linear combination of S by a scalar c ∈ K {\displaystyle c\in K} will produce another linear combination of S: c ( λ 1 v 1 + ⋯ + λ n v n ) = c λ 1 v 1 + ⋯ + c λ n v n {\displaystyle c(\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n})=c\lambda _{1}\mathbf {v} _{1}+\cdots +c\lambda _{n}\mathbf {v} _{n}} . Thus span S is a subspace of V. It follows that S ⊆ span ⁡ S {\displaystyle S\subseteq \operatorname {span} S} , since every vi is a linear combination of S (trivially). Suppose that W is a linear subspace of V containing S. Since W is closed under addition and scalar multiplication, then every linear combination λ 1 v 1 + ⋯ + λ n v n {\displaystyle \lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n}} must be contained in W. Thus, span S is contained in every subspace of V containing S, and the intersection of all such subspaces, or the smallest such subspace, is equal to the set of all linear combinations of S. === Size of spanning set is at least size of linearly independent set === Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V. Proof. Let S = { v 1 , … , v m } {\displaystyle S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{m}\}} be a spanning set and W = { w 1 , … , w n } {\displaystyle W=\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{n}\}} be a linearly independent set of vectors from V. We want to show that m ≥ n {\displaystyle m\geq n} . 
Since S spans V, then S ∪ { w 1 } {\displaystyle S\cup \{\mathbf {w} _{1}\}} must also span V, and w 1 {\displaystyle \mathbf {w} _{1}} must be a linear combination of S. Thus S ∪ { w 1 } {\displaystyle S\cup \{\mathbf {w} _{1}\}} is linearly dependent, and we can remove one vector from S that is a linear combination of the other elements. This vector cannot be any of the wi, since W is linearly independent. The resulting set is { w 1 , v 1 , … , v i − 1 , v i + 1 , … , v m } {\displaystyle \{\mathbf {w} _{1},\mathbf {v} _{1},\ldots ,\mathbf {v} _{i-1},\mathbf {v} _{i+1},\ldots ,\mathbf {v} _{m}\}} , which is a spanning set of V. We repeat this step n times, where the resulting set after the pth step is the union of { w 1 , … , w p } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{p}\}} and m − p vectors of S. It is ensured until the nth step that there will always be some vi to remove from S for each adjoined wi, and thus there are at least as many vi's as there are wi's, i.e. m ≥ n {\displaystyle m\geq n} . To verify this, we assume by way of contradiction that m < n {\displaystyle m<n} . Then, at the mth step, we have the set { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} and we can adjoin another vector w m + 1 {\displaystyle \mathbf {w} _{m+1}} . But, since { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} is a spanning set of V, w m + 1 {\displaystyle \mathbf {w} _{m+1}} is a linear combination of { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} . This is a contradiction, since W is linearly independent. === Spanning set can be reduced to a basis === Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. 
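Over the real numbers, discarding dependent vectors from a spanning set can be sketched numerically with a greedy rank test. The sketch below uses NumPy and floating-point rank estimation; for exact arithmetic one would use rational row reduction instead:

```python
import numpy as np

def reduce_to_basis(vectors, tol=1e-10):
    """Greedily keep the vectors that enlarge the span, discarding the rest.

    `vectors` is a sequence of equal-length 1-D arrays; the result is a
    linearly independent subset with the same span.
    """
    basis = []
    for v in vectors:
        candidate = np.vstack(basis + [v])
        # v enlarges the span exactly when it raises the matrix rank.
        if np.linalg.matrix_rank(candidate, tol=tol) > len(basis):
            basis.append(v)
    return basis
```

Applied to the spanning set {(1, 2, 3), (0, 1, 2), (−1, 1⁄2, 3), (1, 1, 1)} from the examples above, the procedure keeps the first three vectors (which are independent) and discards the fourth.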
This also indicates that a basis is a minimal spanning set when V is finite-dimensional. == Generalizations == Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set. The vector space definition can also be generalized to modules. Given an R-module A and a collection of elements a1, ..., an of A, the submodule of A spanned by a1, ..., an is the sum of cyclic modules R a 1 + ⋯ + R a n = { ∑ k = 1 n r k a k | r k ∈ R } {\displaystyle Ra_{1}+\cdots +Ra_{n}=\left\{\sum _{k=1}^{n}r_{k}a_{k}{\bigg |}r_{k}\in R\right\}} consisting of all R-linear combinations of the elements ai. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset. == Closed linear span (functional analysis) == In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set. Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by Sp ¯ ( E ) {\displaystyle {\overline {\operatorname {Sp} }}(E)} or Span ¯ ( E ) {\displaystyle {\overline {\operatorname {Span} }}(E)} , is the intersection of all the closed linear subspaces of X which contain E. One mathematical formulation of this is Sp ¯ ( E ) = { u ∈ X | ∀ ε > 0 ∃ x ∈ Sp ⁡ ( E ) : ‖ x − u ‖ < ε } . {\displaystyle {\overline {\operatorname {Sp} }}(E)=\{u\in X|\forall \varepsilon >0\,\exists x\in \operatorname {Sp} (E):\|x-u\|<\varepsilon \}.} The closed linear span of the set of functions xn on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L2 norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval.
In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials. === Notes === The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span. Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma). === A useful lemma === Let X be a normed space and let E be any non-empty subset of X. Then Sp ¯ ( E ) {\displaystyle {\overline {\operatorname {Sp} }}(E)} is a closed linear subspace of X containing E, and Sp ¯ ( E ) {\displaystyle {\overline {\operatorname {Sp} }}(E)} is the closure of Sp ⁡ ( E ) {\displaystyle \operatorname {Sp} (E)} . (So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.) == See also == Affine hull Conical combination Convex hull == Footnotes == == Citations == == Sources == === Textbooks === Axler, Sheldon Jay (2015). Linear Algebra Done Right (PDF) (3rd ed.). Springer. ISBN 978-3-319-11079-0. Hefferon, Jim (2020). Linear Algebra (PDF) (4th ed.). Orthogonal Publishing. ISBN 978-1-944325-11-4. Mac Lane, Saunders; Birkhoff, Garrett (1999) [1988]. Algebra (3rd ed.). AMS Chelsea Publishing. ISBN 978-0821816462. Oxley, James G. (2011). Matroid Theory. Oxford Graduate Texts in Mathematics. Vol. 3 (2nd ed.). Oxford University Press. ISBN 9780199202508. Roman, Steven (2005). Advanced Linear Algebra (PDF) (2nd ed.). Springer. ISBN 0-387-24766-1. Rynne, Brian P.; Youngson, Martin A. (2008). Linear Functional Analysis. Springer. ISBN 978-1848000049. Lay, David C. (2021). Linear Algebra and Its Applications (6th ed.). Pearson. === Web === Lankham, Isaiah; Nachtergaele, Bruno; Schilling, Anne (13 February 2010). "Linear Algebra - As an Introduction to Abstract Mathematics" (PDF). University of California, Davis. Retrieved 27 September 2011. Weisstein, Eric Wolfgang. "Vector Space Span". MathWorld. Retrieved 16 Feb 2021.
"Linear hull". Encyclopedia of Mathematics. 5 April 2020. Retrieved 16 Feb 2021. == External links == Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org. Sanderson, Grant (August 6, 2016). "Linear combinations, span, and basis vectors". Essence of Linear Algebra. Archived from the original on 2021-12-11 – via YouTube.
Wikipedia/Span_(linear_algebra)
In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric. More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly. == Example == The expression on the right side of the "=" sign is the right side of the equation and the expression on the left of the "=" is the left side of the equation. For example, in x + 5 = y + 8 {\displaystyle x+5=y+8} x + 5 is the left-hand side (LHS) and y + 8 is the right-hand side (RHS). == Homogeneous and inhomogeneous equations == In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by Lf = g, with g a fixed function, which equation is to be solved for f. Then any solution of the inhomogeneous equation may have a solution of the homogeneous equation added to it, and still remain a solution. For example in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter, or charged particles. == Syntax == More abstractly, when using infix notation T * U the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though. == See also == Equals sign == References ==
Wikipedia/Sides_of_an_equation
In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi. == Description == Let A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } be a square system of n linear equations, where: A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a n 1 a n 2 ⋯ a n n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b n ] . {\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\end{bmatrix}},\qquad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\qquad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{n}\end{bmatrix}}.} When A {\displaystyle A} and b {\displaystyle \mathbf {b} } are known, and x {\displaystyle \mathbf {x} } is unknown, we can use the Jacobi method to approximate x {\displaystyle \mathbf {x} } . The vector x ( 0 ) {\displaystyle \mathbf {x} ^{(0)}} denotes our initial guess for x {\displaystyle \mathbf {x} } (often x i ( 0 ) = 0 {\displaystyle \mathbf {x} _{i}^{(0)}=0} for i = 1 , 2 , . . . , n {\displaystyle i=1,2,...,n} ). We denote x ( k ) {\displaystyle \mathbf {x} ^{(k)}} as the k-th approximation or iteration of x {\displaystyle \mathbf {x} } , and x ( k + 1 ) {\displaystyle \mathbf {x} ^{(k+1)}} is the next (or k+1) iteration of x {\displaystyle \mathbf {x} } . 
=== Matrix-based formula === Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U: A = D + L + U where D = [ a 11 0 ⋯ 0 0 a 22 ⋯ 0 ⋮ ⋮ ⋱ ⋮ 0 0 ⋯ a n n ] and L + U = [ 0 a 12 ⋯ a 1 n a 21 0 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a n 1 a n 2 ⋯ 0 ] . {\displaystyle A=D+L+U\qquad {\text{where}}\qquad D={\begin{bmatrix}a_{11}&0&\cdots &0\\0&a_{22}&\cdots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &a_{nn}\end{bmatrix}}{\text{ and }}L+U={\begin{bmatrix}0&a_{12}&\cdots &a_{1n}\\a_{21}&0&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &0\end{bmatrix}}.} The solution is then obtained iteratively via x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) . {\displaystyle \mathbf {x} ^{(k+1)}=D^{-1}(\mathbf {b} -(L+U)\mathbf {x} ^{(k)}).} === Element-based formula === The element-based formula for each row i {\displaystyle i} is thus: x i ( k + 1 ) = 1 a i i ( b i − ∑ j ≠ i a i j x j ( k ) ) , i = 1 , 2 , … , n . {\displaystyle x_{i}^{(k+1)}={\frac {1}{a_{ii}}}\left(b_{i}-\sum _{j\neq i}a_{ij}x_{j}^{(k)}\right),\quad i=1,2,\ldots ,n.} The computation of x i ( k + 1 ) {\displaystyle x_{i}^{(k+1)}} requires each element in x ( k ) {\displaystyle \mathbf {x} ^{(k)}} except itself. Unlike the Gauss–Seidel method, we cannot overwrite x i ( k ) {\displaystyle x_{i}^{(k)}} with x i ( k + 1 ) {\displaystyle x_{i}^{(k+1)}} , as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n. 
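The element-based formula translates directly into code. Below is a minimal NumPy sketch; the function jacobi and its defaults are our illustration, not a library routine.

```python
import numpy as np

def jacobi(A, b, x0, iterations=25):
    """Element-based Jacobi iteration: every component of x^(k+1) is computed
    from the previous iterate x^(k) only, so two vectors of storage suffice."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    n = len(b)
    for _ in range(iterations):
        x_new = np.empty(n)
        for i in range(n):
            # sigma collects the off-diagonal contributions of row i.
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - sigma) / A[i, i]
        x = x_new  # only now can the previous iterate be discarded
    return x

# The 2x2 system from the first example below, started from (1, 1):
print(np.round(jacobi([[2, 1], [5, 7]], [11, 13], [1, 1]), 3))  # ~ [7.111, -3.222]
```

Note that x_new is filled completely before it replaces x, which is exactly the reason the method cannot overwrite entries in place as Gauss–Seidel does.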
== Algorithm ==
Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion
Output: solution when convergence is reached
Comments: pseudocode based on the element-based formula above

k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + aij xj(k)
            end
        end
        xi(k+1) = (bi − σ) / aii
    end
    increment k
end

== Convergence == The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1: ρ ( D − 1 ( L + U ) ) < 1. {\displaystyle \rho (D^{-1}(L+U))<1.} A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms: | a i i | > ∑ j ≠ i | a i j | . {\displaystyle \left|a_{ii}\right|>\sum _{j\neq i}{\left|a_{ij}\right|}.} The Jacobi method sometimes converges even if these conditions are not satisfied. Note that the Jacobi method does not converge for every symmetric positive-definite matrix. For example, A = ( 29 2 1 2 6 1 1 1 1 5 ) ⇒ D − 1 ( L + U ) = ( 0 2 29 1 29 1 3 0 1 6 5 5 0 ) ⇒ ρ ( D − 1 ( L + U ) ) ≈ 1.0661 . {\displaystyle A={\begin{pmatrix}29&2&1\\2&6&1\\1&1&{\frac {1}{5}}\end{pmatrix}}\quad \Rightarrow \quad D^{-1}(L+U)={\begin{pmatrix}0&{\frac {2}{29}}&{\frac {1}{29}}\\{\frac {1}{3}}&0&{\frac {1}{6}}\\5&5&0\end{pmatrix}}\quad \Rightarrow \quad \rho (D^{-1}(L+U))\approx 1.0661\,.} == Examples == === Example question === A linear system of the form A x = b {\displaystyle Ax=b} with initial estimate x ( 0 ) {\displaystyle x^{(0)}} is given by A = [ 2 1 5 7 ] , b = [ 11 13 ] and x ( 0 ) = [ 1 1 ] .
{\displaystyle A={\begin{bmatrix}2&1\\5&7\\\end{bmatrix}},\ b={\begin{bmatrix}11\\13\\\end{bmatrix}}\quad {\text{and}}\quad x^{(0)}={\begin{bmatrix}1\\1\\\end{bmatrix}}.} We use the equation x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle x^{(k+1)}=D^{-1}(b-(L+U)x^{(k)})} , described above, to estimate x {\displaystyle x} . First, we rewrite the equation in a more convenient form D − 1 ( b − ( L + U ) x ( k ) ) = T x ( k ) + C {\displaystyle D^{-1}(b-(L+U)x^{(k)})=Tx^{(k)}+C} , where T = − D − 1 ( L + U ) {\displaystyle T=-D^{-1}(L+U)} and C = D − 1 b {\displaystyle C=D^{-1}b} . From the known values D − 1 = [ 1 / 2 0 0 1 / 7 ] , L = [ 0 0 5 0 ] and U = [ 0 1 0 0 ] . {\displaystyle D^{-1}={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}},\ L={\begin{bmatrix}0&0\\5&0\\\end{bmatrix}}\quad {\text{and}}\quad U={\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}.} we determine T = − D − 1 ( L + U ) {\displaystyle T=-D^{-1}(L+U)} as T = [ 1 / 2 0 0 1 / 7 ] { [ 0 0 − 5 0 ] + [ 0 − 1 0 0 ] } = [ 0 − 1 / 2 − 5 / 7 0 ] . {\displaystyle T={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}}\left\{{\begin{bmatrix}0&0\\-5&0\\\end{bmatrix}}+{\begin{bmatrix}0&-1\\0&0\\\end{bmatrix}}\right\}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}.} Further, C {\displaystyle C} is found as C = [ 1 / 2 0 0 1 / 7 ] [ 11 13 ] = [ 11 / 2 13 / 7 ] . {\displaystyle C={\begin{bmatrix}1/2&0\\0&1/7\\\end{bmatrix}}{\begin{bmatrix}11\\13\\\end{bmatrix}}={\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}.} With T {\displaystyle T} and C {\displaystyle C} calculated, we estimate x {\displaystyle x} as x ( 1 ) = T x ( 0 ) + C {\displaystyle x^{(1)}=Tx^{(0)}+C} : x ( 1 ) = [ 0 − 1 / 2 − 5 / 7 0 ] [ 1 1 ] + [ 11 / 2 13 / 7 ] = [ 5.0 8 / 7 ] ≈ [ 5 1.143 ] . 
{\displaystyle x^{(1)}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}{\begin{bmatrix}1\\1\\\end{bmatrix}}+{\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}={\begin{bmatrix}5.0\\8/7\\\end{bmatrix}}\approx {\begin{bmatrix}5\\1.143\\\end{bmatrix}}.} The next iteration yields x ( 2 ) = [ 0 − 1 / 2 − 5 / 7 0 ] [ 5.0 8 / 7 ] + [ 11 / 2 13 / 7 ] = [ 69 / 14 − 12 / 7 ] ≈ [ 4.929 − 1.714 ] . {\displaystyle x^{(2)}={\begin{bmatrix}0&-1/2\\-5/7&0\\\end{bmatrix}}{\begin{bmatrix}5.0\\8/7\\\end{bmatrix}}+{\begin{bmatrix}11/2\\13/7\\\end{bmatrix}}={\begin{bmatrix}69/14\\-12/7\\\end{bmatrix}}\approx {\begin{bmatrix}4.929\\-1.714\\\end{bmatrix}}.} This process is repeated until convergence (i.e., until ‖ A x ( n ) − b ‖ {\displaystyle \|Ax^{(n)}-b\|} is small). The solution after 25 iterations is x = [ 7.111 − 3.222 ] . {\displaystyle x={\begin{bmatrix}7.111\\-3.222\end{bmatrix}}.} === Example question 2 === Suppose we are given the following linear system: 10 x 1 − x 2 + 2 x 3 = 6 , − x 1 + 11 x 2 − x 3 + 3 x 4 = 25 , 2 x 1 − x 2 + 10 x 3 − x 4 = − 11 , 3 x 2 − x 3 + 8 x 4 = 15. {\displaystyle {\begin{aligned}10x_{1}-x_{2}+2x_{3}&=6,\\-x_{1}+11x_{2}-x_{3}+3x_{4}&=25,\\2x_{1}-x_{2}+10x_{3}-x_{4}&=-11,\\3x_{2}-x_{3}+8x_{4}&=15.\end{aligned}}} If we choose (0, 0, 0, 0) as the initial approximation, then the first approximate solution is given by x 1 = ( 6 + 0 − ( 2 ∗ 0 ) ) / 10 = 0.6 , x 2 = ( 25 + 0 + 0 − ( 3 ∗ 0 ) ) / 11 = 25 / 11 = 2.2727 , x 3 = ( − 11 − ( 2 ∗ 0 ) + 0 + 0 ) / 10 = − 1.1 , x 4 = ( 15 − ( 3 ∗ 0 ) + 0 ) / 8 = 1.875. {\displaystyle {\begin{aligned}x_{1}&=(6+0-(2*0))/10=0.6,\\x_{2}&=(25+0+0-(3*0))/11=25/11=2.2727,\\x_{3}&=(-11-(2*0)+0+0)/10=-1.1,\\x_{4}&=(15-(3*0)+0)/8=1.875.\end{aligned}}} Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The exact solution of the system is (1, 2, −1, 1).
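This second system can be iterated with the matrix-based formula x(k+1) = D−1(b − (L + U)x(k)). A minimal NumPy sketch (variable names are ours); the first pass from the zero vector reproduces the values computed above.

```python
import numpy as np

A = np.array([[10.0, -1.0,  2.0,  0.0],
              [-1.0, 11.0, -1.0,  3.0],
              [ 2.0, -1.0, 10.0, -1.0],
              [ 0.0,  3.0, -1.0,  8.0]])
b = np.array([6.0, 25.0, -11.0, 15.0])

D_inv = np.diag(1.0 / np.diag(A))   # inverse of the diagonal part D
LU = A - np.diag(np.diag(A))        # L + U: everything off the diagonal

x = np.zeros(4)                     # initial approximation (0, 0, 0, 0)
for _ in range(25):
    x = D_inv @ (b - LU @ x)        # x^(k+1) = D^{-1}(b - (L+U)x^(k))

print(np.round(x, 4))               # approaches the exact solution (1, 2, -1, 1)
```

Because A is strictly diagonally dominant, the iteration is guaranteed to converge, and 25 iterations already agree with the exact solution to several decimal places.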
== Weighted Jacobi method == The weighted Jacobi iteration uses a parameter ω {\displaystyle \omega } to compute the iteration as x ( k + 1 ) = ω D − 1 ( b − ( L + U ) x ( k ) ) + ( 1 − ω ) x ( k ) {\displaystyle \mathbf {x} ^{(k+1)}=\omega D^{-1}(\mathbf {b} -(L+U)\mathbf {x} ^{(k)})+\left(1-\omega \right)\mathbf {x} ^{(k)}} with ω = 2 / 3 {\displaystyle \omega =2/3} being the usual choice. From the relation L + U = A − D {\displaystyle L+U=A-D} , this may also be expressed as x ( k + 1 ) = ω D − 1 b + ( I − ω D − 1 A ) x ( k ) {\displaystyle \mathbf {x} ^{(k+1)}=\omega D^{-1}\mathbf {b} +\left(I-\omega D^{-1}A\right)\mathbf {x} ^{(k)}} . === Convergence in the symmetric positive definite case === If the system matrix A {\displaystyle A} is symmetric positive definite, convergence can be shown as follows. Let C = C ω = I − ω D − 1 A {\displaystyle C=C_{\omega }=I-\omega D^{-1}A} be the iteration matrix. Then, convergence is guaranteed for ρ ( C ω ) < 1 ⟺ 0 < ω < 2 λ max ( D − 1 A ) , {\displaystyle \rho (C_{\omega })<1\quad \Longleftrightarrow \quad 0<\omega <{\frac {2}{\lambda _{\text{max}}(D^{-1}A)}}\,,} where λ max {\displaystyle \lambda _{\text{max}}} is the maximal eigenvalue. The spectral radius can be minimized for a particular choice of ω = ω opt {\displaystyle \omega =\omega _{\text{opt}}} as follows min ω ρ ( C ω ) = ρ ( C ω opt ) = 1 − 2 κ ( D − 1 A ) + 1 for ω opt := 2 λ min ( D − 1 A ) + λ max ( D − 1 A ) , {\displaystyle \min _{\omega }\rho (C_{\omega })=\rho (C_{\omega _{\text{opt}}})=1-{\frac {2}{\kappa (D^{-1}A)+1}}\quad {\text{for}}\quad \omega _{\text{opt}}:={\frac {2}{\lambda _{\text{min}}(D^{-1}A)+\lambda _{\text{max}}(D^{-1}A)}}\,,} where κ {\displaystyle \kappa } is the matrix condition number.
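The convergence claims above can be checked on a small symmetric positive definite system. A minimal NumPy sketch (the matrix, iteration count, and helper rho are our choices); for this A, the eigenvalues of D−1A are 0.5 and 1.5, so ω_opt = 1 and ω = 2/3 gives ρ(C) = 2/3 < 1.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 0.0])
D_inv = np.diag(1.0 / np.diag(A))
R = A - np.diag(np.diag(A))              # L + U

def rho(omega):
    """Spectral radius of the weighted iteration matrix C_w = I - w D^{-1} A."""
    C = np.eye(2) - omega * D_inv @ A
    return max(abs(np.linalg.eigvals(C)))

lams = np.linalg.eigvalsh(D_inv @ A)     # here D^{-1}A is symmetric: 0.5 and 1.5
omega_opt = 2.0 / (lams[0] + lams[-1])   # = 1.0 for this matrix

x = np.zeros(2)
omega = 2.0 / 3.0                        # the usual choice
for _ in range(60):
    x = omega * D_inv @ (b - R @ x) + (1.0 - omega) * x

print(rho(omega) < 1.0, np.allclose(A @ x, b))  # True True
```

Since ρ(C) = 2/3, each step shrinks the error by roughly that factor, and 60 steps bring the residual far below the default tolerance of np.allclose.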
== See also == Gauss–Seidel method Successive over-relaxation Iterative method § Linear systems Gaussian Belief Propagation Matrix splitting == References == == External links == This article incorporates text from the article Jacobi_method on CFD-Wiki that is under the GFDL license. Black, Noel; Moore, Shirley & Weisstein, Eric W. "Jacobi method". MathWorld. Jacobi Method from www.math-linux.com
Wikipedia/Jacobi_method
A substitution is a syntactic transformation on formal expressions. To apply a substitution to an expression means to consistently replace its variable, or placeholder, symbols with other expressions. The resulting expression is called a substitution instance, or instance for short, of the original expression. == Propositional logic == === Definition === Where ψ and φ represent formulas of propositional logic, ψ is a substitution instance of φ if and only if ψ may be obtained from φ by substituting formulas for propositional variables in φ, replacing each occurrence of the same variable by an occurrence of the same formula. For example: ψ: (R → S) & (T → S) is a substitution instance of φ: P & Q That is, ψ can be obtained by replacing P and Q in φ with (R → S) and (T → S) respectively. Similarly: ψ: (A ↔ A) ↔ (A ↔ A) is a substitution instance of: φ: (A ↔ A) since ψ can be obtained by replacing each A in φ with (A ↔ A). In some deduction systems for propositional logic, a new expression (a proposition) may be entered on a line of a derivation if it is a substitution instance of a previous line of the derivation. This is how new lines are introduced in some axiomatic systems. In systems that use rules of transformation, a rule may include the use of a substitution instance for the purpose of introducing certain variables into a derivation. === Tautologies === A propositional formula is a tautology if it is true under every valuation (or interpretation) of its propositional variables. If Φ is a tautology, and Θ is a substitution instance of Φ, then Θ is again a tautology. This fact implies the soundness of the deduction rule described in the previous section. == First-order logic == In first-order logic, a substitution is a total mapping σ: V → T from variables to terms; many, but not all, authors additionally require σ(x) = x for all but finitely many variables x.
The notation { x1 ↦ t1, …, xk ↦ tk } refers to a substitution mapping each variable xi to the corresponding term ti, for i=1,…,k, and every other variable to itself; the xi must be pairwise distinct. Most authors additionally require each term ti to be syntactically different from xi, to avoid infinitely many distinct notations for the same substitution. Applying that substitution to a term t is written in postfix notation as t { x1 ↦ t1, ..., xk ↦ tk }; it means to (simultaneously) replace every occurrence of each xi in t by ti. The result tσ of applying a substitution σ to a term t is called an instance of that term t. For example, applying the substitution { x ↦ z, z ↦ h(a,y) } to the term g(x, z) yields g(z, h(a,y)); the replacement is simultaneous, so the z introduced for x is not itself replaced. The domain dom(σ) of a substitution σ is commonly defined as the set of variables actually replaced, i.e. dom(σ) = { x ∈ V | xσ ≠ x }. A substitution is called a ground substitution if it maps all variables of its domain to ground, i.e. variable-free, terms. The substitution instance tσ of a ground substitution is a ground term if all of t's variables are in σ's domain, i.e. if vars(t) ⊆ dom(σ). A substitution σ is called a linear substitution if tσ is a linear term for some (and hence every) linear term t containing precisely the variables of σ's domain, i.e. with vars(t) = dom(σ). A substitution σ is called a flat substitution if xσ is a variable for every variable x. A substitution σ is called a renaming substitution if it is a permutation on the set of all variables. Like every permutation, a renaming substitution σ always has an inverse substitution σ−1, such that tσσ−1 = t = tσ−1σ for every term t. However, it is not possible to define an inverse for an arbitrary substitution.
For example, { x ↦ 2, y ↦ 3+4 } is a ground substitution, { x ↦ x1, y ↦ y2+4 } is non-ground and non-flat, but linear, { x ↦ y2, y ↦ y2+4 } is non-linear and non-flat, { x ↦ y2, y ↦ y2 } is flat, but non-linear, { x ↦ x1, y ↦ y2 } is both linear and flat, but not a renaming, since it maps both y and y2 to y2; each of these substitutions has the set {x,y} as its domain. An example for a renaming substitution is { x ↦ x1, x1 ↦ y, y ↦ y2, y2 ↦ x }; it has the inverse { x ↦ y2, y2 ↦ y, y ↦ x1, x1 ↦ x }. The flat substitution { x ↦ z, y ↦ z } cannot have an inverse, since e.g. (x+y) { x ↦ z, y ↦ z } = z+z, and the latter term cannot be transformed back to x+y, as the information about which variable a given z originated from is lost. The ground substitution { x ↦ 2 } cannot have an inverse due to a similar loss of origin information, e.g. in (x+2) { x ↦ 2 } = 2+2, even if replacing constants by variables were allowed by some fictitious kind of "generalized substitutions". Two substitutions are considered equal if they map each variable to syntactically equal result terms, formally: σ = τ if xσ = xτ for each variable x ∈ V. The composition of two substitutions σ = { x1 ↦ t1, …, xk ↦ tk } and τ = { y1 ↦ u1, …, yl ↦ ul } is obtained by removing from the substitution { x1 ↦ t1τ, …, xk ↦ tkτ, y1 ↦ u1, …, yl ↦ ul } those pairs yi ↦ ui for which yi ∈ { x1, …, xk }. The composition of σ and τ is denoted by στ. Composition is an associative operation, and is compatible with substitution application, i.e. (ρσ)τ = ρ(στ), and (tσ)τ = t(στ), respectively, for all substitutions ρ, σ, τ, and every term t. The identity substitution, which maps every variable to itself, is the neutral element of substitution composition. A substitution σ is called idempotent if σσ = σ, and hence tσσ = tσ for every term t. When xi≠ti for all i, the substitution { x1 ↦ t1, …, xk ↦ tk } is idempotent if and only if none of the variables xi occurs in any tj.
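Application and composition can be modelled for simple first-order terms in a few lines. This is a minimal Python sketch (the term encoding as nested tuples and the helper names are ours): variables are strings, compound terms are tuples of a function symbol and arguments, and a substitution is a dict.

```python
def apply_subst(t, sigma):
    """Apply substitution sigma (dict: variable -> term) to term t.
    Replacement is simultaneous: terms just introduced are not substituted again."""
    if isinstance(t, str):                      # t is a variable
        return sigma.get(t, t)
    return (t[0],) + tuple(apply_subst(a, sigma) for a in t[1:])

def compose(sigma, tau):
    """Composition as defined above: apply tau to each image of sigma, then
    keep the pairs of tau whose variable is not already in sigma's domain."""
    result = {x: apply_subst(s, tau) for x, s in sigma.items()}
    for y, u in tau.items():
        if y not in sigma:
            result[y] = u
    # Drop trivial pairs x -> x to keep the notation canonical.
    return {x: s for x, s in result.items() if s != x}

t = ('+', 'x', 'y')
sigma, tau = {'x': 'y'}, {'y': 'z'}
# Compatibility with application: (t sigma) tau == t (sigma tau)
print(apply_subst(apply_subst(t, sigma), tau) == apply_subst(t, compose(sigma, tau)))
print(compose(sigma, tau))  # {'x': 'z', 'y': 'z'}
```

The two compositions of these substitutions differ, matching the non-commutativity example that follows: compose(sigma, tau) is { x ↦ z, y ↦ z } while compose(tau, sigma) is { x ↦ y, y ↦ z }.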
Substitution composition is not commutative, that is, στ may be different from τσ, even if σ and τ are idempotent. For example, { x ↦ 2, y ↦ 3+4 } is equal to { y ↦ 3+4, x ↦ 2 }, but different from { x ↦ 2, y ↦ 7 }. The substitution { x ↦ y+y } is idempotent, e.g. ((x+y) {x↦y+y}) {x↦y+y} = ((y+y)+y) {x↦y+y} = (y+y)+y, while the substitution { x ↦ x+y } is non-idempotent, e.g. ((x+y) {x↦x+y}) {x↦x+y} = ((x+y)+y) {x↦x+y} = ((x+y)+y)+y. An example for non-commuting substitutions is { x ↦ y } { y ↦ z } = { x ↦ z, y ↦ z }, but { y ↦ z } { x ↦ y } = { x ↦ y, y ↦ z }. == Mathematics == In mathematics, there are two common uses of substitution: substitution of variables for constants (also called assignment for that variable), and the substitution property of equality, also called Leibniz's Law. Considering mathematics as a formal language, a variable is a symbol from an alphabet, usually a letter like x, y, and z, which denotes a range of possible values. If a variable is free in a given expression or formula, then it can be replaced with any of the values in its range. Certain kinds of bound variables can be substituted too. For instance, parameters of an expression (like the coefficients of a polynomial), or the argument of a function. Moreover, universally quantified variables can be replaced with any of the values in their range, and the result will be a true statement (this is called universal instantiation). For a non-formalized language, that is, in most mathematical texts outside of mathematical logic, for an individual expression it is not always possible to identify which variables are free and bound. For example, in ∑ i < k a i k {\textstyle \sum _{i<k}a_{ik}} , depending on the context, the variable i {\textstyle i} can be free and k {\textstyle k} bound, or vice-versa, but they cannot both be free. Determining which variable is assumed to be free depends on context and semantics.
The substitution property of equality, or Leibniz's Law (though the latter term is usually reserved for philosophical contexts), generally states that, if two things are equal, then any property of one must be a property of the other. It can be formally stated in logical notation as: ( a = b ) ⟹ [ ϕ ( a ) ⇒ ϕ ( b ) ] {\displaystyle (a=b)\implies {\bigl [}\phi (a)\Rightarrow \phi (b){\bigr ]}} for every a {\textstyle a} and b {\textstyle b} , and any well-formed formula ϕ ( x ) {\textstyle \phi (x)} (with a free variable x). For example: For all real numbers a and b, if a = b, then a ≥ 0 implies b ≥ 0 (here, ϕ ( x ) {\displaystyle \phi (x)} is x ≥ 0). This is a property which is most often used in algebra, especially in solving systems of equations, but is applied in nearly every area of math that uses equality. This, taken together with the reflexive property of equality, forms the axioms of equality in first-order logic. Substitution is related to, but not identical to, function composition; it is closely related to β-reduction in lambda calculus. In contrast to these notions, however, the accent in algebra is on the preservation of algebraic structure by the substitution operation, the fact that substitution gives a homomorphism for the structure at hand (in the case of polynomials, the ring structure). === Algebra === Substitution is a basic operation in algebra, in particular in computer algebra. A common case of substitution involves polynomials, where substitution of a numerical value (or another expression) for the indeterminate of a univariate polynomial amounts to evaluating the polynomial at that value.
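Substitution-as-evaluation is easy to check mechanically. The sketch below (plain Python, names ours) uses the polynomial P(X) = X^5 − 3X^2 + 5X − 17 that appears in the notation example which follows, and verifies its claimed values under the substitutions X ↦ 2 and X ↦ X + 1.

```python
def P(x):
    """P(X) = X^5 - 3X^2 + 5X - 17; substituting a value for X evaluates P."""
    return x**5 - 3 * x**2 + 5 * x - 17

def Q(x):
    """The claimed expansion of P(X + 1)."""
    return x**5 + 5 * x**4 + 10 * x**3 + 7 * x**2 + 4 * x - 14

print(P(2))  # 13
# Substituting X + 1 for X agrees with the expanded polynomial at sample points
# (two degree-5 polynomials agreeing at 21 points are identical):
print(all(P(x + 1) == Q(x) for x in range(-10, 11)))  # True
```

Checking agreement at more points than the degree is a standard way to confirm a polynomial identity without symbolic expansion.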
Indeed, this operation occurs so frequently that the notation for polynomials is often adapted to it; instead of designating a polynomial by a name like P, as one would do for other mathematical objects, one could define P ( X ) = X 5 − 3 X 2 + 5 X − 17 {\displaystyle P(X)=X^{5}-3X^{2}+5X-17} so that substitution for X can be designated by replacement inside "P(X)", say P ( 2 ) = 13 {\displaystyle P(2)=13} or P ( X + 1 ) = X 5 + 5 X 4 + 10 X 3 + 7 X 2 + 4 X − 14. {\displaystyle P(X+1)=X^{5}+5X^{4}+10X^{3}+7X^{2}+4X-14.} Substitution can also be applied to other kinds of formal objects built from symbols, for instance elements of free groups. In order for substitution to be defined, one needs an algebraic structure with an appropriate universal property, that asserts the existence of unique homomorphisms that send indeterminates to specific values; the substitution then amounts to finding the image of an element under such a homomorphism. === Proof of substitution in ZFC === The following is a proof of the substitution property of equality in ZFC (as defined in first-order logic without equality), which is adapted from Introduction to Axiomatic Set Theory (1982) by Gaisi Takeuti and Wilson M. Zaring. See Zermelo–Fraenkel set theory § Formal language for the definition of formulas in ZFC. The definition is recursive, so a proof by induction is used. In ZFC in first-order logic without equality, "set equality" is defined to mean that two sets have the same elements, written symbolically as "for all z, z is in x if and only if z is in y". Then, the Axiom of Extensionality asserts that if two sets have the same elements, then they belong to the same sets. == See also == Integration by substitution Lambda calculus § Substitution String interpolation Substitution property of Equality Trigonometric substitution Universal instantiation Principal equation form == Notes == == Citations == == References == Crabbé, M. (2004). On the Notion of Substitution.
Logic Journal of the IGPL, 12, 111–124. Curry, H. B. (1952) On the definition of substitution, replacement and allied notions in an abstract formal system. Revue philosophique de Louvain 50, 251–269. Kleene, S. C. (1967). Mathematical Logic. Reprinted 2002, Dover. ISBN 0-486-42533-9 Robinson, Alan J. A.; Voronkov, Andrei (2001-06-22). Handbook of Automated Reasoning. Elsevier. ISBN 978-0-08-053279-0 == External links == Substitution at the nLab
Wikipedia/Substitution_(algebra)
A differential equation can be homogeneous in either of two respects. A first order differential equation is said to be homogeneous if it may be written f ( x , y ) d y = g ( x , y ) d x , {\displaystyle f(x,y)\,dy=g(x,y)\,dx,} where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form d x x = h ( u ) d u , {\displaystyle {\frac {dx}{x}}=h(u)\,du,} which is easy to solve by integration of the two members. Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term. == History == The term homogeneous was first applied to differential equations by Johann Bernoulli in section 9 of his 1726 article De integraionibus aequationum differentialium (On the integration of differential equations). == Homogeneous first-order differential equations == A first-order ordinary differential equation in the form: M ( x , y ) d x + N ( x , y ) d y = 0 {\displaystyle M(x,y)\,dx+N(x,y)\,dy=0} is a homogeneous type if both functions M(x, y) and N(x, y) are homogeneous functions of the same degree n. That is, multiplying each variable by a parameter λ, we find M ( λ x , λ y ) = λ n M ( x , y ) and N ( λ x , λ y ) = λ n N ( x , y ) . {\displaystyle M(\lambda x,\lambda y)=\lambda ^{n}M(x,y)\quad {\text{and}}\quad N(\lambda x,\lambda y)=\lambda ^{n}N(x,y)\,.} Thus, M ( λ x , λ y ) N ( λ x , λ y ) = M ( x , y ) N ( x , y ) . 
{\displaystyle {\frac {M(\lambda x,\lambda y)}{N(\lambda x,\lambda y)}}={\frac {M(x,y)}{N(x,y)}}\,.} === Solution method === In the quotient M ( t x , t y ) N ( t x , t y ) = M ( x , y ) N ( x , y ) {\textstyle {\frac {M(tx,ty)}{N(tx,ty)}}={\frac {M(x,y)}{N(x,y)}}} , we can let t = ⁠1/x⁠ to simplify this quotient to a function f of the single variable ⁠y/x⁠: M ( x , y ) N ( x , y ) = M ( t x , t y ) N ( t x , t y ) = M ( 1 , y / x ) N ( 1 , y / x ) = f ( y / x ) . {\displaystyle {\frac {M(x,y)}{N(x,y)}}={\frac {M(tx,ty)}{N(tx,ty)}}={\frac {M(1,y/x)}{N(1,y/x)}}=f(y/x)\,.} That is d y d x = − f ( y / x ) . {\displaystyle {\frac {dy}{dx}}=-f(y/x).} Introduce the change of variables y = ux; differentiate using the product rule: d y d x = d ( u x ) d x = x d u d x + u d x d x = x d u d x + u . {\displaystyle {\frac {dy}{dx}}={\frac {d(ux)}{dx}}=x{\frac {du}{dx}}+u{\frac {dx}{dx}}=x{\frac {du}{dx}}+u.} This transforms the original differential equation into the separable form x d u d x = − f ( u ) − u , {\displaystyle x{\frac {du}{dx}}=-f(u)-u,} or 1 x d x d u = − 1 f ( u ) + u , {\displaystyle {\frac {1}{x}}{\frac {dx}{du}}={\frac {-1}{f(u)+u}},} which can now be integrated directly: ln x equals the antiderivative of the right-hand side (see ordinary differential equation). === Special case === A first order differential equation of the form (a, b, c, e, f, g are all constants) ( a x + b y + c ) d x + ( e x + f y + g ) d y = 0 {\displaystyle \left(ax+by+c\right)dx+\left(ex+fy+g\right)dy=0} where af ≠ be can be transformed into a homogeneous type by a linear transformation of both variables (α and β are constants): t = x + α ; z = y + β , {\displaystyle t=x+\alpha ;\;\;z=y+\beta \,,} where α = c f − b g a f − b e ; β = a g − c e a f − b e . 
{\displaystyle \alpha ={\frac {cf-bg}{af-be}};\;\;\beta ={\frac {ag-ce}{af-be}}\,.} For cases where af = be, introduce the change of variables u = ax + by or u = ex + fy; differentiation yields: d u d x = a − b ( a c + a u a g + e u ) , {\displaystyle {\frac {du}{dx}}=a-b{\bigg (}{\frac {ac+au}{ag+eu}}{\bigg )},} or d u d x = e − f ( e c + a u e g + e u ) , {\displaystyle {\frac {du}{dx}}=e-f{\bigg (}{\frac {ec+au}{eg+eu}}{\bigg )},} for each respective substitution. Both may be solved via separation of variables. == Homogeneous linear differential equations == A linear differential equation is homogeneous if it is a homogeneous linear equation in the unknown function and its derivatives. It follows that, if φ(x) is a solution, so is cφ(x), for any (non-zero) constant c. In order for this condition to hold, each nonzero term of the linear differential equation must depend on the unknown function or one of its derivatives. A linear differential equation that fails this condition is called inhomogeneous. A linear differential equation can be represented as a linear operator acting on y(x) where x is usually the independent variable and y is the dependent variable. Therefore, the general form of a linear homogeneous differential equation is L ( y ) = 0 {\displaystyle L(y)=0} where L is a differential operator, a sum of derivatives (defining the "0th derivative" as the original, non-differentiated function), each multiplied by a function fi of x: L = ∑ i = 0 n f i ( x ) d i d x i , {\displaystyle L=\sum _{i=0}^{n}f_{i}(x){\frac {d^{i}}{dx^{i}}}\,,} where fi may be constants, but not all fi may be zero. 
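The scaling property of a homogeneous linear equation (if φ is a solution, so is cφ) can be checked numerically. The sketch below, in Python with illustrative values of my choosing, uses the Euler-type homogeneous equation 2x²y″ − 3xy′ + y = 0, whose power solutions y = xʳ satisfy the characteristic equation 2r(r − 1) − 3r + 1 = 0:

```python
import math

# The operator L = 2x^2 d^2/dx^2 - 3x d/dx + 1 is homogeneous:
# every term involves y or a derivative of y.  A trial solution
# y = x**r gives 2r(r-1) - 3r + 1 = 0, i.e. 2r^2 - 5r + 1 = 0,
# so r = (5 + sqrt(17))/4 is one root.
r = (5 + math.sqrt(17)) / 4

def L(c, x):
    """Residual of 2x^2 y'' - 3x y' + y for y = c * x**r."""
    y   = c * x**r
    yp  = c * r * x**(r - 1)
    ypp = c * r * (r - 1) * x**(r - 2)
    return 2 * x**2 * ypp - 3 * x * yp + y

# phi(x) = x**r solves L(y) = 0, and so does any multiple c * phi.
for x in (0.5, 1.0, 2.0, 7.0):
    assert abs(L(1.0, x)) < 1e-8
    assert abs(L(3.7, x)) < 1e-8
```

The same scaling check fails for an inhomogeneous equation such as 2x²y″ − 3xy′ + y = 2, since the constant term does not scale with c.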
For example, the following linear differential equation is homogeneous: sin ( x ) d 2 y d x 2 + 4 d y d x + y = 0 , {\displaystyle \sin(x){\frac {d^{2}y}{dx^{2}}}+4{\frac {dy}{dx}}+y=0\,,} whereas the following two are inhomogeneous: 2 x 2 d 2 y d x 2 + 4 x d y d x + y = cos ( x ) ; {\displaystyle 2x^{2}{\frac {d^{2}y}{dx^{2}}}+4x{\frac {dy}{dx}}+y=\cos(x)\,;} 2 x 2 d 2 y d x 2 − 3 x d y d x + y = 2 . {\displaystyle 2x^{2}{\frac {d^{2}y}{dx^{2}}}-3x{\frac {dy}{dx}}+y=2\,.} The existence of a constant term is a sufficient condition for an equation to be inhomogeneous, as in the above example. == See also == Separation of variables == Notes == == References == Boyce, William E.; DiPrima, Richard C. (2012), Elementary differential equations and boundary value problems (10th ed.), Wiley, ISBN 978-0470458310. (This is a good introductory reference on differential equations.) Ince, E. L. (1956), Ordinary differential equations, New York: Dover Publications, ISBN 0486603490. (This is a classic reference on ODEs, first published in 1926.) Andrei D. Polyanin; Valentin F. Zaitsev (15 November 2017). Handbook of Ordinary Differential Equations: Exact Solutions, Methods, and Problems. CRC Press. ISBN 978-1-4665-6940-9. Matthew R. Boelkins; Jack L. Goldberg; Merle C. Potter (5 November 2009). Differential Equations with Linear Algebra. Oxford University Press. pp. 274–. ISBN 978-0-19-973666-9. == External links == Homogeneous differential equations at MathWorld Wikibooks: Ordinary Differential Equations/Substitution 1
Wikipedia/Homogeneous_differential_equation
The Rybicki–Press algorithm is a fast algorithm for inverting a matrix whose entries are given by A ( i , j ) = exp ( − a | t i − t j | ) {\displaystyle A(i,j)=\exp(-a\vert t_{i}-t_{j}\vert )} , where a ∈ R {\displaystyle a\in \mathbb {R} } and where the t i {\displaystyle t_{i}} are sorted in increasing order. The key observation behind the Rybicki–Press algorithm is that the matrix inverse of such a matrix is always a tridiagonal matrix (a matrix with nonzero entries only on the main diagonal and the two adjoining ones), and tridiagonal systems of equations can be solved efficiently (to be more precise, in linear time). It is a computational optimization of a general set of statistical methods developed to determine whether two noisy, irregularly sampled data sets are, in fact, dimensionally shifted representations of the same underlying function. The most common use of the algorithm is in the detection of periodicity in astronomical observations, such as for detecting quasars. The method has been extended to the Generalized Rybicki–Press algorithm for inverting matrices with entries of the form A ( i , j ) = ∑ k = 1 p a k exp ( − β k | t i − t j | ) {\displaystyle A(i,j)=\sum _{k=1}^{p}a_{k}\exp(-\beta _{k}\vert t_{i}-t_{j}\vert )} . The key observation in the Generalized Rybicki–Press (GRP) algorithm is that the matrix A {\displaystyle A} is a semi-separable matrix with rank p {\displaystyle p} (that is, a matrix whose upper half, not including the main diagonal, is that of some matrix with matrix rank p {\displaystyle p} and whose lower half is also that of some possibly different rank p {\displaystyle p} matrix) and so can be embedded into a larger band matrix, whose sparsity structure can be leveraged to reduce the computational complexity. 
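This tridiagonal structure is easy to verify numerically. The following Python sketch (with an arbitrary rate a and sample times of my choosing) builds the exponential kernel matrix, inverts it with a plain Gauss–Jordan routine, and checks that every entry more than one step off the diagonal vanishes:

```python
import math

def invert(M):
    """Gauss-Jordan inversion with partial pivoting (pure Python)."""
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

a = 1.3
t = [0.0, 0.4, 1.1, 1.5, 2.8, 3.0]          # sorted sample times
A = [[math.exp(-a * abs(ti - tj)) for tj in t] for ti in t]
Ainv = invert(A)

# Entries of A^{-1} more than one step off the diagonal vanish,
# while entries adjacent to the diagonal generally do not.
off = max(abs(Ainv[i][j]) for i in range(6) for j in range(6)
          if abs(i - j) >= 2)
assert off < 1e-10
assert abs(Ainv[0][1]) > 1e-6
```

The dense inversion here costs O(n³) and is only for checking the structure; the point of the algorithm is that the tridiagonal inverse lets one solve the system in O(n) instead.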
As the matrix A ∈ R n × n {\displaystyle A\in \mathbb {R} ^{n\times n}} has a semi-separable rank of p {\displaystyle p} , the computational complexity of solving the linear system A x = b {\displaystyle Ax=b} or of calculating the determinant of the matrix A {\displaystyle A} scales as O ( p 2 n ) {\displaystyle {\mathcal {O}}\left(p^{2}n\right)} , thereby making it attractive for large matrices. The fact that matrix A {\displaystyle A} is a semi-separable matrix also forms the basis of the celerite library, which is a library for fast and scalable Gaussian process regression in one dimension with implementations in C++, Python, and Julia. The celerite method also provides an algorithm for generating samples from a high-dimensional distribution. The method has found applications in a wide range of fields, especially in astronomical data analysis. == See also == Invertible matrix Matrix decomposition Multidimensional signal processing System of linear equations == References == == External links == Implementation of the Generalized Rybicki Press algorithm celerite library on GitHub
Wikipedia/Rybicki_Press_algorithm
The Harrow–Hassidim–Lloyd (HHL) algorithm is a quantum algorithm for numerically solving a system of linear equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations. It is one of the fundamental quantum algorithms expected to provide a speedup over their classical counterparts, along with Shor's factoring algorithm and Grover's search algorithm. Provided the linear system is sparse and has a low condition number κ {\displaystyle \kappa } , and that the user is interested in the result of a scalar measurement on the solution vector, instead of the values of the solution vector itself, then the algorithm has a runtime of O ( log ( N ) κ 2 ) {\displaystyle O(\log(N)\kappa ^{2})} , where N {\displaystyle N} is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O ( N κ ) {\displaystyle O(N\kappa )} (or O ( N κ ) {\displaystyle O(N{\sqrt {\kappa }})} for positive semidefinite matrices). An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by three independent publications. The demonstrations consisted of simple linear equations on specially designed quantum devices. The first demonstration of a general-purpose version of the algorithm appeared in 2018. Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability. 
== Procedure == The HHL algorithm tackles the following problem: given an N × N {\displaystyle N\times N} Hermitian matrix A {\displaystyle A} and a unit vector b → ∈ R N {\displaystyle {\vec {b}}\in \mathbb {R} ^{N}} , prepare the quantum state | x ⟩ {\displaystyle |x\rangle } corresponding to the vector x → ∈ R N {\displaystyle {\vec {x}}\in \mathbb {R} ^{N}} that solves the linear system A x → = b → {\displaystyle A{\vec {x}}={\vec {b}}} . More precisely, the goal is to prepare a state | x ⟩ {\displaystyle |x\rangle } whose amplitudes equal the elements of x → {\displaystyle {\vec {x}}} . This means, in particular, that the algorithm cannot be used to efficiently retrieve the vector x → {\displaystyle {\vec {x}}} itself. It does, however, allow one to efficiently compute expectation values of the form ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } for some observable M {\displaystyle M} . First, the algorithm represents the vector b → {\displaystyle {\vec {b}}} as a quantum state of the form: | b ⟩ = ∑ i = 1 N b i | i ⟩ . {\displaystyle |b\rangle =\sum _{i\mathop {=} 1}^{N}b_{i}|i\rangle .} Next, Hamiltonian simulation techniques are used to apply the unitary operator e i A t {\displaystyle e^{iAt}} to | b ⟩ {\displaystyle |b\rangle } for a superposition of different times t {\displaystyle t} . The ability to decompose | b ⟩ {\displaystyle |b\rangle } into the eigenbasis of A {\displaystyle A} and to find the corresponding eigenvalues λ j {\displaystyle \lambda _{j}} is facilitated by the use of quantum phase estimation. The state of the system after this decomposition is approximately: ∑ j = 1 N β j | u j ⟩ | λ j ⟩ , {\displaystyle \sum _{j\mathop {=} 1}^{N}\beta _{j}|u_{j}\rangle |\lambda _{j}\rangle ,} where the u j {\displaystyle u_{j}} form the eigenvector basis of A {\displaystyle A} , and | b ⟩ = ∑ j = 1 N β j | u j ⟩ {\displaystyle |b\rangle =\sum _{j\mathop {=} 1}^{N}\beta _{j}|u_{j}\rangle } . 
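The eigenbasis decomposition above, followed by the eigenvalue inversion performed in the next step of the algorithm, can be mimicked classically for a small system. The Python sketch below, using a 2 × 2 real symmetric matrix of my choosing, computes β_j = ⟨u_j, b⟩ and checks that Σ_j β_j λ_j⁻¹ u_j indeed equals A⁻¹b:

```python
import math

# A small symmetric (hence Hermitian) system, chosen for illustration.
A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]

# Eigenvalues of [[p, q], [q, s]]: mean +/- sqrt(((p-s)/2)^2 + q^2).
p, q, s = A[0][0], A[0][1], A[1][1]
mean, half = (p + s) / 2, math.hypot((p - s) / 2, q)
lams = [mean - half, mean + half]

# Normalized eigenvectors: (q, lam - p) spans the eigenspace of lam.
us = []
for lam in lams:
    v = [q, lam - p]
    norm = math.hypot(*v)
    us.append([v[0] / norm, v[1] / norm])

# beta_j = <u_j, b>; then x = sum_j beta_j / lambda_j * u_j = A^{-1} b.
x = [0.0, 0.0]
for lam, u in zip(lams, us):
    beta = u[0] * b[0] + u[1] * b[1]
    x[0] += beta / lam * u[0]
    x[1] += beta / lam * u[1]

# Check A x = b.
assert abs(A[0][0] * x[0] + A[0][1] * x[1] - b[0]) < 1e-12
assert abs(A[1][0] * x[0] + A[1][1] * x[1] - b[1]) < 1e-12
```

HHL performs the same inversion of each λ_j, but on amplitudes of a quantum state rather than on explicitly stored vectors.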
We would then like to perform the linear map taking | λ j ⟩ {\displaystyle |\lambda _{j}\rangle } to C λ j − 1 | λ j ⟩ {\displaystyle C\lambda _{j}^{-1}|\lambda _{j}\rangle } , where C {\displaystyle C} is a normalizing constant. The linear mapping operation is not unitary and thus will require a number of repetitions as it has some probability of failing. After it succeeds, we uncompute the | λ j ⟩ {\displaystyle |\lambda _{j}\rangle } register and are left with a state proportional to: ∑ j = 1 N β j λ j − 1 | u j ⟩ = A − 1 | b ⟩ = | x ⟩ , {\displaystyle \sum _{j\mathop {=} 1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle ,} where | x ⟩ {\displaystyle |x\rangle } is a quantum-mechanical representation of the desired solution vector x. To read out all components of x would require the procedure be repeated at least N times. However, it is often the case that one is not interested in x {\displaystyle x} itself, but rather some expectation value of a linear operator M acting on x. By mapping M to a quantum-mechanical operator and performing the quantum measurement corresponding to M, we obtain an estimate of the expectation value ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } . This allows for a wide variety of features of the vector x to be extracted, including normalization, weights in different parts of the state space, and moments, without actually computing all the values of the solution vector x. == Explanation == === Initialization === Firstly, the algorithm requires that the matrix A {\displaystyle A} be Hermitian so that it can be converted into a unitary operator. In the case where A {\displaystyle A} is not Hermitian, define C = [ 0 A A † 0 ] . 
{\displaystyle \mathbf {C} ={\begin{bmatrix}0&A\\A^{\dagger }&0\end{bmatrix}}.} As C {\displaystyle C} is Hermitian, the algorithm can now be used to solve C y = [ b 0 ] {\displaystyle Cy={\begin{bmatrix}b\\0\end{bmatrix}}} to obtain y = [ 0 x ] {\displaystyle y={\begin{bmatrix}0\\x\end{bmatrix}}} . Secondly, the algorithm requires an efficient procedure to prepare | b ⟩ {\displaystyle |b\rangle } , the quantum representation of b. It is assumed that there exists some linear operator B {\displaystyle B} that can take some arbitrary quantum state | i n i t i a l ⟩ {\displaystyle |\mathrm {initial} \rangle } to | b ⟩ {\displaystyle |b\rangle } efficiently or that this algorithm is a subroutine in a larger algorithm and is given | b ⟩ {\displaystyle |b\rangle } as input. Any error in the preparation of state | b ⟩ {\displaystyle |b\rangle } is ignored. Finally, the algorithm assumes that the state | ψ 0 ⟩ {\displaystyle |\psi _{0}\rangle } can be prepared efficiently, where | ψ 0 ⟩ := 2 / T ∑ τ = ⁡ 0 T − 1 sin ⁡ π ( τ + 1 2 T ) | τ ⟩ {\displaystyle |\psi _{0}\rangle :={\sqrt {2/T}}\sum _{\tau \mathop {=} 0}^{T-1}\sin \pi \left({\tfrac {\tau +{\tfrac {1}{2}}}{T}}\right)|\tau \rangle } for some large T {\displaystyle T} . The coefficients of | ψ 0 ⟩ {\displaystyle |\psi _{0}\rangle } are chosen to minimize a certain quadratic loss function which induces error in the U i n v e r t {\displaystyle U_{\mathrm {invert} }} subroutine described below. === Hamiltonian simulation === Hamiltonian simulation is used to transform the Hermitian matrix A {\displaystyle A} into a unitary operator, which can then be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and given a row index these entries can be computed in time O(s). Under these assumptions, quantum Hamiltonian simulation allows e i A t {\displaystyle e^{iAt}} to be simulated in time O ( log ⁡ ( N ) s 2 t ) {\displaystyle O(\log(N)s^{2}t)} . 
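The Hermitian dilation introduced in the initialization step can be checked directly with classical linear algebra. A minimal Python sketch, using a small non-symmetric real matrix of my choosing and a dense Gaussian-elimination solver:

```python
# For a non-Hermitian (here, non-symmetric real) A, the dilation
# C = [[0, A], [A^T, 0]] is symmetric, and solving C y = [b, 0]
# yields y = [0, x] with A x = b.

def solve(M, rhs):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (aug[i][n]
                - sum(aug[i][j] * x[j] for j in range(i + 1, n))) / aug[i][i]
    return x

A = [[1.0, 2.0],
     [0.0, 1.0]]          # not symmetric
b = [3.0, 1.0]

C = [[0, 0, 1, 2],        # [[0, A], [A^T, 0]]
     [0, 0, 0, 1],
     [1, 0, 0, 0],
     [2, 1, 0, 0]]
y = solve(C, b + [0.0, 0.0])

# The first block of y vanishes; the second block solves A x = b.
assert abs(y[0]) < 1e-12 and abs(y[1]) < 1e-12
x = y[2:]
assert abs(A[0][0] * x[0] + A[0][1] * x[1] - b[0]) < 1e-12
assert abs(A[1][0] * x[0] + A[1][1] * x[1] - b[1]) < 1e-12
```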
=== Uinvert subroutine === The key subroutine to the algorithm, denoted U i n v e r t {\displaystyle U_{\mathrm {invert} }} , is defined as follows and incorporates a phase estimation subroutine: 1. Prepare | ψ 0 ⟩ C {\displaystyle |\psi _{0}\rangle ^{C}} on register C 2. Apply the conditional Hamiltonian evolution ∑ τ = 0 T − 1 | τ ⟩ ⟨ τ | C ⊗ e i A τ t 0 / T {\displaystyle \sum _{\tau =0}^{T-1}|\tau \rangle \langle \tau |^{C}\otimes e^{iA\tau t_{0}/T}} 3. Apply the Fourier transform to the register C. Denote the resulting basis states with | k ⟩ {\displaystyle |k\rangle } for k = 0, ..., T − 1. Define λ k := 2 π k / t 0 {\displaystyle \lambda _{k}:=2\pi k/t_{0}} . 4. Adjoin a three-dimensional register S in the state | h ( λ k ) ⟩ S := 1 − f ( λ k ) 2 − g ( λ k ) 2 | n o t h i n g ⟩ S + f ( λ k ) | w e l l ⟩ S + g ( λ k ) | i l l ⟩ S , {\displaystyle |h(\lambda _{k})\rangle ^{S}:={\sqrt {1-f(\lambda _{k})^{2}-g(\lambda _{k})^{2}}}|\mathrm {nothing} \rangle ^{S}+f(\lambda _{k})|\mathrm {well} \rangle ^{S}+g(\lambda _{k})|\mathrm {ill} \rangle ^{S},} 5. Reverse steps 1–3, uncomputing any garbage produced along the way. The phase estimation procedure in steps 1–3 allows for the estimation of eigenvalues of A up to error ϵ {\displaystyle \epsilon } . The ancilla register in step 4 is necessary to construct a final state with inverted eigenvalues corresponding to the diagonalized inverse of A. In this register, the functions f and g are called filter functions. The states 'nothing', 'well' and 'ill' are used to instruct the loop body on how to proceed; 'nothing' indicates that the desired matrix inversion has not yet taken place, 'well' indicates that the inversion has taken place and the loop should halt, and 'ill' indicates that part of | b ⟩ {\displaystyle |b\rangle } is in the ill-conditioned subspace of A and the algorithm will not be able to produce the desired inversion. Producing a state proportional to the inverse of A requires 'well' to be measured, after which the overall state of the system collapses to the desired state by the extended Born rule. 
=== Main loop === The body of the algorithm follows the amplitude amplification procedure: starting with U i n v e r t B | i n i t i a l ⟩ {\displaystyle U_{\mathrm {invert} }B|\mathrm {initial} \rangle } , the following operation is repeatedly applied: U i n v e r t B R i n i t B † U i n v e r t † R s u c c , {\displaystyle U_{\mathrm {invert} }BR_{\mathrm {init} }B^{\dagger }U_{\mathrm {invert} }^{\dagger }R_{\mathrm {succ} },} where R s u c c = I − 2 | w e l l ⟩ ⟨ w e l l | {\displaystyle R_{\mathrm {succ} }=I-2|\mathrm {well} \rangle \langle \mathrm {well} |} and R i n i t = I − 2 | i n i t i a l ⟩ ⟨ i n i t i a l | . {\displaystyle R_{\mathrm {init} }=I-2|\mathrm {initial} \rangle \langle \mathrm {initial} |.} After each repetition, S {\displaystyle S} is measured and will produce a value of 'nothing', 'well', or 'ill' as described above. This loop is repeated until | w e l l ⟩ {\displaystyle |\mathrm {well} \rangle } is measured, which occurs with a probability p {\displaystyle p} . Rather than repeating 1 p {\displaystyle {\frac {1}{p}}} times to minimize error, amplitude amplification is used to achieve the same error resilience using only O ( 1 p ) {\displaystyle O\left({\frac {1}{\sqrt {p}}}\right)} repetitions. === Scalar measurement === After successfully measuring 'well' on S {\displaystyle S} the system will be in a state proportional to: ∑ j = 1 N β j λ j − 1 | u j ⟩ = A − 1 | b ⟩ = | x ⟩ . {\displaystyle \sum _{j\mathop {=} 1}^{N}\beta _{j}\lambda _{j}^{-1}|u_{j}\rangle =A^{-1}|b\rangle =|x\rangle .} Finally, we apply the quantum-mechanical operator corresponding to M and obtain an estimate of the value of ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } . == Run time analysis == === Classical efficiency === The best classical algorithm which produces the actual solution vector x → {\displaystyle {\vec {x}}} is Gaussian elimination, which runs in O ( N 3 ) {\displaystyle O(N^{3})} time. 
If A is s-sparse and positive semi-definite, then the conjugate gradient method can be used to find the solution vector x → {\displaystyle {\vec {x}}} in O ( N s κ ) {\displaystyle O(Ns\kappa )} time by minimizing the quadratic function | A x → − b → | 2 {\displaystyle |A{\vec {x}}-{\vec {b}}|^{2}} . When only a summary statistic of the solution vector x → {\displaystyle {\vec {x}}} is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of x → † M x → {\displaystyle {\vec {x}}^{\dagger }M{\vec {x}}} in O ( N κ ) {\displaystyle O(N{\sqrt {\kappa }})} . === Quantum efficiency === The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be O ( κ 2 log N / ε ) {\displaystyle O(\kappa ^{2}\log N/\varepsilon )} , where ε > 0 {\displaystyle \varepsilon >0} is the error parameter and κ {\displaystyle \kappa } is the condition number of A {\displaystyle A} . This was subsequently improved to O ( κ log 3 κ log N / ε 3 ) {\displaystyle O(\kappa \log ^{3}\kappa \log N/\varepsilon ^{3})} by Andris Ambainis and a quantum algorithm with runtime polynomial in log ( 1 / ε ) {\displaystyle \log(1/\varepsilon )} was developed by Childs et al. Since the HHL algorithm maintains its logarithmic scaling in N {\displaystyle N} only for sparse or low rank matrices, Wossnig et al. extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in O ( N log N κ 2 ) {\displaystyle O({\sqrt {N}}\log N\kappa ^{2})} time compared to the O ( N log N κ 2 ) {\displaystyle O(N\log N\kappa ^{2})} of the standard HHL algorithm. 
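For comparison, the classical conjugate gradient baseline mentioned above can be sketched in a few lines of Python; the matrix and right-hand side are illustrative choices, not taken from any of the cited works:

```python
import math

# Minimal conjugate-gradient solver for a symmetric positive-definite
# system.  For an s-sparse matrix each A*p product costs O(N s), and
# the iteration count grows with the condition number kappa.
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x (x = 0 initially)
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if math.sqrt(rs_new) < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
residual = [b[i] - sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
assert max(abs(v) for v in residual) < 1e-8
```

Note that, unlike HHL, this produces the full vector x rather than a scalar functional of it.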
=== Optimality === An important factor in the performance of the matrix inversion algorithm is the condition number κ {\displaystyle \kappa } , which represents the ratio of A {\displaystyle A} 's largest and smallest eigenvalues. As the condition number increases, the ease with which the solution vector can be found using gradient descent methods such as the conjugate gradient method decreases, as A {\displaystyle A} becomes closer to a matrix which cannot be inverted and the solution vector becomes less stable. This algorithm assumes that all singular values of the matrix A {\displaystyle A} lie between 1 κ {\displaystyle {\frac {1}{\kappa }}} and 1, in which case the claimed run-time proportional to κ 2 {\displaystyle \kappa ^{2}} will be achieved. Therefore, the speedup over classical algorithms is increased further when κ {\displaystyle \kappa } is a p o l y ( log ⁡ ( N ) ) {\displaystyle \mathrm {poly} (\log(N))} . If the run-time of the algorithm were made poly-logarithmic in κ {\displaystyle \kappa } then problems solvable on n qubits could be solved in poly(n) time, causing the complexity class BQP to be equal to PSPACE. == Error analysis == Performing the Hamiltonian simulation, which is the dominant source of error, is done by simulating e i A t {\displaystyle e^{iAt}} . Assuming that A {\displaystyle A} is s-sparse, this can be done with an error bounded by a constant ε {\displaystyle \varepsilon } , which will translate to the additive error achieved in the output state | x ⟩ {\displaystyle |x\rangle } . The phase estimation step errs by O ( 1 t 0 ) {\displaystyle O\left({\frac {1}{t_{0}}}\right)} in estimating λ {\displaystyle \lambda } , which translates into a relative error of O ( 1 λ t 0 ) {\displaystyle O\left({\frac {1}{\lambda t_{0}}}\right)} in λ − 1 {\displaystyle \lambda ^{-1}} . 
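For intuition, the condition number of a small symmetric positive-definite matrix can be computed directly as the ratio of its extreme eigenvalues; a short Python sketch with illustrative inputs:

```python
import math

# kappa = lambda_max / lambda_min for a symmetric positive-definite
# 2x2 matrix [[p, q], [q, s]], using the closed-form eigenvalues.
def kappa_2x2(p, q, s):
    mean, half = (p + s) / 2, math.hypot((p - s) / 2, q)
    lam_min, lam_max = mean - half, mean + half
    return lam_max / lam_min

assert abs(kappa_2x2(2.0, 0.0, 1.0) - 2.0) < 1e-12   # diag(2, 1)
# A nearly singular matrix has a very large condition number.
assert kappa_2x2(1.0, 0.999, 1.0) > 1000
```

HHL's stated runtime assumes the eigenvalues have been rescaled into [1/κ, 1], so well-conditioned (small-κ) matrices are the favorable case.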
If λ ≥ 1 / κ {\displaystyle \lambda \geq 1/\kappa } , taking t 0 = O ( κ / ε ) {\displaystyle t_{0}=O(\kappa /\varepsilon )} induces a final error of ε {\displaystyle \varepsilon } . This requires that the overall run-time efficiency be increased proportional to O ( 1 ε ) {\displaystyle O\left({\frac {1}{\varepsilon }}\right)} to minimize error. == Experimental realization == While there does not yet exist a quantum computer that can truly offer a speedup over a classical computer, implementation of a "proof of concept" remains an important milestone in the development of a new quantum algorithm. Demonstrating the quantum algorithm for linear systems of equations remained a challenge for years after its proposal until 2013 when it was demonstrated by Cai et al., Barz et al. and Pan et al. in parallel. === Cai et al. === Published in Physical Review Letters 110, 230501 (2013), Cai et al. reported an experimental demonstration of the simplest meaningful instance of this algorithm, that is, solving 2 × 2 {\displaystyle 2\times 2} linear equations for various input vectors. The quantum circuit is optimized and compiled into a linear optical network with four photonic quantum bits (qubits) and four controlled logic gates, which is used to coherently implement every subroutine for this algorithm. For various input vectors, the quantum computer gives solutions for the linear equations with reasonably high precision, ranging from fidelities of 0.825 to 0.993. === Barz et al. === On February 5, 2013, Stefanie Barz and co-workers demonstrated the quantum algorithm for linear systems of equations on a photonic quantum computing architecture. This implementation used two consecutive entangling gates on the same pair of polarization-encoded qubits. Two separately controlled NOT gates were realized where the successful operation of the first was heralded by a measurement of two ancillary photons. Barz et al. 
found that the fidelity in the obtained output state ranged from 64.7% to 98.1% due to the influence of higher-order emissions from spontaneous parametric down-conversion. === Pan et al. === On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance quantum information processor. The implementation was tested using simple linear systems of only 2 variables. Across three experiments they obtained the solution vector with over 96% fidelity. === Wen et al. === Another experimental demonstration using NMR for solving an 8×8 system was reported by Wen et al. in 2018 using the algorithm developed by Subaşı et al. == Applications == Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited practical use due to the current small size of quantum computers. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, which is a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems. === Electromagnetic scattering === Clader et al. provided a preconditioned version of the linear systems algorithm that provided two advances. First, they demonstrated how a preconditioner could be included within the quantum algorithm. This expands the class of problems that can achieve the promised exponential speedup, since the scaling of HHL and the best classical algorithms are both polynomial in the condition number. 
The second advance was the demonstration of how to use HHL to solve for the radar cross-section of a complex shape. This was one of the first end-to-end examples of how to use HHL to solve a concrete problem exponentially faster than the best known classical algorithm. === Linear differential equation solving === Dominic Berry proposed a new algorithm for solving linear time-dependent differential equations as an extension of the quantum algorithm for solving linear systems of equations. Berry provides an efficient algorithm for solving the full-time evolution under sparse linear differential equations on a quantum computer. === Nonlinear differential equation solving === Two groups proposed efficient algorithms for numerically integrating dissipative nonlinear ordinary differential equations. Liu et al. utilized the Carleman linearization technique for second-order equations and Lloyd et al. employed a mean-field linearization method inspired by the nonlinear Schrödinger equation for general-order nonlinearities. The resulting linear equations are solved using quantum algorithms for linear differential equations. === Finite element method === The finite element method uses large systems of linear equations to find approximate solutions to various physical and mathematical models. Montanaro and Pallister demonstrate that the HHL algorithm, when applied to certain FEM problems, can achieve a polynomial quantum speedup. They suggest that an exponential speedup is not possible in problems with fixed dimensions, and for which the solution meets certain smoothness conditions. Quantum speedups for the finite element method are higher for problems which include solutions with higher-order derivatives and large spatial dimensions. For example, problems in many-body dynamics require the solution of equations containing derivatives on orders scaling with the number of bodies, and some problems in computational finance, such as Black-Scholes models, require large spatial dimensions. 
=== Least-squares fitting === Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit in which a continuous function is used to approximate a set of discrete points by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm. === Machine learning and big data analysis === Machine learning is the study of systems that can identify trends in data. Tasks in machine learning frequently involve manipulating and classifying a large volume of data in high-dimensional vector spaces. The runtime of classical machine learning algorithms is limited by a polynomial dependence on both the volume of data and the dimensions of the space. Quantum computers are capable of manipulating high-dimensional vectors using tensor product spaces and thus are well-suited platforms for machine learning algorithms. The quantum algorithm for linear systems of equations has been applied to a support vector machine, which is an optimized linear or non-linear binary classifier. A support vector machine can be used for supervised machine learning, in which a training set of already classified data is available, or unsupervised machine learning, in which all data given to the system is unclassified. Rebentrost et al. show that a quantum support vector machine can be used for big data classification and achieve an exponential speedup over classical computers. In June 2018, Zhao et al. 
developed an algorithm for performing Bayesian training of deep neural networks in quantum computers with an exponential speedup over classical training due to the use of the quantum algorithm for linear systems of equations, also providing the first general-purpose implementation of the algorithm to be run in cloud-based quantum computers. === Finance === Proposals for using HHL in finance include solving partial differential equations for the Black–Scholes equation and determining portfolio optimization via a Markowitz solution. === Quantum chemistry === In 2023, Baskaran et al. proposed applying the HHL algorithm to quantum chemistry calculations via the linearized coupled cluster method (LCC). The connection between the HHL algorithm and the LCC method is due to the fact that the latter can be recast in the form of a system of linear equations. A key factor that makes this approach useful for quantum chemistry is that the number of state register qubits is the natural logarithm of the number of excitations, thus offering an exponential suppression in the number of required qubits when compared to variational quantum eigensolver or the quantum phase estimation algorithms. This leads to a 'coexistence across scales', where in a given quantum computing era, HHL-LCC could be applied to much larger systems whereas QPE-CASCI could be employed for smaller molecular systems but with better accuracy in predicting molecular properties. On the algorithmic side, the authors introduce the 'AdaptHHL' approach, which circumvents the need to expend an ~O(N³) classical overhead associated with fixing a value for the parameter 'c' in the controlled-rotation module of the algorithm. == Implementation difficulties == Recognizing the importance of the HHL algorithm in the field of quantum machine learning, Scott Aaronson analyzes the caveats and factors that could limit the actual quantum advantage of the algorithm. 
First, the vector b → {\displaystyle {\vec {b}}} has to be efficiently prepared as the quantum state | b ⟩ {\displaystyle |b\rangle } . If the vector is not close to uniform, the state preparation is likely to be costly, and if it takes O ( n c ) {\displaystyle O(n^{c})} steps the exponential advantage of HHL would vanish. Second, the quantum phase estimation step calls for the generation of the unitary e i A t {\displaystyle e^{iAt}} , and its controlled application. The efficiency of this step depends on the A {\displaystyle A} matrix being sparse and 'well conditioned' (low κ {\displaystyle \kappa } ). Otherwise, the application of e i A t {\displaystyle e^{iAt}} would grow as O ( n c ) {\displaystyle O(n^{c})} and once again, the algorithm's quantum advantage would vanish. Lastly, the solution vector | x ⟩ {\displaystyle |x\rangle } is not readily accessible. The HHL algorithm enables learning a 'summary' of the vector, namely the result of measuring the expectation of an operator ⟨ x | M | x ⟩ {\displaystyle \langle x|M|x\rangle } . If actual values of x → {\displaystyle {\vec {x}}} are needed, then HHL would need to be repeated O ( n ) {\displaystyle O(n)} times, killing the exponential speed-up. However, three ways of avoiding getting the actual values have been proposed: first, if only some properties of the solution are needed; second, if the results are needed only to feed downstream matrix operations; third, if only a sample of the solution is needed. == See also == Differentiable programming == References ==
Wikipedia/Quantum_algorithm_for_linear_systems_of_equations
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it. At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept. == Overview == The concept of a dynamical system has its origins in Newtonian mechanics. 
There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, a difference equation, or an equation on another time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit. Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because: The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability. The type of trajectory may be more important than one particular trajectory.
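The iteration procedure described above can be sketched with the simplest possible scheme, a forward-Euler step; the decay equation, step size, and step count below are arbitrary illustrations, not taken from any particular source.

```python
# Minimal sketch of "integrating the system": advance the state a small step
# at a time by iterating the evolution rule x_{k+1} = x_k + dt * v(x_k).

def euler_orbit(v, x0, dt, steps):
    """Return the orbit of x' = v(x) traced by forward-Euler iteration."""
    orbit = [x0]
    x = x0
    for _ in range(steps):
        x = x + dt * v(x)
        orbit.append(x)
    return orbit

# Illustrative vector field: exponential decay x' = -x.
# The computed orbit approaches the equilibrium at 0.
orbit = euler_orbit(lambda x: -x, 1.0, 0.01, 1000)
```

Each entry of `orbit` is one small time step; more sophisticated integrators differ only in how the step is taken.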
Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood. The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid. The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos. == History == Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system. In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics. Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others. Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft. 
== Formal definition == In the most general sense, a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function Φ : U ⊆ ( T × X ) → X {\displaystyle \Phi :U\subseteq (T\times X)\to X} with p r o j 2 ( U ) = X {\displaystyle \mathrm {proj} _{2}(U)=X} (where p r o j 2 {\displaystyle \mathrm {proj} _{2}} is the 2nd projection map) and for any x in X: Φ ( 0 , x ) = x {\displaystyle \Phi (0,x)=x} Φ ( t 2 , Φ ( t 1 , x ) ) = Φ ( t 2 + t 1 , x ) , {\displaystyle \Phi (t_{2},\Phi (t_{1},x))=\Phi (t_{2}+t_{1},x),} for t 1 , t 2 + t 1 ∈ I ( x ) {\displaystyle \,t_{1},\,t_{2}+t_{1}\in I(x)} and t 2 ∈ I ( Φ ( t 1 , x ) ) {\displaystyle \ t_{2}\in I(\Phi (t_{1},x))} , where we have defined the set I ( x ) := { t ∈ T : ( t , x ) ∈ U } {\displaystyle I(x):=\{t\in T:(t,x)\in U\}} for any x in X. In particular, in the case that U = T × X {\displaystyle U=T\times X} we have for every x in X that I ( x ) = T {\displaystyle I(x)=T} and thus that Φ defines a monoid action of T on X. The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system. We often write Φ x ( t ) ≡ Φ ( t , x ) {\displaystyle \Phi _{x}(t)\equiv \Phi (t,x)} Φ t ( x ) ≡ Φ ( t , x ) {\displaystyle \Phi ^{t}(x)\equiv \Phi (t,x)} if we take one of the variables as constant. The function Φ x : I ( x ) → X {\displaystyle \Phi _{x}:I(x)\to X} is called the flow through x and its graph is called the trajectory through x. The set γ x ≡ { Φ ( t , x ) : t ∈ I ( x ) } {\displaystyle \gamma _{x}\equiv \{\Phi (t,x):t\in I(x)\}} is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T Φ ( t , x ) ∈ S . 
{\displaystyle \Phi (t,x)\in S.} Thus, in particular, if S is Φ-invariant, I ( x ) = T {\displaystyle I(x)=T} for all x in S. That is, the flow through x must be defined for all time for every element of S. More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor. === Geometrical definition === In the geometrical definition, a dynamical system is the tuple ⟨ T , M , f ⟩ {\displaystyle \langle {\mathcal {T}},{\mathcal {M}},f\rangle } . T {\displaystyle {\mathcal {T}}} is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M {\displaystyle {\mathcal {M}}} is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f t (with t ∈ T {\displaystyle t\in {\mathcal {T}}} ) such that f t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T {\displaystyle {\mathcal {T}}} into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T {\displaystyle {\mathcal {T}}} . ==== Real dynamical system ==== A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. 
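For discrete time the formal definition above can be made concrete in a few lines: taking T to be the non-negative integers, the evolution function is t-fold iteration of a map, and the two monoid-action axioms hold by construction. The logistic map used below is only an illustrative choice of evolution rule.

```python
# Sketch of the formal definition with T = non-negative integers, X = [0, 1]:
# Phi(t, x) applies a map f to x, t times.

def f(x):
    return 3.5 * x * (1.0 - x)   # illustrative evolution rule (a logistic map)

def phi(t, x):
    """Evolution function Phi(t, x) of the discrete system defined by f."""
    for _ in range(t):
        x = f(x)
    return x

x = 0.2
identity_axiom = phi(0, x) == x                      # Phi(0, x) = x
composition_axiom = phi(2, phi(3, x)) == phi(5, x)   # Phi(t2, Phi(t1, x)) = Phi(t2 + t1, x)
```

Here both axioms hold exactly, since either side of the composition law performs the same five applications of f in the same order.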
When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. ==== Discrete dynamical system ==== A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade. ==== Cellular automaton ==== A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice. ==== Multidimensional generalization ==== Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing. ==== Compactification of a dynamical system ==== Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*). In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected. === Measure theoretical definition === A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). 
Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ − 1 σ ∈ Σ {\displaystyle \Phi ^{-1}\sigma \in \Sigma } . A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ ( Φ − 1 σ ) = μ ( σ ) {\displaystyle \mu (\Phi ^{-1}\sigma )=\mu (\sigma )} . Combining the above, a map Φ is said to be a measure-preserving transformation of X , if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system. The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φ n = Φ ∘ Φ ∘ ⋯ ∘ Φ {\displaystyle \Phi ^{n}=\Phi \circ \Phi \circ \dots \circ \Phi } for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated. ==== Relation to geometric definition ==== The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. 
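The measure-preserving condition can be checked by hand for a standard textbook example, the doubling map Φ(x) = 2x mod 1 on [0, 1) with Lebesgue measure: the preimage of an interval consists of two intervals of half the length, so its measure is unchanged. This is an illustration of the definition, not of the Krylov–Bogolyubov construction itself.

```python
# Sketch: the doubling map Phi(x) = 2x mod 1 preserves Lebesgue measure.
# Phi^{-1}[a, b) = [a/2, b/2) union [(1+a)/2, (1+b)/2), so the preimage of an
# interval of measure (b - a) again has measure (b - a).

def preimage_measure(a, b):
    """Lebesgue measure of the preimage of [a, b) under x -> 2x mod 1."""
    left = b / 2 - a / 2                 # measure of [a/2, b/2)
    right = (1 + b) / 2 - (1 + a) / 2    # measure of [(1+a)/2, (1+b)/2)
    return left + right

check = preimage_measure(0.2, 0.7)  # mu(Phi^{-1} sigma) for an interval of measure 0.5
```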
In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance. Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution. For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems. == Construction of dynamical systems == The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following: x ˙ = v ( t , x ) {\displaystyle {\dot {\boldsymbol {x}}}={\boldsymbol {v}}(t,{\boldsymbol {x}})} x | t = 0 = x 0 {\displaystyle {\boldsymbol {x}}|_{t=0}={\boldsymbol {x}}_{0}} where x ˙ {\displaystyle {\dot {\boldsymbol {x}}}} represents the velocity of the material point x, M is a finite-dimensional manifold, and v: T × M → TM is a vector field in Rn or Cn that represents the change of velocity induced by the known forces acting on the given material point in the phase space M.
The change is not a vector in the phase space M, but is instead in the tangent space TM. There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions. Depending on the properties of this vector field, the mechanical system is called: autonomous, when v(t, x) = v(x); or homogeneous, when v(t, 0) = 0 for all t. The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above x ( t ) = Φ ( t , x 0 ) {\displaystyle {\boldsymbol {x}}(t)=\Phi (t,{\boldsymbol {x}}_{0})} The dynamical system is then (T, M, Φ). Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy x ˙ − v ( t , x ) = 0 ⇔ G ( t , Φ ( t , x 0 ) ) = 0 {\displaystyle {\dot {\boldsymbol {x}}}-{\boldsymbol {v}}(t,{\boldsymbol {x}})=0\qquad \Leftrightarrow \qquad {\mathfrak {G}}\left(t,\Phi (t,{\boldsymbol {x}}_{0})\right)=0} where G : ( T × M ) M → C {\displaystyle {\mathfrak {G}}:{{(T\times M)}^{M}}\to \mathbf {C} } is a functional from the set of evolution functions to the field of the complex numbers. This equation is useful when modeling mechanical systems with complicated constraints. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. == Examples == == Linear dynamical systems == Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers.
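The passage from an initial value problem to an evolution function can be sketched with the scalar linear equation ẋ = ax, whose solution is known in closed form; the coefficient and initial value below are arbitrary illustrations.

```python
import math

# Sketch: for x' = a*x with x(0) = x0 the evolution function is
# Phi(t, x0) = x0 * exp(a*t), turning the ODE into the system (R, R, Phi).

A = -0.5  # illustrative choice of the coefficient a

def phi(t, x0):
    """Evolution function of the scalar linear ODE x' = a*x."""
    return x0 * math.exp(A * t)

# Flow property Phi(t + s, x0) = Phi(t, Phi(s, x0)), with Phi(0, x0) = x0.
lhs = phi(1.0 + 2.0, 4.0)
rhs = phi(1.0, phi(2.0, 4.0))
```

The two sides agree (up to floating-point roundoff), which is exactly the statement that the solutions of the ODE form a flow.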
The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t). === Flows === For a flow, the vector field v(x) is an affine function of the position in the phase space, that is, x ˙ = v ( x ) = A x + b , {\displaystyle {\dot {x}}=v(x)=Ax+b,} with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b: Φ t ( x 1 ) = x 1 + b t . {\displaystyle \Phi ^{t}(x_{1})=x_{1}+bt.} When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0, Φ t ( x 0 ) = e t A x 0 . {\displaystyle \Phi ^{t}(x_{0})=e^{tA}x_{0}.} When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin. The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior. === Maps === A discrete-time, affine dynamical system has the form of a matrix difference equation: x n + 1 = A x n + b , {\displaystyle x_{n+1}=Ax_{n}+b,} with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)−1b removes the term b from the equation.
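The coordinate change that removes the b term can be verified directly in one dimension, where (1 − A)−1b is simply the fixed point b/(1 − a); the coefficients below are arbitrary illustrations.

```python
# Sketch: for the affine map x_{n+1} = a*x_n + b with a != 1, shifting the
# origin to the fixed point p = b / (1 - a) leaves the purely linear map
# y_{n+1} = a*y_n.

a, b = 0.5, 3.0
p = b / (1.0 - a)   # the fixed point, here p = 6

def affine(x):
    return a * x + b

def linear(y):
    return a * y

x0 = 2.0
step_then_shift = affine(x0) - p   # apply the map, then change coordinates
shift_then_step = linear(x0 - p)   # change coordinates, then apply the map
```

The two quantities agree, and affine(p) = p confirms that the shifted origin is a fixed point of the map, so the b term has indeed been removed.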
In the new coordinate system, the origin is a fixed point of the map and the solutions are given by the linear system Anx0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map. As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along αu1, with α ∈ R, is an invariant curve of the map. Points in this straight line run into the fixed point. There are also many other discrete dynamical systems. == Local dynamics == The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible. === Rectification === A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem. The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line.
The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches. === Near periodic orbits === In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0. The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x2), so a change of coordinates h can only be expected to simplify F to its linear part h − 1 ∘ F ∘ h ( x ) = J ⋅ x . {\displaystyle h^{-1}\circ F\circ h(x)=J\cdot x.} This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. 
If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi – Σ (multiples of other eigenvalues) occur in the denominators of the terms for the function h, the non-resonant condition is also known as the small divisor problem. === Conjugation results === The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic. In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic. The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point. == Bifurcation theory == When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation. Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ.
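A one-dimensional illustration of a parameter-dependent family is the logistic maps (an illustrative choice, not tied to any particular source): one can track the multiplier of a fixed point as μ varies and watch it cross the unit circle.

```python
# Sketch: for F_mu(x) = mu*x*(1-x) the nonzero fixed point is x* = 1 - 1/mu
# and the eigenvalue of DF_mu there is F'_mu(x*) = mu*(1 - 2*x*) = 2 - mu.
# It crosses the unit circle at mu = 3, a period-doubling bifurcation.

def fixed_point(mu):
    return 1.0 - 1.0 / mu

def multiplier(mu):
    x = fixed_point(mu)
    return mu * (1.0 - 2.0 * x)   # derivative of mu*x*(1-x) at the fixed point

stable = abs(multiplier(2.5)) < 1.0    # eigenvalue inside the unit circle
critical = multiplier(3.0)             # eigenvalue -1 at the bifurcation point
unstable = abs(multiplier(3.2)) > 1.0  # eigenvalue outside: stability is lost
```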
At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems. The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory. Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations. == Ergodic systems == In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ t(A) and invariance of the phase space means that v o l ( A ) = v o l ( Φ t ( A ) ) . {\displaystyle \mathrm {vol} (A)=\mathrm {vol} (\Phi ^{t}(A)).} In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure. 
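Volume preservation can be made explicit for the harmonic oscillator, whose time-t flow rotates the (q, p) phase plane; this standard example is given only as an illustration.

```python
import math

# Sketch: the flow of q' = p, p' = -q is the rotation
#   (q, p) -> (q cos t + p sin t, -q sin t + p cos t).
# Its Jacobian determinant is cos^2 t + sin^2 t = 1 for every t, so
# phase-space area (the Liouville measure) is preserved by the flow.

def flow_matrix(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, s], [-s, c]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

jacobian_det = det2(flow_matrix(0.7))  # equals 1 for any choice of t
```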
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution. For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms. One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω). The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ t. This introduces an operator U t, the transfer operator, ( U t a ) ( x ) = a ( Φ − t ( x ) ) . {\displaystyle (U^{t}a)(x)=a(\Phi ^{-t}(x)).} By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. 
In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U. The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems. == Nonlinear dynamical systems and chaos == Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold). This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?" The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. 
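Sensitive dependence on initial conditions is easy to exhibit numerically with the logistic map at parameter 4, a fully deterministic quadratic map; the starting point and perturbation size below are arbitrary illustrations.

```python
# Sketch: two orbits of x -> 4x(1-x) starting 1e-10 apart separate to order
# one within a few dozen iterates, despite the dynamics being deterministic.

def orbit(x, n):
    """Return the first n+1 points of the logistic-map orbit starting at x."""
    xs = [x]
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = orbit(0.3, 60)
b = orbit(0.3 + 1e-10, 60)
max_gap = max(abs(u - v) for u, v in zip(a, b))  # grows far beyond 1e-10
```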
Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear. === Solutions of finite duration === For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus, solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, by the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line. As an example, the equation y′ = −sgn(y)·√|y|, y(0) = 1 {\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1} admits the finite-duration solution y(t) = (1/4)(1 − t/2 + |1 − t/2|)² {\displaystyle y(t)={\frac {1}{4}}\left(1-{\frac {t}{2}}+\left|1-{\frac {t}{2}}\right|\right)^{2}} which is zero for t ≥ 2 {\displaystyle t\geq 2} and is not Lipschitz continuous at its ending time t = 2. {\displaystyle t=2.} == See also == == References == == Further reading == == External links == Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems. Encyclopedia of dynamical systems A part of Scholarpedia — peer-reviewed and written by invited experts. Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. 
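The claimed solution can be checked numerically. This Python sketch (illustrative only; the sample points and tolerances are arbitrary choices) verifies the initial condition, checks the ODE y′ = −sgn(y)·√|y| with a central difference, and confirms that the solution stays at zero after the ending time t = 2:

```python
import math

def y(t):
    # closed-form solution from the text: y(t) = (1/4)(1 - t/2 + |1 - t/2|)^2
    u = 1 - t / 2
    return 0.25 * (u + abs(u)) ** 2

def rhs(yv):
    # right-hand side of the ODE: y' = -sgn(y) * sqrt(|y|)
    return -math.copysign(1, yv) * math.sqrt(abs(yv)) if yv != 0 else 0.0

assert y(0) == 1.0                      # initial condition
assert y(2.0) == 0.0 and y(5.0) == 0.0  # stays at zero forever after t = 2

# central-difference check that y satisfies the ODE before the ending time
h = 1e-6
for t in (0.5, 1.0, 1.5):
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy - rhs(y(t))) < 1e-5
```

For t < 2 the solution reduces to (1 − t/2)², whose derivative −(1 − t/2) equals −√y there, which is what the loop verifies.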
Wiens Sci.Nonlinear FAQ 2.0 (Sept 2003) provides definitions, explanations and resources related to nonlinear science Online books or lecture notes Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level. Dynamical systems. George D. Birkhoff's 1927 book already takes a modern approach to dynamical systems. Chaos: classical and quantum. An introduction to dynamical systems from the periodic orbit point of view. Learning Dynamical Systems. Tutorial on learning dynamical systems. Ordinary Differential Equations and Dynamical Systems. Lecture notes by Gerald Teschl Research groups Dynamical Systems Group Groningen, IWI, University of Groningen. Chaos @ UMD. Concentrates on the applications of dynamical systems. [2], SUNY Stony Brook. Lists of conferences, researchers, and some open problems. Center for Dynamics and Geometry, Penn State. Control and Dynamical Systems, Caltech. Laboratory of Nonlinear Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL). Center for Dynamical Systems, University of Bremen Systems Analysis, Modelling and Prediction Group, University of Oxford Non-Linear Dynamics Group, Instituto Superior Técnico, Technical University of Lisbon Dynamical Systems Archived 2017-06-02 at the Wayback Machine, IMPA, Instituto Nacional de Matemática Pura e Applicada. Nonlinear Dynamics Workgroup Archived 2015-01-21 at the Wayback Machine, Institute of Computer Science, Czech Academy of Sciences. UPC Dynamical Systems Group Barcelona, Polytechnical University of Catalonia. Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara.
Wikipedia/Dynamical_systems
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing. Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City. == History == Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second-largest academic publisher with 65 staff in 1872. In 1964, Springer expanded its business internationally, opening an office in New York City. Offices in Tokyo, Paris, Milan, Hong Kong, and Delhi soon followed. In 1999, the academic publishing company BertelsmannSpringer was formed after the media and entertainment company Bertelsmann bought a majority stake in Springer-Verlag. In 2003, the British investment groups Cinven and Candover bought BertelsmannSpringer from Bertelsmann. They merged the company in 2004 with the Dutch publisher Kluwer Academic Publishers (successor of D. Reidel, Dr. W. Junk, Plenum Publishers, most of Chapman & Hall, and Baltzer Science Publishers) which they bought from Wolters Kluwer in 2002, to form Springer Science+Business Media. In 2006, Springer acquired Humana Press. Springer acquired the open-access publisher BioMed Central in October 2008 for an undisclosed amount. In 2009, Cinven and Candover sold Springer to two private equity firms, EQT AB and Government of Singapore Investment Corporation, confirmed in February 2010 after the competition authorities in the US and in Europe approved the transfer. In 2011, Springer acquired Pharma Marketing and Publishing Services (MPS) from Wolters Kluwer. 
In 2013, the London-based private equity firm BC Partners acquired a majority stake in Springer from EQT and GIC for $4.4 billion. In January 2015, Holtzbrinck Publishing Group / Nature Publishing Group and Springer Science+Business Media announced a merger. In May 2015 they concluded the transaction and formed a new joint venture company, Springer Nature, with Holtzbrinck holding a majority 53% share and BC Partners retaining a 47% interest in the company. == Products == In 1996, Springer launched electronic book and journal content on its SpringerLink site. SpringerImages was launched in 2008. In 2009, SpringerMaterials, a platform for accessing the Landolt-Börnstein database of research and information on materials and their properties, was launched. AuthorMapper is a free online tool for visualizing scientific research that enables document discovery based on author locations and geographic maps, helping users explore patterns in scientific research, identify literature trends, discover collaborative relationships, and locate experts in several scientific/medical fields. Springer Protocols contained a collection of laboratory protocols, recipes that provide step-by-step instructions for conducting experiments, which in 2018 was made available in SpringerLink instead. Book publications include major reference works, textbooks, monographs and book series; more than 168,000 titles are available as e-books in 24 subject collections. === Open access === Springer is a member of the Open Access Scholarly Publishers Association. For some of its journals, Springer does not require its authors to transfer their copyrights, and allows them to decide whether their articles are published under an open-access license or in the traditional restricted license model. While open-access publishing typically requires the author to pay a fee for copyright retention, this fee is sometimes covered by a third party. 
For example, a national institution in Poland allows authors to publish in open-access journals at no personal cost, with the fees covered by public funds. == Controversies == In 1938, Springer-Verlag was pressed to apply Nazi principles to the journal Zentralblatt MATH. Tullio Levi-Civita, who was Jewish, was forced out from the editorial board, and Otto Neugebauer resigned in protest along with most of the rest of the board. In 2014, it was revealed that 16 papers in conference proceedings published by Springer had been computer-generated using SCIgen. Springer subsequently retracted all papers from these proceedings. IEEE had removed more than 100 fake papers from its conference proceedings. In 2015, Springer retracted 64 papers from 10 of its journals after a fraudulent peer-review process was uncovered. === Manipulation of bibliometrics === According to Goodhart's law and concerned academics like the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics like the journal impact factor, which is often used as a proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics. Seven Springer Nature journals, which exhibited unusual levels of self-citation, had their 2019 journal impact factors suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total. 
== Selected imprints == == Selected publications == Cellular Oncology Encyclopaedia of Mathematics Ergebnisse der Mathematik und ihrer Grenzgebiete (book series) Graduate Texts in Mathematics (book series) Grothendieck's Séminaire de géométrie algébrique The International Journal of Advanced Manufacturing Technology Lecture Notes in Computer Science Undergraduate Texts in Mathematics (book series) Zentralblatt MATH MRS Bulletin == See also == Category:Springer Science+Business Media academic journals List of publishers Media concentration == References == == External links == Official website Mary H. Munroe (2004). "Springer Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on 2014-10-20 – via Northern Illinois University.
Wikipedia/Springer_Science_&_Business_Media
In computer science, function composition is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole. Programmers frequently apply functions to results of other functions, and almost all programming languages allow it. In some cases, the composition of functions is interesting as a function in its own right, to be used later. Such a function can always be defined but languages with first-class functions make it easier. The ability to easily compose functions encourages factoring (breaking apart) functions for maintainability and code reuse. More generally, big systems might be built by composing whole programs. Narrowly speaking, function composition applies to functions that operate on a finite amount of data, each step sequentially processing it before handing it to the next. Functions that operate on potentially infinite data (a stream or other codata) are known as filters, and are instead connected in a pipeline, which is analogous to function composition and can execute concurrently. == Composing function calls == For example, suppose we have two functions f and g, as in z = f(y) and y = g(x). Composing them means we first compute y = g(x), and then use y to compute z = f(y). Here is the example in the C language: double y = g(x); double z = f(y); The steps can be combined if we don't give a name to the intermediate result: double z = f(g(x)); Despite differences in length, these two implementations compute the same result. The second implementation requires only one line of code and is colloquially referred to as a "highly composed" form. Readability, and hence maintainability, is one advantage of highly composed forms, since they require fewer lines of code, minimizing a program's "surface area". DeMarco and Lister empirically verify an inverse relationship between surface area and maintainability. 
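The same two-step composition can be sketched in Python (f and g are hypothetical stand-ins, since the text leaves them abstract):

```python
def g(x):
    return 2 * x          # hypothetical inner function

def f(y):
    return y + 1          # hypothetical outer function

# with a named intermediate result ...
x = 10
y = g(x)
z = f(y)

# ... and in the "highly composed" one-line form
assert z == f(g(x)) == 21
```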
On the other hand, it may be possible to overuse highly composed forms. A nesting of too many functions may have the opposite effect, making the code less maintainable. In a stack-based language, functional composition is even more natural: it is performed by concatenation, and is usually the primary method of program design. The above example in Forth: g f, which will take whatever was on the stack before, apply g, then f, and leave the result on the stack. See postfix composition notation for the corresponding mathematical notation. == Naming the composition of functions == Now suppose that the combination of calling f() on the result of g() is frequently useful, and that we want to name it foo() to be used as a function in its own right. In most languages, we can define a new function implemented by composition. Example in C: double foo(double x) { return f(g(x)); } (the long form with intermediates would work as well.) Example in Forth: : foo g f ; In languages such as C, the only way to create a new function is to define it in the program source, which means that functions can't be composed at run time. An evaluation of an arbitrary composition of predefined functions, however, is possible. == First-class composition == In functional programming languages, function composition can be naturally expressed as a higher-order function or operator. In other programming languages you can write your own mechanisms to perform function composition. === Haskell === In Haskell, the example foo = f  ∘  g given above becomes: foo = f . g using the built-in composition operator (.), which can be read as f after g or g composed with f. The composition operator  ∘  itself can be defined in Haskell using a lambda expression:
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g = \x -> f (g x)
The first line describes the type of (.): it takes a pair of functions, f, g, and returns a function (the lambda expression on the second line). 
Note that Haskell doesn't require specification of the exact input and output types of f and g; the a, b, c, and x are placeholders; only the relation between f and g matters (f must accept what g returns). This makes (.) a polymorphic operator. === Lisp === In variants of Lisp, especially Scheme, the interchangeability of code and data together with the treatment of functions lends itself extremely well to a recursive definition of a variadic compositional operator. === APL === Many dialects of APL feature built-in function composition using the symbol ∘. This higher-order function extends function composition to dyadic application of the left side function such that A f∘g B is A f g B. Additionally, you can define function composition yourself; in dialects that do not support inline definition using braces, the traditional definition is available. === Raku === Raku, like Haskell, has a built-in function composition operator; the main difference is that it is spelled ∘ or o. Also like Haskell, you can define the operator yourself; in fact, the following is the Raku code used to define it in the Rakudo implementation. === Nim === Nim supports uniform function call syntax, which allows for arbitrary function composition through the method syntax . operator. === Python === In Python, one way to define the composition of any group of functions is to use the functools.reduce function, for example compose = lambda *fs: functools.reduce(lambda f, g: lambda x: f(g(x)), fs). === JavaScript === In JavaScript we can define it as a function which takes two functions f and g, and produces a function: const compose = (f, g) => x => f(g(x)); === C# === In C# we can define it as an extension method which takes Funcs f and g, and produces a new Func. === Ruby === Languages like Ruby let you construct a binary operator yourself; however, a native function composition operator was introduced in Ruby 2.6. == Research survey == Notions of composition, including the principle of compositionality and composability, are so ubiquitous that numerous strands of research have separately evolved. 
The following is a sampling of the kind of research in which the notion of composition is central. Steele (1994) directly applied function composition to the assemblage of building blocks known as 'monads' in the Haskell programming language. Meyer (1988) addressed the software reuse problem in terms of composability. Abadi & Lamport (1993) formally defined a proof rule for functional composition that assures a program's safety and liveness. Kracht (2001) identified a strengthened form of compositionality by placing it into a semiotic system and applying it to the problem of structural ambiguity frequently encountered in computational linguistics. van Gelder & Port (1993) examined the role of compositionality in analog aspects of natural language processing. According to a review by Gibbons (2002), formal treatment of composition underlies validation of component assembly in visual programming languages like IBM's Visual Age for the Java language. == Large-scale composition == Whole programs or systems can be treated as functions, which can be readily composed if their inputs and outputs are well-defined. Pipelines allowing easy composition of filters were so successful that they became a design pattern of operating systems. Imperative procedures with side effects violate referential transparency and therefore are not cleanly composable. However if one considers the "state of the world" before and after running the code as its input and output, one gets a clean function. Composition of such functions corresponds to running the procedures one after the other. The monad formalism uses this idea to incorporate side effects and input/output (I/O) into functional languages. 
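The "state of the world" idea can be sketched in Python (the record shape and the procedure names are invented for illustration): each side-effecting procedure becomes a pure function from state to state, and running procedures in sequence becomes ordinary function composition.

```python
# Each "procedure" is modeled as a pure function from world-state to world-state,
# so running procedures one after the other is just function composition.
def deposit(amount):
    return lambda state: {**state, "balance": state["balance"] + amount}

def log(msg):
    return lambda state: {**state, "log": state["log"] + [msg]}

def run(*steps):
    def composed(state):
        for step in steps:          # composition = running the steps in order
            state = step(state)
        return state
    return composed

program = run(deposit(50), log("deposited 50"), deposit(-20))
final = program({"balance": 100, "log": []})
assert final["balance"] == 130
assert final["log"] == ["deposited 50"]
```

No shared mutable state is touched; the "effects" exist only in the state value threaded through the composition, which is the essence of the monadic treatment of side effects mentioned above.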
== See also == Currying Functional decomposition Implementation inheritance Inheritance semantics Iteratee Pipeline (Unix) Principle of compositionality Virtual inheritance == Notes == == References == Abadi, Martín; Lamport, Leslie (1993), "Composing specifications" (PDF), ACM Transactions on Programming Languages and Systems, 15 (1): 73–132, doi:10.1145/151646.151649. Cox, Brad (1986), Object-oriented Programming, an Evolutionary Approach, Reading, MA: Addison-Wesley, Bibcode:1986oopa.book.....C, ISBN 978-0-201-54834-1. Daume, Hal III, Yet Another Haskell Tutorial. DeMarco, Tom; Lister, Tim (1995), "Software development: state of the art vs. state of the practice", in DeMarco, Tom (ed.), Why Does Software Cost So Much, and other puzzles of the Information Age, New York, NY: Dorset House, ISBN 0-932633-34-X. van Gelder, Timothy; Port, Robert (1993), "Beyond symbolic: prolegomena to a Kama-Sutra of compositionality", in Honavar, Vasant; Uhr, Leonard (eds.), Symbol Processing and Connectionist Models in Artificial Intelligence and Cognition: Steps Toward Integration, Academic Press. Gibbons, Jeremy (2002), "Towards a Colimit-Based Semantics for Visual Programming", in Arbab, Farhad; Talcott, Carolyn (eds.), Proc. 5th International Conference on Coordination Models and Languages (PDF), Lecture Notes in Computer Science, vol. 2315, Springer-Verlag, pp. 339–350, doi:10.1007/3-540-46000-4_18, ISBN 978-3-540-43410-8. Korn, Henry; Liberi, Albert (1974), An Elementary Approach to Functions, New York, NY: McGraw-Hill, ISBN 0-07-035339-5. Kracht, Marcus (2001), "Strict compositionality and literal movement grammars", Proc. 3rd International Conference on Logical Aspects of Computational Linguistics, Lecture Notes in Computer Science, vol. 2014, Springer-Verlag, pp. 126–143, doi:10.1007/3-540-45738-0_8, ISBN 978-3-540-42251-8. Meyer, Bertrand (1988), Object-oriented Software Construction, New York, NY: Prentice Hall, pp. 13–15, ISBN 0-13-629049-3. Miller, George A. 
(1956), "The magical number seven, plus or minus two: some limits on our capacity for processing information", Psychological Review, 63 (2): 81–97, doi:10.1037/h0043158, hdl:11858/00-001M-0000-002C-4646-B, PMID 13310704, archived from the original on 2010-06-19, retrieved 2010-05-02. Pierce, Benjamin C.; Turner, David N. (2000), "Pict: A programming language based on the pi-calculus", Proof, Language, and Interaction: Essays in Honour of Robin Milner, Foundations Of Computing Series, Cambridge, MA: MIT Press, pp. 455–494, ISBN 0-262-16188-5. Raymond, Eric S. (2003), "1.6.3 Rule of Composition: Design programs to be connected with other programs", The Art of Unix Programming, Addison-Wesley, pp. 15–16, ISBN 978-0-13-142901-7. Steele, Guy L. Jr. (1994), "Building interpreters by composing monads", Proc. 21st ACM Symposium on Principles of Programming Languages, pp. 472–492, doi:10.1145/174675.178068, ISBN 0-89791-636-0.
Wikipedia/Function_composition_(computer_science)
In universal algebra, a clone is a set C of finitary operations on a set A such that C contains all the projections π_k^n: A^n → A, defined by π_k^n(x1, …, xn) = xk, and C is closed under (finitary multiple) composition (or "superposition"): if f, g1, …, gm are members of C such that f is m-ary, and gj is n-ary for all j, then the n-ary operation h(x1, …, xn) := f(g1(x1, …, xn), …, gm(x1, …, xn)) is in C. The question whether clones should contain nullary operations or not is not treated uniformly in the literature. The classical approach, as evidenced by the standard monographs on clone theory, considers only clones containing at least unary operations. However, with only minor modifications (related to the empty invariant relation) most of the usual theory can be lifted to clones allowing nullary operations.: 4–5  The more general concept includes all clones without nullary operations as subclones of the clone of all at least unary operations: 5  and is in accordance with the custom to allow nullary terms and nullary term operations in universal algebra. Typically, publications studying clones as abstract clones, e.g. in the category theoretic setting of Lawvere's algebraic theories, will include nullary operations. Given an algebra in a signature σ, the set of operations on its carrier definable by a σ-term (the term functions) is a clone. Conversely, every clone can be realized as the clone of term functions in a suitable algebra by simply taking the clone itself as source for the signature σ so that the algebra has the whole clone as its fundamental operations. If A and B are algebras with the same carrier such that every basic function of A is a term function in B and vice versa, then A and B have the same clone. For this reason, modern universal algebra often treats clones as a representation of algebras which abstracts from their signature. There is only one clone on the one-element set (there are two if nullary operations are considered). 
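The two closure conditions can be sketched in Python (an illustration; the helper names proj and superpose are not standard): a projection simply returns one of its arguments, and superposition plugs the g's into f.

```python
def proj(k, n):
    # the projection pi_k^n(x1, ..., xn) = xk (1-indexed, as in the definition)
    return lambda *xs: xs[k - 1]

def superpose(f, gs):
    # h(x1, ..., xn) = f(g1(x1, ..., xn), ..., gm(x1, ..., xn))
    return lambda *xs: f(*(g(*xs) for g in gs))

# Example on A = integers: superposing binary addition with projections
# produces the ternary sum, a term function in the clone generated by addition.
add = lambda x, y: x + y
ternary_sum = superpose(add, [superpose(add, [proj(1, 3), proj(2, 3)]), proj(3, 3)])
assert ternary_sum(1, 2, 3) == 6
assert proj(2, 3)(10, 20, 30) == 20
```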
The lattice of clones on a two-element set is countable,: 39  and has been completely described by Emil Post (see Post's lattice,: 37  which traditionally does not show clones with nullary operations). Clones on larger sets do not admit a simple classification; there are continuum-many clones on a finite set of size at least three,: 39  and 2^(2^κ) (even just maximal,: 39  i.e. precomplete) clones on an infinite set of cardinality κ.: 39  == Abstract clones == Philip Hall introduced the concept of abstract clone. An abstract clone is different from a concrete clone in that the set A is not given. Formally, an abstract clone comprises a set Cn for each natural number n, elements πk,n in Cn for all k ≤ n, and a family of functions ∗: Cm × (Cn)^m → Cn for all m and n, such that: c ∗ (π1,n, …, πn,n) = c; πk,m ∗ (c1, …, cm) = ck; and c ∗ (d1 ∗ (e1, …, en), …, dm ∗ (e1, …, en)) = (c ∗ (d1, …, dm)) ∗ (e1, …, en). Any concrete clone determines an abstract clone in the obvious manner. Any algebraic theory determines an abstract clone where Cn is the set of terms in n variables, πk,n are variables, and ∗ is substitution. Two theories determine isomorphic clones if and only if the corresponding categories of algebras are isomorphic. Conversely every abstract clone determines an algebraic theory with an n-ary operation for each element of Cn. This gives a bijective correspondence between abstract clones and algebraic theories. Every abstract clone C induces a Lawvere theory in which the morphisms m → n are elements of (Cm)^n. This induces a bijective correspondence between Lawvere theories and abstract clones. == See also == Term algebra == Notes == == References == McKenzie, Ralph N.; McNulty, George F.; Taylor, Walter F. (1987). Algebras, Lattices, Varieties. Vol. I. Monterey, CA: Wadsworth & Brooks/Cole Advanced Books & Software. ISBN 978-0-534-07651-1. Lawvere, F. William (1963). Functorial semantics of algebraic theories (PhD). Columbia University. 
Available online at Reprints in Theory and Applications of Categories
Wikipedia/Clone_(algebra)
In algebra, a transformation semigroup (or composition semigroup) is a collection of transformations (functions from a set to itself) that is closed under function composition. If it includes the identity function, it is a monoid, called a transformation (or composition) monoid. This is the semigroup analogue of a permutation group. A transformation semigroup of a set has a tautological semigroup action on that set. Such actions are characterized by being faithful, i.e., if two elements of the semigroup have the same action, then they are equal. An analogue of Cayley's theorem shows that any semigroup can be realized as a transformation semigroup of some set. In automata theory, some authors use the term transformation semigroup to refer to a semigroup acting faithfully on a set of "states" different from the semigroup's base set. There is a correspondence between the two notions. == Transformation semigroups and monoids == A transformation semigroup is a pair (X,S), where X is a set and S is a semigroup of transformations of X. Here a transformation of X is just a function from a subset of X to X, not necessarily invertible, and therefore S is simply a set of transformations of X which is closed under composition of functions. The set of all partial functions on a given base set, X, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by P T X {\displaystyle {\mathcal {PT}}_{X}} . If S includes the identity transformation of X, then it is called a transformation monoid. Any transformation semigroup S determines a transformation monoid M by taking the union of S with the identity transformation. A transformation monoid whose elements are invertible is a permutation group. The set of all transformations of X is a transformation monoid called the full transformation monoid (or semigroup) of X. It is also called the symmetric semigroup of X and is denoted by TX. 
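The full transformation monoid is small enough to enumerate when X has two elements. A Python sketch (illustrative; transformations are encoded as tuples of images, a representation chosen here for convenience):

```python
from itertools import product

X = (0, 1)
# the full transformation monoid T_X: every function X -> X, stored as (f(0), f(1))
TX = list(product(X, repeat=len(X)))
assert len(TX) == 4                      # |T_X| = |X|^|X| = 2^2

def compose(f, g):
    # (f . g)(x) = f(g(x)), with transformations stored as image tuples
    return tuple(f[g[x]] for x in X)

# T_X is closed under composition and contains the identity transformation
identity = tuple(X)
assert all(compose(f, g) in TX for f in TX for g in TX)
assert all(compose(f, identity) == f == compose(identity, f) for f in TX)
```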
Thus a transformation semigroup (or monoid) is just a subsemigroup (or submonoid) of the full transformation monoid of X. If (X,S) is a transformation semigroup then X can be made into a semigroup action of S by evaluation: s ⋅ x = s ( x ) for s ∈ S , x ∈ X . {\displaystyle s\cdot x=s(x){\text{ for }}s\in S,x\in X.} This is a monoid action if S is a transformation monoid. The characteristic feature of transformation semigroups, as actions, is that they are faithful, i.e., if s ⋅ x = t ⋅ x for all x ∈ X , {\displaystyle s\cdot x=t\cdot x{\text{ for all }}x\in X,} then s = t. Conversely if a semigroup S acts on a set X by T(s,x) = s • x then we can define, for s ∈ S, a transformation Ts of X by T s ( x ) = T ( s , x ) . {\displaystyle T_{s}(x)=T(s,x).\,} The map sending s to Ts is injective if and only if (X, T) is faithful, in which case the image of this map is a transformation semigroup isomorphic to S. == Cayley representation == In group theory, Cayley's theorem asserts that any group G is isomorphic to a subgroup of the symmetric group of G (regarded as a set), so that G is a permutation group. This theorem generalizes straightforwardly to monoids: any monoid M is a transformation monoid of its underlying set, via the action given by left (or right) multiplication. This action is faithful because if ax = bx for all x in M, then by taking x equal to the identity element, we have a = b. For a semigroup S without a (left or right) identity element, we take X to be the underlying set of the monoid corresponding to S to realise S as a transformation semigroup of X. In particular any finite semigroup can be represented as a subsemigroup of transformations of a set X with |X| ≤ |S| + 1, and if S is a monoid, we have the sharper bound |X| ≤ |S|, as in the case of finite groups.: 21  === In computer science === In computer science, Cayley representations can be applied to improve the asymptotic efficiency of semigroups by reassociating multiple composed multiplications. 
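The Cayley representation just described can be sketched for a concrete finite monoid, here the residues modulo 4 under multiplication (an arbitrary illustrative choice; all names are hypothetical). Each element becomes the transformation given by left multiplication, and the resulting map is a faithful homomorphism into the full transformation monoid of the base set:

```python
# A small monoid: residues {0, 1, 2, 3} under multiplication mod 4.
M = [0, 1, 2, 3]
mul = lambda a, b: (a * b) % 4

def left_mult(a):
    # Cayley representation: a  ->  the transformation x |-> a*x of the base set
    return tuple(mul(a, x) for x in M)

def compose(t, u):
    # composition of transformations given as tuples of images
    return tuple(t[u[x]] for x in M)

# a |-> left_mult(a) is a homomorphism into the full transformation monoid
for a in M:
    for b in M:
        assert left_mult(mul(a, b)) == compose(left_mult(a), left_mult(b))

# it is faithful: since M has an identity, evaluating at x = 1 separates elements
assert len({left_mult(a) for a in M}) == len(M)
```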
The action given by left multiplication results in right-associated multiplication, and vice versa for the action given by right multiplication. Despite having the same results for any semigroup, the asymptotic efficiency will differ. Two examples of useful transformation monoids given by an action of left multiplication are the functional variation of the difference list data structure, and the monadic Codensity transformation (a Cayley representation of a monad, which is a monoid in a particular monoidal functor category). == Transformation monoid of an automaton == Let M be a deterministic automaton with state space S and alphabet A. The words in the free monoid A∗ induce transformations of S giving rise to a monoid morphism from A∗ to the full transformation monoid TS. The image of this morphism is the transformation semigroup of M.: 78  For a regular language, the syntactic monoid is isomorphic to the transformation monoid of the minimal automaton of the language.: 81  == See also == Semiautomaton Krohn–Rhodes theory Symmetric inverse semigroup Biordered set Special classes of semigroups Composition ring == References == Clifford, A.H.; Preston, G.B. (1961). The algebraic theory of semigroups. Vol. I. Mathematical Surveys. Vol. 7. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0272-4. Zbl 0111.03403. {{cite book}}: ISBN / Date incompatibility (help) Howie, John M. (1995). Fundamentals of Semigroup Theory. London Mathematical Society Monographs. New Series. Vol. 12. Oxford: Clarendon Press. ISBN 978-0-19-851194-6. Zbl 0835.20077. Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev (2000), Monoids, Acts and Categories: with Applications to Wreath Products and Graphs, Expositions in Mathematics 29, Walter de Gruyter, Berlin, ISBN 978-3-11-015248-7.
Wikipedia/Transformation_monoid
In engineering, functional decomposition is the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts. This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Also, functional decomposition may result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e., independence or non-interaction). Interactions between the components, in the statistical sense of one causal variable depending on the state of a second causal variable, are critical to the function of the collection. Not all interactions may be observable or measurable, but they can possibly be deduced through repetitive perception, synthesis, validation and verification of composite behavior. == Motivation for decomposition == Decomposition of a function into non-interacting components generally permits more economical representations of the function. Intuitively, this reduction in representation size is achieved simply because each variable depends only on a subset of the other variables. Thus, variable x 1 {\displaystyle x_{1}} depends directly only on variable x 2 {\displaystyle x_{2}} , rather than on the entire set of variables. We would say that variable x 2 {\displaystyle x_{2}} screens off variable x 1 {\displaystyle x_{1}} from the rest of the world. Practical examples of this phenomenon surround us. Consider the particular case of "northbound traffic on the West Side Highway." Let us assume this variable ( x 1 {\displaystyle {x_{1}}} ) takes on three possible values of {"moving slow", "moving deadly slow", "not moving at all"}. 
Now, let's say the variable x 1 {\displaystyle {x_{1}}} depends on two other variables, "weather" with values of {"sun", "rain", "snow"}, and "GW Bridge traffic" with values {"10mph", "5mph", "1mph"}. The point here is that while there are certainly many secondary variables that affect the weather variable (e.g., low pressure system over Canada, butterfly flapping in Japan, etc.) and the Bridge traffic variable (e.g., an accident on I-95, presidential motorcade, etc.) all these other secondary variables are not directly relevant to the West Side Highway traffic. All we need (hypothetically) in order to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables screen off West Side Highway traffic from all other potential influences. That is, all other influences act through them. == Applications == Practical applications of functional decomposition are found in Bayesian networks, structural equation modeling, linear systems, and database systems. == Knowledge representation == Processes related to functional decomposition are prevalent throughout the fields of knowledge representation and machine learning. Hierarchical model induction techniques such as Logic circuit minimization, decision trees, grammatical inference, hierarchical clustering, and quadtree decomposition are all examples of function decomposition. Many statistical inference methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold approximately. Among such models are mixture models and the recently popular methods referred to as "causal decompositions" or Bayesian networks. == Database theory == See database normalization. == Machine learning == In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. 
This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations. However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the joint distribution on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution, the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution. As an example, Bayesian network methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing. Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms. == Software architecture == Functional decomposition is a design method intended to produce an implementation-independent, architectural description of a computer program.
The software architect first establishes a series of functions and types that accomplishes the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives Modules from this activity. == Signal processing == Functional decomposition is used in the analysis of many signal processing systems, such as LTI systems. The input signal to an LTI system can be expressed as a function, f ( t ) {\displaystyle f(t)} . Then f ( t ) {\displaystyle f(t)} can be decomposed into a linear combination of other functions, called component signals: f ( t ) = a 1 ⋅ g 1 ( t ) + a 2 ⋅ g 2 ( t ) + a 3 ⋅ g 3 ( t ) + ⋯ + a n ⋅ g n ( t ) {\displaystyle f(t)=a_{1}\cdot g_{1}(t)+a_{2}\cdot g_{2}(t)+a_{3}\cdot g_{3}(t)+\dots +a_{n}\cdot g_{n}(t)} Here, { g 1 ( t ) , g 2 ( t ) , g 3 ( t ) , … , g n ( t ) } {\displaystyle \{g_{1}(t),g_{2}(t),g_{3}(t),\dots ,g_{n}(t)\}} are the component signals. Note that { a 1 , a 2 , a 3 , … , a n } {\displaystyle \{a_{1},a_{2},a_{3},\dots ,a_{n}\}} are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let T { } {\displaystyle T\{\}} represent the effect of the system, then the output signal is T { f ( t ) } {\displaystyle T\{f(t)\}} , which can be expressed as: T { f ( t ) } = T { a 1 ⋅ g 1 ( t ) + a 2 ⋅ g 2 ( t ) + a 3 ⋅ g 3 ( t ) + ⋯ + a n ⋅ g n ( t ) } {\displaystyle T\{f(t)\}=T\{a_{1}\cdot g_{1}(t)+a_{2}\cdot g_{2}(t)+a_{3}\cdot g_{3}(t)+\dots +a_{n}\cdot g_{n}(t)\}} = a 1 ⋅ T { g 1 ( t ) } + a 2 ⋅ T { g 2 ( t ) } + a 3 ⋅ T { g 3 ( t ) } + ⋯ + a n ⋅ T { g n ( t ) } {\displaystyle =a_{1}\cdot T\{g_{1}(t)\}+a_{2}\cdot T\{g_{2}(t)\}+a_{3}\cdot T\{g_{3}(t)\}+\dots +a_{n}\cdot T\{g_{n}(t)\}} In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the Fourier series and the Fourier transform. 
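This linearity property can be exercised numerically. The sketch below uses an arbitrary linear system (discrete convolution with an invented kernel) and invented component signals; it is illustrative only:

```python
import math

# A sample linear system T: discrete convolution with a fixed, arbitrary kernel.
KERNEL = [0.5, 0.3, 0.2]

def T(x):
    """y[n] = sum_k KERNEL[k] * x[n - k]  (a linear operator on sequences)."""
    return [sum(KERNEL[k] * x[n - k]
                for k in range(len(KERNEL)) if 0 <= n - k < len(x))
            for n in range(len(x) + len(KERNEL) - 1)]

# Component signals g1, g2 and constants a1, a2 (all chosen arbitrarily):
ts = [i / 63 for i in range(64)]
g1 = [math.sin(2 * math.pi * t) for t in ts]
g2 = [math.cos(6 * math.pi * t) for t in ts]
a1, a2 = 2.0, -0.5
f = [a1 * u + a2 * v for u, v in zip(g1, g2)]

# T{a1 g1 + a2 g2} equals a1 T{g1} + a2 T{g2}: the system acts separately
# on each component of the input.
lhs = T(f)
rhs = [a1 * u + a2 * v for u, v in zip(T(g1), T(g2))]
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```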
== Systems engineering == Functional decomposition in systems engineering refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher level systems functions. The basic idea is to try to divide a system in such a way that each block of a block diagram can be described without an "and" or "or" in the description. This exercise forces each part of the system to have a pure function. When a system is designed as pure functions, they can be reused, or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function. For example, say that one needs to make a stereo system. One might functionally decompose this into speakers, amplifier, a tape deck and a front panel. Later, when a different model needs an audio CD, it can probably fit the same interfaces. == See also == Bayesian networks Currying Database normalization Function composition (computer science) Inductive inference Knowledge representation == Further reading == Zupan, Blaž; Bohanec, Marko; Bratko, Ivan; Demšar, Janez (July 1997). "Machine learning by function decomposition". In Douglas H. Fisher (ed.). Proceedings of the Fourteenth International Conference on Machine Learning. ICML '97: July 8–12, 1997. San Francisco: Morgan Kaufmann Publishers. pp. 421–429. ISBN 978-1-55860-486-5. A review of other applications and function decomposition. Also presents methods based on information theory and graph theory. == Notes == == References ==
Wikipedia/Functional_decomposition
In set theory, a projection is one of two closely related types of functions or operations, namely: A set-theoretic operation typified by the j {\displaystyle j} th projection map, written p r o j j , {\displaystyle \mathrm {proj} _{j},} that takes an element x → = ( x 1 , … , x j , … , x k ) {\displaystyle {\vec {x}}=(x_{1},\ \dots ,\ x_{j},\ \dots ,\ x_{k})} of the Cartesian product ( X 1 × ⋯ × X j × ⋯ × X k ) {\displaystyle (X_{1}\times \cdots \times X_{j}\times \cdots \times X_{k})} to the value p r o j j ( x → ) = x j . {\displaystyle \mathrm {proj} _{j}({\vec {x}})=x_{j}.} A function that sends an element x {\displaystyle x} to its equivalence class under a specified equivalence relation E , {\displaystyle E,} or, equivalently, a surjection from a set to another set. The function from elements to equivalence classes is a surjection, and every surjection corresponds to an equivalence relation under which two elements are equivalent when they have the same image. The result of the mapping is written as [ x ] {\displaystyle [x]} when E {\displaystyle E} is understood, or written as [ x ] E {\displaystyle [x]_{E}} when it is necessary to make E {\displaystyle E} explicit. == See also == Cartesian product – Mathematical set formed from two given sets Projection (mathematics) – Mapping equal to its square under mapping composition Projection (measure theory) Projection (linear algebra) – Idempotent linear transformation from a vector space to itself Projection (relational algebra) – Operation that restricts a relation to a specified set of attributes Relation (mathematics) – Relationship between two sets, defined by a set of ordered pairs == References ==
Wikipedia/Projection_function
In mathematics, infinite compositions of analytic functions (ICAF) offer alternative formulations of analytic continued fractions, series, products and other infinite expansions, and the theory evolving from such compositions may shed light on the convergence/divergence of these expansions. Some functions can actually be expanded directly as infinite compositions. In addition, it is possible to use ICAF to evaluate solutions of fixed point equations involving infinite expansions. Complex dynamics offers another venue for iteration of systems of functions rather than a single function. For infinite compositions of a single function see Iterated function. For compositions of a finite number of functions, useful in fractal theory, see Iterated function system. Although the title of this article specifies analytic functions, there are results for more general functions of a complex variable as well. == Notation == There are several notations describing infinite compositions, including the following: Forward compositions: F k , n ( z ) = f k ∘ f k + 1 ∘ ⋯ ∘ f n − 1 ∘ f n ( z ) . {\displaystyle F_{k,n}(z)=f_{k}\circ f_{k+1}\circ \dots \circ f_{n-1}\circ f_{n}(z).} Backward compositions: G k , n ( z ) = f n ∘ f n − 1 ∘ ⋯ ∘ f k + 1 ∘ f k ( z ) . {\displaystyle G_{k,n}(z)=f_{n}\circ f_{n-1}\circ \dots \circ f_{k+1}\circ f_{k}(z).} In each case convergence is interpreted as the existence of the following limits: lim n → ∞ F 1 , n ( z ) , lim n → ∞ G 1 , n ( z ) . {\displaystyle \lim _{n\to \infty }F_{1,n}(z),\qquad \lim _{n\to \infty }G_{1,n}(z).} For convenience, set Fn(z) = F1,n(z) and Gn(z) = G1,n(z). 
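These conventions are easy to exercise numerically. The sketch below is illustrative (the sample functions are invented); note that in a forward composition the innermost function fn is applied first, while in a backward composition f1 is applied first:

```python
from functools import reduce

# Forward and backward compositions of a finite list [f_1, ..., f_n].
def forward(fs, z):
    """F_{1,n}(z) = f_1 ∘ f_2 ∘ ... ∘ f_n (z): apply f_n first, f_1 last."""
    return reduce(lambda w, f: f(w), reversed(fs), z)

def backward(fs, z):
    """G_{1,n}(z) = f_n ∘ f_{n-1} ∘ ... ∘ f_1 (z): apply f_1 first, f_n last."""
    return reduce(lambda w, f: f(w), fs, z)

f1 = lambda z: z + 1
f2 = lambda z: 2 * z
assert forward([f1, f2], 3) == f1(f2(3)) == 7
assert backward([f1, f2], 3) == f2(f1(3)) == 8

# With every f_k(z) = 1/(1+z), the values F_{1,n}(0) run through the
# convergents of the continued fraction 1/(1+1/(1+...)), whose value is
# the golden ratio minus one.
fs = [lambda z: 1 / (1 + z)] * 60
assert abs(forward(fs, 0.0) - (5 ** 0.5 - 1) / 2) < 1e-12
```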
One may also write F n ( z ) = R n k = 1 f k ( z ) = f 1 ∘ f 2 ∘ ⋯ ∘ f n ( z ) {\displaystyle F_{n}(z)={\underset {k=1}{\overset {n}{\mathop {R} }}}\,f_{k}(z)=f_{1}\circ f_{2}\circ \cdots \circ f_{n}(z)} and G n ( z ) = L n k = 1 g k ( z ) = g n ∘ g n − 1 ∘ ⋯ ∘ g 1 ( z ) {\displaystyle G_{n}(z)={\underset {k=1}{\overset {n}{\mathop {L} }}}\,g_{k}(z)=g_{n}\circ g_{n-1}\circ \cdots \circ g_{1}(z)} Comment: Possibly the first exploration of the theory of infinite compositions of analytic functions not restricted to sequences of functions of a specific kind occurs in a 1988 paper by John Gill, on compositions of analytic functions of the form Fn(z) = Fn−1(fn(z)), fn → f. His results are subsumed in the theorem below on forward compositions. == Contraction theorem == Many results can be considered extensions of the following result: == Infinite compositions of contractive functions == Let {fn} be a sequence of functions analytic on a simply connected domain S. Suppose there exists a compact set Ω ⊂ S such that for each n, fn(S) ⊂ Ω. Additional theory resulting from investigations based on these two theorems, particularly the Forward Compositions Theorem, includes location analysis for the limits obtained in the following reference. For a different approach to the Backward Compositions Theorem, see the following reference. Regarding the Backward Compositions Theorem, the example f2n(z) = 1/2 and f2n−1(z) = −1/2 for S = {z : |z| < 1} demonstrates the inadequacy of simply requiring contraction into a compact subset, as in the Forward Compositions Theorem. For functions that are not necessarily analytic, a Lipschitz condition suffices: == Infinite compositions of other functions == === Non-contractive complex functions === Results involving entire functions include the following, as examples.
Set f n ( z ) = a n z + c n , 2 z 2 + c n , 3 z 3 + ⋯ ρ n = sup r { | c n , r | 1 r − 1 } {\displaystyle {\begin{aligned}f_{n}(z)&=a_{n}z+c_{n,2}z^{2}+c_{n,3}z^{3}+\cdots \\\rho _{n}&=\sup _{r}\left\{\left|c_{n,r}\right|^{\frac {1}{r-1}}\right\}\end{aligned}}} Then the following results hold: Additional elementary results include: === Linear fractional transformations === Results for compositions of linear fractional (Möbius) transformations include the following, as examples: == Examples and applications == === Continued fractions === The value of the infinite continued fraction a 1 b 1 + a 2 b 2 + ⋯ {\displaystyle {\cfrac {a_{1}}{b_{1}+{\cfrac {a_{2}}{b_{2}+\cdots }}}}} may be expressed as the limit of the sequence {Fn(0)} where f n ( z ) = a n b n + z . {\displaystyle f_{n}(z)={\frac {a_{n}}{b_{n}+z}}.} As a simple example, a well-known result (Worpitzky's circle theorem) follows from an application of Theorem (A): Consider the continued fraction a 1 ζ 1 + a 2 ζ 1 + ⋯ {\displaystyle {\cfrac {a_{1}\zeta }{1+{\cfrac {a_{2}\zeta }{1+\cdots }}}}} with f n ( z ) = a n ζ 1 + z . {\displaystyle f_{n}(z)={\frac {a_{n}\zeta }{1+z}}.} Stipulate that |ζ| < 1 and |z| < R < 1. Then for 0 < r < 1, | a n | < r R ( 1 − R ) ⇒ | f n ( z ) | < r R < R ⇒ a 1 ζ 1 + a 2 ζ 1 + ⋯ = F ( ζ ) {\displaystyle |a_{n}|<rR(1-R)\Rightarrow \left|f_{n}(z)\right|<rR<R\Rightarrow {\frac {a_{1}\zeta }{1+{\frac {a_{2}\zeta }{1+\cdots }}}}=F(\zeta )} , analytic for |z| < 1. Set R = 1/2. Example. F ( z ) = ( i − 1 ) z 1 + i + z + ( 2 − i ) z 1 + 2 i + z + ( 3 − i ) z 1 + 3 i + z + ⋯ , {\displaystyle F(z)={\frac {(i-1)z}{1+i+z{\text{ }}+}}{\text{ }}{\frac {(2-i)z}{1+2i+z{\text{ }}+}}{\text{ }}{\frac {(3-i)z}{1+3i+z{\text{ }}+}}\cdots ,} [ − 15 , 15 ] {\displaystyle [-15,15]} Example. A fixed-point continued fraction form (a single variable).
f k , n ( z ) = α k , n β k , n α k , n + β k , n − z , α k , n = α k , n ( z ) , β k , n = β k , n ( z ) , F n ( z ) = ( f 1 , n ∘ ⋯ ∘ f n , n ) ( z ) {\displaystyle f_{k,n}(z)={\frac {\alpha _{k,n}\beta _{k,n}}{\alpha _{k,n}+\beta _{k,n}-z}},\alpha _{k,n}=\alpha _{k,n}(z),\beta _{k,n}=\beta _{k,n}(z),F_{n}(z)=\left(f_{1,n}\circ \cdots \circ f_{n,n}\right)(z)} α k , n = x cos ⁡ ( t y ) + i y sin ⁡ ( t x ) , β k , n = cos ⁡ ( t y ) + i sin ⁡ ( t x ) , t = k / n {\displaystyle \alpha _{k,n}=x\cos(ty)+iy\sin(tx),\beta _{k,n}=\cos(ty)+i\sin(tx),t=k/n} === Direct functional expansion === Examples illustrating the conversion of a function directly into a composition follow: Example 1. Suppose ϕ {\displaystyle \phi } is an entire function satisfying the following conditions: { ϕ ( t z ) = t ( ϕ ( z ) + ϕ ( z ) 2 ) | t | > 1 ϕ ( 0 ) = 0 ϕ ′ ( 0 ) = 1 {\displaystyle {\begin{cases}\phi (tz)=t\left(\phi (z)+\phi (z)^{2}\right)&|t|>1\\\phi (0)=0\\\phi '(0)=1\end{cases}}} Then f n ( z ) = z + z 2 t n ⟹ F n ( z ) → ϕ ( z ) {\displaystyle f_{n}(z)=z+{\frac {z^{2}}{t^{n}}}\Longrightarrow F_{n}(z)\to \phi (z)} . Example 2. f n ( z ) = z + z 2 2 n ⟹ F n ( z ) → 1 2 ( e 2 z − 1 ) {\displaystyle f_{n}(z)=z+{\frac {z^{2}}{2^{n}}}\Longrightarrow F_{n}(z)\to {\frac {1}{2}}\left(e^{2z}-1\right)} Example 3. f n ( z ) = z 1 − z 2 4 n ⟹ F n ( z ) → tan ⁡ ( z ) {\displaystyle f_{n}(z)={\frac {z}{1-{\tfrac {z^{2}}{4^{n}}}}}\Longrightarrow F_{n}(z)\to \tan(z)} Example 4. g n ( z ) = 2 ⋅ 4 n z ( 1 + z 2 4 n − 1 ) ⟹ G n ( z ) → arctan ⁡ ( z ) {\displaystyle g_{n}(z)={\frac {2\cdot 4^{n}}{z}}\left({\sqrt {1+{\frac {z^{2}}{4^{n}}}}}-1\right)\Longrightarrow G_{n}(z)\to \arctan(z)} === Calculation of fixed-points === Theorem (B) can be applied to determine the fixed-points of functions defined by infinite expansions or certain integrals. The following examples illustrate the process: Example FP1. 
For |ζ| ≤ 1 let G ( ζ ) = e ζ 4 3 + ζ + e ζ 8 3 + ζ + e ζ 12 3 + ζ + ⋯ {\displaystyle G(\zeta )={\frac {\tfrac {e^{\zeta }}{4}}{3+\zeta +{\cfrac {\tfrac {e^{\zeta }}{8}}{3+\zeta +{\cfrac {\tfrac {e^{\zeta }}{12}}{3+\zeta +\cdots }}}}}}} To find α = G(α), first we define: t n ( z ) = e ζ 4 n 3 + ζ + z f n ( ζ ) = t 1 ∘ t 2 ∘ ⋯ ∘ t n ( 0 ) {\displaystyle {\begin{aligned}t_{n}(z)&={\cfrac {\tfrac {e^{\zeta }}{4n}}{3+\zeta +z}}\\f_{n}(\zeta )&=t_{1}\circ t_{2}\circ \cdots \circ t_{n}(0)\end{aligned}}} Then calculate G n ( ζ ) = f n ∘ ⋯ ∘ f 1 ( ζ ) {\displaystyle G_{n}(\zeta )=f_{n}\circ \cdots \circ f_{1}(\zeta )} with ζ = 1, which gives: α = 0.087118118... to ten decimal places after ten iterations. === Evolution functions === Consider a time interval, normalized to I = [0, 1]. ICAFs can be constructed to describe continuous motion of a point, z, over the interval, but in such a way that at each "instant" the motion is virtually zero (see Zeno's Arrow): For the interval divided into n equal subintervals, 1 ≤ k ≤ n set g k , n ( z ) = z + φ k , n ( z ) {\displaystyle g_{k,n}(z)=z+\varphi _{k,n}(z)} analytic or simply continuous – in a domain S, such that lim n → ∞ φ k , n ( z ) = 0 {\displaystyle \lim _{n\to \infty }\varphi _{k,n}(z)=0} for all k and all z in S, and g k , n ( z ) ∈ S {\displaystyle g_{k,n}(z)\in S} . 
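The fixed-point computation of Example FP1 above can be sketched numerically. The depth-40 approximant in the final check is an arbitrary stand-in for the full function G, and the tolerance is an assumption based on the stated ten-decimal accuracy:

```python
import math

# Example FP1: t_k(z) = (e^zeta / (4k)) / (3 + zeta + z), and
# f_n(zeta) = t_1 ∘ t_2 ∘ ... ∘ t_n (0) is the n-term approximant of G.
def f(n, zeta):
    z = 0.0
    for k in range(n, 0, -1):        # apply the innermost t_n first, t_1 last
        z = (math.exp(zeta) / (4 * k)) / (3 + zeta + z)
    return z

# Backward composition G_n = f_n ∘ ... ∘ f_1, starting from zeta = 1:
alpha = 1.0
for n in range(1, 11):               # ten iterations, as in the text
    alpha = f(n, alpha)

# alpha should now satisfy the fixed-point equation alpha = G(alpha),
# with f(40, .) standing in for G.
assert abs(f(40, alpha) - alpha) < 1e-8
print(alpha)
```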
==== Principal example ==== Source: g k , n ( z ) = z + 1 n ϕ ( z , k n ) G k , n ( z ) = ( g k , n ∘ g k − 1 , n ∘ ⋯ ∘ g 1 , n ) ( z ) G n ( z ) = G n , n ( z ) {\displaystyle {\begin{aligned}g_{k,n}(z)&=z+{\frac {1}{n}}\phi \left(z,{\tfrac {k}{n}}\right)\\G_{k,n}(z)&=\left(g_{k,n}\circ g_{k-1,n}\circ \cdots \circ g_{1,n}\right)(z)\\G_{n}(z)&=G_{n,n}(z)\end{aligned}}} implies λ n ( z ) ≐ G n ( z ) − z = 1 n ∑ k = 1 n ϕ ( G k − 1 , n ( z ) k n ) ≐ 1 n ∑ k = 1 n ψ ( z , k n ) ∼ ∫ 0 1 ψ ( z , t ) d t , {\displaystyle \lambda _{n}(z)\doteq G_{n}(z)-z={\frac {1}{n}}\sum _{k=1}^{n}\phi \left(G_{k-1,n}(z){\tfrac {k}{n}}\right)\doteq {\frac {1}{n}}\sum _{k=1}^{n}\psi \left(z,{\tfrac {k}{n}}\right)\sim \int _{0}^{1}\psi (z,t)\,dt,} where the integral is well-defined if d z d t = ϕ ( z , t ) {\displaystyle {\tfrac {dz}{dt}}=\phi (z,t)} has a closed-form solution z(t). Then λ n ( z 0 ) ≈ ∫ 0 1 ϕ ( z ( t ) , t ) d t . {\displaystyle \lambda _{n}(z_{0})\approx \int _{0}^{1}\phi (z(t),t)\,dt.} Otherwise, the integrand is poorly defined although the value of the integral is easily computed. In this case one might call the integral a "virtual" integral. Example. ϕ ( z , t ) = 2 t − cos ⁡ y 1 − sin ⁡ x cos ⁡ y + i 1 − 2 t sin ⁡ x 1 − sin ⁡ x cos ⁡ y , ∫ 0 1 ψ ( z , t ) d t {\displaystyle \phi (z,t)={\frac {2t-\cos y}{1-\sin x\cos y}}+i{\frac {1-2t\sin x}{1-\sin x\cos y}},\int _{0}^{1}\psi (z,t)\,dt} Example. Let: g n ( z ) = z + c n n ϕ ( z ) , with f ( z ) = z + ϕ ( z ) . {\displaystyle g_{n}(z)=z+{\frac {c_{n}}{n}}\phi (z),\quad {\text{with}}\quad f(z)=z+\phi (z).} Next, set T 1 , n ( z ) = g n ( z ) , T k , n ( z ) = g n ( T k − 1 , n ( z ) ) , {\displaystyle T_{1,n}(z)=g_{n}(z),T_{k,n}(z)=g_{n}(T_{k-1,n}(z)),} and Tn(z) = Tn,n(z). Let T ( z ) = lim n → ∞ T n ( z ) {\displaystyle T(z)=\lim _{n\to \infty }T_{n}(z)} when that limit exists. The sequence {Tn(z)} defines contours γ = γ(cn, z) that follow the flow of the vector field f(z). 
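A minimal numerical sketch of the Tn construction, for an invented vector field f(z) = z/2 (so φ(z) = −z/2) whose attractive fixed point is 0; here the flow is z(t) = z0·e^(−t/2):

```python
import math

# Iterates T_{k,n}(z) = g_n(T_{k-1,n}(z)) with g_n(z) = z + (c/n) * phi(z),
# for the invented field phi(z) = -z/2 (i.e. f(z) = z/2, fixed point 0).
def T_n(z, n, c):
    phi = lambda w: -w / 2
    for _ in range(n):
        z = z + (c / n) * phi(z)
    return z

z0 = 1.0 + 1.0j
# When c_n grows like sqrt(n), the iterates follow the flow all the way
# into the attractive fixed point:
vals = [abs(T_n(z0, n, n ** 0.5)) for n in (10, 100, 10000)]
assert vals[0] > vals[1] > vals[2]
assert vals[2] < 1e-6

# When c_n is held at a constant c, the limit is a finite point on the
# contour: here the flow z(t) = z0 * exp(-t/2) evaluated at t = c = 1.
assert abs(T_n(1.0 + 0j, 100000, 1.0) - math.exp(-0.5)) < 1e-4
```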
If there exists an attractive fixed point α, meaning |f(z) − α| ≤ ρ|z − α| for 0 ≤ ρ < 1, then Tn(z) → T(z) ≡ α along γ = γ(cn, z), provided (for example) c n = n {\displaystyle c_{n}={\sqrt {n}}} . If cn ≡ c > 0, then Tn(z) → T(z), a point on the contour γ = γ(c, z). It is easily seen that ∮ γ ϕ ( ζ ) d ζ = lim n → ∞ c n ∑ k = 1 n ϕ 2 ( T k − 1 , n ( z ) ) {\displaystyle \oint _{\gamma }\phi (\zeta )\,d\zeta =\lim _{n\to \infty }{\frac {c}{n}}\sum _{k=1}^{n}\phi ^{2}\left(T_{k-1,n}(z)\right)} and L ( γ ( z ) ) = lim n → ∞ c n ∑ k = 1 n | ϕ ( T k − 1 , n ( z ) ) | , {\displaystyle L(\gamma (z))=\lim _{n\to \infty }{\frac {c}{n}}\sum _{k=1}^{n}\left|\phi \left(T_{k-1,n}(z)\right)\right|,} when these limits exist. These concepts are marginally related to active contour theory in image processing, and are simple generalizations of the Euler method. === Self-replicating expansions === ==== Series ==== The series defined recursively by fn(z) = z + gn(z) have the property that the nth term is predicated on the sum of the first n − 1 terms. In order to employ theorem (GF3) it is necessary to show boundedness in the following sense: If each fn is defined for |z| < M then |Gn(z)| < M must follow before |fn(z) − z| = |gn(z)| ≤ Cβn is defined for iterative purposes. This is because g n ( G n − 1 ( z ) ) {\displaystyle g_{n}(G_{n-1}(z))} occurs throughout the expansion. The restriction | z | < R = M − C ∑ k = 1 ∞ β k > 0 {\displaystyle |z|<R=M-C\sum _{k=1}^{\infty }\beta _{k}>0} serves this purpose. Then Gn(z) → G(z) uniformly on the restricted domain. Example (S1). Set f n ( z ) = z + 1 ρ n 2 z , ρ > π 6 {\displaystyle f_{n}(z)=z+{\frac {1}{\rho n^{2}}}{\sqrt {z}},\qquad \rho >{\sqrt {\frac {\pi }{6}}}} and M = ρ2. Then R = ρ2 − (π/6) > 0.
Then, if S = { z : | z | < R , Re ⁡ ( z ) > 0 } {\displaystyle S=\left\{z:|z|<R,\operatorname {Re} (z)>0\right\}} , z in S implies |Gn(z)| < M and theorem (GF3) applies, so that G n ( z ) = z + g 1 ( z ) + g 2 ( G 1 ( z ) ) + g 3 ( G 2 ( z ) ) + ⋯ + g n ( G n − 1 ( z ) ) = z + 1 ρ ⋅ 1 2 z + 1 ρ ⋅ 2 2 G 1 ( z ) + 1 ρ ⋅ 3 2 G 2 ( z ) + ⋯ + 1 ρ ⋅ n 2 G n − 1 ( z ) {\displaystyle {\begin{aligned}G_{n}(z)&=z+g_{1}(z)+g_{2}(G_{1}(z))+g_{3}(G_{2}(z))+\cdots +g_{n}(G_{n-1}(z))\\&=z+{\frac {1}{\rho \cdot 1^{2}}}{\sqrt {z}}+{\frac {1}{\rho \cdot 2^{2}}}{\sqrt {G_{1}(z)}}+{\frac {1}{\rho \cdot 3^{2}}}{\sqrt {G_{2}(z)}}+\cdots +{\frac {1}{\rho \cdot n^{2}}}{\sqrt {G_{n-1}(z)}}\end{aligned}}} converges absolutely, hence is convergent. Example (S2): f n ( z ) = z + 1 n 2 ⋅ φ ( z ) , φ ( z ) = 2 cos ⁡ ( x / y ) + i 2 sin ⁡ ( x / y ) , > G n ( z ) = f n ∘ f n − 1 ∘ ⋯ ∘ f 1 ( z ) , [ − 10 , 10 ] , n = 50 {\displaystyle f_{n}(z)=z+{\frac {1}{n^{2}}}\cdot \varphi (z),\varphi (z)=2\cos(x/y)+i2\sin(x/y),>G_{n}(z)=f_{n}\circ f_{n-1}\circ \cdots \circ f_{1}(z),\qquad [-10,10],n=50} ==== Products ==== The product defined recursively by f n ( z ) = z ( 1 + g n ( z ) ) , | z | ⩽ M , {\displaystyle f_{n}(z)=z(1+g_{n}(z)),\qquad |z|\leqslant M,} has the appearance G n ( z ) = z ∏ k = 1 n ( 1 + g k ( G k − 1 ( z ) ) ) . {\displaystyle G_{n}(z)=z\prod _{k=1}^{n}\left(1+g_{k}\left(G_{k-1}(z)\right)\right).} In order to apply Theorem GF3 it is required that: | z g n ( z ) | ≤ C β n , ∑ k = 1 ∞ β k < ∞ . {\displaystyle \left|zg_{n}(z)\right|\leq C\beta _{n},\qquad \sum _{k=1}^{\infty }\beta _{k}<\infty .} Once again, a boundedness condition must support | G n − 1 ( z ) g n ( G n − 1 ( z ) ) | ≤ C β n . {\displaystyle \left|G_{n-1}(z)g_{n}(G_{n-1}(z))\right|\leq C\beta _{n}.} If one knows Cβn in advance, the following will suffice: | z | ⩽ R = M P where P = ∏ n = 1 ∞ ( 1 + C β n ) . 
{\displaystyle |z|\leqslant R={\frac {M}{P}}\qquad {\text{where}}\quad P=\prod _{n=1}^{\infty }\left(1+C\beta _{n}\right).} Then Gn(z) → G(z) uniformly on the restricted domain. Example (P1). Suppose f n ( z ) = z ( 1 + g n ( z ) ) {\displaystyle f_{n}(z)=z(1+g_{n}(z))} with g n ( z ) = z 2 n 3 , {\displaystyle g_{n}(z)={\tfrac {z^{2}}{n^{3}}},} observing after a few preliminary computations, that |z| ≤ 1/4 implies |Gn(z)| < 0.27. Then | G n ( z ) G n ( z ) 2 n 3 | < ( 0.02 ) 1 n 3 = C β n {\displaystyle \left|G_{n}(z){\frac {G_{n}(z)^{2}}{n^{3}}}\right|<(0.02){\frac {1}{n^{3}}}=C\beta _{n}} and G n ( z ) = z ∏ k = 1 n − 1 ( 1 + G k ( z ) 2 n 3 ) {\displaystyle G_{n}(z)=z\prod _{k=1}^{n-1}\left(1+{\frac {G_{k}(z)^{2}}{n^{3}}}\right)} converges uniformly. Example (P2). g k , n ( z ) = z ( 1 + 1 n φ ( z , k n ) ) , {\displaystyle g_{k,n}(z)=z\left(1+{\frac {1}{n}}\varphi \left(z,{\tfrac {k}{n}}\right)\right),} G n , n ( z ) = ( g n , n ∘ g n − 1 , n ∘ ⋯ ∘ g 1 , n ) ( z ) = z ∏ k = 1 n ( 1 + P k , n ( z ) ) , {\displaystyle G_{n,n}(z)=\left(g_{n,n}\circ g_{n-1,n}\circ \cdots \circ g_{1,n}\right)(z)=z\prod _{k=1}^{n}(1+P_{k,n}(z)),} P k , n ( z ) = 1 n φ ( G k − 1 , n ( z ) , k n ) , {\displaystyle P_{k,n}(z)={\frac {1}{n}}\varphi \left(G_{k-1,n}(z),{\tfrac {k}{n}}\right),} ∏ k = 1 n − 1 ( 1 + P k , n ( z ) ) = 1 + P 1 , n ( z ) + P 2 , n ( z ) + ⋯ + P k − 1 , n ( z ) + R n ( z ) ∼ ∫ 0 1 π ( z , t ) d t + 1 + R n ( z ) , {\displaystyle \prod _{k=1}^{n-1}\left(1+P_{k,n}(z)\right)=1+P_{1,n}(z)+P_{2,n}(z)+\cdots +P_{k-1,n}(z)+R_{n}(z)\sim \int _{0}^{1}\pi (z,t)\,dt+1+R_{n}(z),} φ ( z ) = x cos ⁡ ( y ) + i y sin ⁡ ( x ) , ∫ 0 1 ( z π ( z , t ) − 1 ) d t , [ − 15 , 15 ] : {\displaystyle \varphi (z)=x\cos(y)+iy\sin(x),\int _{0}^{1}(z\pi (z,t)-1)\,dt,\qquad [-15,15]:} ==== Continued fractions ==== Example (CF1): A self-generating continued fraction. 
F n ( z ) = ρ ( z ) δ 1 + ρ ( F 1 ( z ) ) δ 2 + ρ ( F 2 ( z ) ) δ 3 + ⋯ ρ ( F n − 1 ( z ) ) δ n , ρ ( z ) = cos ⁡ ( y ) cos ⁡ ( y ) + sin ⁡ ( x ) + i sin ⁡ ( x ) cos ⁡ ( y ) + sin ⁡ ( x ) , [ 0 < x < 20 ] , [ 0 < y < 20 ] , δ k ≡ 1 {\displaystyle {\begin{aligned}F_{n}(z)&={\frac {\rho (z)}{\delta _{1}+}}{\frac {\rho (F_{1}(z))}{\delta _{2}+}}{\frac {\rho (F_{2}(z))}{\delta _{3}+}}\cdots {\frac {\rho (F_{n-1}(z))}{\delta _{n}}},\\\rho (z)&={\frac {\cos(y)}{\cos(y)+\sin(x)}}+i{\frac {\sin(x)}{\cos(y)+\sin(x)}},\qquad [0<x<20],[0<y<20],\qquad \delta _{k}\equiv 1\end{aligned}}} Example (CF2): Best described as a self-generating reverse Euler continued fraction. G n ( z ) = ρ ( G n − 1 ( z ) ) 1 + ρ ( G n − 1 ( z ) ) − ρ ( G n − 2 ( z ) ) 1 + ρ ( G n − 2 ( z ) ) − ⋯ ρ ( G 1 ( z ) ) 1 + ρ ( G 1 ( z ) ) − ρ ( z ) 1 + ρ ( z ) − z , {\displaystyle G_{n}(z)={\frac {\rho (G_{n-1}(z))}{1+\rho (G_{n-1}(z))-}}\ {\frac {\rho (G_{n-2}(z))}{1+\rho (G_{n-2}(z))-}}\cdots {\frac {\rho (G_{1}(z))}{1+\rho (G_{1}(z))-}}\ {\frac {\rho (z)}{1+\rho (z)-z}},} ρ ( z ) = ρ ( x + i y ) = x cos ⁡ ( y ) + i y sin ⁡ ( x ) , [ − 15 , 15 ] , n = 30 {\displaystyle \rho (z)=\rho (x+iy)=x\cos(y)+iy\sin(x),\qquad [-15,15],n=30} == See also == Generalized continued fraction == References ==
Wikipedia/Infinite_compositions_of_analytic_functions
In mathematics, the automorphism group of an object X is the group consisting of automorphisms of X under composition of morphisms. For example, if X is a finite-dimensional vector space, then the automorphism group of X is the group of invertible linear transformations from X to itself (the general linear group of X). If instead X is a group, then its automorphism group Aut ⁡ ( X ) {\displaystyle \operatorname {Aut} (X)} is the group consisting of all group automorphisms of X. Especially in geometric contexts, an automorphism group is also called a symmetry group. A subgroup of an automorphism group is sometimes called a transformation group. Automorphism groups are studied in a general way in the field of category theory. == Examples == If X is a set with no additional structure, then any bijection from X to itself is an automorphism, and hence the automorphism group of X in this case is precisely the symmetric group of X. If the set X has additional structure, then it may be the case that not all bijections on the set preserve this structure, in which case the automorphism group will be a subgroup of the symmetric group on X. Some examples of this include the following: The automorphism group of a field extension L / K {\displaystyle L/K} is the group consisting of field automorphisms of L that fix K. If the field extension is Galois, the automorphism group is called the Galois group of the field extension. The automorphism group of the projective n-space over a field k is the projective linear group PGL n ⁡ ( k ) . {\displaystyle \operatorname {PGL} _{n}(k).} The automorphism group G {\displaystyle G} of a finite cyclic group of order n is isomorphic to ( Z / n Z ) × {\displaystyle (\mathbb {Z} /n\mathbb {Z} )^{\times }} , the multiplicative group of integers modulo n, with the isomorphism given by a ¯ ↦ σ a ∈ G , σ a ( x ) = x a {\displaystyle {\overline {a}}\mapsto \sigma _{a}\in G,\,\sigma _{a}(x)=x^{a}} . 
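This isomorphism can be checked by brute force for a small modulus (n = 10 here is an arbitrary choice):

```python
from math import gcd

# Every endomorphism of (Z/nZ, +) has the form sigma_a(x) = a*x mod n,
# so it suffices to test which parameters a give bijections.
def automorphism_params(n):
    return [a for a in range(n)
            if len({(a * x) % n for x in range(n)}) == n]   # sigma_a bijective

n = 10
units = [a for a in range(n) if gcd(a, n) == 1]             # (Z/nZ)^x
assert automorphism_params(n) == units

# The automorphism group is abelian: composing sigma_a and sigma_b
# multiplies the parameters mod n, and that multiplication commutes.
for a in units:
    for b in units:
        assert (a * b) % n == (b * a) % n
```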
In particular, G {\displaystyle G} is an abelian group. The automorphism group of a finite-dimensional real Lie algebra g {\displaystyle {\mathfrak {g}}} has the structure of a (real) Lie group (in fact, it is even a linear algebraic group: see below). If G is a Lie group with Lie algebra g {\displaystyle {\mathfrak {g}}} , then the automorphism group of G has a structure of a Lie group induced from that on the automorphism group of g {\displaystyle {\mathfrak {g}}} . If G is a group acting on a set X, the action amounts to a group homomorphism from G to the automorphism group of X and conversely. Indeed, each left G-action on a set X determines G → Aut ⁡ ( X ) , g ↦ σ g , σ g ( x ) = g ⋅ x {\displaystyle G\to \operatorname {Aut} (X),\,g\mapsto \sigma _{g},\,\sigma _{g}(x)=g\cdot x} , and, conversely, each homomorphism φ : G → Aut ⁡ ( X ) {\displaystyle \varphi :G\to \operatorname {Aut} (X)} defines an action by g ⋅ x = φ ( g ) x {\displaystyle g\cdot x=\varphi (g)x} . This extends to the case when the set X has more structure than just a set. For example, if X is a vector space, then a group action of G on X is a group representation of the group G, representing G as a group of linear transformations (automorphisms) of X; these representations are the main object of study in the field of representation theory. Here are some other facts about automorphism groups: Let A , B {\displaystyle A,B} be two finite sets of the same cardinality and Iso ⁡ ( A , B ) {\displaystyle \operatorname {Iso} (A,B)} the set of all bijections A → ∼ B {\displaystyle A\mathrel {\overset {\sim }{\to }} B} . Then Aut ⁡ ( B ) {\displaystyle \operatorname {Aut} (B)} , which is a symmetric group (see above), acts on Iso ⁡ ( A , B ) {\displaystyle \operatorname {Iso} (A,B)} from the left freely and transitively; that is to say, Iso ⁡ ( A , B ) {\displaystyle \operatorname {Iso} (A,B)} is a torsor for Aut ⁡ ( B ) {\displaystyle \operatorname {Aut} (B)} (cf. #In category theory). 
Let P be a finitely generated projective module over a ring R. Then there is an embedding Aut ⁡ ( P ) ↪ GL n ⁡ ( R ) {\displaystyle \operatorname {Aut} (P)\hookrightarrow \operatorname {GL} _{n}(R)} , unique up to inner automorphisms. == In category theory == Automorphism groups appear very naturally in category theory. If X is an object in a category, then the automorphism group of X is the group consisting of all the invertible morphisms from X to itself. It is the unit group of the endomorphism monoid of X. (For some examples, see PROP.) If A , B {\displaystyle A,B} are objects in some category, then the set Iso ⁡ ( A , B ) {\displaystyle \operatorname {Iso} (A,B)} of all A → ∼ B {\displaystyle A\mathrel {\overset {\sim }{\to }} B} is a left Aut ⁡ ( B ) {\displaystyle \operatorname {Aut} (B)} -torsor. In practical terms, this says that a different choice of a base point of Iso ⁡ ( A , B ) {\displaystyle \operatorname {Iso} (A,B)} differs unambiguously by an element of Aut ⁡ ( B ) {\displaystyle \operatorname {Aut} (B)} , or that each choice of a base point is precisely a choice of a trivialization of the torsor. If X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} are objects in categories C 1 {\displaystyle C_{1}} and C 2 {\displaystyle C_{2}} , and if F : C 1 → C 2 {\displaystyle F:C_{1}\to C_{2}} is a functor mapping X 1 {\displaystyle X_{1}} to X 2 {\displaystyle X_{2}} , then F {\displaystyle F} induces a group homomorphism Aut ⁡ ( X 1 ) → Aut ⁡ ( X 2 ) {\displaystyle \operatorname {Aut} (X_{1})\to \operatorname {Aut} (X_{2})} , as it maps invertible morphisms to invertible morphisms. In particular, if G is a group viewed as a category with a single object * or, more generally, if G is a groupoid, then each functor F : G → C {\displaystyle F:G\to C} , C a category, is called an action or a representation of G on the object F ( ∗ ) {\displaystyle F(*)} , or the objects F ( Obj ⁡ ( G ) ) {\displaystyle F(\operatorname {Obj} (G))} . 
Those objects are then said to be G {\displaystyle G} -objects (as they are acted by G {\displaystyle G} ); cf. S {\displaystyle \mathbb {S} } -object. If C {\displaystyle C} is a module category like the category of finite-dimensional vector spaces, then G {\displaystyle G} -objects are also called G {\displaystyle G} -modules. == Automorphism group functor == Let M {\displaystyle M} be a finite-dimensional vector space over a field k that is equipped with some algebraic structure (that is, M is a finite-dimensional algebra over k). It can be, for example, an associative algebra or a Lie algebra. Now, consider k-linear maps M → M {\displaystyle M\to M} that preserve the algebraic structure: they form a vector subspace End alg ⁡ ( M ) {\displaystyle \operatorname {End} _{\text{alg}}(M)} of End ⁡ ( M ) {\displaystyle \operatorname {End} (M)} . The unit group of End alg ⁡ ( M ) {\displaystyle \operatorname {End} _{\text{alg}}(M)} is the automorphism group Aut ⁡ ( M ) {\displaystyle \operatorname {Aut} (M)} . When a basis on M is chosen, End ⁡ ( M ) {\displaystyle \operatorname {End} (M)} is the space of square matrices and End alg ⁡ ( M ) {\displaystyle \operatorname {End} _{\text{alg}}(M)} is the zero set of some polynomial equations, and the invertibility is again described by polynomials. Hence, Aut ⁡ ( M ) {\displaystyle \operatorname {Aut} (M)} is a linear algebraic group over k. Now base extensions applied to the above discussion determines a functor: namely, for each commutative ring R over k, consider the R-linear maps M ⊗ R → M ⊗ R {\displaystyle M\otimes R\to M\otimes R} preserving the algebraic structure: denote it by End alg ⁡ ( M ⊗ R ) {\displaystyle \operatorname {End} _{\text{alg}}(M\otimes R)} . 
Then the unit group of the matrix ring End alg ⁡ ( M ⊗ R ) {\displaystyle \operatorname {End} _{\text{alg}}(M\otimes R)} over R is the automorphism group Aut ⁡ ( M ⊗ R ) {\displaystyle \operatorname {Aut} (M\otimes R)} and R ↦ Aut ⁡ ( M ⊗ R ) {\displaystyle R\mapsto \operatorname {Aut} (M\otimes R)} is a group functor: a functor from the category of commutative rings over k to the category of groups. Even better, it is represented by a scheme (since the automorphism groups are defined by polynomials): this scheme is called the automorphism group scheme and is denoted by Aut ⁡ ( M ) {\displaystyle \operatorname {Aut} (M)} . In general, however, an automorphism group functor may not be represented by a scheme. == See also == Outer automorphism group Level structure, a technique to remove an automorphism group Holonomy group == Notes == == Citations == == References == == External links == https://mathoverflow.net/questions/55042/automorphism-group-of-a-scheme
Wikipedia/Transformation_group
In mathematics, an injective function (also known as injection, or one-to-one function) is a function f that maps distinct elements of its domain to distinct elements of its codomain; that is, x1 ≠ x2 implies f(x1) ≠ f(x2) (equivalently by contraposition, f(x1) = f(x2) implies x1 = x2). In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions: functions such that each element in the codomain is an image of exactly one element in the domain. A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and in particular for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem that the two notions are equivalent for algebraic structures; see Homomorphism § Monomorphism for more details. A function f {\displaystyle f} that is not injective is sometimes called many-to-one. == Definition == Let f {\displaystyle f} be a function whose domain is a set X . {\displaystyle X.} The function f {\displaystyle f} is said to be injective provided that for all a {\displaystyle a} and b {\displaystyle b} in X , {\displaystyle X,} if f ( a ) = f ( b ) , {\displaystyle f(a)=f(b),} then a = b {\displaystyle a=b} ; that is, f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} implies a = b . {\displaystyle a=b.} Equivalently, if a ≠ b , {\displaystyle a\neq b,} then f ( a ) ≠ f ( b ) {\displaystyle f(a)\neq f(b)} in the contrapositive statement. Symbolically, ∀ a , b ∈ X , f ( a ) = f ( b ) ⇒ a = b , {\displaystyle \forall a,b\in X,\;\;f(a)=f(b)\Rightarrow a=b,} which is logically equivalent to the contrapositive, ∀ a , b ∈ X , a ≠ b ⇒ f ( a ) ≠ f ( b ) . 
{\displaystyle \forall a,b\in X,\;\;a\neq b\Rightarrow f(a)\neq f(b).} An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example, f : A ↣ B {\displaystyle f:A\rightarrowtail B} or f : A ↪ B {\displaystyle f:A\hookrightarrow B} ), although some authors specifically reserve ↪ for an inclusion map. == Examples == For visual examples, readers are directed to the gallery section. For any set X {\displaystyle X} and any subset S ⊆ X , {\displaystyle S\subseteq X,} the inclusion map S → X {\displaystyle S\to X} (which sends any element s ∈ S {\displaystyle s\in S} to itself) is injective. In particular, the identity function X → X {\displaystyle X\to X} is always injective (and in fact bijective). If the domain of a function is the empty set, then the function is the empty function, which is injective. If the domain of a function has one element (that is, it is a singleton set), then the function is always injective. The function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by f ( x ) = 2 x + 1 {\displaystyle f(x)=2x+1} is injective. The function g : R → R {\displaystyle g:\mathbb {R} \to \mathbb {R} } defined by g ( x ) = x 2 {\displaystyle g(x)=x^{2}} is not injective, because (for example) g ( 1 ) = 1 = g ( − 1 ) . {\displaystyle g(1)=1=g(-1).} However, if g {\displaystyle g} is redefined so that its domain is the non-negative real numbers [0,+∞), then g {\displaystyle g} is injective. The exponential function exp : R → R {\displaystyle \exp :\mathbb {R} \to \mathbb {R} } defined by exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} is injective (but not surjective, as no real value maps to a negative number). The natural logarithm function ln : ( 0 , ∞ ) → R {\displaystyle \ln :(0,\infty )\to \mathbb {R} } defined by x ↦ ln ⁡ x {\displaystyle x\mapsto \ln x} is injective. 
The function g : R → R {\displaystyle g:\mathbb {R} \to \mathbb {R} } defined by g ( x ) = x n − x {\displaystyle g(x)=x^{n}-x} is not injective, since, for example, g ( 0 ) = g ( 1 ) = 0. {\displaystyle g(0)=g(1)=0.} More generally, when X {\displaystyle X} and Y {\displaystyle Y} are both the real line R , {\displaystyle \mathbb {R} ,} then an injective function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test. == Injections can be undone == Functions with left inverses are always injections. That is, given f : X → Y , {\displaystyle f:X\to Y,} if there is a function g : Y → X {\displaystyle g:Y\to X} such that for every x ∈ X {\displaystyle x\in X} , g ( f ( x ) ) = x {\displaystyle g(f(x))=x} , then f {\displaystyle f} is injective. The proof is that f ( a ) = f ( b ) → g ( f ( a ) ) = g ( f ( b ) ) → a = b . {\displaystyle f(a)=f(b)\rightarrow g(f(a))=g(f(b))\rightarrow a=b.} In this case, g {\displaystyle g} is called a retraction of f . {\displaystyle f.} Conversely, f {\displaystyle f} is called a section of g . {\displaystyle g.} Conversely, every injection f {\displaystyle f} with a non-empty domain has a left inverse g {\displaystyle g} . It can be defined by choosing an element a {\displaystyle a} in the domain of f {\displaystyle f} and setting g ( y ) {\displaystyle g(y)} to the unique element of the pre-image f − 1 [ y ] {\displaystyle f^{-1}[y]} (if it is non-empty) or to a {\displaystyle a} (otherwise). The left inverse g {\displaystyle g} is not necessarily an inverse of f , {\displaystyle f,} because the composition in the other order, f ∘ g , {\displaystyle f\circ g,} may differ from the identity on Y . {\displaystyle Y.} In other words, an injective function can be "reversed" by a left inverse, but is not necessarily invertible, which requires that the function is bijective. 
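On a finite domain, both the examples above and the left-inverse construction can be checked directly. A minimal Python sketch, assuming illustrative finite restrictions of the functions discussed (the helper names are hypothetical): injectivity means the image has as many elements as the domain, and a left inverse sends each image point to its unique preimage, defaulting elsewhere to a chosen element a.

```python
def is_injective(f, domain):
    """f(a) == f(b) must imply a == b: on a finite domain this says
    the image has as many elements as the domain."""
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)

def left_inverse(f, domain, codomain):
    """For an injection f with non-empty domain, build g with
    g(f(x)) == x; points outside the image go to a chosen element a."""
    a = next(iter(domain))               # the chosen element a
    table = {y: a for y in codomain}     # default value outside f(domain)
    for x in domain:
        table[f(x)] = x                  # unique preimage, f injective
    return lambda y: table[y]

D = range(-3, 4)
print(is_injective(lambda x: 2 * x + 1, D))        # True
print(is_injective(lambda x: x * x, D))            # False: g(1) == g(-1)
print(is_injective(lambda x: x * x, range(0, 4)))  # True on the restricted domain

f = lambda x: 2 * x + 1
g = left_inverse(f, range(5), range(12))
print(all(g(f(x)) == x for x in range(5)))         # True: g is a retraction of f
```

As the text notes, f ∘ g need not be the identity on the codomain: here g collapses every point outside the image of f to the default element.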
== Injections may be made invertible == In fact, to turn an injective function f : X → Y {\displaystyle f:X\to Y} into a bijective (hence invertible) function, it suffices to replace its codomain Y {\displaystyle Y} by its actual image J = f ( X ) . {\displaystyle J=f(X).} That is, let g : X → J {\displaystyle g:X\to J} such that g ( x ) = f ( x ) {\displaystyle g(x)=f(x)} for all x ∈ X {\displaystyle x\in X} ; then g {\displaystyle g} is bijective. Indeed, f {\displaystyle f} can be factored as In J , Y ∘ g , {\displaystyle \operatorname {In} _{J,Y}\circ g,} where In J , Y {\displaystyle \operatorname {In} _{J,Y}} is the inclusion function from J {\displaystyle J} into Y . {\displaystyle Y.} More generally, injective partial functions are called partial bijections. == Other properties == If f {\displaystyle f} and g {\displaystyle g} are both injective then f ∘ g {\displaystyle f\circ g} is injective. If g ∘ f {\displaystyle g\circ f} is injective, then f {\displaystyle f} is injective (but g {\displaystyle g} need not be). f : X → Y {\displaystyle f:X\to Y} is injective if and only if, given any functions g , {\displaystyle g,} h : W → X {\displaystyle h:W\to X} whenever f ∘ g = f ∘ h , {\displaystyle f\circ g=f\circ h,} then g = h . {\displaystyle g=h.} In other words, injective functions are precisely the monomorphisms in the category Set of sets. If f : X → Y {\displaystyle f:X\to Y} is injective and A {\displaystyle A} is a subset of X , {\displaystyle X,} then f − 1 ( f ( A ) ) = A . {\displaystyle f^{-1}(f(A))=A.} Thus, A {\displaystyle A} can be recovered from its image f ( A ) . {\displaystyle f(A).} If f : X → Y {\displaystyle f:X\to Y} is injective and A {\displaystyle A} and B {\displaystyle B} are both subsets of X , {\displaystyle X,} then f ( A ∩ B ) = f ( A ) ∩ f ( B ) . 
{\displaystyle f(A\cap B)=f(A)\cap f(B).} Every function h : W → Y {\displaystyle h:W\to Y} can be decomposed as h = f ∘ g {\displaystyle h=f\circ g} for a suitable injection f {\displaystyle f} and surjection g . {\displaystyle g.} This decomposition is unique up to isomorphism, and f {\displaystyle f} may be thought of as the inclusion function of the range h ( W ) {\displaystyle h(W)} of h {\displaystyle h} as a subset of the codomain Y {\displaystyle Y} of h . {\displaystyle h.} If f : X → Y {\displaystyle f:X\to Y} is an injective function, then Y {\displaystyle Y} has at least as many elements as X , {\displaystyle X,} in the sense of cardinal numbers. In particular, if, in addition, there is an injection from Y {\displaystyle Y} to X , {\displaystyle X,} then X {\displaystyle X} and Y {\displaystyle Y} have the same cardinal number. (This is known as the Cantor–Bernstein–Schroeder theorem.) If both X {\displaystyle X} and Y {\displaystyle Y} are finite with the same number of elements, then f : X → Y {\displaystyle f:X\to Y} is injective if and only if f {\displaystyle f} is surjective (in which case f {\displaystyle f} is bijective). An injective function which is a homomorphism between two algebraic structures is an embedding. Unlike surjectivity, which is a relation between the graph of a function and its codomain, injectivity is a property of the graph of the function alone; that is, whether a function f {\displaystyle f} is injective can be decided by only considering the graph (and not the codomain) of f . {\displaystyle f.} == Proving that functions are injective == A proof that a function f {\displaystyle f} is injective depends on how the function is presented and what properties the function holds. For functions that are given by some formula there is a basic idea. We use the definition of injectivity, namely that if f ( x ) = f ( y ) , {\displaystyle f(x)=f(y),} then x = y . 
{\displaystyle x=y.} Here is an example: f ( x ) = 2 x + 3 {\displaystyle f(x)=2x+3} Proof: Let f : X → Y . {\displaystyle f:X\to Y.} Suppose f ( x ) = f ( y ) . {\displaystyle f(x)=f(y).} So 2 x + 3 = 2 y + 3 {\displaystyle 2x+3=2y+3} implies 2 x = 2 y , {\displaystyle 2x=2y,} which implies x = y . {\displaystyle x=y.} Therefore, it follows from the definition that f {\displaystyle f} is injective. There are multiple other methods of proving that a function is injective. For example, in calculus if f {\displaystyle f} is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, if f {\displaystyle f} is a linear transformation it is sufficient to show that the kernel of f {\displaystyle f} contains only the zero vector. If f {\displaystyle f} is a function with finite domain it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list. A graphical approach for a real-valued function f {\displaystyle f} of a real variable x {\displaystyle x} is the horizontal line test. If every horizontal line intersects the curve of f ( x ) {\displaystyle f(x)} in at most one point, then f {\displaystyle f} is injective or one-to-one. == Gallery == == See also == Bijection, injection and surjection – Properties of mathematical functions Injective metric space – Type of metric space Monotonic function – Order-preserving mathematical function Univalent function – Mathematical concept == Notes == == References == Bartle, Robert G. (1976), The Elements of Real Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-05464-1, p. 17 ff. Halmos, Paul R. (1974), Naive Set Theory, New York: Springer, ISBN 978-0-387-90092-6, p. 38 ff. == External links == Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms. 
Khan Academy – Surjective (onto) and Injective (one-to-one) functions: Introduction to surjective and injective functions
Wikipedia/One-to-one_function
In mathematics, a transformation, transform, or self-map is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X. Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations. == Partial transformations == While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X. == Algebraic structures == The set of all transformations on a given base set, together with function composition, forms a regular semigroup. == Combinatorics == For a finite set of cardinality n, there are nn transformations and (n+1)n partial transformations. == See also == Coordinate transformation Data transformation (statistics) Geometric transformation Infinitesimal transformation Linear transformation List of transforms Rigid transformation Transformation geometry Transformation semigroup Transformation group Transformation matrix == References == == External links == Media related to Transformation (function) at Wikimedia Commons
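The counts in the Combinatorics section above can be confirmed by enumeration for small n. A brief Python sketch (the function name is illustrative): a transformation assigns one of n values to each of the n points, and a partial transformation may also leave each point undefined.

```python
from itertools import product

def count_maps(n):
    """Count transformations (total maps X -> X) and partial
    transformations (each point may also be left undefined)."""
    X = list(range(n))
    total = sum(1 for _ in product(X, repeat=n))          # n**n maps
    partial = sum(1 for _ in product(X + [None], repeat=n))  # (n+1)**n maps
    return total, partial

print(count_maps(3))  # (27, 64), i.e. (3**3, (3 + 1)**3)
```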
Wikipedia/Transformation_(function)
In mathematics, a surjective function (also known as surjection, or onto function ) is a function f such that, for every element y of the function's codomain, there exists at least one element x in the function's domain such that f(x) = y. In other words, for a function f : X → Y, the codomain Y is the image of the function's domain X. It is not required that x be unique; the function f may map one or more elements of X to the same element of Y. The term surjective and the related terms injective and bijective were introduced by Nicolas Bourbaki, a group of mainly French 20th-century mathematicians who, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French word sur means over or above, and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain. Any function induces a surjection by restricting its codomain to the image of its domain. Every surjective function has a right inverse assuming the axiom of choice, and every function with a right inverse is necessarily a surjection. The composition of surjective functions is always surjective. Any function can be decomposed into a surjection and an injection. == Definition == A surjective function is a function whose image is equal to its codomain. Equivalently, a function f {\displaystyle f} with domain X {\displaystyle X} and codomain Y {\displaystyle Y} is surjective if for every y {\displaystyle y} in Y {\displaystyle Y} there exists at least one x {\displaystyle x} in X {\displaystyle X} with f ( x ) = y {\displaystyle f(x)=y} . Surjections are sometimes denoted by a two-headed rightwards arrow (U+21A0 ↠ RIGHTWARDS TWO HEADED ARROW), as in f : X ↠ Y {\displaystyle f\colon X\twoheadrightarrow Y} . 
Symbolically, If f : X → Y {\displaystyle f\colon X\rightarrow Y} , then f {\displaystyle f} is said to be surjective if ∀ y ∈ Y , ∃ x ∈ X , f ( x ) = y {\displaystyle \forall y\in Y,\,\exists x\in X,\;\;f(x)=y} . == Examples == For any set X, the identity function idX on X is surjective. The function f : Z → {0, 1} defined by f(n) = n mod 2 (that is, even integers are mapped to 0 and odd integers to 1) is surjective. The function f : R → R defined by f(x) = 2x + 1 is surjective (and even bijective), because for every real number y, we have an x such that f(x) = y: such an appropriate x is (y − 1)/2. The function f : R → R defined by f(x) = x3 − 3x is surjective, because the pre-image of any real number y is the solution set of the cubic polynomial equation x3 − 3x − y = 0, and every cubic polynomial with real coefficients has at least one real root. However, this function is not injective (and hence not bijective), since, for example, the pre-image of y = 2 is {x = −1, x = 2}. (In fact, the pre-image of this function for every y, −2 ≤ y ≤ 2 has more than one element.) The function g : R → R defined by g(x) = x2 is not surjective, since there is no real number x such that x2 = −1. However, the function g : R → R≥0 defined by g(x) = x2 (with the restricted codomain) is surjective, since for every y in the nonnegative real codomain Y, there is at least one x in the real domain X such that x2 = y. The natural logarithm function ln : (0, +∞) → R is a surjective and even bijective (mapping from the set of positive real numbers to the set of all real numbers). Its inverse, the exponential function, if defined with the set of real numbers as the domain and the codomain, is not surjective (as its range is the set of positive real numbers). The matrix exponential is not surjective when seen as a map from the space of all n×n matrices to itself. 
It is, however, usually defined as a map from the space of all n×n matrices to the general linear group of degree n (that is, the group of all n×n invertible matrices). Under this definition, the matrix exponential is surjective for complex matrices, although still not surjective for real matrices. The projection from a Cartesian product A × B to one of its factors is surjective, unless the other factor is empty. In a 3D video game, vectors are projected onto a 2D flat screen by means of a surjective function. == Properties == A function is bijective if and only if it is both surjective and injective. If (as is often done) a function is identified with its graph, then surjectivity is not a property of the function itself, but rather a property of the mapping. That is, the function together with its codomain. Unlike injectivity, surjectivity cannot be read off the graph of the function alone. === Surjections as right invertible functions === The function g : Y → X is said to be a right inverse of the function f : X → Y if f(g(y)) = y for every y in Y (g can be undone by f). In other words, g is a right inverse of f if the composition f o g of g and f in that order is the identity function on the domain Y of g. The function g need not be a complete inverse of f because the composition in the other order, g o f, may not be the identity function on the domain X of f. In other words, f can undo or "reverse" g, but cannot necessarily be reversed by it. Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to the axiom of choice. If f : X → Y is surjective and B is a subset of Y, then f(f −1(B)) = B. Thus, B can be recovered from its preimage f −1(B). For example, in the first illustration in the gallery, there is some function g such that g(C) = 4. There is also some function f such that f(4) = C. 
It doesn't matter that g is not unique (it would also work if g(C) equals 3); it only matters that f "reverses" g. === Surjections as epimorphisms === A function f : X → Y is surjective if and only if it is right-cancellative: given any functions g,h : Y → Z, whenever g o f = h o f, then g = h. This property is formulated in terms of functions and their composition and can be generalized to the more general notion of the morphisms of a category and their composition. Right-cancellative morphisms are called epimorphisms. Specifically, surjective functions are precisely the epimorphisms in the category of sets. The prefix epi is derived from the Greek preposition ἐπί meaning over, above, on. Any morphism with a right inverse is an epimorphism, but the converse is not true in general. A right inverse g of a morphism f is called a section of f. A morphism with a right inverse is called a split epimorphism. === Surjections as binary relations === Any function with domain X and codomain Y can be seen as a left-total and right-unique binary relation between X and Y by identifying it with its function graph. A surjective function with domain X and codomain Y is then a binary relation between X and Y that is right-unique and both left-total and right-total. === Cardinality of the domain of a surjection === The cardinality of the domain of a surjective function is greater than or equal to the cardinality of its codomain: If f : X → Y is a surjective function, then X has at least as many elements as Y, in the sense of cardinal numbers. (The proof appeals to the axiom of choice to show that a function g : Y → X satisfying f(g(y)) = y for all y in Y exists. g is easily seen to be injective, thus the formal definition of |Y| ≤ |X| is satisfied.) Specifically, if both X and Y are finite with the same number of elements, then f : X → Y is surjective if and only if f is injective. 
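For finite sets, the surjectivity test and the right-inverse (section) construction described above can both be checked directly. A minimal Python sketch (helper names are illustrative): a right inverse picks one preimage per codomain point, which is exactly where the axiom of choice enters in the infinite case.

```python
def is_surjective(f, domain, codomain):
    """Every y in the codomain must equal f(x) for at least one x."""
    return {f(x) for x in domain} == set(codomain)

def right_inverse(f, domain, codomain):
    """For a surjection f, choose one preimage for each y; the
    resulting section g satisfies f(g(y)) == y for every y."""
    section = {}
    for x in domain:
        section.setdefault(f(x), x)      # keep the first preimage found
    return lambda y: section[y]

print(is_surjective(lambda n: n % 2, range(8), {0, 1}))         # True
print(is_surjective(lambda x: x * x, range(-3, 4), range(10)))  # False

f = lambda n: n % 3
g = right_inverse(f, range(9), range(3))
print(all(f(g(y)) == y for y in range(3)))   # True: f o g is the identity
print(all(g(f(x)) == x for x in range(9)))   # False: g o f need not be
```

The last line illustrates the remark above: f can undo g, but g cannot in general undo f.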
Given two sets X and Y, the notation X ≤* Y is used to say that either X is empty or that there is a surjection from Y onto X. Using the axiom of choice one can show that X ≤* Y and Y ≤* X together imply that |Y| = |X|, a variant of the Schröder–Bernstein theorem. === Composition and decomposition === The composition of surjective functions is always surjective: If f and g are both surjective, and the codomain of g is equal to the domain of f, then f o g is surjective. Conversely, if f o g is surjective, then f is surjective (but g, the function applied first, need not be). These properties generalize from surjections in the category of sets to any epimorphisms in any category. Any function can be decomposed into a surjection and an injection: For any function h : X → Z there exist a surjection f : X → Y and an injection g : Y → Z such that h = g o f. To see this, define Y to be the set of preimages h−1(z) where z is in h(X). These preimages are disjoint and partition X. Then f carries each x to the element of Y which contains it, and g carries each element of Y to the point in Z to which h sends its points. Then f is surjective since it is a projection map, and g is injective by definition. === Induced surjection and induced bijection === Any function induces a surjection by restricting its codomain to its range. Any surjective function induces a bijection defined on a quotient of its domain by collapsing all arguments mapping to a given fixed image. More precisely, every surjection f : A → B can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class [x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP o P(~). 
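The factorization f = fP ∘ P(~) just described can be carried out explicitly for a finite surjection. A short Python sketch (names are illustrative): equivalence classes are the preimages of f, the projection sends x to its class, and the induced map on classes is a bijection.

```python
def factor_surjection(f, A):
    """Factor f: A ->> B as f = fP o P, where x ~ y iff f(x) == f(y),
    P sends x to its equivalence class, and fP is the induced bijection."""
    classes = {}                     # image value -> its preimage class
    for x in A:
        classes.setdefault(f(x), []).append(x)
    P = lambda x: tuple(classes[f(x)])               # x -> [x]~
    fP = {tuple(xs): y for y, xs in classes.items()}  # bijection A/~ -> B
    return P, fP

f = lambda n: n % 3
P, fP = factor_surjection(f, range(9))
print(all(fP[P(x)] == f(x) for x in range(9)))  # True: f = fP o P(~)
print(P(1))                                     # the class (1, 4, 7)
```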
== The set of surjections == Given fixed finite sets A and B, one can form the set of surjections A ↠ B. The cardinality of this set is one of the twelve aspects of Rota's Twelvefold way, and is given by | B | ! { | A | | B | } {\textstyle |B|!{\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}} , where { | A | | B | } {\textstyle {\begin{Bmatrix}|A|\\|B|\end{Bmatrix}}} denotes a Stirling number of the second kind. == Gallery == == See also == Bijection, injection and surjection Cover (algebra) Covering map Enumeration Fiber bundle Index set Section (category theory) == References == == Further reading == Bourbaki, N. (2004) [1968]. Theory of Sets. Elements of Mathematics. Vol. 1. Springer. doi:10.1007/978-3-642-59309-3. ISBN 978-3-540-22525-6. LCCN 2004110815.
Wikipedia/Onto_function
Schröder's equation, named after Ernst Schröder, is a functional equation with one independent variable: given the function h, find the function Ψ such that Schröder's equation is an eigenvalue equation for the composition operator Ch that sends a function f to f(h(.)). If a is a fixed point of h, meaning h(a) = a, then either Ψ(a) = 0 (or ∞) or s = 1. Thus, provided that Ψ(a) is finite and Ψ′(a) does not vanish or diverge, the eigenvalue s is given by s = h′(a). == Functional significance == For a = 0, if h is analytic on the unit disk, fixes 0, and 0 < |h′(0)| < 1, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) Ψ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function. Equations such as Schröder's are suitable to encoding self-similarity, and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory). It is also used in studies of turbulence, as well as the renormalization group. An equivalent transpose form of Schröder's equation for the inverse Φ = Ψ−1 of Schröder's conjugacy function is h(Φ(y)) = Φ(sy). The change of variables α(x) = log(Ψ(x))/log(s) (the Abel function) further converts Schröder's equation to the older Abel equation, α(h(x)) = α(x) + 1. Similarly, the change of variables Ψ(x) = log(φ(x)) converts Schröder's equation to Böttcher's equation, φ(h(x)) = (φ(x))s. Moreover, for the velocity, β(x) = Ψ/Ψ′, Julia's equation, β(f(x)) = f′(x)β(x), holds. The n-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue sn, instead. In the same vein, for an invertible solution Ψ(x) of Schröder's equation, the (non-invertible) function Ψ(x) k(log Ψ(x)) is also a solution, for any periodic function k(x) with period log(s). All solutions of Schröder's equation are related in this manner. 
== Solutions == Schröder's equation was solved analytically if a is an attracting (but not superattracting) fixed point, that is 0 < |h′(a)| < 1 by Gabriel Koenigs (1884). In the case of a superattracting fixed point, |h′(a)| = 0, Schröder's equation is unwieldy, and had best be transformed to Böttcher's equation. There are a good number of particular solutions dating back to Schröder's original 1870 paper. The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized by Szekeres. Several of the solutions are furnished in terms of asymptotic series, cf. Carleman matrix. == Applications == It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated by h(x) looks simpler, a mere dilation. More specifically, a system for which a discrete unit time step amounts to x → h(x), can have its smooth orbit (or flow) reconstructed from the solution of the above Schröder's equation, its conjugacy equation. That is, h(x) = Ψ−1(s Ψ(x)) ≡ h1(x). In general, all of its functional iterates (its regular iteration group, see iterated function) are provided by the orbit for t real — not necessarily positive or integer. (Thus a full continuous group.) The set of hn(x), i.e., of all positive integer iterates of h(x) (semigroup) is called the splinter (or Picard sequence) of h(x). However, all iterates (fractional, infinitesimal, or negative) of h(x) are likewise specified through the coordinate transformation Ψ(x) determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursion x → h(x) has been constructed; in effect, the entire orbit. For instance, the functional square root is h1/2(x) = Ψ−1(s1/2 Ψ(x)), so that h1/2(h1/2(x)) = h(x), and so on. 
For example, special cases of the logistic map such as the chaotic case h(x) = 4x(1 − x) were already worked out by Schröder in his original article (p. 306), Ψ(x) = (arcsin √x)2, s = 4, and hence ht(x) = sin2(2t arcsin √x). In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials, V(x) ∝ x(x − 1) (nπ + arcsin √x)2, a generic feature of continuous iterates effected by Schröder's equation. A nonchaotic case he also illustrated with his method, h(x) = 2x(1 − x), yields Ψ(x) = −⁠1/2⁠ln(1 − 2x), and hence ht(x) = −⁠1/2⁠((1 − 2x)2t − 1). Likewise, for the Beverton–Holt model, h(x) = x/(2 − x), one readily finds Ψ(x) = x/(1 − x), so that h t ( x ) = Ψ − 1 ( 2 − t Ψ ( x ) ) = x 2 t + x ( 1 − 2 t ) . {\displaystyle h_{t}(x)=\Psi ^{-1}{\big (}2^{-t}\Psi (x){\big )}={\frac {x}{2^{t}+x(1-2^{t})}}.} == See also == Böttcher's equation == References ==
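The closed forms worked out above for the chaotic logistic map can be verified numerically, at a sample point chosen small enough that arcsin incurs no branch wrap-around. A sketch:

```python
import math

# From the text: h(x) = 4x(1 - x), Psi(x) = (arcsin sqrt(x))**2, s = 4,
# and the continuous iterates h_t(x) = sin(2**t * arcsin(sqrt(x)))**2.
h = lambda x: 4 * x * (1 - x)
psi = lambda x: math.asin(math.sqrt(x)) ** 2
h_t = lambda t, x: math.sin(2 ** t * math.asin(math.sqrt(x))) ** 2

x = 0.3
print(abs(psi(h(x)) - 4 * psi(x)) < 1e-12)        # Schröder: Psi(h(x)) = s Psi(x)
print(abs(h_t(1, x) - h(x)) < 1e-12)              # t = 1 recovers h itself
print(abs(h_t(0.5, h_t(0.5, x)) - h(x)) < 1e-12)  # half-iterate composed twice gives h
```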
Wikipedia/Schröder's_equation
In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. For example, on the image on the right: L = F ( K ) , M = F ∘ F ( K ) = F 2 ( K ) . {\displaystyle L=F(K),\ M=F\circ F(K)=F^{2}(K).} Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics. == Definition == The formal definition of an iterated function on a set X follows. Let X be a set and f: X → X be a function. Define f n as the n-th iterate of f, where n is a non-negative integer, by: f 0 = d e f id X {\displaystyle f^{0}~{\stackrel {\mathrm {def} }{=}}~\operatorname {id} _{X}} and f n + 1 = d e f f ∘ f n , {\displaystyle f^{n+1}~{\stackrel {\mathrm {def} }{=}}~f\circ f^{n},} where idX is the identity function on X and (f ∘ {\displaystyle \circ } g)(x) = f (g(x)) denotes function composition. This notation has been traced to John Frederick William Herschel in 1813. Herschel credited Hans Heinrich Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered. Because the notation f n may refer to either iteration (composition) of the function f or exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f [n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead. == Abelian property and iteration sequences == In general, the following identity holds for all non-negative integers m and n, f m ∘ f n = f n ∘ f m = f m + n . 
{\displaystyle f^{m}\circ f^{n}=f^{n}\circ f^{m}=f^{m+n}~.} This is structurally identical to the property of exponentiation that aman = am + n. In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, Tm(Tn(x)) = Tm n(x), since Tn(x) = cos(n arccos(x)). The relation (f m)n(x) = (f n)m(x) = f mn(x) also holds, analogous to the property of exponentiation that (am)n = (an)m = amn. The sequence of functions f n is called a Picard sequence, named after Charles Émile Picard. For a given x in X, the sequence of values fn(x) is called the orbit of x. If f n (x) = f n+m (x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit. == Fixed points == If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem. There are several techniques for convergence acceleration of the sequences produced by fixed point iteration. For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence. == Limiting behaviour == Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. 
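The orbit, periodic point, and cycle detection notions introduced above can be sketched for a transformation of a finite set (the map chosen here is illustrative): follow the orbit until a value repeats, then read off the first periodic point and the period.

```python
def orbit_period(f, x):
    """Follow the orbit x, f(x), f(f(x)), ... until a value repeats;
    return the first periodic point reached and the orbit's period."""
    seen = {}
    step = 0
    while x not in seen:
        seen[x] = step
        x = f(x)
        step += 1
    return x, step - seen[x]   # period = current step minus first visit

f = lambda n: (n * n + 1) % 10   # a transformation of {0, ..., 9}
print(orbit_period(f, 3))        # (0, 6): 3 falls into the 6-cycle through 0
```

Production cycle detectors such as Floyd's or Brent's algorithm achieve the same result in constant memory.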
Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point. When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set. The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions. Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started. == Invariant measure == If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states. In general, because repeated iteration corresponds to a shift, the transfer operator and its adjoint, the Koopman operator, can both be interpreted as shift operators acting on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos. == Fractional iterates and flows, and negative iterates == The notion f1/n must be used with care when the equation gn(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f 1/2(x) does not denote a unique function, just as numbers have multiple algebraic roots. 
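The non-uniqueness of f 1/2 in the example just given is easy to verify directly; a minimal sketch:

```python
f = lambda x: 4 * x - 6
g1 = lambda x: 2 * x - 2
g2 = lambda x: 6 - 2 * x

# both candidates are functional square roots: g(g(x)) = f(x)
for x in range(-10, 11):
    assert g1(g1(x)) == f(x)
    assert g2(g2(x)) == f(x)

# yet they are different functions
assert g1(0) != g2(0)
```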
A trivial root of f can always be obtained if f's domain can be extended sufficiently. The roots chosen are normally the ones belonging to the orbit under study. Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x). This function g(x) can be written using the index notation as f 1/2(x) . Similarly, f 1/3(x) is the function defined such that f1/3(f1/3(f1/3(x))) = f(x), while f2/3(x) may be defined as equal to f 1/3(f1/3(x)), and so forth, all based on the principle, mentioned earlier, that f m ○ f n = f m + n. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit. In such cases, one refers to the system as a flow (cf. the section on conjugacy below). If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f −1(x) is the normal inverse of f, while f −2(x) is the inverse composed with itself, i.e. f −2(x) = f −1(f −1(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f −1/2(x) is defined such that f −1/2(f −1/2(x)) = f −1(x), or, equivalently, such that f −1/2(f 1/2(x)) = f 0(x) = x. === Some formulas for fractional iteration === One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows. First determine a fixed point for the function such that f(a) = a. Define f n(a) = a for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates. 
Expand fn(x) around the fixed point a as a Taylor series, f n ( x ) = f n ( a ) + ( x − a ) d d x f n ( x ) | x = a + ( x − a ) 2 2 d 2 d x 2 f n ( x ) | x = a + ⋯ {\displaystyle f^{n}(x)=f^{n}(a)+(x-a)\left.{\frac {d}{dx}}f^{n}(x)\right|_{x=a}+{\frac {(x-a)^{2}}{2}}\left.{\frac {d^{2}}{dx^{2}}}f^{n}(x)\right|_{x=a}+\cdots } Expand out f n ( x ) = f n ( a ) + ( x − a ) f ′ ( a ) f ′ ( f ( a ) ) f ′ ( f 2 ( a ) ) ⋯ f ′ ( f n − 1 ( a ) ) + ⋯ {\displaystyle f^{n}(x)=f^{n}(a)+(x-a)f'(a)f'(f(a))f'(f^{2}(a))\cdots f'(f^{n-1}(a))+\cdots } Substitute in for fk(a) = a, for any k, f n ( x ) = a + ( x − a ) f ′ ( a ) n + ( x − a ) 2 2 ( f ″ ( a ) f ′ ( a ) n − 1 ) ( 1 + f ′ ( a ) + ⋯ + f ′ ( a ) n − 1 ) + ⋯ {\displaystyle f^{n}(x)=a+(x-a)f'(a)^{n}+{\frac {(x-a)^{2}}{2}}(f''(a)f'(a)^{n-1})\left(1+f'(a)+\cdots +f'(a)^{n-1}\right)+\cdots } Make use of the geometric progression to simplify terms, f n ( x ) = a + ( x − a ) f ′ ( a ) n + ( x − a ) 2 2 ( f ″ ( a ) f ′ ( a ) n − 1 ) f ′ ( a ) n − 1 f ′ ( a ) − 1 + ⋯ {\displaystyle f^{n}(x)=a+(x-a)f'(a)^{n}+{\frac {(x-a)^{2}}{2}}(f''(a)f'(a)^{n-1}){\frac {f'(a)^{n}-1}{f'(a)-1}}+\cdots } There is a special case when f '(a) = 1, f n ( x ) = x + ( x − a ) 2 2 ( n f ″ ( a ) ) + ( x − a ) 3 6 ( 3 2 n ( n − 1 ) f ″ ( a ) 2 + n f ‴ ( a ) ) + ⋯ {\displaystyle f^{n}(x)=x+{\frac {(x-a)^{2}}{2}}(nf''(a))+{\frac {(x-a)^{3}}{6}}\left({\frac {3}{2}}n(n-1)f''(a)^{2}+nf'''(a)\right)+\cdots } This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy. ==== Example 1 ==== For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just f n ( x ) = D 1 − C + ( x − D 1 − C ) C n = C n x + 1 − C n 1 − C D , {\displaystyle f^{n}(x)={\frac {D}{1-C}}+\left(x-{\frac {D}{1-C}}\right)C^{n}=C^{n}x+{\frac {1-C^{n}}{1-C}}D~,} which is trivial to check. 
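As a sketch, the closed form above can be compared with direct iteration, and its extension to fractional n checked against the half-iterate property; the sample values of C, D, and x are arbitrary:

```python
C, D = 3.0, 2.0
f = lambda x: C * x + D

def f_n(n, x):
    """Closed form f^n(x) = C^n x + (1 - C^n)/(1 - C) * D, valid for fractional n too."""
    return C**n * x + (1 - C**n) / (1 - C) * D

# agrees with direct iteration for integer n
x, y = 1.5, 1.5
for _ in range(6):
    y = f(y)
assert abs(y - f_n(6, x)) < 1e-9

# and the half iterate composed with itself recovers f
half = lambda x: f_n(0.5, x)
assert abs(half(half(x)) - f(x)) < 1e-9
```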
==== Example 2 ==== Find the value of 2 2 2 ⋯ {\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdots }}}} where this is done n times (and possibly the interpolated values when n is not an integer). We have f(x) = √2x. A fixed point is a = f(2) = 2. So set x = 1 and f n (1) expanded around the fixed point value of 2 is then an infinite series, 2 2 2 ⋯ = f n ( 1 ) = 2 − ( ln ⁡ 2 ) n + ( ln ⁡ 2 ) n + 1 ( ( ln ⁡ 2 ) n − 1 ) 4 ( ln ⁡ 2 − 1 ) − ⋯ {\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdots }}}=f^{n}(1)=2-(\ln 2)^{n}+{\frac {(\ln 2)^{n+1}((\ln 2)^{n}-1)}{4(\ln 2-1)}}-\cdots } which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f n(1) = n√2. Using the other fixed point a = f(4) = 4 causes the series to diverge. For n = −1, the series computes the inverse function ⁠2+ln x/ln 2⁠. ==== Example 3 ==== With the function f(x) = xb, expand around the fixed point 1 to get the series f n ( x ) = 1 + b n ( x − 1 ) + 1 2 b n ( b n − 1 ) ( x − 1 ) 2 + 1 3 ! b n ( b n − 1 ) ( b n − 2 ) ( x − 1 ) 3 + ⋯ , {\displaystyle f^{n}(x)=1+b^{n}(x-1)+{\frac {1}{2}}b^{n}(b^{n}-1)(x-1)^{2}+{\frac {1}{3!}}b^{n}(b^{n}-1)(b^{n}-2)(x-1)^{3}+\cdots ~,} which is simply the Taylor series of x(bn ) expanded around 1. == Conjugacy == If f and g are two iterated functions, and there exists a homeomorphism h such that g = h−1 ○ f ○ h , then f and g are said to be topologically conjugate. Clearly, topological conjugacy is preserved under iteration, as gn = h−1 ○ f n ○ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h−1(h(x) + 1) as gn(x) = h−1(h(x) + n), for any function h. Making the substitution x = h−1(y) = ϕ(y) yields g(ϕ(y)) = ϕ(y+1), a form known as the Abel equation. 
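As an illustrative sketch of the identity gn(x) = h−1(h(x) + n), take h(x) = ln x, so that g(x) = h−1(h(x) + 1) = e·x; the choice of h here is an assumption made for the example:

```python
import math

h, h_inv = math.log, math.exp
g = lambda x: h_inv(h(x) + 1)            # g = h^{-1} ∘ (y ↦ y + 1) ∘ h, i.e. g(x) = e·x

x, n = 2.0, 5
y = x
for _ in range(n):                       # iterate g directly n times
    y = g(y)
assert abs(y - h_inv(h(x) + n)) < 1e-9   # matches g^n(x) = h^{-1}(h(x) + n)
```

Substituting x = h−1(y) turns this into the Abel equation of the text, which is why any invertible h generates such a family of iterable maps.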
Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f '(0) x, that is f(x) = Ψ−1(f '(0) Ψ(x)). Thus, its iteration orbit, or flow, under suitable provisions (e.g., f '(0) ≠ 1), amounts to the conjugate of the orbit of the monomial, Ψ−1(f '(0)^n Ψ(x)), where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer needs to be integer or positive, and is a continuous "time" of evolution for the full orbit: the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group. This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic. == Markov chains == If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain. == Examples == There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems. Ernst Schröder, in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = (arcsin √x)^2, hence f n(x) = (sin(2^n arcsin √x))^2. A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −1/2 ln(1 − 2x), and hence f n(x) = −1/2((1 − 2x)^(2^n) − 1). If f is the action of a group element on a set, then the iterated function corresponds to a free group. Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n. 
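Both of Schröder's closed forms can be checked against direct iteration. This numerical sketch uses a loose tolerance in the chaotic case, since that map roughly doubles rounding errors at each step:

```python
import math

# chaotic case: f(x) = 4x(1-x), with f^n(x) = (sin(2^n arcsin √x))^2
f = lambda x: 4 * x * (1 - x)
fn = lambda n, x: math.sin(2**n * math.asin(math.sqrt(x)))**2

# nonchaotic case: g(x) = 2x(1-x), with g^n(x) = -1/2 ((1 - 2x)^(2^n) - 1)
g = lambda x: 2 * x * (1 - x)
gn = lambda n, x: -0.5 * ((1 - 2 * x)**(2**n) - 1)

x = 0.3
yf, yg = x, x
for _ in range(10):
    yf, yg = f(yf), g(yg)
assert abs(yf - fn(10, x)) < 1e-6    # chaotic case: loose tolerance
assert abs(yg - gn(10, x)) < 1e-9    # nonchaotic case: converges to 1/2
```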
Note: these two special cases of ax2 + bx + c are the only cases that have a closed-form solution. Choosing b = 2 = –a and b = 4 = –a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table. Some of these examples are related among themselves by simple conjugacies. == Means of study == Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators. == In computer science == In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs. == Definitions in terms of iterated functions == Two important functionals can be defined in terms of iterated functions. These are summation: { b + 1 , ∑ i = a b g ( i ) } ≡ ( { i , x } → { i + 1 , x + g ( i ) } ) b − a + 1 { a , 0 } {\displaystyle \left\{b+1,\sum _{i=a}^{b}g(i)\right\}\equiv \left(\{i,x\}\rightarrow \{i+1,x+g(i)\}\right)^{b-a+1}\{a,0\}} and the equivalent product: { b + 1 , ∏ i = a b g ( i ) } ≡ ( { i , x } → { i + 1 , x g ( i ) } ) b − a + 1 { a , 1 } {\displaystyle \left\{b+1,\prod _{i=a}^{b}g(i)\right\}\equiv \left(\{i,x\}\rightarrow \{i+1,xg(i)\}\right)^{b-a+1}\{a,1\}} == Functional derivative == The functional derivative of an iterated function is given by the recursive formula: δ f N ( x ) δ f ( y ) = f ′ ( f N − 1 ( x ) ) δ f N − 1 ( x ) δ f ( y ) + δ ( f N − 1 ( x ) − y ) {\displaystyle {\frac {\delta f^{N}(x)}{\delta f(y)}}=f'(f^{N-1}(x)){\frac {\delta f^{N-1}(x)}{\delta f(y)}}+\delta (f^{N-1}(x)-y)} == Lie's data transport equation == Iterated functions crop up in the series expansion of combined functions, such as g(f(x)). 
Given the iteration velocity, or beta function (physics), v ( x ) = ∂ f n ( x ) ∂ n | n = 0 {\displaystyle v(x)=\left.{\frac {\partial f^{n}(x)}{\partial n}}\right|_{n=0}} for the nth iterate of the function f, we have g ( f ( x ) ) = exp ⁡ [ v ( x ) ∂ ∂ x ] g ( x ) . {\displaystyle g(f(x))=\exp \left[v(x){\frac {\partial }{\partial x}}\right]g(x).} For example, for rigid advection, if f(x) = x + t, then v(x) = t. Consequently, g(x + t) = exp(t ∂/∂x) g(x), action by a plain shift operator. Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above, f ( x ) = h − 1 ( h ( x ) + 1 ) , {\displaystyle f(x)=h^{-1}(h(x)+1),} where h ( x ) = ∫ 1 v ( x ) d x . {\displaystyle h(x)=\int {\frac {1}{v(x)}}\,dx.} This is evident by noting that f n ( x ) = h − 1 ( h ( x ) + n ) . {\displaystyle f^{n}(x)=h^{-1}(h(x)+n)~.} For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group, e t ∂ ∂ h ( x ) g ( x ) = g ( h − 1 ( h ( x ) + t ) ) = g ( f t ( x ) ) . {\displaystyle e^{t~{\frac {\partial ~~}{\partial h(x)}}}g(x)=g(h^{-1}(h(x)+t))=g(f_{t}(x)).} The initial flow velocity v suffices to determine the entire flow, given this exponential realization which automatically provides the general solution to the translation functional equation, f t ( f τ ( x ) ) = f t + τ ( x ) . {\displaystyle f_{t}(f_{\tau }(x))=f_{t+\tau }(x)~.} == See also == == Notes == == References == == External links == Gill, John (January 2017). "A Primer on the Elementary Theory of Infinite Compositions of Complex Functions". Colorado State University.
Wikipedia/Iterated_function
In algebra, a transformation semigroup (or composition semigroup) is a collection of transformations (functions from a set to itself) that is closed under function composition. If it includes the identity function, it is a monoid, called a transformation (or composition) monoid. This is the semigroup analogue of a permutation group. A transformation semigroup of a set has a tautological semigroup action on that set. Such actions are characterized by being faithful, i.e., if two elements of the semigroup have the same action, then they are equal. An analogue of Cayley's theorem shows that any semigroup can be realized as a transformation semigroup of some set. In automata theory, some authors use the term transformation semigroup to refer to a semigroup acting faithfully on a set of "states" different from the semigroup's base set. There is a correspondence between the two notions. == Transformation semigroups and monoids == A transformation semigroup is a pair (X,S), where X is a set and S is a semigroup of transformations of X. Here a transformation of X is just a function from a subset of X to X, not necessarily invertible, and therefore S is simply a set of transformations of X which is closed under composition of functions. The set of all partial functions on a given base set, X, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by P T X {\displaystyle {\mathcal {PT}}_{X}} . If S includes the identity transformation of X, then it is called a transformation monoid. Any transformation semigroup S determines a transformation monoid M by taking the union of S with the identity transformation. A transformation monoid whose elements are invertible is a permutation group. The set of all transformations of X is a transformation monoid called the full transformation monoid (or semigroup) of X. It is also called the symmetric semigroup of X and is denoted by TX. 
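For a small concrete check of these definitions, the full transformation monoid of a three-element set can be enumerated by brute force; encoding a transformation as a tuple is just one convenient choice:

```python
from itertools import product

X = range(3)
T_X = set(product(X, repeat=3))      # each tuple t encodes the map i ↦ t[i]
compose = lambda s, t: tuple(s[t[i]] for i in X)   # (s ∘ t)(i) = s(t(i))

assert len(T_X) == 3**3              # |T_X| = n^n = 27
identity = (0, 1, 2)
for s in T_X:
    assert compose(s, identity) == s and compose(identity, s) == s
    for t in T_X:
        assert compose(s, t) in T_X  # closed under composition
```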
Thus a transformation semigroup (or monoid) is just a subsemigroup (or submonoid) of the full transformation monoid of X. If (X,S) is a transformation semigroup then X can be made into a semigroup action of S by evaluation: s ⋅ x = s ( x ) for s ∈ S , x ∈ X . {\displaystyle s\cdot x=s(x){\text{ for }}s\in S,x\in X.} This is a monoid action if S is a transformation monoid. The characteristic feature of transformation semigroups, as actions, is that they are faithful, i.e., if s ⋅ x = t ⋅ x for all x ∈ X , {\displaystyle s\cdot x=t\cdot x{\text{ for all }}x\in X,} then s = t. Conversely if a semigroup S acts on a set X by T(s,x) = s • x then we can define, for s ∈ S, a transformation Ts of X by T s ( x ) = T ( s , x ) . {\displaystyle T_{s}(x)=T(s,x).\,} The map sending s to Ts is injective if and only if (X, T) is faithful, in which case the image of this map is a transformation semigroup isomorphic to S. == Cayley representation == In group theory, Cayley's theorem asserts that any group G is isomorphic to a subgroup of the symmetric group of G (regarded as a set), so that G is a permutation group. This theorem generalizes straightforwardly to monoids: any monoid M is a transformation monoid of its underlying set, via the action given by left (or right) multiplication. This action is faithful because if ax = bx for all x in M, then by taking x equal to the identity element, we have a = b. For a semigroup S without a (left or right) identity element, we take X to be the underlying set of the monoid corresponding to S to realise S as a transformation semigroup of X. In particular any finite semigroup can be represented as a subsemigroup of transformations of a set X with |X| ≤ |S| + 1, and if S is a monoid, we have the sharper bound |X| ≤ |S|, as in the case of finite groups.: 21  === In computer science === In computer science, Cayley representations can be applied to improve the asymptotic efficiency of semigroups by reassociating multiple composed multiplications. 
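The Cayley construction can be sketched concretely for a small example, the multiplicative monoid of integers mod 6 (an illustrative choice): each element a is represented by the transformation x ↦ a·x, and evaluating at the identity element shows the representation is faithful:

```python
M = range(6)                          # the monoid (Z/6Z, ·)
mul = lambda a, b: (a * b) % 6

# represent a by left multiplication, encoded as the tuple of images
rep = {a: tuple(mul(a, x) for x in M) for a in M}

# a homomorphism: rep[a] ∘ rep[b] = rep[a·b] ...
for a in M:
    for b in M:
        assert tuple(rep[a][rep[b][x]] for x in M) == rep[mul(a, b)]

# ... and faithful: evaluating at the identity element 1 recovers a,
# so distinct elements give distinct transformations
assert len(set(rep.values())) == 6
```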
The action given by left multiplication results in right-associated multiplication, and vice versa for the action given by right multiplication. Despite having the same results for any semigroup, the asymptotic efficiency will differ. Two examples of useful transformation monoids given by an action of left multiplication are the functional variation of the difference list data structure, and the monadic Codensity transformation (a Cayley representation of a monad, which is a monoid in a particular monoidal functor category). == Transformation monoid of an automaton == Let M be a deterministic automaton with state space S and alphabet A. The words in the free monoid A∗ induce transformations of S giving rise to a monoid morphism from A∗ to the full transformation monoid TS. The image of this morphism is the transformation semigroup of M.: 78  For a regular language, the syntactic monoid is isomorphic to the transformation monoid of the minimal automaton of the language.: 81  == See also == Semiautomaton Krohn–Rhodes theory Symmetric inverse semigroup Biordered set Special classes of semigroups Composition ring == References == Clifford, A.H.; Preston, G.B. (1961). The algebraic theory of semigroups. Vol. I. Mathematical Surveys. Vol. 7. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0272-4. Zbl 0111.03403. Howie, John M. (1995). Fundamentals of Semigroup Theory. London Mathematical Society Monographs. New Series. Vol. 12. Oxford: Clarendon Press. ISBN 978-0-19-851194-6. Zbl 0835.20077. Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev (2000), Monoids, Acts and Categories: with Applications to Wreath Products and Graphs, Expositions in Mathematics 29, Walter de Gruyter, Berlin, ISBN 978-3-11-015248-7.
Wikipedia/Full_transformation_semigroup
Mathematics, Form and Function, a book published in 1986 by Springer-Verlag, is a survey of the whole of mathematics, including its origins and deep structure, by the American mathematician Saunders Mac Lane. == Mathematics and human activities == Throughout his book, and especially in chapter I.11, Mac Lane informally discusses how mathematics is grounded in more ordinary concrete and abstract human activities. The following table is adapted from one given on p. 35 of Mac Lane (1986). The rows are very roughly ordered from most to least fundamental. For a bullet list that can be compared and contrasted with this table, see section 3 of Where Mathematics Comes From. Also see the related diagrams appearing on the following pages of Mac Lane (1986): 149, 184, 306, 408, 416, 422-28. Mac Lane (1986) cites a related monograph by Lars Gårding (1977). == Mac Lane's relevance to the philosophy of mathematics == Mac Lane cofounded category theory with Samuel Eilenberg, which enables a unified treatment of mathematical structures and of the relations among them, at the cost of breaking away from their cognitive grounding. Nevertheless, his views—however informal—are a valuable contribution to the philosophy and anthropology of mathematics. His views anticipate, in some respects, the more detailed account of the cognitive basis of mathematics given by George Lakoff and Rafael E. Núñez in their Where Mathematics Comes From. Lakoff and Núñez argue that mathematics emerges via conceptual metaphors grounded in the human body, its motion through space and time, and in human sense perceptions. == See also == 1986 in philosophy == Notes == == References == Gårding, Lars, 1977. Encounter with Mathematics. Springer-Verlag. Reuben Hersh, 1997. What Is Mathematics, Really? Oxford Univ. Press. George Lakoff and Rafael E. Núñez, 2000. Where Mathematics Comes From. Basic Books. Mac Lane, Saunders (1986). Mathematics, Form and Function. Springer-Verlag. ISBN 0-387-96217-4. 
Leslie White, 1947, "The Locus of Mathematical Reality: An Anthropological Footnote," Philosophy of Science 14: 289-303. Reprinted in Hersh, R., ed., 2006. 18 Unconventional Essays on the Nature of Mathematics. Springer: 304–19.
Wikipedia/Mathematics,_Form_and_Function
In mathematics, if A {\displaystyle A} is a subset of B , {\displaystyle B,} then the inclusion map is the function ι {\displaystyle \iota } that sends each element x {\displaystyle x} of A {\displaystyle A} to x , {\displaystyle x,} treated as an element of B : {\displaystyle B:} ι : A → B , ι ( x ) = x . {\displaystyle \iota :A\rightarrow B,\qquad \iota (x)=x.} An inclusion map may also be referred to as an inclusion function, an insertion, or a canonical injection. A "hooked arrow" (U+21AA ↪ RIGHTWARDS ARROW WITH HOOK) is sometimes used in place of the function arrow above to denote an inclusion map; thus: ι : A ↪ B . {\displaystyle \iota :A\hookrightarrow B.} (However, some authors use this hooked arrow for any embedding.) This and other analogous injective functions from substructures are sometimes called natural injections. Given any morphism f {\displaystyle f} between objects X {\displaystyle X} and Y {\displaystyle Y} , if there is an inclusion map ι : A → X {\displaystyle \iota :A\to X} into the domain X {\displaystyle X} , then one can form the restriction f ∘ ι {\displaystyle f\circ \iota } of f . {\displaystyle f.} In many instances, one can also construct a canonical inclusion into the codomain R → Y {\displaystyle R\to Y} known as the range of f . {\displaystyle f.} == Applications of inclusion maps == Inclusion maps tend to be homomorphisms of algebraic structures; thus, such inclusion maps are embeddings. More precisely, given a substructure closed under some operations, the inclusion map will be an embedding for tautological reasons. For example, for some binary operation ⋆ , {\displaystyle \star ,} to require that ι ( x ⋆ y ) = ι ( x ) ⋆ ι ( y ) {\displaystyle \iota (x\star y)=\iota (x)\star \iota (y)} is simply to say that ⋆ {\displaystyle \star } is consistently computed in the sub-structure and the large structure. The case of a unary operation is similar; but one should also look at nullary operations, which pick out a constant element. 
Here the point is that closure means such constants must already be given in the substructure. Inclusion maps are seen in algebraic topology where if A {\displaystyle A} is a strong deformation retract of X , {\displaystyle X,} the inclusion map yields an isomorphism between all homotopy groups (that is, it is a homotopy equivalence). Inclusion maps in geometry come in different kinds: for example embeddings of submanifolds. Contravariant objects (which is to say, objects that have pullbacks; these are called covariant in an older and unrelated terminology) such as differential forms restrict to submanifolds, giving a mapping in the other direction. Another example, more sophisticated, is that of affine schemes, for which the inclusions Spec ⁡ ( R / I ) → Spec ⁡ ( R ) {\displaystyle \operatorname {Spec} \left(R/I\right)\to \operatorname {Spec} (R)} and Spec ⁡ ( R / I 2 ) → Spec ⁡ ( R ) {\displaystyle \operatorname {Spec} \left(R/I^{2}\right)\to \operatorname {Spec} (R)} may be different morphisms, where R {\displaystyle R} is a commutative ring and I {\displaystyle I} is an ideal of R . {\displaystyle R.} == See also == Cofibration – continuous mapping between topological spaces Identity function – In mathematics, a function that always returns the same value that was used as its argument == References ==
Wikipedia/Inclusion_function
In mathematics, a functional equation is, in the broadest meaning, an equation in which one or several functions appear as unknowns. So, differential equations and integral equations are functional equations. However, a more restricted meaning is often used, where a functional equation is an equation that relates several values of the same function. For example, the logarithm functions are essentially characterized by the logarithmic functional equation log ⁡ ( x y ) = log ⁡ ( x ) + log ⁡ ( y ) . {\displaystyle \log(xy)=\log(x)+\log(y).} If the domain of the unknown function is supposed to be the natural numbers, the function is generally viewed as a sequence, and, in this case, a functional equation (in the narrower meaning) is called a recurrence relation. Thus the term functional equation is used mainly for real functions and complex functions. Moreover a smoothness condition is often assumed for the solutions, since without such a condition, most functional equations have very irregular solutions. For example, the gamma function is a function that satisfies the functional equation f ( x + 1 ) = x f ( x ) {\displaystyle f(x+1)=xf(x)} and the initial value f ( 1 ) = 1. {\displaystyle f(1)=1.} There are many functions that satisfy these conditions, but the gamma function is the unique one that is meromorphic in the whole complex plane, and logarithmically convex for x real and positive (Bohr–Mollerup theorem). == Examples == Recurrence relations can be seen as functional equations in functions over the integers or natural numbers, in which the differences between terms' indexes can be seen as an application of the shift operator. 
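Both characterizations mentioned above, the logarithmic equation and the gamma recurrence, can be spot-checked with the standard library; the sample points are arbitrary:

```python
import math

# log(xy) = log(x) + log(y)
assert abs(math.log(6.0) - (math.log(2.0) + math.log(3.0))) < 1e-12

# f(x+1) = x f(x) with f(1) = 1, satisfied by the gamma function
assert abs(math.gamma(1.0) - 1.0) < 1e-12
for x in (0.5, 1.0, 2.5, 4.0):
    assert abs(math.gamma(x + 1) - x * math.gamma(x)) < 1e-9
```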
For example, the recurrence relation defining the Fibonacci numbers, F n = F n − 1 + F n − 2 {\displaystyle F_{n}=F_{n-1}+F_{n-2}} , where F 0 = 0 {\displaystyle F_{0}=0} and F 1 = 1 {\displaystyle F_{1}=1} . Other examples include:
- f ( x + P ) = f ( x ) {\displaystyle f(x+P)=f(x)} , which characterizes the periodic functions
- f ( x ) = f ( − x ) {\displaystyle f(x)=f(-x)} , which characterizes the even functions, and likewise f ( x ) = − f ( − x ) {\displaystyle f(x)=-f(-x)} , which characterizes the odd functions
- f ( f ( x ) ) = g ( x ) {\displaystyle f(f(x))=g(x)} , which characterizes the functional square roots of the function g
- f ( x + y ) = f ( x ) + f ( y ) {\displaystyle f(x+y)=f(x)+f(y)\,\!} (Cauchy's functional equation), satisfied by linear maps. The equation may, contingent on the axiom of choice, also have other pathological nonlinear solutions, whose existence can be proven with a Hamel basis for the real numbers
- f ( x + y ) = f ( x ) f ( y ) , {\displaystyle f(x+y)=f(x)f(y),\,\!} satisfied by all exponential functions. Like Cauchy's additive functional equation, this too may have pathological, discontinuous solutions
- f ( x y ) = f ( x ) + f ( y ) {\displaystyle f(xy)=f(x)+f(y)\,\!} , satisfied by all logarithmic functions and, over coprime integer arguments, additive functions
- f ( x y ) = f ( x ) f ( y ) {\displaystyle f(xy)=f(x)f(y)\,\!} , satisfied by all power functions and, over coprime integer arguments, multiplicative functions
- f ( x + y ) + f ( x − y ) = 2 [ f ( x ) + f ( y ) ] {\displaystyle f(x+y)+f(x-y)=2[f(x)+f(y)]\,\!} (quadratic equation or parallelogram law)
- f ( ( x + y ) / 2 ) = ( f ( x ) + f ( y ) ) / 2 {\displaystyle f((x+y)/2)=(f(x)+f(y))/2\,\!} (Jensen's functional equation)
- g ( x + y ) + g ( x − y ) = 2 [ g ( x ) g ( y ) ] {\displaystyle g(x+y)+g(x-y)=2[g(x)g(y)]\,\!} (d'Alembert's functional equation)
- f ( h ( x ) ) = h ( x + 1 ) {\displaystyle f(h(x))=h(x+1)\,\!} (Abel equation)
- f ( h ( x ) ) = c f ( x ) {\displaystyle f(h(x))=cf(x)\,\!} (Schröder's equation)
- f ( h ( x ) ) = ( f ( x ) ) c {\displaystyle f(h(x))=(f(x))^{c}\,\!} (Böttcher's equation)
- f ( h ( x ) ) = h ′ ( x ) f ( x ) {\displaystyle f(h(x))=h'(x)f(x)\,\!} (Julia's equation)
- f ( x y ) = ∑ g l ( x ) h l ( y ) {\displaystyle f(xy)=\sum g_{l}(x)h_{l}(y)\,\!} (Levi-Civita)
- f ( x + y ) = f ( x ) g ( y ) + f ( y ) g ( x ) {\displaystyle f(x+y)=f(x)g(y)+f(y)g(x)\,\!} (sine addition formula and hyperbolic sine addition formula)
- g ( x + y ) = g ( x ) g ( y ) − f ( y ) f ( x ) {\displaystyle g(x+y)=g(x)g(y)-f(y)f(x)\,\!} (cosine addition formula)
- g ( x + y ) = g ( x ) g ( y ) + f ( y ) f ( x ) {\displaystyle g(x+y)=g(x)g(y)+f(y)f(x)\,\!} (hyperbolic cosine addition formula)
The commutative and associative laws are functional equations. 
In its familiar form, the associative law is expressed by writing the binary operation in infix notation, ( a ∘ b ) ∘ c = a ∘ ( b ∘ c ) , {\displaystyle (a\circ b)\circ c=a\circ (b\circ c)~,} but if we write f(a, b) instead of a ○ b then the associative law looks more like a conventional functional equation, f ( f ( a , b ) , c ) = f ( a , f ( b , c ) ) . {\displaystyle f(f(a,b),c)=f(a,f(b,c)).\,\!} The functional equation f ( s ) = 2 s π s − 1 sin ⁡ ( π s 2 ) Γ ( 1 − s ) f ( 1 − s ) {\displaystyle f(s)=2^{s}\pi ^{s-1}\sin \left({\frac {\pi s}{2}}\right)\Gamma (1-s)f(1-s)} is satisfied by the Riemann zeta function. The capital Γ denotes the gamma function. The gamma function is the unique solution of the following system of three equations: f ( x ) = f ( x + 1 ) x {\displaystyle f(x)={f(x+1) \over x}} f ( y ) f ( y + 1 2 ) = π 2 2 y − 1 f ( 2 y ) {\displaystyle f(y)f\left(y+{\frac {1}{2}}\right)={\frac {\sqrt {\pi }}{2^{2y-1}}}f(2y)} f ( z ) f ( 1 − z ) = π sin ⁡ ( π z ) {\displaystyle f(z)f(1-z)={\pi \over \sin(\pi z)}} (Euler's reflection formula) The functional equation f ( a z + b c z + d ) = ( c z + d ) k f ( z ) {\displaystyle f\left({az+b \over cz+d}\right)=(cz+d)^{k}f(z)} where a, b, c, d are integers satisfying a d − b c = 1 {\displaystyle ad-bc=1} , i.e. | a b c d | {\displaystyle {\begin{vmatrix}a&b\\c&d\end{vmatrix}}} = 1, defines f to be a modular form of weight k. One feature that all of the examples listed above have in common is that, in each case, two or more known functions (sometimes multiplication by a constant, sometimes addition of two variables, sometimes the identity function) are inside the argument of the unknown functions to be solved for. 
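Several of the equations above (d'Alembert's equation, the sine addition formula, and two of the gamma identities just listed) can be verified numerically at sample points; the sample values in this sketch are arbitrary:

```python
import math

x, y = 0.9, 0.4
# d'Alembert: g(x+y) + g(x-y) = 2 g(x) g(y), with g = cos
assert abs(math.cos(x + y) + math.cos(x - y)
           - 2 * math.cos(x) * math.cos(y)) < 1e-12
# sine addition formula: f(x+y) = f(x)g(y) + f(y)g(x), with f = sin, g = cos
assert abs(math.sin(x + y)
           - (math.sin(x) * math.cos(y) + math.sin(y) * math.cos(x))) < 1e-12

# Euler's reflection formula: Γ(z) Γ(1-z) = π / sin(πz)
for z in (0.25, 0.5, 0.8):
    assert abs(math.gamma(z) * math.gamma(1 - z)
               - math.pi / math.sin(math.pi * z)) < 1e-9

# duplication formula: Γ(w) Γ(w + 1/2) = √π / 2^(2w-1) · Γ(2w)
w = 1.3
lhs = math.gamma(w) * math.gamma(w + 0.5)
rhs = math.sqrt(math.pi) / 2**(2 * w - 1) * math.gamma(2 * w)
assert abs(lhs - rhs) < 1e-9
```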
When it comes to asking for all solutions, it may be the case that conditions from mathematical analysis should be applied; for example, in the case of the Cauchy equation mentioned above, the solutions that are continuous functions are the 'reasonable' ones, while other solutions that are not likely to have practical application can be constructed (by using a Hamel basis for the real numbers as vector space over the rational numbers). The Bohr–Mollerup theorem is another well-known example. === Involutions === The involutions are characterized by the functional equation f ( f ( x ) ) = x {\displaystyle f(f(x))=x} . These appear in Babbage's functional equation (1820), f ( f ( x ) ) = 1 − ( 1 − x ) = x . {\displaystyle f(f(x))=1-(1-x)=x\,.} Other involutions, and solutions of the equation, include f ( x ) = a − x , {\displaystyle f(x)=a-x\,,} f ( x ) = a x , {\displaystyle f(x)={\frac {a}{x}}\,,} and f ( x ) = b − x 1 + c x , {\displaystyle f(x)={\frac {b-x}{1+cx}}~,} which includes the previous three as special cases or limits. == Solution == One method of solving elementary functional equations is substitution. Some solutions to functional equations have exploited surjectivity, injectivity, oddness, and evenness. Some functional equations have been solved with the use of ansatzes, mathematical induction. Some classes of functional equations can be solved by computer-assisted techniques. In dynamic programming a variety of successive approximation methods are used to solve Bellman's functional equation, including methods based on fixed point iterations. == See also == Functional equation (L-function) Bellman equation Dynamic programming Implicit function Functional differential equation == Notes == == References == János Aczél, Lectures on Functional Equations and Their Applications, Academic Press, 1966, reprinted by Dover Publications, ISBN 0486445232. János Aczél & J. Dhombres, Functional Equations in Several Variables, Cambridge University Press, 1989. C. 
Efthimiou, Introduction to Functional Equations, AMS, 2011, ISBN 978-0-8218-5314-6. Pl. Kannappan, Functional Equations and Inequalities with Applications, Springer, 2009. Marek Kuczma, Introduction to the Theory of Functional Equations and Inequalities, second edition, Birkhäuser, 2009. Henrik Stetkær, Functional Equations on Groups, first edition, World Scientific Publishing, 2013. Christopher G. Small (3 April 2007). Functional Equations and How to Solve Them. Springer Science & Business Media. ISBN 978-0-387-48901-8. == External links == Functional Equations: Exact Solutions at EqWorld: The World of Mathematical Equations. Functional Equations: Index at EqWorld: The World of Mathematical Equations. IMO Compendium text (archived) on functional equations in problem solving.
Wikipedia/Functional_equation
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f ( x ) = x 2 + 1 ; {\displaystyle f(x)=x^{2}+1;} in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f ( x ) = x 2 + 1 , {\displaystyle f(x)=x^{2}+1,} then f ( 4 ) = 4 2 + 1 = 17. {\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. 
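The evaluation example above carries over directly to code; a trivial Python sketch:

```python
def f(x):
    # The example function f(x) = x**2 + 1 from the text.
    return x ** 2 + 1

# Function evaluation: computing the value of f at a particular argument.
assert f(4) == 4 ** 2 + 1 == 17
```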
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details. == Definition == A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f ( x ) . {\displaystyle y=f(x).} In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain. A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y . {\displaystyle f:X\to Y.} One may write x ↦ y {\displaystyle x\mapsto y} instead of y = f ( x ) {\displaystyle y=f(x)} , where the symbol ↦ {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming it. For example, the square function is the function x ↦ x 2 . {\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that, without some (possibly difficult) computation, one knows only that the domain of a specific function is contained in a larger set. For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f.
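Determining the domain of x ↦ 1/f(x) amounts to excluding the zeros of f; a sketch using an invented example function:

```python
f = lambda x: x ** 2 - 1        # zeros at x = -1 and x = 1

def reciprocal(x):
    # x ↦ 1/f(x), defined only where f(x) != 0.
    if f(x) == 0:
        raise ValueError(f"1/f is undefined at x = {x} (a zero of f)")
    return 1 / f(x)

assert reciprocal(2) == 1 / 3   # x = 2 lies in the domain
try:
    reciprocal(1)               # x = 1 is a zero of f, outside the domain
except ValueError:
    pass
```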
This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function. A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S. === Formal definition === The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y . {\displaystyle X\times Y.} Thus, the above definition may be formalized as follows. A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions: For every x {\displaystyle x} in X {\displaystyle X} there exists y {\displaystyle y} in Y {\displaystyle Y} such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} If ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( x , z ) ∈ R , {\displaystyle (x,z)\in R,} then y = z . 
{\displaystyle y=z.} This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation): A function is formed by three sets, the domain X , {\displaystyle X,} the codomain Y , {\displaystyle Y,} and the graph R {\displaystyle R} that satisfy the three following conditions. R ⊆ { ( x , y ) ∣ x ∈ X , y ∈ Y } {\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}} ∀ x ∈ X , ∃ y ∈ Y , ( x , y ) ∈ R {\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R\qquad } ( x , y ) ∈ R ∧ ( x , z ) ∈ R ⟹ y = z {\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z\qquad } === Partial functions === Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X , {\displaystyle x\in X,} there is at most one y in Y such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} Using functional notation, this means that, given x ∈ X , {\displaystyle x\in X,} either f ( x ) {\displaystyle f(x)} is in Y, or it is undefined. The set of the elements of X such that f ( x ) {\displaystyle f(x)} is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function. In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain. In calculus, a real-valued function of a real variable or real function is a partial function from the set R {\displaystyle \mathbb {R} } of the real numbers to itself.
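For finite sets, the two conditions in the formal definition above (totality and single-valuedness) can be checked mechanically; dropping the totality check yields the corresponding test for a partial function. A sketch with invented helper names:

```python
def is_single_valued(R):
    # Uniqueness condition: no x is related to two different values y != z.
    return all(b == d for (a, b) in R for (c, d) in R if a == c)

def is_function(R, X, Y):
    # R is the graph of a (total) function X -> Y: contained in X x Y,
    # single-valued, and defined on every element of X.
    contained = all(a in X and b in Y for (a, b) in R)
    total = all(any(a == x for (a, b) in R) for x in X)
    return contained and total and is_single_valued(R)

X, Y = {1, 2, 3}, {"a", "b"}
assert is_function({(1, "a"), (2, "a"), (3, "b")}, X, Y)
assert not is_function({(1, "a"), (1, "b"), (3, "b")}, X, Y)  # not single-valued
assert not is_function({(1, "a"), (2, "a")}, X, Y)            # not total: 3 unmapped
assert is_single_valued({(1, "a"), (2, "a")})                 # but a valid partial function
```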
Given a real function f : x ↦ f ( x ) {\displaystyle f:x\mapsto f(x)} , its multiplicative inverse x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but not its multiplicative inverse. Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers C {\displaystyle \mathbb {C} } . The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1 / ζ ( z ) {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis. In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem). === Multivariate functions === A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function of n variables is a function whose domain is a set of n-tuples.
For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate function over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in ( 1 , 2 , … , n ) . {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} instead of f ( ( x 1 , … , x n ) ) . {\displaystyle f((x_{1},\ldots ,x_{n})).} Given n sets X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} such that x 1 ∈ X 1 , … , x n ∈ X n {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} and denoted X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain: f : U → Y , {\displaystyle f:U\to Y,} where the domain U has the form U ⊆ X 1 × ⋯ × X n . {\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all the X i {\displaystyle X_{i}} are equal to the set R {\displaystyle \mathbb {R} } of the real numbers or to the set C {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables. == Notation == There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below.
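Viewing multiplication as a function on pairs, as discussed above, can be made literal in code; a minimal sketch:

```python
def mul(pair):
    # Multiplication of integers as a function whose domain is a set of
    # ordered pairs (2-tuples) of integers.
    x, y = pair
    return x * y

# Functional notation omits the inner parentheses, writing f(x1, ..., xn)
# for f((x1, ..., xn)); here the tuple is passed explicitly.
assert mul((6, 7)) == 42
```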
=== Functional notation === The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f ( x ) , sin ⁡ ( 3 ) , or f ( x 2 + 1 ) . {\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ( x 2 + 1 {\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly, such as in "let f ( x ) = sin ⁡ ( x 2 + 1 ) {\displaystyle f(x)=\sin(x^{2}+1)} ". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f ( x ) {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation. === Arrow notation === Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to".
For example, x ↦ x + 1 {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of R {\displaystyle \mathbb {R} } are implied. The domain and codomain can also be explicitly stated, for example: sqr : Z → Z x ↦ x 2 . {\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose f : X × X → Y ; ( x , t ) ↦ f ( x , t ) {\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)} is a function in two variables, and we want to refer to a partially applied function X → Y {\displaystyle X\to Y} produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} using the arrow notation. The expression x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). === Index notation === Index notation may be used instead of functional notation. That is, instead of writing f (x), one writes f x . {\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element f n {\displaystyle f_{n}} is called the nth element of the sequence. The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem.
For example, the map x ↦ f ( x , t ) {\displaystyle x\mapsto f(x,t)} (see above) would be denoted f t {\displaystyle f_{t}} using index notation, if we define the collection of maps f t {\displaystyle f_{t}} by the formula f t ( x ) = f ( x , t ) {\displaystyle f_{t}(x)=f(x,t)} for all x , t ∈ X {\displaystyle x,t\in X} . === Dot notation === In the notation x ↦ f ( x ) , {\displaystyle x\mapsto f(x),} the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, a ( ⋅ ) 2 {\displaystyle a(\cdot )^{2}} may stand for the function x ↦ a x 2 {\displaystyle x\mapsto ax^{2}} , and ∫ a ( ⋅ ) f ( u ) d u {\textstyle \int _{a}^{\,(\cdot )}f(u)\,du} may stand for a function defined by an integral with variable upper bound: x ↦ ∫ a x f ( u ) d u {\textstyle x\mapsto \int _{a}^{x}f(u)\,du} . === Specialized notations === There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. 
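Fixing a parameter, as in the arrow- and index-notation examples above, corresponds to partial application in programming; a sketch (the two-variable function f here is an invented stand-in):

```python
from functools import partial

def f(x, t):
    # An assumed two-variable function, standing in for f : X × X → Y.
    return x + 2 * t

# Arrow notation  x ↦ f(x, t0)  with the parameter t0 fixed at 5:
g = lambda x: f(x, 5)
# Index notation f_t: a family of one-argument maps indexed by the parameter t.
f_t = {t: partial(f, t=t) for t in range(3)}

assert g(3) == 13
assert f_t[2](3) == f(3, 2) == 7
```

Whether one writes the fixed argument inside the call (arrow style) or as an index selecting a member of a family is purely a matter of notation; both produce the same one-argument function.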
=== Functions of more than one variable === In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers ( x , y ) {\displaystyle (x,y)} to the sum of their squares, x 2 + y 2 {\displaystyle x^{2}+y^{2}} . Such a function is commonly written as f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f ( w , x , y ) {\displaystyle f(w,x,y)} , f ( w , x , y , z ) {\displaystyle f(w,x,y,z)} . == Other terms == A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. == Specifying a function == Given a function f {\displaystyle f} , by definition, to each element x {\displaystyle x} of the domain of the function f {\displaystyle f} , there is a unique element associated to it, the value f ( x ) {\displaystyle f(x)} of f {\displaystyle f} at x {\displaystyle x} . 
There are several ways to specify or describe how x {\displaystyle x} is related to f ( x ) {\displaystyle f(x)} , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f {\displaystyle f} . === By listing function values === On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = { 1 , 2 , 3 } {\displaystyle A=\{1,2,3\}} , then one can define a function f : A → R {\displaystyle f:A\to \mathbb {R} } by f ( 1 ) = 2 , f ( 2 ) = 3 , f ( 3 ) = 4. {\displaystyle f(1)=2,f(2)=3,f(3)=4.} === By a formula === Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, f {\displaystyle f} can be defined by the formula f ( n ) = n + 1 {\displaystyle f(n)=n+1} , for n ∈ { 1 , 2 , 3 } {\displaystyle n\in \{1,2,3\}} . When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. 
For example, f ( x ) = 1 + x 2 {\displaystyle f(x)={\sqrt {1+x^{2}}}} defines a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } whose domain is R , {\displaystyle \mathbb {R} ,} because 1 + x 2 {\displaystyle 1+x^{2}} is always positive if x is a real number. On the other hand, f ( x ) = 1 − x 2 {\displaystyle f(x)={\sqrt {1-x^{2}}}} defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions can be classified by the nature of formulas that define them: A quadratic function is a function that may be written f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} where a, b, c are constants. More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f ( x ) = x 3 − 3 x − 1 {\displaystyle f(x)=x^{3}-3x-1} and f ( x ) = ( x − 1 ) ( x 3 + 1 ) + 2 x 2 − 1 {\displaystyle f(x)=(x-1)(x^{3}+1)+2x^{2}-1} are polynomial functions of x {\displaystyle x} . A rational function is the same, with divisions also allowed, such as f ( x ) = x − 1 x + 1 , {\displaystyle f(x)={\frac {x-1}{x+1}},} and f ( x ) = 1 x + 1 + 3 x − 2 x − 1 . {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.} An algebraic function is the same, with nth roots and roots of polynomials also allowed. An elementary function is the same, with logarithms and exponential functions allowed. === Inverse and implicit functions === A function f : X → Y , {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f − 1 : Y → X {\displaystyle f^{-1}:Y\to X} that maps y ∈ Y {\displaystyle y\in Y} to the element x ∈ X {\displaystyle x\in X} such that y = f(x). 
For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive real numbers. If a function f : X → Y {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets E ⊆ X {\displaystyle E\subseteq X} and F ⊆ Y {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, and thus has an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E , {\displaystyle x\in E,} there is some y ∈ Y {\displaystyle y\in Y} such that x R y. If one has a criterion allowing one to select such a y for every x ∈ E , {\displaystyle x\in E,} this defines a function f : E → Y , {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ± 1, these two values both become equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved for y, giving y = ± 1 − x 2 , {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible.
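The two implicit functions determined by the unit circle can be written down and checked directly; a minimal sketch:

```python
import math

# The unit circle x**2 + y**2 == 1 defines two implicit functions on [-1, 1]:
y_pos = lambda x: math.sqrt(1 - x ** 2)    # codomain [0, +inf)
y_neg = lambda x: -math.sqrt(1 - x ** 2)   # codomain (-inf, 0]

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert math.isclose(x ** 2 + y_pos(x) ** 2, 1)
    assert math.isclose(x ** 2 + y_neg(x) ** 2, 1)
assert y_pos(1.0) == y_neg(1.0) == 0.0     # the two branches meet at x = ±1
```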
For example, the relation y 5 + y + x = 0 {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has R {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. === Using differential calculus === Many functions can be defined as the antiderivative of another function. This is the case for the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e x = ∑ n = 0 ∞ x n n ! {\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}} . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function of a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows further enlarging the domain to include almost the whole complex plane.
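The power-series definition of the exponential above can be approximated by truncating the series; a quick check against the standard library:

```python
import math

def exp_partial_sum(x, terms):
    # Partial sum of the power series  e**x = sum(x**n / n!)  for n = 0 .. terms-1.
    return sum(x ** n / math.factorial(n) for n in range(terms))

# The series converges quickly to math.exp for moderate x.
assert math.isclose(exp_partial_sum(1.0, 20), math.e)
assert math.isclose(exp_partial_sum(2.5, 40), math.exp(2.5))
```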
This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. === By recurrence === Functions whose domain is the nonnegative integers, known as sequences, are sometimes defined by recurrence relations. The factorial function on the nonnegative integers ( n ↦ n ! {\displaystyle n\mapsto n!} ) is a basic example, as it can be defined by the recurrence relation n ! = n ( n − 1 ) ! for n > 0 , {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. {\displaystyle 0!=1.} == Representing a function == A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. === Graphs and plots === Given a function f : X → Y , {\displaystyle f:X\to Y,} its graph is, formally, the set G = { ( x , f ( x ) ) ∣ x ∈ X } . {\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element ( x , y ) ∈ G {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates ( x , x 2 ) {\displaystyle (x,x^{2})} for x ∈ R , {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well-known parabola.
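The graph as a set of pairs can be materialized for a finite sample of the domain; a sketch for the square function discussed above:

```python
# A finite sample of the graph G = {(x, f(x))} of the square function.
f = lambda x: x ** 2
G = {(x, f(x)) for x in range(-3, 4)}

assert (2, 4) in G and (-2, 4) in G       # the parabola is symmetric about x = 0
assert all(y == x ** 2 for (x, y) in G)   # every pair lies on the graph
```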
If the same quadratic function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates ( r , θ ) = ( x , x 2 ) , {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral. === Tables === A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : { 1 , … , 5 } 2 → R {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as f ( x , y ) = x y {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table. On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might list values rounded to 6 decimal places. Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. === Bar chart === A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). == General properties == This section describes general properties of functions that are independent of specific properties of the domain and the codomain. === Standard functions === There are a number of standard functions that occur frequently: For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X.
The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function ∅ → X {\displaystyle \varnothing \to X} is not equal to ∅ → Y {\displaystyle \varnothing \to Y} if and only if X ≠ Y {\displaystyle X\neq Y} , although their graphs are both the empty set. For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set. Given a function f : X → Y , {\displaystyle f:X\to Y,} the canonical surjection of f onto its image f ( X ) = { f ( x ) ∣ x ∈ X } {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x). For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself. The identity function on a set X, often denoted by idX, is the inclusion of X into itself. === Function composition === Given two functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z {\displaystyle g\circ f:X\rightarrow Z} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) . {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of g ∘ f {\displaystyle g\circ f} is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right. The composition g ∘ f {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. 
Even when both g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then g ( f ( x ) ) = x 2 + 1 {\displaystyle g(f(x))=x^{2}+1} and f ( g ( x ) ) = ( x + 1 ) 2 {\displaystyle f(g(x))=(x+1)^{2}} agree just for x = 0. {\displaystyle x=0.} The function composition is associative in the sense that, if one of ( h ∘ g ) ∘ f {\displaystyle (h\circ g)\circ f} and h ∘ ( g ∘ f ) {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) . {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write h ∘ g ∘ f . {\displaystyle h\circ g\circ f.} The identity functions id X {\displaystyle \operatorname {id} _{X}} and id Y {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has f ∘ id X = id Y ∘ f = f . {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} === Image and preimage === Let f : X → Y . {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f ( A ) = { f ( x ) ∣ x ∈ A } . {\displaystyle f(A)=\{f(x)\mid x\in A\}.} The image of f is the image of the whole domain, that is, f(X). It is also called the range of f, although the term range may also refer to the codomain. On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. 
In symbols, the preimage of y is denoted by f − 1 ( y ) {\displaystyle f^{-1}(y)} and is given by the equation f − 1 ( y ) = { x ∈ X ∣ f ( x ) = y } . {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f − 1 ( B ) {\displaystyle f^{-1}(B)} and is given by the equation f − 1 ( B ) = { x ∈ X ∣ f ( x ) ∈ B } . {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of { 4 , 9 } {\displaystyle \{4,9\}} under the square function is the set { − 3 , − 2 , 2 , 3 } {\displaystyle \{-3,-2,2,3\}} . By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f − 1 ( 0 ) = Z {\displaystyle f^{-1}(0)=\mathbb {Z} } . If f : X → Y {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: A ⊆ B ⟹ f ( A ) ⊆ f ( B ) {\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)} C ⊆ D ⟹ f − 1 ( C ) ⊆ f − 1 ( D ) {\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)} A ⊆ f − 1 ( f ( A ) ) {\displaystyle A\subseteq f^{-1}(f(A))} C ⊇ f ( f − 1 ( C ) ) {\displaystyle C\supseteq f(f^{-1}(C))} f ( f − 1 ( f ( A ) ) ) = f ( A ) {\displaystyle f(f^{-1}(f(A)))=f(A)} f − 1 ( f ( f − 1 ( C ) ) ) = f − 1 ( C ) {\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)} The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted f − 1 . 
{\displaystyle f^{-1}.} In this case f − 1 ( C ) {\displaystyle f^{-1}(C)} may denote either the image by f − 1 {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation f ( A ) {\displaystyle f(A)} and f − 1 ( C ) {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as { x , { x } } . {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets f [ A ] , f − 1 [ C ] {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements. === Injective, surjective and bijective functions === Let f : X → Y {\displaystyle f:X\to Y} be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. Proof: If f is injective, for defining g, one chooses an element x 0 {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by g ( y ) = x {\displaystyle g(y)=x} if y = f ( x ) {\displaystyle y=f(x)} and g ( y ) = x 0 {\displaystyle g(y)=x_{0}} if y ∉ f ( X ) . {\displaystyle y\not \in f(X).} Conversely, if g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} and y = f ( x ) , {\displaystyle y=f(x),} then x = g ( y ) , {\displaystyle x=g(y),} and thus f − 1 ( y ) = { x } . 
{\displaystyle f^{-1}(y)=\{x\}.} The function f is surjective (or onto, or is a surjection) if its range f ( X ) {\displaystyle f(X)} equals its codomain Y {\displaystyle Y} , that is, if, for each element y {\displaystyle y} of the codomain, there exists some element x {\displaystyle x} of the domain such that f ( x ) = y {\displaystyle f(x)=y} (in other words, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of every y ∈ Y {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that f ∘ g = id Y , {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g ( y ) = x , {\displaystyle g(y)=x,} where x {\displaystyle x} is an arbitrarily chosen element of f − 1 ( y ) . {\displaystyle f^{-1}(y).} The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X {\displaystyle g\circ f=\operatorname {id} _{X}} and f ∘ g = id Y . {\displaystyle f\circ g=\operatorname {id} _{Y}.} (Contrary to the case of surjections, this does not require the axiom of choice; the proof is straightforward.) Every function f : X → Y {\displaystyle f:X\to Y} may be factorized as the composition i ∘ s {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
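On finite sets, images, preimages, injectivity, and surjectivity are all directly computable. A sketch in Python (the helper names image, preimage, is_injective, and is_surjective are ours), reusing the square-function example from the text:

```python
def image(f, A):
    """f(A) = {f(x) : x in A}."""
    return {f(x) for x in A}

def preimage(f, B, X):
    """f^{-1}(B) = {x in X : f(x) in B}."""
    return {x for x in X if f(x) in B}

def is_injective(f, X):
    """Injective iff every fiber f^{-1}(y) contains at most one element."""
    return all(len(preimage(f, {y}, X)) <= 1 for y in image(f, X))

def is_surjective(f, X, Y):
    """Surjective iff the image of the domain equals the codomain."""
    return image(f, X) == set(Y)

square = lambda x: x * x
Z9 = set(range(-9, 10))   # a finite stand-in for the integers

print(preimage(square, {4, 9}, Z9))         # {-3, -2, 2, 3}
print(is_injective(square, Z9))             # False: square(-2) == square(2)
print(is_surjective(lambda n: -n, Z9, Z9))  # True: negation is a bijection here
```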
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into Y", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated argument, the one-letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which also have the advantage of being more symmetrical. === Restriction and extension === If f : X → Y {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of f {\displaystyle f} to S, denoted f | S {\displaystyle f|_{S}} , is the function from S to Y defined by f | S ( x ) = f ( x ) {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f {\displaystyle f} such that f | S {\displaystyle f|_{S}} is injective, then the canonical surjection of f | S {\displaystyle f|_{S}} onto its image f | S ( S ) = f ( S ) {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from f ( S ) {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together.
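The partial-inverse construction via restriction can be checked numerically. A sketch (the helper name restrict is ours; the check uses the standard math module's cos and acos):

```python
import math

def restrict(f, member):
    """The restriction f|_S, where membership in S is given by a predicate."""
    def g(x):
        if not member(x):
            raise ValueError("argument outside the restricted domain")
        return f(x)
    return g

# cos restricted to [0, pi] is injective; arccos inverts this restriction.
cos_on_0_pi = restrict(math.cos, lambda x: 0.0 <= x <= math.pi)
x = 2.0
print(math.acos(cos_on_0_pi(x)))  # recovers 2.0 (up to rounding)
```

Outside [0, π] the restricted function is simply undefined, which is exactly why acos(cos(x)) need not recover x there.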
Let X = ⋃ i ∈ I U i {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function f i : U i → Y {\displaystyle f_{i}:U_{i}\to Y} is defined on each U i {\displaystyle U_{i}} such that for each pair i , j {\displaystyle i,j} of indices, the restrictions of f i {\displaystyle f_{i}} and f j {\displaystyle f_{j}} to U i ∩ U j {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function f : X → Y {\displaystyle f:X\to Y} such that f | U i = f i {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, which allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h ( x ) = a x + b c x + d {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. Its domain is the set of all real numbers different from − d / c , {\displaystyle -d/c,} and its image is the set of all real numbers different from a / c . {\displaystyle a/c.} If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h ( ∞ ) = a / c {\displaystyle h(\infty )=a/c} and h ( − d / c ) = ∞ {\displaystyle h(-d/c)=\infty } . == In calculus == The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable.
In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second- or third-year college students in STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis. === Real function === A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ( f + g ) ( x ) = f ( x ) + g ( x ) ( f − g ) ( x ) = f ( x ) − g ( x ) ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) . {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by f g ( x ) = f ( x ) g ( x ) , {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g.
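These pointwise operations can be sketched as a single higher-order function (the helper name pointwise is ours):

```python
import operator

def pointwise(op, f, g):
    """Combine f and g pointwise: x -> op(f(x), g(x))."""
    return lambda x: op(f(x), g(x))

f = lambda x: x + 1
g = lambda x: 2 * x

s = pointwise(operator.add, f, g)      # (f + g)(x) = 3x + 1
q = pointwise(operator.truediv, f, g)  # (f / g)(x), undefined where g(x) = 0

print(s(2))   # 7
print(q(2))   # 0.75
```

Evaluating q at 0 raises ZeroDivisionError, mirroring the removal of the zeros of g from the quotient's domain.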
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function x ↦ 1 x , {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of f ( x ) − f ( y ) x − y {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions, where the trigonometric functions are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function. Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations.
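For a monotonic function, values of the inverse can be computed numerically by bisection. A sketch checking that the exponential inverts the natural logarithm (the helper name invert_increasing is ours):

```python
import math

def invert_increasing(f, lo, hi, y, iters=200):
    """Solve f(x) = y for a strictly increasing f on [lo, hi] by bisection."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The inverse of the natural logarithm is the exponential function:
x = invert_increasing(math.log, 1e-9, 100.0, 2.0)
print(x)  # close to math.exp(2)
```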
For example, the sine and the cosine functions are the solutions of the linear differential equation y ″ + y = 0 {\displaystyle y''+y=0} such that sin ⁡ 0 = 0 , cos ⁡ 0 = 1 , ∂ sin ⁡ x ∂ x ( 0 ) = 1 , ∂ cos ⁡ x ∂ x ( 0 ) = 0. {\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.} === Vector-valued function === When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of R n {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of R n {\displaystyle \mathbb {R} ^{n}} , such as manifolds. These vector-valued functions are given the name vector fields. == Function space == In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
== Multi-valued functions == Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend by continuity the function to a much larger domain. Frequently, for a starting point x 0 , {\displaystyle x_{0},} there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number x 0 , {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted x 0 , {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted − x 0 . {\displaystyle -{\sqrt {x_{0}}}.} These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of x 3 − 3 x − y = 0 {\displaystyle x^{3}-3x-y=0} (see the figure on the right). For y = 0 one may choose either 0 , 3 , or − 3 {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1].
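The three branch values can be located numerically by scanning for sign changes and refining by bisection. A sketch (the helper name real_roots is ours) for y = 0, where the roots of x³ − 3x = 0 are 0 and ±√3:

```python
import math

def real_roots(g, lo, hi, n=6000, iters=80):
    """Locate simple real roots of g on [lo, hi] via sign changes + bisection."""
    roots = []
    xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
    for a, b in zip(xs, xs[1:]):
        ga, gb = g(a), g(b)
        if ga == 0.0:
            roots.append(a)            # grid point is an exact root
        elif ga * gb < 0:              # sign change: a root lies in (a, b)
            for _ in range(iters):
                m = (a + b) / 2
                if g(a) * g(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
    return roots

y = 0.0
roots = real_roots(lambda x: x**3 - 3*x - y, -3.0, 3.0)
print(roots)  # approximately [-sqrt(3), 0, sqrt(3)]
```

Each of the three values corresponds to one of the three single-valued branches described above.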
As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. == In the foundations of mathematics == The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function x ↦ { x } . {\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set.
In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. == In computer science == In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, as it is based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, ones that perform input/output.
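The if_then_else described above can be sketched in Python, with the three arguments passed as nullary functions (thunks) so that only the selected branch is ever evaluated:

```python
def if_then_else(cond, then_branch, else_branch):
    """Purely functional conditional over three nullary functions."""
    return then_branch() if cond() else else_branch()

result = if_then_else(lambda: 2 + 2 == 4,
                      lambda: "then taken",
                      lambda: "else taken")
print(result)  # then taken
```

Because the branches are functions rather than already-computed values, the unselected branch's effects and cost are avoided entirely, which is what makes this conditional compatible with referential transparency.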
There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.), every sequence of symbols may be coded as a sequence of bits, a bit sequence can be interpreted as the binary representation of an integer.
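As an illustration of building functions from the basic ones, addition can be defined by primitive recursion from the successor function. This is only a sketch: a true general recursive function is given by recursion equations, which the loop below merely unfolds.

```python
def successor(n):
    return n + 1

def add(m, n):
    """Addition by primitive recursion:
    add(m, 0) = m;  add(m, succ(k)) = successor(add(m, k))."""
    result = m
    for _ in range(n):       # unfolds the recursion n times
        result = successor(result)
    return result

print(add(3, 4))  # 7
```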
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the theory (α-equivalence, β-reduction, and η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus. == External links == The Wolfram Functions – website giving formulae and visualizations of many mathematical functions NIST Digital Library of Mathematical Functions
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function. == History == George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief; however, at that time he did not include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized. Development of the simplex method was evolutionary and happened over a period of about a year. After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that he had mistaken for homework in his professor Jerzy Neyman's class (and actually later solved) was applicable to finding an algorithm for linear programs.
This problem involved finding the existence of Lagrange multipliers for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of Lebesgue integrals. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the Simplex method would be very efficient. == Overview == The simplex algorithm operates on linear programs in the canonical form maximize c T x {\textstyle \mathbf {c^{T}} \mathbf {x} } subject to A x ≤ b {\displaystyle A\mathbf {x} \leq \mathbf {b} } and x ≥ 0 {\displaystyle \mathbf {x} \geq 0} with c = ( c 1 , … , c n ) {\displaystyle \mathbf {c} =(c_{1},\,\dots ,\,c_{n})} the coefficients of the objective function, ( ⋅ ) T {\displaystyle (\cdot )^{\mathrm {T} }} is the matrix transpose, and x = ( x 1 , … , x n ) {\displaystyle \mathbf {x} =(x_{1},\,\dots ,\,x_{n})} are the variables of the problem, A {\displaystyle A} is a p×n matrix, and b = ( b 1 , … , b p ) {\displaystyle \mathbf {b} =(b_{1},\,\dots ,\,b_{p})} . There is a straightforward process to convert any linear program into one in standard form, so using this form of linear programs results in no loss of generality. In geometric terms, the feasible region defined by all values of x {\displaystyle \mathbf {x} } such that A x ≤ b {\textstyle A\mathbf {x} \leq \mathbf {b} } and ∀ i , x i ≥ 0 {\displaystyle \forall i,x_{i}\geq 0} is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as basic feasible solution (BFS). It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points. 
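The extreme-point property can be verified by brute force on a small instance. A sketch with hypothetical data (maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x, y ≥ 0): every vertex is the intersection of two constraint boundaries, so enumerating pairwise intersections and keeping the feasible ones lists all candidates.

```python
from itertools import combinations

# Hypothetical toy instance: maximize 3x + 2y
# subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
c = [3, 2]
A = [[1, 1], [1, 3], [-1, 0], [0, -1]]   # x >= 0 rewritten as -x <= 0
b = [4, 6, 0, 0]

def feasible(p, eps=1e-9):
    return all(sum(a * v for a, v in zip(row, p)) <= bi + eps
               for row, bi in zip(A, b))

def intersect(r1, b1, r2, b2):
    """Intersection of the lines r1·p = b1 and r2·p = b2 (Cramer's rule)."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None                       # parallel boundaries
    return ((b1 * r2[1] - b2 * r1[1]) / det,
            (r1[0] * b2 - r2[0] * b1) / det)

vertices = []
for i, j in combinations(range(len(A)), 2):
    p = intersect(A[i], b[i], A[j], b[j])
    if p is not None and feasible(p):
        vertices.append(p)

best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # the maximum is attained at the vertex (4, 0), value 12
```

This enumeration is exactly the "finite computation" mentioned above, and also why it is impractical: the number of candidate vertices grows combinatorially with the number of constraints, which is what the simplex walk avoids.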
This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs. It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point. If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution). The algorithm always terminates because the number of vertices in the polytope is finite; moreover, since the walk always moves in the direction of increasing objective value, the hope is that the number of vertices visited will be small. The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.
== Standard form == The transformation of a linear program to one in standard form may be accomplished as follows. First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint x 1 ≥ 5 {\displaystyle x_{1}\geq 5} a new variable, y 1 {\displaystyle y_{1}} , is introduced with y 1 = x 1 − 5 x 1 = y 1 + 5 {\displaystyle {\begin{aligned}y_{1}=x_{1}-5\\x_{1}=y_{1}+5\end{aligned}}} The second equation may be used to eliminate x 1 {\displaystyle x_{1}} from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions. Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities x 2 + 2 x 3 ≤ 3 − x 4 + 3 x 5 ≥ 2 {\displaystyle {\begin{aligned}x_{2}+2x_{3}&\leq 3\\-x_{4}+3x_{5}&\geq 2\end{aligned}}} are replaced with x 2 + 2 x 3 + s 1 = 3 − x 4 + 3 x 5 − s 2 = 2 s 1 , s 2 ≥ 0 {\displaystyle {\begin{aligned}x_{2}+2x_{3}+s_{1}&=3\\-x_{4}+3x_{5}-s_{2}&=2\\s_{1},\,s_{2}&\geq 0\end{aligned}}} It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where ≥ appears such as the second one, some authors refer to the variable introduced as a surplus variable. Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways, one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution. The other is to replace the variable with the difference of two restricted variables. 
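The slack-variable step can be carried out mechanically: each row of Ax ≤ b gains one new non-negative variable, turning the system into [A | I]x′ = b. A sketch (the helper name add_slacks is ours; a ≥ constraint is first negated into ≤ form here, whereas the text's surplus-variable convention subtracts the new variable instead):

```python
def add_slacks(A, b):
    """Convert the inequalities A x <= b into equalities by appending
    one slack variable per row, i.e. build the block matrix [A | I]."""
    m = len(A)
    A_eq = [row + [1 if i == j else 0 for j in range(m)]
            for i, row in enumerate(A)]
    return A_eq, list(b)

# x2 + 2*x3 <= 3  and  -x4 + 3*x5 >= 2 (negated to x4 - 3*x5 <= -2):
A = [[1, 2], [1, -3]]
b = [3, -2]
A_eq, b_eq = add_slacks(A, b)
print(A_eq)  # [[1, 2, 1, 0], [1, -3, 0, 1]]
```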
For example, if z 1 {\displaystyle z_{1}} is unrestricted then write z 1 = z 1 + − z 1 − z 1 + , z 1 − ≥ 0 {\displaystyle {\begin{aligned}&z_{1}=z_{1}^{+}-z_{1}^{-}\\&z_{1}^{+},\,z_{1}^{-}\geq 0\end{aligned}}} The equation may be used to eliminate z 1 {\displaystyle z_{1}} from the linear program. When this process is complete the feasible region will be in the form A x = b , ∀ x i ≥ 0 {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} ,\,\forall \ x_{i}\geq 0} It is also useful to assume that the rank of A {\displaystyle \mathbf {A} } is the number of rows. This results in no loss of generality since otherwise either the system A x = b {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution. == Simplex tableau == A linear program in standard form can be represented as a tableau of the form [ 1 − c T 0 0 A b ] {\displaystyle {\begin{bmatrix}1&-\mathbf {c} ^{T}&0\\\mathbf {0} &\mathbf {A} &\mathbf {b} \end{bmatrix}}} The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as the vector b {\displaystyle \mathbf {b} } (different authors use different conventions as to the exact layout). If the columns of A {\displaystyle \mathbf {A} } can be rearranged so that it contains the identity matrix of order p {\displaystyle p} (the number of rows in A {\displaystyle \mathbf {A} } ) then the tableau is said to be in canonical form. The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries in b {\displaystyle \mathbf {b} } and this solution is a basic feasible solution. 
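Reading the basic feasible solution off a canonical tableau, as just described, can be sketched as follows (a minimal illustration; the tableau layout matches the convention above, with the objective row first and the right-hand side in the last column):

```python
def basic_feasible_solution(T):
    """Sketch: a variable whose column in the constraint rows is a unit
    vector is basic and equals the rhs entry of the row holding its 1;
    nonbasic variables are set to 0."""
    rows = [r[1:] for r in T[1:]]     # drop the objective row and column
    n = len(rows[0]) - 1              # last column is the rhs vector b
    x = [0] * n
    for j in range(n):
        col = [r[j] for r in rows]
        if col.count(1) == 1 and col.count(0) == len(col) - 1:
            x[j] = rows[col.index(1)][-1]
    return x

# The canonical tableau of the worked example later in this article:
# variables (x, y, z, s, t); s and t are basic with values 10 and 15.
T = [[1, 2, 3, 4, 0, 0, 0],
     [0, 3, 2, 1, 1, 0, 10],
     [0, 2, 5, 3, 0, 1, 15]]
```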
The algebraic interpretation here is that the coefficients of the linear equation represented by each row are either 0 {\displaystyle 0} , 1 {\displaystyle 1} , or some other number. Each row will have 1 {\displaystyle 1} column with value 1 {\displaystyle 1} , p − 1 {\displaystyle p-1} columns with coefficients 0 {\displaystyle 0} , and the remaining columns with some other coefficients (these other variables represent our non-basic variables). By setting the values of the non-basic variables to zero we ensure in each row that the value of the variable represented by a 1 {\displaystyle 1} in its column is equal to the b {\displaystyle b} value at that row. Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form. Let [ 1 − c B T − c D T 0 0 I D b ] {\displaystyle {\begin{bmatrix}1&-\mathbf {c} _{B}^{T}&-\mathbf {c} _{D}^{T}&0\\0&I&\mathbf {D} &\mathbf {b} \end{bmatrix}}} be a tableau in canonical form. Additional row-addition transformations can be applied to remove the coefficients cTB from the objective function. This process is called pricing out and results in a canonical tableau [ 1 0 − c ¯ D T z B 0 I D b ] {\displaystyle {\begin{bmatrix}1&0&-{\bar {\mathbf {c} }}_{D}^{T}&z_{B}\\0&I&\mathbf {D} &\mathbf {b} \end{bmatrix}}} where zB is the value of the objective function at the corresponding basic feasible solution. The updated coefficients, also known as relative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables. == Pivot operations == The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation. First, a nonzero pivot element is selected in a nonbasic column. 
The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in a row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called the entering variable, and the variable being replaced leaves the set of basic variables and is called the leaving variable. The tableau is still in canonical form but with the set of basic variables changed by one element. == Algorithm == Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations, each of which gives an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution. === Entering variable selection === Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is increased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive. If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and several entering variable choice rules such as the Devex algorithm have been developed. If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. 
It is easily seen to be optimal since the objective row now corresponds to an equation of the form z ( x ) = z B + non-positive terms corresponding to non-basic variables {\displaystyle z(\mathbf {x} )=z_{B}+{\text{non-positive terms corresponding to non-basic variables}}} By changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the minimum of the objective function rather than the maximum. === Leaving variable selection === Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that the resulting solution be feasible. First, only positive entries in the pivot column are considered since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any non-negative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum. Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum. In other words, if the pivot column is c, then the pivot row r is chosen so that b r / a r c {\displaystyle b_{r}/a_{rc}\,} is the minimum over all r so that arc > 0. This is called the minimum ratio test. If there is more than one row for which the minimum is achieved then a dropping variable choice rule can be used to make the determination. 
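The selection rules and the pivot operation above can be combined into a minimal tableau simplex. The sketch below uses exact rational arithmetic and, for simplicity, takes the first positive objective-row entry as the entering column (the worked example that follows makes a different but equally valid choice); both reach the same optimum:

```python
from fractions import Fraction

def simplex_min(T):
    """Minimal sketch of the tableau simplex, in the convention used above:
    row 0 is [1, -c^T, 0 | 0]. Pivot while the objective row has a positive
    entry; on termination the minimum of Z is the objective row's rhs."""
    T = [[Fraction(v) for v in row] for row in T]
    while True:
        # entering column: first positive entry in the objective row
        c = next((j for j in range(1, len(T[0]) - 1) if T[0][j] > 0), None)
        if c is None:
            return T[0][-1]                        # optimal: minimum of Z
        candidates = [i for i in range(1, len(T)) if T[i][c] > 0]
        if not candidates:
            raise ValueError("objective unbounded below")
        # leaving row: minimum ratio test b_r / a_rc over a_rc > 0
        r = min(candidates, key=lambda i: T[i][-1] / T[i][c])
        p = T[r][c]
        T[r] = [v / p for v in T[r]]               # scale pivot row
        for i in range(len(T)):                    # eliminate pivot column
            if i != r and T[i][c] != 0:
                f = T[i][c]
                T[i] = [a - f * b for a, b in zip(T[i], T[r])]

# The canonical tableau of the example in the next section:
T = [[1, 2, 3, 4, 0, 0, 0],
     [0, 3, 2, 1, 1, 0, 10],
     [0, 2, 5, 3, 0, 1, 15]]
```

Running `simplex_min(T)` returns the minimum value −20 found in the worked example below.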
=== Example === Consider the linear program Minimize Z = − 2 x − 3 y − 4 z {\displaystyle Z=-2x-3y-4z\,} Subject to 3 x + 2 y + z ≤ 10 2 x + 5 y + 3 z ≤ 15 x , y , z ≥ 0 {\displaystyle {\begin{aligned}3x+2y+z&\leq 10\\2x+5y+3z&\leq 15\\x,\,y,\,z&\geq 0\end{aligned}}} With the addition of slack variables s and t, this is represented by the canonical tableau [ 1 2 3 4 0 0 0 0 3 2 1 1 0 10 0 2 5 3 0 1 15 ] {\displaystyle {\begin{bmatrix}1&2&3&4&0&0&0\\0&3&2&1&1&0&10\\0&2&5&3&0&1&15\end{bmatrix}}} where columns 5 and 6 represent the basic variables s and t and the corresponding basic feasible solution is x = y = z = 0 , s = 10 , t = 15. {\displaystyle x=y=z=0,\,s=10,\,t=15.} Columns 2, 3, and 4 can be selected as pivot columns, for this example column 4 is selected. The values of z resulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. Performing the pivot produces [ 1 − 2 3 − 11 3 0 0 − 4 3 − 20 0 7 3 1 3 0 1 − 1 3 5 0 2 3 5 3 1 0 1 3 5 ] {\displaystyle {\begin{bmatrix}1&-{\frac {2}{3}}&-{\frac {11}{3}}&0&0&-{\frac {4}{3}}&-20\\0&{\frac {7}{3}}&{\frac {1}{3}}&0&1&-{\frac {1}{3}}&5\\0&{\frac {2}{3}}&{\frac {5}{3}}&1&0&{\frac {1}{3}}&5\end{bmatrix}}} Now columns 4 and 5 represent the basic variables z and s and the corresponding basic feasible solution is x = y = t = 0 , z = 5 , s = 5. {\displaystyle x=y=t=0,\,z=5,\,s=5.} For the next step, there are no positive entries in the objective row and in fact Z = − 20 + 2 3 x + 11 3 y + 4 3 t {\displaystyle Z=-20+{\frac {2}{3}}x+{\frac {11}{3}}y+{\frac {4}{3}}t} so the minimum value of Z is −20. == Finding an initial canonical tableau == In general, a linear program will not be given in the canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction of artificial variables. 
Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the slack variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called the Phase I problem. The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is called Phase II. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero. This implies that the feasible region for the original problem is empty, and so the original problem has no solution. === Example === Consider the linear program Minimize Z = − 2 x − 3 y − 4 z {\displaystyle Z=-2x-3y-4z\,} Subject to 3 x + 2 y + z = 10 2 x + 5 y + 3 z = 15 x , y , z ≥ 0 {\displaystyle {\begin{aligned}3x+2y+z&=10\\2x+5y+3z&=15\\x,\,y,\,z&\geq 0\end{aligned}}} It differs from the previous example by having equality instead of inequality constraints. The previous solution x = y = 0 , z = 5 {\displaystyle x=y=0\,,z=5} violates the first constraint. 
This new problem is represented by the (non-canonical) tableau [ 1 2 3 4 0 0 3 2 1 10 0 2 5 3 15 ] {\displaystyle {\begin{bmatrix}1&2&3&4&0\\0&3&2&1&10\\0&2&5&3&15\end{bmatrix}}} Introduce artificial variables u and v and objective function W = u + v, giving a new tableau [ 1 0 0 0 0 − 1 − 1 0 0 1 2 3 4 0 0 0 0 0 3 2 1 1 0 10 0 0 2 5 3 0 1 15 ] {\displaystyle {\begin{bmatrix}1&0&0&0&0&-1&-1&0\\0&1&2&3&4&0&0&0\\0&0&3&2&1&1&0&10\\0&0&2&5&3&0&1&15\end{bmatrix}}} The equation defining the original objective function is retained in anticipation of Phase II. By construction, u and v are both basic variables since they are part of the initial identity matrix. However, the objective function W currently assumes that u and v are both 0. In order to adjust the objective function to be the correct value where u = 10 and v = 15, add the third and fourth rows to the first row giving [ 1 0 5 7 4 0 0 25 0 1 2 3 4 0 0 0 0 0 3 2 1 1 0 10 0 0 2 5 3 0 1 15 ] {\displaystyle {\begin{bmatrix}1&0&5&7&4&0&0&25\\0&1&2&3&4&0&0&0\\0&0&3&2&1&1&0&10\\0&0&2&5&3&0&1&15\end{bmatrix}}} Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is [ 3 0 7 1 0 0 − 4 15 0 3 − 2 − 11 0 0 − 4 − 60 0 0 7 1 0 3 − 1 15 0 0 2 5 3 0 1 15 ] {\displaystyle {\begin{bmatrix}3&0&7&1&0&0&-4&15\\0&3&-2&-11&0&0&-4&-60\\0&0&7&1&0&3&-1&15\\0&0&2&5&3&0&1&15\end{bmatrix}}} Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get [ 7 0 0 0 0 − 7 − 7 0 0 7 0 − 25 0 2 − 10 − 130 0 0 7 1 0 3 − 1 15 0 0 0 11 7 − 2 3 25 ] {\displaystyle {\begin{bmatrix}7&0&0&0&0&-7&-7&0\\0&7&0&-25&0&2&-10&-130\\0&0&7&1&0&3&-1&15\\0&0&0&11&7&-2&3&25\end{bmatrix}}} The artificial variables are now 0 and they may be dropped giving a canonical tableau equivalent to the original problem: [ 7 0 − 25 0 − 130 0 7 1 0 15 0 0 11 7 25 ] {\displaystyle {\begin{bmatrix}7&0&-25&0&-130\\0&7&1&0&15\\0&0&11&7&25\end{bmatrix}}} This is, fortuitously, already optimal and the optimum value for the 
original linear program is −130/7. This value is "worse" than −20, which is to be expected for a problem that is more constrained. == Advanced topics == === Implementation === The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (m + 1)-by-(m + n + 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue of B being a subset of the columns of [A, I]. This implementation is referred to as the "standard simplex algorithm". The storage and computation overhead is such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems. In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable and the right-hand side. The latter can be updated using the pivotal column, and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrix B and a matrix-vector product using A. These observations motivate the "revised simplex algorithm", for which implementations are distinguished by their invertible representation of B. In large linear-programming problems A is typically a sparse matrix and, when the resulting sparsity of B is exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method. Commercial simplex solvers are based on the revised simplex algorithm. === Degeneracy: stalling and cycling === If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. 
When this is always the case no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the basic variables is zero are called degenerate and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "stalling" is notable. Worse than stalling is the possibility that the same set of basic variables occurs twice, in which case the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare. A discussion of an example of practical cycling occurs in Padberg. Bland's rule prevents cycling and thus guarantees that the simplex algorithm always terminates. Another pivoting algorithm, the criss-cross algorithm, never cycles on linear programs. History-based pivot rules such as Zadeh's rule and Cunningham's rule also try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used, and then favoring the variables that have been used least often. === Efficiency in the worst case === The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as Fourier–Motzkin elimination. However, in 1972, Klee and Minty gave an example, the Klee–Minty cube, showing that the worst-case complexity of the simplex method as formulated by Dantzig is exponential time. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question whether there is a variation with polynomial time complexity, although sub-exponential pivot rules are known. 
In 2014, it was proved that a particular variant of the simplex method is NP-mighty, i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both NP-hard problems. At about the same time it was shown that there exists an artificial pivot rule for which computing its output is PSPACE-complete. In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule is PSPACE-complete. === Efficiency in practice === Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity. The simplex algorithm has polynomial-time average-case complexity under various probability distributions, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the random matrices. Another approach to studying "typical phenomena" uses Baire category theory from general topology, and to show that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps. Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation – are worst-case scenarios stable under a small change (in the sense of structural stability), or do they become tractable? This area of research, called smoothed analysis, was introduced specifically to study the simplex method. Indeed, the running time of the simplex method on input with noise is polynomial in the number of variables and the magnitude of the perturbations. 
== Other algorithms == Other algorithms for solving linear-programming problems are described in the linear-programming article. Another basis-exchange pivoting algorithm is the criss-cross algorithm. There are polynomial-time algorithms for linear programming that use interior point methods: these include Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and path-following algorithms. The Big-M method is an alternative strategy for solving a linear program, using a single-phase simplex. == Linear-fractional programming == Linear–fractional programming (LFP) is a generalization of linear programming (LP). In LP the objective function is a linear function, while the objective function of a linear–fractional program is a ratio of two linear functions. In other words, a linear program is a fractional–linear program in which the denominator is the constant function having the value one everywhere. A linear–fractional program can be solved by a variant of the simplex algorithm or by the criss-cross algorithm. == See also == == Notes == == References == Murty, Katta G. (1983). Linear programming. New York: John Wiley & Sons, Inc. pp. xix+482. ISBN 978-0-471-09725-9. MR 0720547. == Further reading == These introductions are written for students of computer science and operations research: Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 29.3: The simplex algorithm, pp. 790–804. Frederick S. Hillier and Gerald J. Lieberman: Introduction to Operations Research, 8th edition. McGraw-Hill. ISBN 0-07-123828-X Rardin, Ronald L. (1997). Optimization in operations research. Prentice Hall. p. 919. ISBN 978-0-02-398415-0. == External links == An Introduction to Linear Programming and the Simplex Algorithm by Spyros Reveliotis of the Georgia Institute of Technology. 
Greenberg, Harvey J., Klee–Minty Polytope Shows Exponential Time Complexity of Simplex Method the University of Colorado at Denver (1997) PDF download Simplex Method A tutorial for Simplex Method with examples (also two-phase and M-method). Mathstools Simplex Calculator from www.mathstools.com Example of Simplex Procedure for a Standard Linear Programming Problem by Thomas McFarland of the University of Wisconsin-Whitewater. PHPSimplex: online tool to solve Linear Programming Problems by Daniel Izquierdo and Juan José Ruiz of the University of Málaga (UMA, Spain) simplex-m Online Simplex Solver
Wikipedia/Simplex_algorithm
In mathematics, a category (sometimes called an abstract category to distinguish it from a concrete category) is a collection of "objects" that are linked by "arrows". A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. Category theory is a branch of mathematics that seeks to generalize all of mathematics in terms of categories, independent of what their objects and arrows represent. Virtually every branch of modern mathematics can be described in terms of categories, and doing so often reveals deep insights and similarities between seemingly different areas of mathematics. As such, category theory provides an alternative foundation for mathematics to set theory and other proposed axiomatic foundations. In general, the objects and arrows may be abstract entities of any kind, and the notion of category provides a fundamental and abstract way to describe mathematical entities and their relationships. In addition to formalizing mathematics, category theory is also used to formalize many other systems in computer science, such as the semantics of programming languages. Two categories are the same if they have the same collection of objects, the same collection of arrows, and the same associative method of composing any pair of arrows. Two different categories may also be considered "equivalent" for purposes of category theory, even if they do not have precisely the same structure. Well-known categories are denoted by a short capitalized word or abbreviation in bold or italics: examples include Set, the category of sets and set functions; Ring, the category of rings and ring homomorphisms; and Top, the category of topological spaces and continuous maps. All of the preceding categories have the identity map as identity arrows and composition as the associative operation on arrows. 
The classic and still much used text on category theory is Categories for the Working Mathematician by Saunders Mac Lane. Other references are given in the References below. The basic definitions in this article are contained within the first few chapters of any of these books. Any monoid can be understood as a special sort of category (with a single object whose self-morphisms are represented by the elements of the monoid), and so can any preorder. == Definition == There are many equivalent definitions of a category. One commonly used definition is as follows. A category C consists of a class ob(C) of objects, a class mor(C) of morphisms or arrows, a domain or source class function dom: mor(C) → ob(C), a codomain or target class function cod: mor(C) → ob(C), for every three objects a, b and c, a binary operation hom(a, b) × hom(b, c) → hom(a, c) called composition of morphisms. Here hom(a, b) denotes the subclass of morphisms f in mor(C) such that dom(f) = a and cod(f) = b. Morphisms in this subclass are written f : a → b, and the composite of f : a → b and g : b → c is often written as g ∘ f or gf. such that the following axioms hold: the associative law: if f : a → b, g : b → c and h : c → d then h ∘ (g ∘ f) = (h ∘ g) ∘ f, and the (left and right unit laws): for every object x, there exists a morphism 1x : x → x (some authors write idx) called the identity morphism for x, such that every morphism f : a → x satisfies 1x ∘ f = f, and every morphism g : x → b satisfies g ∘ 1x = g. We write f: a → b, and we say "f is a morphism from a to b". We write hom(a, b) (or homC(a, b) when there may be confusion about to which category hom(a, b) refers) to denote the hom-class of all morphisms from a to b. Some authors write the composite of morphisms in "diagrammatic order", writing f;g or fg instead of g ∘ f. From these axioms, one can prove that there is exactly one identity morphism for every object. 
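For a small finite category the axioms above can be checked mechanically. As an illustrative sketch (the data layout is ours, not a standard encoding), here is the category with two objects a and b and a single non-identity arrow f : a → b, verified against the unit and associative laws:

```python
# A finite category given by explicit data. Identity morphisms are named
# "1" + object name, and comp[(g, f)] records the composite g ∘ f, defined
# exactly when cod(f) == dom(g).
dom = {"1a": "a", "1b": "b", "f": "a"}
cod = {"1a": "a", "1b": "b", "f": "b"}
comp = {("1a", "1a"): "1a", ("1b", "1b"): "1b",
        ("f", "1a"): "f", ("1b", "f"): "f"}

def check_category():
    # unit laws: 1_cod(f) ∘ f = f and f ∘ 1_dom(f) = f
    for f in dom:
        if comp[("1" + cod[f], f)] != f or comp[(f, "1" + dom[f])] != f:
            return False
    # associativity: h ∘ (g ∘ f) = (h ∘ g) ∘ f for all composable triples
    for f in dom:
        for g in dom:
            for h in dom:
                if cod[f] == dom[g] and cod[g] == dom[h]:
                    if comp[(h, comp[(g, f)])] != comp[(comp[(h, g)], f)]:
                        return False
    return True
```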
Often the map assigning each object its identity morphism is treated as an extra part of the structure of a category, namely a class function i: ob(C) → mor(C). Some authors use a slight variant of the definition in which each object is identified with the corresponding identity morphism. This stems from the idea that the fundamental data of categories are morphisms and not objects. In fact, categories can be defined without reference to objects at all using a partial binary operation with additional properties. == Small and large categories == A category C is called small if both ob(C) and mor(C) are actually sets and not proper classes, and large otherwise. A locally small category is a category such that for all objects a and b, the hom-class hom(a, b) is a set, called a homset. Many important categories in mathematics (such as the category of sets), although not small, are at least locally small. Since, in small categories, the objects form a set, a small category can be viewed as an algebraic structure similar to a monoid but without requiring closure properties. Large categories on the other hand can be used to create "structures" of algebraic structures. == Examples == The class of all sets (as objects) together with all functions between them (as morphisms), where the composition of morphisms is the usual function composition, forms a large category, Set. It is the most basic and the most commonly used category in mathematics. The category Rel consists of all sets (as objects) with binary relations between them (as morphisms). Abstracting from relations instead of functions yields allegories, a special class of categories. Any class can be viewed as a category whose only morphisms are the identity morphisms. Such categories are called discrete. For any given set I, the discrete category on I is the small category that has the elements of I as objects and only the identity morphisms as morphisms. Discrete categories are the simplest kind of category. 
Any preordered set (P, ≤) forms a small category, where the objects are the members of P and the morphisms are arrows pointing from x to y when x ≤ y. Furthermore, if ≤ is antisymmetric, there can be at most one morphism between any two objects. The existence of identity morphisms and the composability of the morphisms are guaranteed by the reflexivity and the transitivity of the preorder. By the same argument, any partially ordered set and any equivalence relation can be seen as a small category. Any ordinal number can be seen as a category when viewed as an ordered set. Any monoid (any algebraic structure with a single associative binary operation and an identity element) forms a small category with a single object x. (Here, x is any fixed set.) The morphisms from x to x are precisely the elements of the monoid, the identity morphism of x is the identity of the monoid, and the categorical composition of morphisms is given by the monoid operation. Several definitions and theorems about monoids may be generalized for categories. Similarly, any group can be seen as a category with a single object in which every morphism is invertible, that is, for every morphism f there is a morphism g that is both left and right inverse to f under composition. A morphism that is invertible in this sense is called an isomorphism. A groupoid is a category in which every morphism is an isomorphism. Groupoids are generalizations of groups, group actions and equivalence relations. From the point of view of category theory, the only difference between a groupoid and a group is that a groupoid may have more than one object, whereas a group has exactly one. 
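The monoid-as-category construction above can be made concrete. A minimal sketch, using the monoid (Z/4, +) viewed as a category with a single object whose morphisms are the residues 0–3; the category axioms then reduce to the monoid axioms:

```python
# The monoid (Z/4, +) as a one-object category: the morphisms x -> x are
# the elements 0..3, composition is addition modulo 4, and the identity
# morphism of the single object is the monoid identity 0.
elements = range(4)
compose = lambda g, f: (g + f) % 4      # g ∘ f
identity = 0                            # the identity morphism 1_x

# left and right unit laws
assert all(compose(identity, f) == f and compose(f, identity) == f
           for f in elements)
# associativity of composition
assert all(compose(h, compose(g, f)) == compose(compose(h, g), f)
           for f in elements for g in elements for h in elements)
```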
Consider a topological space X and fix a base point x 0 {\displaystyle x_{0}} of X. Then π 1 ( X , x 0 ) {\displaystyle \pi _{1}(X,x_{0})} , the fundamental group of X at the base point x 0 {\displaystyle x_{0}} , is a set with the structure of a group. If the base point x 0 {\displaystyle x_{0}} is allowed to run over all points of X and the union of all the π 1 ( X , x 0 ) {\displaystyle \pi _{1}(X,x_{0})} is taken, then the resulting set has only the structure of a groupoid, called the fundamental groupoid of X: two loops (under the equivalence relation of homotopy) may not have the same base point, so they cannot be multiplied with each other. In the language of categories, this means that two morphisms may not have the same source object (or target object, because in this case the source object and the target object of any morphism coincide: they are its base point), so they cannot be composed with each other. Any directed graph generates a small category: the objects are the vertices of the graph, and the morphisms are the paths in the graph (augmented with loops as needed) where composition of morphisms is concatenation of paths. Such a category is called the free category generated by the graph. The class of all preordered sets with order-preserving functions (i.e., monotone-increasing functions) as morphisms forms a category, Ord. It is a concrete category, i.e. a category obtained by adding some type of structure onto Set, and requiring that morphisms are functions that respect this added structure. The class of all groups with group homomorphisms as morphisms and function composition as the composition operation forms a large category, Grp. Like Ord, Grp is a concrete category. The category Ab, consisting of all abelian groups and their group homomorphisms, is a full subcategory of Grp, and the prototype of an abelian category. 
The class of all graphs forms another concrete category, where morphisms are graph homomorphisms (i.e., mappings between graphs which send vertices to vertices and edges to edges in a way that preserves all adjacency and incidence relations). Other examples of concrete categories are given by the following table. Fiber bundles with bundle maps between them form a concrete category. The category Cat consists of all small categories, with functors between them as morphisms. == Construction of new categories == === Dual category === Any category C can itself be considered as a new category in a different way: the objects are the same as those in the original category but the arrows are those of the original category reversed. This is called the dual or opposite category and is denoted Cop. === Product categories === If C and D are categories, one can form the product category C × D: the objects are pairs consisting of one object from C and one from D, and the morphisms are also pairs, consisting of one morphism in C and one in D. Such pairs can be composed componentwise. == Types of morphisms == A morphism f : a → b is called a monomorphism (or monic) if it is left-cancellable, i.e. fg1 = fg2 implies g1 = g2 for all morphisms g1, g2 : x → a. an epimorphism (or epic) if it is right-cancellable, i.e. g1f = g2f implies g1 = g2 for all morphisms g1, g2 : b → x. a bimorphism if it is both a monomorphism and an epimorphism. a retraction if it has a right inverse, i.e. if there exists a morphism g : b → a with fg = 1b. a section if it has a left inverse, i.e. if there exists a morphism g : b → a with gf = 1a. an isomorphism if it has an inverse, i.e. if there exists a morphism g : b → a with fg = 1b and gf = 1a. an endomorphism if a = b. The class of endomorphisms of a is denoted end(a). For locally small categories, end(a) is a set and forms a monoid under morphism composition. an automorphism if f is both an endomorphism and an isomorphism. 
The class of automorphisms of a is denoted aut(a). For locally small categories, it forms a group under morphism composition called the automorphism group of a. Every retraction is an epimorphism. Every section is a monomorphism. The following three statements are equivalent: f is a monomorphism and a retraction; f is an epimorphism and a section; f is an isomorphism. Relations among morphisms (such as fg = h) can most conveniently be represented with commutative diagrams, where the objects are represented as points and the morphisms as arrows. == Types of categories == In many categories, e.g. Ab or VectK, the hom-sets hom(a, b) are not just sets but actually abelian groups, and the composition of morphisms is compatible with these group structures; i.e. is bilinear. Such a category is called preadditive. If, furthermore, the category has all finite products and coproducts, it is called an additive category. If all morphisms have a kernel and a cokernel, and all epimorphisms are cokernels and all monomorphisms are kernels, then we speak of an abelian category. A typical example of an abelian category is the category of abelian groups. A category is called complete if all small limits exist in it. The categories of sets, abelian groups and topological spaces are complete. A category is called cartesian closed if it has finite direct products and a morphism defined on a finite product can always be represented by a morphism defined on just one of the factors. Examples include Set and CPO, the category of complete partial orders with Scott-continuous functions. A topos is a certain type of cartesian closed category in which all of mathematics can be formulated (just like classically all of mathematics is formulated in the category of sets). A topos can also be used to represent a logical theory. == See also == Enriched category Higher category theory Quantaloid Table of mathematical symbols Space (mathematics) Structure (mathematics) == Notes == == References ==
Wikipedia/Object_(category_theory)
In category theory, a branch of mathematics, duality is a correspondence between the properties of a category C and the dual properties of the opposite category Cop. Given a statement regarding the category C, by interchanging the source and target of each morphism as well as interchanging the order of composing two morphisms, a corresponding dual statement is obtained regarding the opposite category Cop. (Cop is formed by reversing every morphism of C.) Duality, as such, is the assertion that truth is invariant under this operation on statements. In other words, if a statement S is true about C, then its dual statement is true about Cop. Also, if a statement is false about C, then its dual has to be false about Cop. (Put compactly: S is true for C if and only if its dual is true for Cop.) Given a concrete category C, it is often the case that the opposite category Cop per se is abstract; Cop need not be a category that arises from mathematical practice. In this case, another category D is said to be in duality with C if D and Cop are equivalent as categories. When C and its opposite Cop are equivalent, the category is called self-dual. == Formal definition == We define the elementary language of category theory as the two-sorted first order language with objects and morphisms as distinct sorts, together with the relations of an object being the source or target of a morphism and a symbol for composing two morphisms. Let σ be any statement in this language. We form the dual σop as follows: Interchange each occurrence of "source" in σ with "target". Interchange the order of composing morphisms. That is, replace each occurrence of g ∘ f {\displaystyle g\circ f} with f ∘ g {\displaystyle f\circ g} . Informally, these conditions state that the dual of a statement is formed by reversing arrows and compositions. Duality is the observation that σ is true for some category C if and only if σop is true for Cop. 
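The two rewriting steps in the formal definition are purely syntactic, so they can even be carried out mechanically. A toy Python sketch, operating on a miniature statement language (entirely illustrative):

```python
import re

def dualize(statement):
    """Form the dual statement: swap "source" and "target", and reverse
    each composition "g∘f" to "f∘g" (toy, purely syntactic rewriting)."""
    swapped = (statement.replace("source", "\0")
                        .replace("target", "source")
                        .replace("\0", "target"))
    return re.sub(r"(\w+)∘(\w+)", r"\2∘\1", swapped)

s = "f has source a and target b, and h = g∘f"
assert dualize(s) == "f has target a and source b, and h = f∘g"
assert dualize(dualize(s)) == s   # dualizing twice restores the statement
```

That dualization is an involution mirrors the fact that (Cop)op = C.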
== Examples == A morphism f : A → B {\displaystyle f\colon A\to B} is a monomorphism if f ∘ g = f ∘ h {\displaystyle f\circ g=f\circ h} implies g = h {\displaystyle g=h} . Performing the dual operation, we get the statement that g ∘ f = h ∘ f {\displaystyle g\circ f=h\circ f} implies g = h . {\displaystyle g=h.} This reversed morphism f : B → A {\displaystyle f\colon B\to A} is by definition precisely an epimorphism. In short, the property of being a monomorphism is dual to the property of being an epimorphism. Applying duality, this means that a morphism in some category C is a monomorphism if and only if the reverse morphism in the opposite category Cop (formed by reversing all morphisms in C) is an epimorphism. An example comes from reversing the direction of inequalities in a partial order. So, if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤new by x ≤new y if and only if y ≤ x. This example on orders is a special case, since partial orders correspond to a certain kind of category in which Hom(A,B) (the set of all morphisms from A to B in a category) can have at most one element. In applications to logic, this then looks like a very general description of negation (that is, proofs run in the opposite direction). For example, if we take the opposite of a lattice, we will find that meets and joins have their roles interchanged. This is an abstract form of De Morgan's laws, or of duality applied to lattices. Limits and colimits are dual notions. Fibrations and cofibrations are examples of dual notions in algebraic topology and homotopy theory. In this context, the duality is often called Eckmann–Hilton duality. == See also == Adjoint functor Dual object Duality (mathematics) Opposite category Pulation square == References ==
Wikipedia/Dual_(category_theory)
In category theory, the coproduct, or categorical sum, is a construction which includes as examples the disjoint union of sets and of topological spaces, the free product of groups, and the direct sum of modules and vector spaces. The coproduct of a family of objects is essentially the "least specific" object to which each object in the family admits a morphism. It is the category-theoretic dual notion to the categorical product, which means the definition is the same as the product but with all arrows reversed. Despite this seemingly innocuous change in the name and notation, coproducts can be and typically are dramatically different from products within a given category. == Definition == Let C {\displaystyle C} be a category and let X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} be objects of C . {\displaystyle C.} An object is called the coproduct of X 1 {\displaystyle X_{1}} and X 2 , {\displaystyle X_{2},} written X 1 ⊔ X 2 , {\displaystyle X_{1}\sqcup X_{2},} or X 1 ⊕ X 2 , {\displaystyle X_{1}\oplus X_{2},} or sometimes simply X 1 + X 2 , {\displaystyle X_{1}+X_{2},} if there exist morphisms i 1 : X 1 → X 1 ⊔ X 2 {\displaystyle i_{1}:X_{1}\to X_{1}\sqcup X_{2}} and i 2 : X 2 → X 1 ⊔ X 2 {\displaystyle i_{2}:X_{2}\to X_{1}\sqcup X_{2}} that satisfy the following universal property: for any object Y {\displaystyle Y} and any morphisms f 1 : X 1 → Y {\displaystyle f_{1}:X_{1}\to Y} and f 2 : X 2 → Y , {\displaystyle f_{2}:X_{2}\to Y,} there exists a unique morphism f : X 1 ⊔ X 2 → Y {\displaystyle f:X_{1}\sqcup X_{2}\to Y} such that f 1 = f ∘ i 1 {\displaystyle f_{1}=f\circ i_{1}} and f 2 = f ∘ i 2 . {\displaystyle f_{2}=f\circ i_{2}.} That is, the following diagram commutes: The unique arrow f {\displaystyle f} making this diagram commute may be denoted f 1 ⊔ f 2 , {\displaystyle f_{1}\sqcup f_{2},} f 1 ⊕ f 2 , {\displaystyle f_{1}\oplus f_{2},} f 1 + f 2 , {\displaystyle f_{1}+f_{2},} or [ f 1 , f 2 ] . 
{\displaystyle \left[f_{1},f_{2}\right].} The morphisms i 1 {\displaystyle i_{1}} and i 2 {\displaystyle i_{2}} are called canonical injections, although they need not be injections or even monic. The definition of a coproduct can be extended to an arbitrary family of objects indexed by a set J . {\displaystyle J.} The coproduct of the family { X j : j ∈ J } {\displaystyle \left\{X_{j}:j\in J\right\}} is an object X {\displaystyle X} together with a collection of morphisms i j : X j → X {\displaystyle i_{j}:X_{j}\to X} such that, for any object Y {\displaystyle Y} and any collection of morphisms f j : X j → Y {\displaystyle f_{j}:X_{j}\to Y} there exists a unique morphism f : X → Y {\displaystyle f:X\to Y} such that f j = f ∘ i j . {\displaystyle f_{j}=f\circ i_{j}.} That is, the following diagram commutes for each j ∈ J {\displaystyle j\in J} : The coproduct X {\displaystyle X} of the family { X j } {\displaystyle \left\{X_{j}\right\}} is often denoted ∐ j ∈ J X j {\displaystyle \coprod _{j\in J}X_{j}} or ⨁ j ∈ J X j . {\displaystyle \bigoplus _{j\in J}X_{j}.} Sometimes the morphism f : X → Y {\displaystyle f:X\to Y} may be denoted ∐ j ∈ J f j {\displaystyle \coprod _{j\in J}f_{j}} to indicate its dependence on the individual f j {\displaystyle f_{j}} s. == Examples == The coproduct in the category of sets is simply the disjoint union with the maps ij being the inclusion maps. Unlike direct products, coproducts in other categories are not all obviously based on the notion for sets, because unions don't behave well with respect to preserving operations (e.g. the union of two groups need not be a group), and so coproducts in different categories can be dramatically different from each other. For example, the coproduct in the category of groups, called the free product, is quite complicated. 
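The disjoint union just described can be written out concretely. In this Python sketch (the helper names are made up), elements are tagged with their origin, and copair builds the mediating morphism [f1, f2]:

```python
def coproduct(X1, X2):
    """Disjoint union of two sets with canonical injections i1, i2,
    implemented by tagging each element with its cofactor."""
    X = {(1, x) for x in X1} | {(2, x) for x in X2}
    i1 = lambda x: (1, x)
    i2 = lambda x: (2, x)
    return X, i1, i2

def copair(f1, f2):
    """The mediating morphism [f1, f2] : X1 ⊔ X2 -> Y determined by
    [f1, f2] ∘ i1 = f1 and [f1, f2] ∘ i2 = f2."""
    return lambda t: f1(t[1]) if t[0] == 1 else f2(t[1])

X1, X2 = {0, 1}, {0, 1, 2}            # overlapping sets
X, i1, i2 = coproduct(X1, X2)
assert len(X) == len(X1) + len(X2)    # disjoint union, unlike the plain union

f1, f2 = (lambda x: ("left", x)), (lambda x: ("right", x))
f = copair(f1, f2)
assert all(f(i1(x)) == f1(x) for x in X1)   # f ∘ i1 = f1
assert all(f(i2(x)) == f2(x) for x in X2)   # f ∘ i2 = f2
```

Uniqueness holds because every element of the disjoint union is hit by exactly one injection, so f has no freedom left.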
On the other hand, in the category of abelian groups (and equally for vector spaces), the coproduct, called the direct sum, consists of the elements of the direct product which have only finitely many nonzero terms. (It therefore coincides exactly with the direct product in the case of finitely many factors.) Given a commutative ring R, the coproduct in the category of commutative R-algebras is the tensor product. In the category of (noncommutative) R-algebras, the coproduct is a quotient of the tensor algebra (see Free product of associative algebras). In the case of topological spaces, coproducts are disjoint unions with their disjoint union topologies. That is, it is a disjoint union of the underlying sets, and the open sets are sets open in each of the spaces, in a rather evident sense. In the category of pointed spaces, fundamental in homotopy theory, the coproduct is the wedge sum (which amounts to joining a collection of spaces with base points at a common base point). The concept of disjoint union secretly underlies the above examples: the direct sum of abelian groups is the group generated by the "almost" disjoint union (disjoint union of all nonzero elements, together with a common zero), similarly for vector spaces: the space spanned by the "almost" disjoint union; the free product for groups is generated by the set of all letters from a similar "almost disjoint" union where no two elements from different sets are allowed to commute. This pattern holds for any variety in the sense of universal algebra. The coproduct in the category of Banach spaces with short maps is the l1 sum, which cannot be so easily conceptualized as an "almost disjoint" sum, but whose unit ball is almost-disjointly generated by the unit balls of the cofactors. The coproduct of a poset category is the join operation. == Discussion == The coproduct construction given above is actually a special case of a colimit in category theory. 
The coproduct in a category C {\displaystyle C} can be defined as the colimit of any functor from a discrete category J {\displaystyle J} into C {\displaystyle C} . Not every family { X j } {\displaystyle \lbrace X_{j}\rbrace } will have a coproduct in general, but if it does, then the coproduct is unique in a strong sense: if i j : X j → X {\displaystyle i_{j}:X_{j}\rightarrow X} and k j : X j → Y {\displaystyle k_{j}:X_{j}\rightarrow Y} are two coproducts of the family { X j } {\displaystyle \lbrace X_{j}\rbrace } , then (by the definition of coproducts) there exists a unique isomorphism f : X → Y {\displaystyle f:X\rightarrow Y} such that f ∘ i j = k j {\displaystyle f\circ i_{j}=k_{j}} for each j ∈ J {\displaystyle j\in J} . As with any universal property, the coproduct can be understood as a universal morphism. Let Δ : C → C × C {\displaystyle \Delta :C\rightarrow C\times C} be the diagonal functor which assigns to each object X {\displaystyle X} the ordered pair ( X , X ) {\displaystyle \left(X,X\right)} and to each morphism f : X → Y {\displaystyle f:X\rightarrow Y} the pair ( f , f ) {\displaystyle \left(f,f\right)} . Then the coproduct X + Y {\displaystyle X+Y} in C {\displaystyle C} is given by a universal morphism to the functor Δ {\displaystyle \Delta } from the object ( X , Y ) {\displaystyle \left(X,Y\right)} in C × C {\displaystyle C\times C} . The coproduct indexed by the empty set (that is, an empty coproduct) is the same as an initial object in C {\displaystyle C} . If J {\displaystyle J} is a set such that all coproducts for families indexed with J {\displaystyle J} exist, then it is possible to choose the coproducts in a compatible fashion so that the coproduct turns into a functor C J → C {\displaystyle C^{J}\rightarrow C} . The coproduct of the family { X j } {\displaystyle \lbrace X_{j}\rbrace } is then often denoted by ∐ j ∈ J X j {\displaystyle \coprod _{j\in J}X_{j}} and the maps i j {\displaystyle i_{j}} are known as the natural injections. 
Letting Hom C ⁡ ( U , V ) {\displaystyle \operatorname {Hom} _{C}\left(U,V\right)} denote the set of all morphisms from U {\displaystyle U} to V {\displaystyle V} in C {\displaystyle C} (that is, a hom-set in C {\displaystyle C} ), we have a natural isomorphism Hom C ⁡ ( ∐ j ∈ J X j , Y ) ≅ ∏ j ∈ J Hom C ⁡ ( X j , Y ) {\displaystyle \operatorname {Hom} _{C}\left(\coprod _{j\in J}X_{j},Y\right)\cong \prod _{j\in J}\operatorname {Hom} _{C}(X_{j},Y)} given by the bijection which maps every tuple of morphisms ( f j ) j ∈ J ∈ ∏ j ∈ J Hom ⁡ ( X j , Y ) {\displaystyle (f_{j})_{j\in J}\in \prod _{j\in J}\operatorname {Hom} (X_{j},Y)} (a product in Set, the category of sets, which is the Cartesian product, so it is a tuple of morphisms) to the morphism ∐ j ∈ J f j ∈ Hom ⁡ ( ∐ j ∈ J X j , Y ) . {\displaystyle \coprod _{j\in J}f_{j}\in \operatorname {Hom} \left(\coprod _{j\in J}X_{j},Y\right).} That this map is a surjection follows from the commutativity of the diagram: any morphism f {\displaystyle f} is the coproduct of the tuple ( f ∘ i j ) j ∈ J . {\displaystyle (f\circ i_{j})_{j\in J}.} That it is an injection follows from the universal construction which stipulates the uniqueness of such maps. The naturality of the isomorphism is also a consequence of the diagram. Thus the contravariant hom-functor changes coproducts into products. Stated another way, the hom-functor, viewed as a functor from the opposite category C op {\displaystyle C^{\operatorname {op} }} to Set is continuous; it preserves limits (a coproduct in C {\displaystyle C} is a product in C op {\displaystyle C^{\operatorname {op} }} ). If J {\displaystyle J} is a finite set, say J = { 1 , … , n } {\displaystyle J=\lbrace 1,\ldots ,n\rbrace } , then the coproduct of objects X 1 , … , X n {\displaystyle X_{1},\ldots ,X_{n}} is often denoted by X 1 ⊕ … ⊕ X n {\displaystyle X_{1}\oplus \ldots \oplus X_{n}} . 
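For finite sets the displayed isomorphism reduces to a counting identity, since |Hom(A, Y)| = |Y|^|A|. A quick Python check (hom is an ad hoc helper, not a library function):

```python
from itertools import product

def hom(A, Y):
    """The finite hom-set: all functions A -> Y, as dicts."""
    A, Y = sorted(A), sorted(Y)
    return [dict(zip(A, vals)) for vals in product(Y, repeat=len(A))]

X1, X2, Y = {"a", "b"}, {"c"}, {0, 1, 2}
disjoint = {(1, x) for x in X1} | {(2, x) for x in X2}   # X1 ⊔ X2

# The bijection sends f : X1 ⊔ X2 -> Y to the pair (f∘i1, f∘i2);
# the cardinalities agree, as they must for a bijection to exist:
assert len(hom(disjoint, Y)) == len(hom(X1, Y)) * len(hom(X2, Y))   # 27 == 9 * 3
```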
Suppose all finite coproducts exist in C, coproduct functors have been chosen as above, and 0 denotes the initial object of C corresponding to the empty coproduct. We then have natural isomorphisms X ⊕ ( Y ⊕ Z ) ≅ ( X ⊕ Y ) ⊕ Z ≅ X ⊕ Y ⊕ Z {\displaystyle X\oplus (Y\oplus Z)\cong (X\oplus Y)\oplus Z\cong X\oplus Y\oplus Z} X ⊕ 0 ≅ 0 ⊕ X ≅ X {\displaystyle X\oplus 0\cong 0\oplus X\cong X} X ⊕ Y ≅ Y ⊕ X . {\displaystyle X\oplus Y\cong Y\oplus X.} These properties are formally similar to those of a commutative monoid; a category with finite coproducts is an example of a symmetric monoidal category. If the category has a zero object Z {\displaystyle Z} , then we have a unique morphism X → Z {\displaystyle X\rightarrow Z} (since Z {\displaystyle Z} is terminal) and thus a morphism X ⊕ Y → Z ⊕ Y {\displaystyle X\oplus Y\rightarrow Z\oplus Y} . Since Z {\displaystyle Z} is also initial, we have a canonical isomorphism Z ⊕ Y ≅ Y {\displaystyle Z\oplus Y\cong Y} as in the preceding paragraph. We thus have morphisms X ⊕ Y → X {\displaystyle X\oplus Y\rightarrow X} and X ⊕ Y → Y {\displaystyle X\oplus Y\rightarrow Y} , by which we infer a canonical morphism X ⊕ Y → X × Y {\displaystyle X\oplus Y\rightarrow X\times Y} . This may be extended by induction to a canonical morphism from any finite coproduct to the corresponding product. This morphism need not in general be an isomorphism; in Grp it is a proper epimorphism while in Set* (the category of pointed sets) it is a proper monomorphism. In any preadditive category, this morphism is an isomorphism and the corresponding object is known as the biproduct. A category with all finite biproducts is known as a semiadditive category. If all families of objects indexed by J {\displaystyle J} have coproducts in C {\displaystyle C} , then the coproduct comprises a functor C J → C {\displaystyle C^{J}\rightarrow C} . Note that, like the product, this functor is covariant. 
== See also == Product Limits and colimits Coequalizer Direct limit == References == Mac Lane, Saunders (1998). Categories for the Working Mathematician. Graduate Texts in Mathematics. Vol. 5 (2nd ed.). New York, NY: Springer-Verlag. ISBN 0-387-98403-8. Zbl 0906.18001. == External links == Interactive Web page which generates examples of coproducts in the category of finite sets. Written by Jocelyn Paine.
Wikipedia/Coproduct_(category_theory)
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory. Basic questions addressed by computability theory include: What does it mean for a function on the natural numbers to be computable? How can noncomputable functions be classified into a hierarchy based on their level of noncomputability? Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics. == Introduction == Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post. The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis" and "Turing's thesis". Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis: "Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). 
It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute notion to an interesting epistemological notion, i.e., one not depending on the formalism chosen." With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing, inspired by techniques Gödel had used in 1931 to prove his incompleteness theorems, independently demonstrated that the Entscheidungsproblem is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false. Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers. == Turing computability == The main form of computability studied in the field was introduced by Turing in 1936. A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. 
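For a concrete (if humble) instance of this definition, the set of perfect squares is computable: the following Python procedure, standing in for a Turing machine, halts on every input n with output 1 or 0.

```python
def decide_square(n):
    """Halts on every natural number n, outputting 1 if n is a
    perfect square and 0 otherwise -- a decider for a computable set."""
    k = 0
    while k * k < n:
        k += 1
    return 1 if k * k == n else 0

assert [decide_square(n) for n in range(10)] == [1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
```

The essential point is totality: unlike the semi-decision procedures discussed below, this loop provably terminates on every input.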
A function f from natural numbers to natural numbers is a (Turing) computable, or recursive function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines; for example the μ-recursive functions obtained from primitive recursion and the μ operator. The terminology for computable functions and sets is not completely standardized. The definition in terms of μ-recursive functions as well as a different definition of rekursiv functions by Gödel led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word Entscheidungsproblem which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland, it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare gave additional comments about the terminology. Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but by Cantor's theorem, there are uncountably many sets of natural numbers. Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a computably enumerable (c.e.) 
set, which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include recursively enumerable and semidecidable). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory. == Areas of research == Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them. === Relative computability and the Turing degrees === Computability theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939. An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot. Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively) computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then the sets are said to have the same Turing degree (also called degree of unsolvability). 
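The simulation idea behind computable enumerability, mentioned above for the halting problem, is dovetailing: at stage s, run programs 0 through s for s steps each and emit those seen to halt. The Python sketch below uses a toy stand-in for step-bounded simulation (program e "halts" after e steps iff e is even), since a real Turing-machine simulator would only add bulk:

```python
def halts_within(e, steps):
    """Toy stand-in for step-bounded simulation of program e:
    pretend e halts after exactly e steps iff e is even."""
    return e % 2 == 0 and e <= steps

def enumerate_halting(stages):
    """Dovetail: stage s simulates programs 0..s for s steps each,
    yielding each program newly observed to halt. This enumerates a
    c.e. set without ever deciding membership outright."""
    seen = set()
    for s in range(stages):
        for e in range(s + 1):
            if e not in seen and halts_within(e, s):
                seen.add(e)
                yield e

assert list(enumerate_halting(7)) == [0, 2, 4, 6]
```

Every halting program is eventually emitted, but a program that never halts simply never appears; no stage certifies non-membership, which is exactly the gap between c.e. and computable.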
The Turing degree of a set gives a precise measure of how uncomputable the set is. The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common: They are computably enumerable, and Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent). Many-one reductions are "stronger" than Turing reductions: if a set A is many-one reducible to a set B, then A is Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets A and B such that A is Turing reducible to B but not many-one reducible to B. It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post asked whether every computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two. As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as Post's problem. 
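The interesting many-one reductions are between noncomputable sets, but the mechanics of the definition A = {x : f(x) ∈ B} can be seen with computable ones. A minimal Python illustration:

```python
# Many-one reduction of A = {even numbers} to B = {multiples of 4}:
# f is total and computable, and x ∈ A if and only if f(x) ∈ B.
in_A = lambda x: x % 2 == 0
in_B = lambda x: x % 4 == 0
f = lambda x: 2 * x

assert all(in_A(x) == in_B(f(x)) for x in range(1000))

# Hence any decision procedure for B immediately decides A:
decide_A = lambda x: in_B(f(x))
assert decide_A(6) and not decide_A(7)
```

Read contrapositively, this is how undecidability spreads: if A is undecidable and many-one reducible to B, then B is undecidable too.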
After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure. There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. Many degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative to that degree is majorized by a (unrelativized) computable function; high degrees relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets. The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedberg shows that any set that computes the halting problem can be obtained as the Turing jump of another set. 
Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic. Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman states that the function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer gives an overview of this research and its historical progression. === Other reducibilities === An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one example. The strong reducibilities include: One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B. Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B. Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given. 
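Truth-table reducibility can be pictured as committing to all oracle queries in advance. In this Python sketch (a hypothetical example, not a standard one), A = {x : x ∈ B or x + 1 ∈ B} is truth-table reduced to B: the query list depends only on x, and a fixed Boolean table combines the answers.

```python
def tt_reduction(x):
    """Reduce membership in A = {x : x ∈ B or x+1 ∈ B} to oracle B:
    return the full query list and the truth table up front."""
    queries = [x, x + 1]
    table = lambda answers: answers[0] or answers[1]
    return queries, table

def decide_via_tt(x, oracle_B):
    queries, table = tt_reduction(x)
    answers = [oracle_B(q) for q in queries]   # ask everything at once
    return table(answers)

sample_B = lambda n: n % 3 == 0                # any oracle set will do
assert decide_via_tt(2, sample_B) is True      # 3 ∈ B
assert decide_via_tt(4, sample_B) is False     # 4 ∉ B and 5 ∉ B
```

Because the queries and the table never depend on the oracle's answers, the procedure is total whatever oracle it is given, as the definition above requires.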
Because of compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then having seen their answers is able to produce an output without asking additional questions regardless of the oracle's answer to the initial queries. Many variants of truth-table reducibility have also been studied. Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory). The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities have been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees. Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic. === Rice's theorem and the arithmetical hierarchy === Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e: the eth c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But, many of these index sets are even more complicated than the halting problem. These types of sets can be classified using the arithmetical hierarchy. 
For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3, and the index set COMP of the class of all Turing-complete sets is on the level Σ4. These hierarchy levels are defined inductively: Σn+1 contains exactly the sets that are computably enumerable relative to Σn; Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets. === Reverse mathematics === The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility. === Numberings === A numbering is an enumeration of functions; it has two parameters, e and x, and outputs the value of the e-th function in the numbering on the input x. A numbering can be partial-computable even though some of its members are total computable functions. Admissible numberings are those into which all others can be translated. A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial-computable functions; it is necessarily not an admissible numbering. Later research also dealt with numberings of other classes, such as classes of computably enumerable sets.
Goncharov discovered, for example, a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms. === The priority method === Post's problem was solved with a method called the priority method; a proof using this method is called a priority argument. This method is primarily used to construct computably enumerable sets with particular properties. To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned a natural number representing its priority; 0 is the most important priority, 1 the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one or more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event. Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity. Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them. For example, Kummer published a proof of the existence of Friedberg numberings without using the priority method. === The lattice of computably enumerable sets === When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e.
set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets always have infinite computable subsets; on the other hand, simple sets exist but do not have a coinfinite computable superset. Post had already introduced hypersimple and hyperhypersimple sets; later, maximal sets were constructed, which are c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property, and the solution to his problem applied priority methods instead; Harrington and Soare eventually found such a property in 1991. === Automorphism problems === Another important question is the existence of automorphisms in computability-theoretic structures. One of these structures is that of the computably enumerable sets under inclusion modulo finite difference; in this structure, A is below B if and only if the set difference A − B is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare showed that the converse also holds, that is, any two maximal sets are automorphic. So the maximal sets form an orbit: every automorphism preserves maximality, and any two maximal sets are transformed into each other by some automorphism.
Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem. Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified, and some colleagues believe that the construction contains errors and that the question of whether there is a nontrivial automorphism of the Turing degrees is still one of the main unsolved questions in this area. === Kolmogorov complexity === The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, the characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area. === Frequency computation === This branch of computability theory analyzed the following question: For fixed m and n with 0 < m < n, for which functions A is it possible to compute for any n different inputs x1, x2, ..., xn a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true?
Such sets are known as (m, n)-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n. On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them in 1968) are examples of sets that are (m, n)-recursive if and only if 2m < n + 1. There are uncountably many of these sets and also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive. After a long phase of research by Russian scientists, this subject became repopularized in the west by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's cardinality theorem, which states that a set A is computable if and only if there is an n such that some algorithm enumerates for each tuple of n different numbers up to n many possible choices of the cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out at least one false one. === Inductive inference === This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's model of learning in the limit from 1967 and has since developed more and more models of learning. The general scenario is the following: Given a class S of computable functions, is there a learner (that is, a computable functional) which outputs, for any input of the form (f(0), f(1), ..., f(n)), a hypothesis? A learner M learns a function f if almost all hypotheses are the same index e of f with respect to a previously agreed-on acceptable numbering of all computable functions; M learns S if M learns every f in S.
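Gold's scenario can be made concrete with the classic strategy of identification by enumeration: always guess the least hypothesis consistent with the data seen so far. The Python sketch below is a toy version; the list HYPOTHESES is an illustrative stand-in for a numbering of a small class S of total functions, not an acceptable numbering of all computable functions.

```python
# Toy "identification by enumeration" learner in the style of Gold (1967).
# HYPOTHESES is a hypothetical, fixed numbering of a small class S.
HYPOTHESES = [
    lambda x: 0,          # index 0: the constant-zero function
    lambda x: x,          # index 1: the identity
    lambda x: x * x,      # index 2: squaring
    lambda x: 2 * x + 1,  # index 3: x -> 2x + 1
]

def learner(values):
    """Given (f(0), ..., f(n)), output the least consistent index, else None."""
    for e, h in enumerate(HYPOTHESES):
        if all(h(x) == v for x, v in enumerate(values)):
            return e
    return None

# The guesses converge ("learning in the limit"): on f(x) = x*x they stabilize.
f = HYPOTHESES[2]
guesses = [learner([f(x) for x in range(n + 1)]) for n in range(5)]
```

Here guesses is [0, 1, 2, 2, 2]: after finitely many data points the hypothesis stops changing, which is exactly the sense in which M learns f.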
Basic results are that all computably enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. Many related models have been considered and also the learning of classes of computably enumerable sets from positive data is a topic studied from Gold's pioneering paper in 1967 onwards. === Generalizations of Turing computability === Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990. These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example the set of all indices of computable (nonbinary) trees without infinite branches is complete for level Π 1 1 {\displaystyle \Pi _{1}^{1}} of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory. === Continuous computability theory === Computability theory for digital computation is well developed. Computability theory is less well developed for analog computation that occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems. For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals. 
== Relationships between definability, proof and computability == There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's indefinability theorem can be interpreted both in terms of definability and in terms of computability. Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well known mathematical theorems. In 1999, Simpson discussed many aspects of second-order arithmetic and reverse mathematics. The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total. For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem. 
== Name == The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow and Simpson. Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable. In 1967, Rogers suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers.
== Professional organizations == The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also organizes a series of annual conferences. == See also == Recursion (computer science) Computability logic Transcomputational problem == Notes == == References == == Further reading == Undergraduate level texts Cooper, S. Barry (2004). Computability Theory. Chapman & Hall/CRC. ISBN 1-58488-237-9. Matiyasevich, Yuri Vladimirovich (1993). Hilbert's Tenth Problem. MIT Press. ISBN 0-262-13295-8. Advanced texts Jain, Sanjay; Osherson, Daniel Nathan; Royer, James S.; Sharma, Arun (1999). Systems that learn: an introduction to learning theory (2nd ed.). Bradford Book / MIT Press. ISBN 0-262-10077-0. LCCN 98-34861. Lerman, Manuel (1983). Degrees of unsolvability. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 3-540-12155-2. Nies, André (2009). Computability and Randomness. Oxford University Press. ISBN 978-0-19-923076-1. Odifreddi, Piergiorgio (1989). Classical Recursion Theory. North-Holland. ISBN 0-444-87295-7. Odifreddi, Piergiorgio (1999). Classical Recursion Theory. Vol. II. Elsevier. ISBN 0-444-50205-X. Survey papers and collections Enderton, Herbert Bruce (1977). "Elements of Recursion Theory". In Barwise, Jon (ed.). Handbook of Mathematical Logic. North-Holland. pp. 527–566. ISBN 0-7204-2285-X. Research papers and collections Burgin, Mark; Klinger, Allen (2004). "Experience, Generations, and Limits in Machine Learning". Theoretical Computer Science. 317 (1–3): 71–91. doi:10.1016/j.tcs.2003.12.005. Friedberg, Richard M. (1958). "Three theorems on recursive enumeration: I. Decomposition, II. Maximal Set, III. Enumeration without repetition". The Journal of Symbolic Logic. 23 (3): 309–316. doi:10.2307/2964290. JSTOR 2964290. S2CID 25834814. Gold, E. Mark (1967). 
"Language Identification in the Limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/s0019-9958(67)91165-5. [1] Jockusch, Carl Groos Jr. (1968). "Semirecursive sets and positive reducibility". Transactions of the American Mathematical Society. 137 (2): 420–436. doi:10.1090/S0002-9947-1968-0220595-7. JSTOR 1994957. Kleene, Stephen Cole; Post, Emil Leon (1954). "The upper semi-lattice of degrees of recursive unsolvability". Annals of Mathematics. Series 2. 59 (3): 379–407. doi:10.2307/1969708. JSTOR 1969708. Myhill, John R. Sr. (1956). "The lattice of recursively enumerable sets". The Journal of Symbolic Logic. 21: 215–220. doi:10.1017/S002248120008525X. S2CID 123260425. Post, Emil Leon (1947). "Recursive unsolvability of a problem of Thue". Journal of Symbolic Logic. 12 (1): 1–11. doi:10.2307/2267170. JSTOR 2267170. S2CID 30320278. Reprinted in Davis 1965. == External links == Association for Symbolic Logic homepage Computability in Europe homepage Archived 2011-02-17 at the Wayback Machine Webpage on Recursion Theory Course at Graduate Level with approximately 100 pages of lecture notes German language lecture notes on inductive inference
Wikipedia/Computability_theory_(computer_science)
In mathematics, specifically in category theory, an F {\displaystyle F} -coalgebra is a structure defined according to a functor F {\displaystyle F} , with specific properties as defined below. For both algebras and coalgebras, a functor is a convenient and general way of organizing a signature. This has applications in computer science: examples of coalgebras include lazy evaluation, infinite data structures, such as streams, and also transition systems. F {\displaystyle F} -coalgebras are dual to F {\displaystyle F} -algebras. Just as the class of all algebras for a given signature and equational theory forms a variety, so the class of all F {\displaystyle F} -coalgebras satisfying a given equational theory forms a covariety, where the signature is given by F {\displaystyle F} . == Definition == Let F : C ⟶ C {\displaystyle F:{\mathcal {C}}\longrightarrow {\mathcal {C}}} be an endofunctor on a category C {\displaystyle {\mathcal {C}}} . An F {\displaystyle F} -coalgebra is an object A {\displaystyle A} of C {\displaystyle {\mathcal {C}}} together with a morphism α : A ⟶ F A {\displaystyle \alpha :A\longrightarrow FA} of C {\displaystyle {\mathcal {C}}} , usually written as ( A , α ) {\displaystyle (A,\alpha )} . An F {\displaystyle F} -coalgebra homomorphism from ( A , α ) {\displaystyle (A,\alpha )} to another F {\displaystyle F} -coalgebra ( B , β ) {\displaystyle (B,\beta )} is a morphism f : A ⟶ B {\displaystyle f:A\longrightarrow B} in C {\displaystyle {\mathcal {C}}} such that F f ∘ α = β ∘ f {\displaystyle Ff\circ \alpha =\beta \circ f} . Thus the F {\displaystyle F} -coalgebras for a given functor F constitute a category. == Examples == Consider the endofunctor X ↦ X ⊔ { ∗ } : S e t → S e t {\displaystyle X\mapsto X\sqcup \{*\}:\mathbf {Set} \to \mathbf {Set} } that sends a set to its disjoint union with the singleton set { ∗ } {\displaystyle \{\ast \}} .
A coalgebra of this endofunctor is given by ( N ¯ , α ) {\displaystyle ({\overline {\mathbb {N} }},\alpha )} , where N ¯ = { 0 , 1 , 2 , … } ⊔ { ∞ } {\displaystyle {\overline {\mathbb {N} }}=\{0,1,2,\ldots \}\sqcup \{\infty \}} is the so-called conatural numbers, consisting of the nonnegative integers and also infinity, and the function α {\displaystyle \alpha } is given by α ( 0 ) = ∗ {\displaystyle \alpha (0)=\ast } , α ( n ) = n − 1 {\displaystyle \alpha (n)=n-1} for n = 1 , 2 , … {\displaystyle n=1,2,\ldots } and α ( ∞ ) = ∞ {\displaystyle \alpha (\infty )=\infty } . In fact, ( N ¯ , α ) {\displaystyle ({\overline {\mathbb {N} }},\alpha )} is the terminal coalgebra of this endofunctor. More generally, fix some set A {\displaystyle A} , and consider the functor F : S e t ⟶ S e t {\displaystyle F:\mathbf {Set} \longrightarrow \mathbf {Set} } that sends X {\displaystyle X} to ( X × A ) ∪ { 1 } {\displaystyle (X\times A)\cup \{1\}} . Then an F {\displaystyle F} -coalgebra α : X ⟶ ( X × A ) ∪ { 1 } = F X {\displaystyle \alpha :X\longrightarrow (X\times A)\cup \{1\}=FX} is a finite or infinite stream over the alphabet A {\displaystyle A} , where X {\displaystyle X} is the set of states and α {\displaystyle \alpha } is the state-transition function. Applying the state-transition function to a state may yield two possible results: either an element of A {\displaystyle A} together with the next state of the stream, or the element of the singleton set { 1 } {\displaystyle \{1\}} as a separate "final state" indicating that there are no more values in the stream. In many practical applications, the state-transition function of such a coalgebraic object may be of the form X → f 1 × f 2 × … × f n {\displaystyle X\rightarrow f_{1}\times f_{2}\times \ldots \times f_{n}} , which readily factorizes into a collection of "selectors", "observers", "methods" X → f 1 , X → f 2 … X → f n {\displaystyle X\rightarrow f_{1},\,X\rightarrow f_{2}\,\ldots \,X\rightarrow f_{n}} . 
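The stream example can be made concrete. Below is a minimal Python sketch of one such coalgebra for F(X) = (X × A) ∪ {1}; the names alpha, DONE, and unfold are illustrative choices, not standard API.

```python
DONE = object()  # stands in for the element of the singleton set {1}

def alpha(state):
    # A concrete coalgebra alpha : X -> (X x A) | {1} on the carrier
    # X = {0, 1, 2, 3}, over an alphabet A of strings: emit 'a' * state,
    # and reach the "final state" at state 3.
    if state == 3:
        return DONE
    return (state + 1, "a" * state)  # (next state, emitted letter)

def unfold(coalg, state):
    # Observe the (here finite) stream generated from a starting state.
    out = []
    step = coalg(state)
    while step is not DONE:
        state, letter = step
        out.append(letter)
        step = coalg(state)
    return out
```

Running unfold(alpha, 0) produces the finite stream ['', 'a', 'aa']; a coalgebra whose alpha never returns DONE would generate an infinite stream, which is why the final coalgebra of this functor collects both finite and infinite streams.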
Special cases of practical interest include observers yielding attribute values, and mutator methods of the form X → X A 1 × … × A n {\displaystyle X\rightarrow X^{A_{1}\times \ldots \times A_{n}}} taking additional parameters and yielding states. This decomposition is dual to the decomposition of initial F {\displaystyle F} -algebras into sums of 'constructors'. Let P be the power set construction on the category of sets, considered as a covariant functor. The P-coalgebras are in bijective correspondence with sets with a binary relation. Now fix another set, A. Then coalgebras for the endofunctor P(A×(-)) are in bijective correspondence with labelled transition systems, and homomorphisms between coalgebras correspond to functional bisimulations between labelled transition systems. == Applications == In computer science, coalgebra has emerged as a convenient and suitably general way of specifying the behaviour of systems and data structures that are potentially infinite, for example classes in object-oriented programming, streams and transition systems. While algebraic specification deals with functional behaviour, typically using inductive datatypes generated by constructors, coalgebraic specification is concerned with behaviour modelled by coinductive process types that are observable by selectors, much in the spirit of automata theory. An important role is played here by final coalgebras, which are complete sets of possibly infinite behaviours, such as streams. The natural logic to express properties of such systems is coalgebraic modal logic. == See also == Initial algebra Coinduction Coalgebra == References == B. Jacobs and J. Rutten, A Tutorial on (Co)Algebras and (Co)Induction. EATCS Bulletin 62, 1997, p.222-259. Jan J. M. M. Rutten: Universal coalgebra: a theory of systems. Theor. Comput. Sci. 249(1): 3-80 (2000). J. Adámek, Introduction to coalgebra. Theory and Applications of Categories 14 (2005), 157-199 B. Jacobs, Introduction to Coalgebra. 
Towards Mathematics of States and Observations (book draft) Yde Venema: Automata and Fixed Point Logics: a Coalgebraic Perspective. Information and Computation, 204 (2006) 637-678. == External links == CALCO 2009: Conference on Algebra and Coalgebra in Computer Science CALCO 2011
Wikipedia/F-coalgebra
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive. After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument Ackermann–Péter function developed by Rózsa Péter and Raphael Robinson. This function is defined from the recurrence relation A ⁡ ( m + 1 , n + 1 ) = A ⁡ ( m , A ⁡ ( m + 1 , n ) ) {\displaystyle \operatorname {A} (m+1,n+1)=\operatorname {A} (m,\operatorname {A} (m+1,n))} with appropriate base cases. Its value grows very rapidly; for example, A ⁡ ( 4 , 2 ) {\displaystyle \operatorname {A} (4,2)} results in 2 65536 − 3 {\displaystyle 2^{65536}-3} , an integer with 19,729 decimal digits. == History == In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function φ {\displaystyle \varphi } (from Greek, the letter phi). 
Ackermann's three-argument function, φ ( m , n , p ) {\displaystyle \varphi (m,n,p)} , is defined such that for p = 0 , 1 , 2 {\displaystyle p=0,1,2} , it reproduces the basic operations of addition, multiplication, and exponentiation as φ ( m , n , 0 ) = m + n φ ( m , n , 1 ) = m × n φ ( m , n , 2 ) = m n {\displaystyle {\begin{aligned}\varphi (m,n,0)&=m+n\\\varphi (m,n,1)&=m\times n\\\varphi (m,n,2)&=m^{n}\end{aligned}}} and for p > 2 {\displaystyle p>2} it extends these basic operations in a way that can be compared to the hyperoperations: φ ( m , n , 3 ) = m [ 4 ] ( n + 1 ) φ ( m , n , p ) ⪆ m [ p + 1 ] ( n + 1 ) for p > 3 {\displaystyle {\begin{aligned}\varphi (m,n,3)&=m[4](n+1)\\\varphi (m,n,p)&\gtrapprox m[p+1](n+1)&&{\text{for }}p>3\end{aligned}}} (Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose—such as Goodstein's hyperoperation sequence.) In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitive recursive, but it was Ackermann, Hilbert's personal secretary and former student, who actually proved the hypothesis in his paper On Hilbert's Construction of the Real Numbers. Rózsa Péter and Raphael Robinson later developed a two-variable version of the Ackermann function that became preferred by almost all authors. The generalized hyperoperation sequence, e.g. G ( m , a , b ) = a [ m ] b {\displaystyle G(m,a,b)=a[m]b} , is a version of the Ackermann function as well. In 1963 R.C. Buck based an intuitive two-variable variant F {\displaystyle \operatorname {F} } on the hyperoperation sequence: F ⁡ ( m , n ) = 2 [ m ] n . 
{\displaystyle \operatorname {F} (m,n)=2[m]n.} Compared to most other versions, Buck's function has no unessential offsets: F ⁡ ( 0 , n ) = 2 [ 0 ] n = n + 1 F ⁡ ( 1 , n ) = 2 [ 1 ] n = 2 + n F ⁡ ( 2 , n ) = 2 [ 2 ] n = 2 × n F ⁡ ( 3 , n ) = 2 [ 3 ] n = 2 n F ⁡ ( 4 , n ) = 2 [ 4 ] n = 2 2 2 . . . 2 ⋮ {\displaystyle {\begin{aligned}\operatorname {F} (0,n)&=2[0]n=n+1\\\operatorname {F} (1,n)&=2[1]n=2+n\\\operatorname {F} (2,n)&=2[2]n=2\times n\\\operatorname {F} (3,n)&=2[3]n=2^{n}\\\operatorname {F} (4,n)&=2[4]n=2^{2^{2^{{}^{.^{.^{{}_{.}2}}}}}}\\&\quad \vdots \end{aligned}}} Many other versions of Ackermann function have been investigated. == Definition == === Definition: as m-ary function === Ackermann's original three-argument function φ ( m , n , p ) {\displaystyle \varphi (m,n,p)} is defined recursively as follows for nonnegative integers m , n , {\displaystyle m,n,} and p {\displaystyle p} : φ ( m , n , 0 ) = m + n φ ( m , 0 , 1 ) = 0 φ ( m , 0 , 2 ) = 1 φ ( m , 0 , p ) = m for p > 2 φ ( m , n , p ) = φ ( m , φ ( m , n − 1 , p ) , p − 1 ) for n , p > 0 {\displaystyle {\begin{aligned}\varphi (m,n,0)&=m+n\\\varphi (m,0,1)&=0\\\varphi (m,0,2)&=1\\\varphi (m,0,p)&=m&&{\text{for }}p>2\\\varphi (m,n,p)&=\varphi (m,\varphi (m,n-1,p),p-1)&&{\text{for }}n,p>0\end{aligned}}} Of the various two-argument versions, the one developed by Péter and Robinson (called "the" Ackermann function by most authors) is defined for nonnegative integers m {\displaystyle m} and n {\displaystyle n} as follows: A ⁡ ( 0 , n ) = n + 1 A ⁡ ( m + 1 , 0 ) = A ⁡ ( m , 1 ) A ⁡ ( m + 1 , n + 1 ) = A ⁡ ( m , A ⁡ ( m + 1 , n ) ) {\displaystyle {\begin{array}{lcl}\operatorname {A} (0,n)&=&n+1\\\operatorname {A} (m+1,0)&=&\operatorname {A} (m,1)\\\operatorname {A} (m+1,n+1)&=&\operatorname {A} (m,\operatorname {A} (m+1,n))\end{array}}} The Ackermann function has also been expressed in relation to the hyperoperation sequence: A ( m , n ) = { n + 1 m = 0 2 [ m ] ( n + 3 ) − 3 m > 0 {\displaystyle 
A(m,n)={\begin{cases}n+1&m=0\\2[m](n+3)-3&m>0\\\end{cases}}} or, written in Knuth's up-arrow notation (extended to integer indices ≥ − 2 {\displaystyle \geq -2} ): A ( m , n ) = { n + 1 m = 0 2 ↑ m − 2 ( n + 3 ) − 3 m > 0 {\displaystyle A(m,n)={\begin{cases}n+1&m=0\\2\uparrow ^{m-2}(n+3)-3&m>0\\\end{cases}}} or, equivalently, in terms of Buck's function F: A ( m , n ) = { n + 1 m = 0 F ( m , n + 3 ) − 3 m > 0 {\displaystyle A(m,n)={\begin{cases}n+1&m=0\\F(m,n+3)-3&m>0\\\end{cases}}} === Definition: as iterated 1-ary function === Define f n {\displaystyle f^{n}} as the n-th iterate of f {\displaystyle f} : f 0 ( x ) = x f n + 1 ( x ) = f ( f n ( x ) ) {\displaystyle {\begin{array}{rll}f^{0}(x)&=&x\\f^{n+1}(x)&=&f(f^{n}(x))\end{array}}} Iteration is the process of composing a function with itself a certain number of times. Function composition is an associative operation, so f ( f n ( x ) ) = f n ( f ( x ) ) {\displaystyle f(f^{n}(x))=f^{n}(f(x))} . Conceiving the Ackermann function as a sequence of unary functions, one can set A m ⁡ ( n ) = A ⁡ ( m , n ) {\displaystyle \operatorname {A} _{m}(n)=\operatorname {A} (m,n)} . The function then becomes a sequence A 0 , A 1 , A 2 , . . . {\displaystyle \operatorname {A} _{0},\operatorname {A} _{1},\operatorname {A} _{2},...} of unary functions, defined from iteration: A 0 ⁡ ( n ) = n + 1 A m + 1 ⁡ ( n ) = A m n + 1 ⁡ ( 1 ) {\displaystyle {\begin{array}{lcl}\operatorname {A} _{0}(n)&=&n+1\\\operatorname {A} _{m+1}(n)&=&\operatorname {A} _{m}^{n+1}(1)\\\end{array}}} == Computation == The recursive definition of the Ackermann function can naturally be transposed to a term rewriting system (TRS). 
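The two-argument definition translates directly into a recursive program. The following minimal Python sketch is practical only for small m, since both the values and the recursion depth grow explosively:

```python
def ackermann(m, n):
    # Péter–Robinson Ackermann function, directly from the three defining equations.
    if m == 0:
        return n + 1                                   # A(0, n)     = n + 1
    if n == 0:
        return ackermann(m - 1, 1)                     # A(m+1, 0)   = A(m, 1)
    return ackermann(m - 1, ackermann(m, n - 1))       # A(m+1, n+1) = A(m, A(m+1, n))
```

For instance, ackermann(3, 3) returns 61 = 2^6 − 3, in line with the closed form A(3, n) = 2^(n+3) − 3, while the 19,729-digit value of A(4, 2) quoted earlier is better obtained from the closed form 2^65536 − 3 than from the recursion.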
=== TRS, based on 2-ary function === The definition of the 2-ary Ackermann function leads to the obvious reduction rules (r1) A ( 0 , n ) → S ( n ) (r2) A ( S ( m ) , 0 ) → A ( m , S ( 0 ) ) (r3) A ( S ( m ) , S ( n ) ) → A ( m , A ( S ( m ) , n ) ) {\displaystyle {\begin{array}{lll}{\text{(r1)}}&A(0,n)&\rightarrow &S(n)\\{\text{(r2)}}&A(S(m),0)&\rightarrow &A(m,S(0))\\{\text{(r3)}}&A(S(m),S(n))&\rightarrow &A(m,A(S(m),n))\end{array}}}

Example: compute A ( 1 , 2 ) → ∗ 4 {\displaystyle A(1,2)\rightarrow _{*}4} . The leftmost-innermost reduction sequence is A(S(0), S(S(0))) →r3 A(0, A(S(0), S(0))) →r3 A(0, A(0, A(S(0), 0))) →r2 A(0, A(0, A(0, S(0)))) →r1 A(0, A(0, S(S(0)))) →r1 A(0, S(S(S(0)))) →r1 S(S(S(S(0)))) = 4.

To compute A ⁡ ( m , n ) {\displaystyle \operatorname {A} (m,n)} one can use a stack, which initially contains the elements ⟨ m , n ⟩ {\displaystyle \langle m,n\rangle } . Then repeatedly the two top elements are replaced according to the rules (r1) 0 , n → ( n + 1 ) (r2) ( m + 1 ) , 0 → m , 1 (r3) ( m + 1 ) , ( n + 1 ) → m , ( m + 1 ) , n {\displaystyle {\begin{array}{lllllllll}{\text{(r1)}}&0&,&n&\rightarrow &(n+1)\\{\text{(r2)}}&(m+1)&,&0&\rightarrow &m&,&1\\{\text{(r3)}}&(m+1)&,&(n+1)&\rightarrow &m&,&(m+1)&,&n\end{array}}} Schematically, starting from ⟨ m , n ⟩ {\displaystyle \langle m,n\rangle } : WHILE stackLength <> 1 { POP 2 elements; PUSH 1 or 2 or 3 elements, applying the rules r1, r2, r3 } The pseudocode is published in Grossman & Zeitman (1988). For example, on input ⟨ 2 , 1 ⟩ {\displaystyle \langle 2,1\rangle } , the successive stack configurations are 2, 1 → 1, 2, 0 → 1, 1, 1 → 1, 0, 1, 0 → 1, 0, 0, 1 → 1, 0, 2 → 1, 3 → 0, 1, 2 → 0, 0, 1, 1 → 0, 0, 0, 1, 0 → 0, 0, 0, 0, 1 → 0, 0, 0, 2 → 0, 0, 3 → 0, 4 → 5.

Remarks: The leftmost-innermost strategy is implemented in 225 computer languages on Rosetta Code. For all m , n {\displaystyle m,n} the computation of A ( m , n ) {\displaystyle A(m,n)} takes no more than ( A ( m , n ) + 1 ) m {\displaystyle (A(m,n)+1)^{m}} steps. Grossman & Zeitman (1988) pointed out that in the computation of A ⁡ ( m , n ) {\displaystyle \operatorname {A} (m,n)} the maximum length of the stack is A ⁡ ( m , n ) {\displaystyle \operatorname {A} (m,n)} , as long as m > 0 {\displaystyle m>0} .
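The schematic WHILE loop published in Grossman & Zeitman (1988) transcribes almost literally into Python; the sketch below keeps the top of the stack at the right end of a list and applies the rules r1–r3:

```python
def ackermann_stack(m, n):
    # Iterative evaluation of A(m, n) with an explicit stack (top at the right).
    stack = [m, n]
    while len(stack) > 1:
        n = stack.pop()                    # POP 2 elements ...
        m = stack.pop()
        if m == 0:
            stack.append(n + 1)            # (r1) 0, n      -> n + 1
        elif n == 0:
            stack += [m - 1, 1]            # (r2) m+1, 0    -> m, 1
        else:
            stack += [m - 1, m, n - 1]     # (r3) m+1, n+1  -> m, m+1, n
    return stack[0]
```

On input ⟨2, 1⟩ this returns A(2, 1) = 5; unlike the naive recursion, the only memory it needs is the explicit stack, whose length Grossman & Zeitman bound by A(m, n).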
Their own algorithm, inherently iterative, computes A ⁡ ( m , n ) {\displaystyle \operatorname {A} (m,n)} within O ( m A ⁡ ( m , n ) ) {\displaystyle {\mathcal {O}}(m\operatorname {A} (m,n))} time and within O ( m ) {\displaystyle {\mathcal {O}}(m)} space. === TRS, based on iterated 1-ary function === The definition of the iterated 1-ary Ackermann functions leads to different reduction rules (r4) A ( S ( 0 ) , 0 , n ) → S ( n ) (r5) A ( S ( 0 ) , S ( m ) , n ) → A ( S ( n ) , m , S ( 0 ) ) (r6) A ( S ( S ( x ) ) , m , n ) → A ( S ( 0 ) , m , A ( S ( x ) , m , n ) ) {\displaystyle {\begin{array}{lll}{\text{(r4)}}&A(S(0),0,n)&\rightarrow &S(n)\\{\text{(r5)}}&A(S(0),S(m),n)&\rightarrow &A(S(n),m,S(0))\\{\text{(r6)}}&A(S(S(x)),m,n)&\rightarrow &A(S(0),m,A(S(x),m,n))\end{array}}} As function composition is associative, instead of rule r6 one can define (r7) A ( S ( S ( x ) ) , m , n ) → A ( S ( x ) , m , A ( S ( 0 ) , m , n ) ) {\displaystyle {\begin{array}{lll}{\text{(r7)}}&A(S(S(x)),m,n)&\rightarrow &A(S(x),m,A(S(0),m,n))\end{array}}} Like in the previous section the computation of A m 1 ⁡ ( n ) {\displaystyle \operatorname {A} _{m}^{1}(n)} can be implemented with a stack. Initially the stack contains the three elements ⟨ 1 , m , n ⟩ {\displaystyle \langle 1,m,n\rangle } . 
Then repeatedly the three top elements are replaced according to the rules (r4) 1 , 0 , n → ( n + 1 ) (r5) 1 , ( m + 1 ) , n → ( n + 1 ) , m , 1 (r6) ( x + 2 ) , m , n → 1 , m , ( x + 1 ) , m , n {\displaystyle {\begin{array}{lllllllll}{\text{(r4)}}&1&,0&,n&\rightarrow &(n+1)\\{\text{(r5)}}&1&,(m+1)&,n&\rightarrow &(n+1)&,m&,1\\{\text{(r6)}}&(x+2)&,m&,n&\rightarrow &1&,m&,(x+1)&,m&,n\\\end{array}}} Schematically, starting from ⟨ 1 , m , n ⟩ {\displaystyle \langle 1,m,n\rangle } : WHILE stackLength <> 1 { POP 3 elements; PUSH 1 or 3 or 5 elements, applying the rules r4, r5, r6; } Example On input ⟨ 1 , 2 , 1 ⟩ {\displaystyle \langle 1,2,1\rangle } the successive stack configurations are 1 , 2 , 1 _ → r 5 2 , 1 , 1 _ → r 6 1 , 1 , 1 , 1 , 1 _ → r 5 1 , 1 , 2 , 0 , 1 _ → r 6 1 , 1 , 1 , 0 , 1 , 0 , 1 _ → r 4 1 , 1 , 1 , 0 , 2 _ → r 4 1 , 1 , 3 _ → r 5 4 , 0 , 1 _ → r 6 1 , 0 , 3 , 0 , 1 _ → r 6 1 , 0 , 1 , 0 , 2 , 0 , 1 _ → r 6 1 , 0 , 1 , 0 , 1 , 0 , 1 , 0 , 1 _ → r 4 1 , 0 , 1 , 0 , 1 , 0 , 2 _ → r 4 1 , 0 , 1 , 0 , 3 _ → r 4 1 , 0 , 4 _ → r 4 5 {\displaystyle {\begin{aligned}&{\underline {1,2,1}}\rightarrow _{r5}{\underline {2,1,1}}\rightarrow _{r6}1,1,{\underline {1,1,1}}\rightarrow _{r5}1,1,{\underline {2,0,1}}\rightarrow _{r6}1,1,1,0,{\underline {1,0,1}}\\&\rightarrow _{r4}1,1,{\underline {1,0,2}}\rightarrow _{r4}{\underline {1,1,3}}\rightarrow _{r5}{\underline {4,0,1}}\rightarrow _{r6}1,0,{\underline {3,0,1}}\rightarrow _{r6}1,0,1,0,{\underline {2,0,1}}\\&\rightarrow _{r6}1,0,1,0,1,0,{\underline {1,0,1}}\rightarrow _{r4}1,0,1,0,{\underline {1,0,2}}\rightarrow _{r4}1,0,{\underline {1,0,3}}\rightarrow _{r4}{\underline {1,0,4}}\rightarrow _{r4}5\end{aligned}}} The corresponding equalities are A 2 ( 1 ) = A 1 2 ( 1 ) = A 1 ( A 1 ( 1 ) ) = A 1 ( A 0 2 ( 1 ) ) = A 1 ( A 0 ( A 0 ( 1 ) ) ) = A 1 ( A 0 ( 2 ) ) = A 1 ( 3 ) = A 0 4 ( 1 ) = A 0 ( A 0 3 ( 1 ) ) = A 0 ( A 0 ( A 0 2 ( 1 ) ) ) = A 0 ( A 0 ( A 0 ( A 0 ( 1 ) ) ) ) = A 0 ( A 0 ( A 0 ( 2 ) ) ) = A 0 ( A 0 ( 3 ) ) 
= A 0 ( 4 ) = 5 {\displaystyle {\begin{aligned}&A_{2}(1)=A_{1}^{2}(1)=A_{1}(A_{1}(1))=A_{1}(A_{0}^{2}(1))=A_{1}(A_{0}(A_{0}(1)))\\&=A_{1}(A_{0}(2))=A_{1}(3)=A_{0}^{4}(1)=A_{0}(A_{0}^{3}(1))=A_{0}(A_{0}(A_{0}^{2}(1)))\\&=A_{0}(A_{0}(A_{0}(A_{0}(1))))=A_{0}(A_{0}(A_{0}(2)))=A_{0}(A_{0}(3))=A_{0}(4)=5\end{aligned}}} When reduction rule r7 is used instead of rule r6, the replacements in the stack will follow (r7) ( x + 2 ) , m , n → ( x + 1 ) , m , 1 , m , n {\displaystyle {\begin{array}{lllllllll}{\text{(r7)}}&(x+2)&,m&,n&\rightarrow &(x+1)&,m&,1&,m&,n\end{array}}} The successive stack configurations will then be 1 , 2 , 1 _ → r 5 2 , 1 , 1 _ → r 7 1 , 1 , 1 , 1 , 1 _ → r 5 1 , 1 , 2 , 0 , 1 _ → r 7 1 , 1 , 1 , 0 , 1 , 0 , 1 _ → r 4 1 , 1 , 1 , 0 , 2 _ → r 4 1 , 1 , 3 _ → r 5 4 , 0 , 1 _ → r 7 3 , 0 , 1 , 0 , 1 _ → r 4 3 , 0 , 2 _ → r 7 2 , 0 , 1 , 0 , 2 _ → r 4 2 , 0 , 3 _ → r 7 1 , 0 , 1 , 0 , 3 _ → r 4 1 , 0 , 4 _ → r 4 5 {\displaystyle {\begin{aligned}&{\underline {1,2,1}}\rightarrow _{r5}{\underline {2,1,1}}\rightarrow _{r7}1,1,{\underline {1,1,1}}\rightarrow _{r5}1,1,{\underline {2,0,1}}\rightarrow _{r7}1,1,1,0,{\underline {1,0,1}}\\&\rightarrow _{r4}1,1,{\underline {1,0,2}}\rightarrow _{r4}{\underline {1,1,3}}\rightarrow _{r5}{\underline {4,0,1}}\rightarrow _{r7}3,0,{\underline {1,0,1}}\rightarrow _{r4}{\underline {3,0,2}}\\&\rightarrow _{r7}2,0,{\underline {1,0,2}}\rightarrow _{r4}{\underline {2,0,3}}\rightarrow _{r7}1,0,{\underline {1,0,3}}\rightarrow _{r4}{\underline {1,0,4}}\rightarrow _{r4}5\end{aligned}}} The corresponding equalities are A 2 ( 1 ) = A 1 2 ( 1 ) = A 1 ( A 1 ( 1 ) ) = A 1 ( A 0 2 ( 1 ) ) = A 1 ( A 0 ( A 0 ( 1 ) ) ) = A 1 ( A 0 ( 2 ) ) = A 1 ( 3 ) = A 0 4 ( 1 ) = A 0 3 ( A 0 ( 1 ) ) = A 0 3 ( 2 ) = A 0 2 ( A 0 ( 2 ) ) = A 0 2 ( 3 ) = A 0 ( A 0 ( 3 ) ) = A 0 ( 4 ) = 5 {\displaystyle 
{\begin{aligned}&A_{2}(1)=A_{1}^{2}(1)=A_{1}(A_{1}(1))=A_{1}(A_{0}^{2}(1))=A_{1}(A_{0}(A_{0}(1)))\\&=A_{1}(A_{0}(2))=A_{1}(3)=A_{0}^{4}(1)=A_{0}^{3}(A_{0}(1))=A_{0}^{3}(2)\\&=A_{0}^{2}(A_{0}(2))=A_{0}^{2}(3)=A_{0}(A_{0}(3))=A_{0}(4)=5\end{aligned}}} Remarks On any given input the TRSs presented so far converge in the same number of steps. They also use the same reduction rules (in this comparison the rules r1, r2, r3 are considered "the same as" the rules r4, r5, r6/r7 respectively). For example, the reduction of A ( 2 , 1 ) {\displaystyle A(2,1)} converges in 14 steps: 6 × r1, 3 × r2, 5 × r3. The reduction of A 2 ( 1 ) {\displaystyle A_{2}(1)} converges in the same 14 steps: 6 × r4, 3 × r5, 5 × r6/r7. The TRSs differ in the order in which the reduction rules are applied. When A i ( n ) {\displaystyle A_{i}(n)} is computed following the rules {r4, r5, r6}, the maximum length of the stack stays below 2 × A ( i , n ) {\displaystyle 2\times A(i,n)} . When reduction rule r7 is used instead of rule r6, the maximum length of the stack is only 2 ( i + 2 ) {\displaystyle 2(i+2)} . The length of the stack reflects the recursion depth. As the reduction according to the rules {r4, r5, r7} involves a smaller maximum depth of recursion, this computation is more efficient in that respect. 
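Both stack disciplines are mechanical enough to transcribe directly. Here is a Python sketch of the 2-ary variant (rules r1, r2, r3) that also records the maximum stack length discussed above; the function name and the returned pair are illustrative choices:

```python
def ackermann_stack(m, n):
    """Iterative Ackermann: the stack holds pending arguments
    (topmost element last); the top pair is rewritten by r1-r3."""
    stack = [m, n]
    max_len = len(stack)
    while len(stack) > 1:
        n = stack.pop()
        m = stack.pop()
        if m == 0:                      # (r1)  0, n     ->  n+1
            stack.append(n + 1)
        elif n == 0:                    # (r2)  m+1, 0   ->  m, 1
            stack += [m - 1, 1]
        else:                           # (r3)  m+1, n+1 ->  m, m+1, n
            stack += [m - 1, m, n - 1]
        max_len = max(max_len, len(stack))
    return stack[0], max_len
```

On input (2, 1) this yields the value 5 with a maximum stack length of 5, consistent with Grossman & Zeitman's observation that the maximum length is A(m, n) for m > 0.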
=== TRS, based on hyperoperators === As Sundblad (1971) — or Porto & Matos (1980) — showed explicitly, the Ackermann function can be expressed in terms of the hyperoperation sequence: A ( m , n ) = { n + 1 m = 0 2 [ m ] ( n + 3 ) − 3 m > 0 {\displaystyle A(m,n)={\begin{cases}n+1&m=0\\2[m](n+3)-3&m>0\\\end{cases}}} or, after removal of the constant 2 from the parameter list, in terms of Buck's function A ( m , n ) = { n + 1 m = 0 F ( m , n + 3 ) − 3 m > 0 {\displaystyle A(m,n)={\begin{cases}n+1&m=0\\F(m,n+3)-3&m>0\\\end{cases}}} Buck's function F ⁡ ( m , n ) = 2 [ m ] n {\displaystyle \operatorname {F} (m,n)=2[m]n} , a variant of Ackermann function by itself, can be computed with the following reduction rules: (b1) F ( S ( 0 ) , 0 , n ) → S ( n ) (b2) F ( S ( 0 ) , S ( 0 ) , 0 ) → S ( S ( 0 ) ) (b3) F ( S ( 0 ) , S ( S ( 0 ) ) , 0 ) → 0 (b4) F ( S ( 0 ) , S ( S ( S ( m ) ) ) , 0 ) → S ( 0 ) (b5) F ( S ( 0 ) , S ( m ) , S ( n ) ) → F ( S ( n ) , m , F ( S ( 0 ) , S ( m ) , 0 ) ) (b6) F ( S ( S ( x ) ) , m , n ) → F ( S ( 0 ) , m , F ( S ( x ) , m , n ) ) {\displaystyle {\begin{array}{lll}{\text{(b1)}}&F(S(0),0,n)&\rightarrow &S(n)\\{\text{(b2)}}&F(S(0),S(0),0)&\rightarrow &S(S(0))\\{\text{(b3)}}&F(S(0),S(S(0)),0)&\rightarrow &0\\{\text{(b4)}}&F(S(0),S(S(S(m))),0)&\rightarrow &S(0)\\{\text{(b5)}}&F(S(0),S(m),S(n))&\rightarrow &F(S(n),m,F(S(0),S(m),0))\\{\text{(b6)}}&F(S(S(x)),m,n)&\rightarrow &F(S(0),m,F(S(x),m,n))\end{array}}} Instead of rule b6 one can define the rule (b7) F ( S ( S ( x ) ) , m , n ) → F ( S ( x ) , m , F ( S ( 0 ) , m , n ) ) {\displaystyle {\begin{array}{lll}{\text{(b7)}}&F(S(S(x)),m,n)&\rightarrow &F(S(x),m,F(S(0),m,n))\end{array}}} To compute the Ackermann function it suffices to add three reduction rules (r8) A ( 0 , n ) → S ( n ) (r9) A ( S ( m ) , n ) → P ( F ( S ( 0 ) , S ( m ) , S ( S ( S ( n ) ) ) ) ) (r10) P ( S ( S ( S ( m ) ) ) ) → m {\displaystyle {\begin{array}{lll}{\text{(r8)}}&A(0,n)&\rightarrow 
&S(n)\\{\text{(r9)}}&A(S(m),n)&\rightarrow &P(F(S(0),S(m),S(S(S(n)))))\\{\text{(r10)}}&P(S(S(S(m))))&\rightarrow &m\\\end{array}}} These rules take care of the base case A(0,n), the alignment (n+3) and the fudge (-3). Example Compute A ( 2 , 1 ) → ∗ 5 {\displaystyle A(2,1)\rightarrow _{*}5} The matching equalities are when the TRS with the reduction rule b6 {\displaystyle {\text{b6}}} is applied: A ( 2 , 1 ) + 3 = F ( 2 , 4 ) = ⋯ = F 6 ( 0 , 2 ) = F ( 0 , F 5 ( 0 , 2 ) ) = F ( 0 , F ( 0 , F 4 ( 0 , 2 ) ) ) = F ( 0 , F ( 0 , F ( 0 , F 3 ( 0 , 2 ) ) ) ) = F ( 0 , F ( 0 , F ( 0 , F ( 0 , F 2 ( 0 , 2 ) ) ) ) ) = F ( 0 , F ( 0 , F ( 0 , F ( 0 , F ( 0 , F ( 0 , 2 ) ) ) ) ) ) = F ( 0 , F ( 0 , F ( 0 , F ( 0 , F ( 0 , 3 ) ) ) ) ) = F ( 0 , F ( 0 , F ( 0 , F ( 0 , 4 ) ) ) ) = F ( 0 , F ( 0 , F ( 0 , 5 ) ) ) = F ( 0 , F ( 0 , 6 ) ) = F ( 0 , 7 ) = 8 {\displaystyle {\begin{aligned}&A(2,1)+3=F(2,4)=\dots =F^{6}(0,2)=F(0,F^{5}(0,2))=F(0,F(0,F^{4}(0,2)))\\&=F(0,F(0,F(0,F^{3}(0,2))))=F(0,F(0,F(0,F(0,F^{2}(0,2)))))=F(0,F(0,F(0,F(0,F(0,F(0,2))))))\\&=F(0,F(0,F(0,F(0,F(0,3)))))=F(0,F(0,F(0,F(0,4))))=F(0,F(0,F(0,5)))=F(0,F(0,6))=F(0,7)=8\end{aligned}}} when the TRS with the reduction rule b7 {\displaystyle {\text{b7}}} is applied: A ( 2 , 1 ) + 3 = F ( 2 , 4 ) = ⋯ = F 6 ( 0 , 2 ) = F 5 ( 0 , F ( 0 , 2 ) ) = F 5 ( 0 , 3 ) = F 4 ( 0 , F ( 0 , 3 ) ) = F 4 ( 0 , 4 ) = F 3 ( 0 , F ( 0 , 4 ) ) = F 3 ( 0 , 5 ) = F 2 ( 0 , F ( 0 , 5 ) ) = F 2 ( 0 , 6 ) = F ( 0 , F ( 0 , 6 ) ) = F ( 0 , 7 ) = 8 {\displaystyle {\begin{aligned}&A(2,1)+3=F(2,4)=\dots =F^{6}(0,2)=F^{5}(0,F(0,2))=F^{5}(0,3)=F^{4}(0,F(0,3))=F^{4}(0,4)\\&=F^{3}(0,F(0,4))=F^{3}(0,5)=F^{2}(0,F(0,5))=F^{2}(0,6)=F(0,F(0,6))=F(0,7)=8\end{aligned}}} Remarks The computation of A i ⁡ ( n ) {\displaystyle \operatorname {A} _{i}(n)} according to the rules {b1 - b5, b6, r8 - r10} is deeply recursive. The maximum depth of nested F {\displaystyle F} s is A ( i , n ) + 1 {\displaystyle A(i,n)+1} . 
The culprit is the order in which iteration is executed: F n + 1 ( x ) = F ( F n ( x ) ) {\displaystyle F^{n+1}(x)=F(F^{n}(x))} . The first F {\displaystyle F} disappears only after the whole sequence is unfolded. The computation according to the rules {b1 - b5, b7, r8 - r10} is more efficient in that respect. The iteration F n + 1 ( x ) = F n ( F ( x ) ) {\displaystyle F^{n+1}(x)=F^{n}(F(x))} simulates the repeated loop over a block of code. The nesting is limited to ( i + 1 ) {\displaystyle (i+1)} , one recursion level per iterated function. Meyer & Ritchie (1967) showed this correspondence. These considerations concern the recursion depth only. Either way of iterating leads to the same number of reduction steps, involving the same rules (when the rules b6 and b7 are considered "the same"). The reduction of A ( 2 , 1 ) {\displaystyle A(2,1)} for instance converges in 35 steps: 12 × b1, 4 × b2, 1 × b3, 4 × b5, 12 × b6/b7, 1 × r9, 1 × r10. The modus iterandi only affects the order in which the reduction rules are applied. A real gain of execution time can only be achieved by not recalculating subresults over and over again. Memoization is an optimization technique where the results of function calls are cached and returned when the same inputs occur again. See for instance Ward (1993). Grossman & Zeitman (1988) published a cunning algorithm which computes A ( i , n ) {\displaystyle A(i,n)} within O ( i A ( i , n ) ) {\displaystyle {\mathcal {O}}(iA(i,n))} time and within O ( i ) {\displaystyle {\mathcal {O}}(i)} space. 
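As a minimal illustration of the memoization idea (a sketch, not the Grossman & Zeitman algorithm), the 2-ary definition can simply be cached, so that each distinct pair of arguments is evaluated at most once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m, n):
    """2-ary Ackermann with memoized subresults."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Caching keeps the small cases cheap but does not change the fundamental growth: ackermann(4, 2) would still have to produce a number with nearly 20,000 digits.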
=== Huge numbers === To demonstrate how the computation of A ( 4 , 3 ) {\displaystyle A(4,3)} results in many steps and in a large number: A ( 4 , 3 ) → A ( 3 , A ( 4 , 2 ) ) → A ( 3 , A ( 3 , A ( 4 , 1 ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 4 , 0 ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 3 , 1 ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 3 , 0 ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 2 , 1 ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 1 , A ( 2 , 0 ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 1 , A ( 1 , 1 ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 1 , A ( 0 , A ( 1 , 0 ) ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 1 , A ( 0 , A ( 0 , 1 ) ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 1 , A ( 0 , 2 ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 1 , 3 ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , A ( 1 , 2 ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , A ( 0 , A ( 1 , 1 ) ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , A ( 0 , A ( 0 , A ( 1 , 0 ) ) ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , A ( 0 , A ( 0 , A ( 0 , 1 ) ) ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , A ( 0 , A ( 0 , 2 ) ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , A ( 0 , 3 ) ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , A ( 0 , 4 ) ) ) ) ) → A ( 3 , A ( 3 , A ( 3 , A ( 2 , 5 ) ) ) ) ⋮ → A ( 3 , A ( 3 , A ( 3 , 13 ) ) ) ⋮ → A ( 3 , A ( 3 , 65533 ) ) ⋮ → A ( 3 , 2 65536 − 3 ) ⋮ → 2 2 65536 − 3. 
{\displaystyle {\begin{aligned}A(4,3)&\rightarrow A(3,A(4,2))\\&\rightarrow A(3,A(3,A(4,1)))\\&\rightarrow A(3,A(3,A(3,A(4,0))))\\&\rightarrow A(3,A(3,A(3,A(3,1))))\\&\rightarrow A(3,A(3,A(3,A(2,A(3,0)))))\\&\rightarrow A(3,A(3,A(3,A(2,A(2,1)))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(2,0))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(1,1))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(0,A(1,0)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(0,A(0,1)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,A(0,2))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(1,3)))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(1,2))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(1,1)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(0,A(1,0))))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(0,A(0,1))))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,A(0,2)))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,A(0,3))))))\\&\rightarrow A(3,A(3,A(3,A(2,A(0,4)))))\\&\rightarrow A(3,A(3,A(3,A(2,5))))\\&\qquad \vdots \\&\rightarrow A(3,A(3,A(3,13)))\\&\qquad \vdots \\&\rightarrow A(3,A(3,65533))\\&\qquad \vdots \\&\rightarrow A(3,2^{65536}-3)\\&\qquad \vdots \\&\rightarrow 2^{2^{65536}}-3.\\\end{aligned}}} == Table of values == Computing the Ackermann function can be restated in terms of an infinite table. First, place the natural numbers along the top row. To determine a number in the table, take the number immediately to the left. Then use that number to look up the required number in the column given by that number and one row up. If there is no number to its left, simply look at the column headed "1" in the previous row. Here is a small upper-left portion of the table: The numbers here which are only expressed with recursive exponentiation or Knuth arrows are very large and would take up too much space to notate in plain decimal digits. Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. 
This number is constructed with a technique similar to applying the Ackermann function to itself recursively. This is a repeat of the above table, but with the values replaced by the relevant expression from the function definition to show the pattern clearly: == Properties == === General remarks === It may not be immediately obvious that the evaluation of A ( m , n ) {\displaystyle A(m,n)} always terminates. However, the recursion is bounded because in each recursive application either m {\displaystyle m} decreases, or m {\displaystyle m} remains the same and n {\displaystyle n} decreases. Each time that n {\displaystyle n} reaches zero, m {\displaystyle m} decreases, so m {\displaystyle m} eventually reaches zero as well. (Expressed more technically, in each case the pair ( m , n ) {\displaystyle (m,n)} decreases in the lexicographic order on pairs, which is a well-ordering, just like the ordering of single non-negative integers; this means one cannot go down in the ordering infinitely many times in succession.) However, when m {\displaystyle m} decreases there is no upper bound on how much n {\displaystyle n} can increase, and it will often increase greatly. For small values of m like 1, 2, or 3, the Ackermann function grows relatively slowly with respect to n (at most exponentially). For m ≥ 4 {\displaystyle m\geq 4} , however, it grows much more quickly; even A ( 4 , 2 ) {\displaystyle A(4,2)} is about 2.00353×10^19728, and the decimal expansion of A ( 4 , 3 ) {\displaystyle A(4,3)} is very large by any typical measure, about 2.12004×10^(6.03123×10^19727). An interesting aspect is that the only arithmetic operation it ever uses is addition of 1. Its fast growing power is based solely on nested recursion. This also implies that its running time is at least proportional to its output, and so is also extremely huge. In actuality, for most cases the running time is far larger than the output; see above.
A single-argument version f ( n ) = A ( n , n ) {\displaystyle f(n)=A(n,n)} that increases both m {\displaystyle m} and n {\displaystyle n} at the same time dwarfs every primitive recursive function, including very fast-growing functions such as the exponential function, the factorial function, multi- and superfactorial functions, and even functions defined using Knuth's up-arrow notation (except when the indexed up-arrow is used). It can be seen that f ( n ) {\displaystyle f(n)} is roughly comparable to f ω ( n ) {\displaystyle f_{\omega }(n)} in the fast-growing hierarchy. This extreme growth can be exploited to show that f {\displaystyle f} , which is obviously computable on a machine with infinite memory (such as a Turing machine) and so is a computable function, grows faster than any primitive recursive function and is therefore not primitive recursive. === Not primitive recursive === The Ackermann function grows faster than any primitive recursive function and therefore is not itself primitive recursive. Proof sketch: a primitive recursive function defined using up to k recursions must grow slower than f k + 1 ( n ) {\displaystyle f_{k+1}(n)} , the (k+1)-th function in the fast-growing hierarchy, but the Ackermann function grows at least as fast as f ω ( n ) {\displaystyle f_{\omega }(n)} . Specifically, one shows that, for every primitive recursive function f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} , there exists a non-negative integer t {\displaystyle t} , such that for all non-negative integers x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} , f ( x 1 , … , x n ) < A ( t , max i x i ) . {\displaystyle f(x_{1},\ldots ,x_{n})<A(t,\max _{i}x_{i}).} Once this is established, it follows that A {\displaystyle A} itself is not primitive recursive, since otherwise putting x 1 = x 2 = t {\displaystyle x_{1}=x_{2}=t} would lead to the contradiction A ( t , t ) < A ( t , t ) . 
{\displaystyle A(t,t)<A(t,t).} The proof proceeds as follows: define the class A {\displaystyle {\mathcal {A}}} of all functions that grow slower than the Ackermann function A = { f | ∃ t ∀ x 1 ⋯ ∀ x n : f ( x 1 , … , x n ) < A ( t , max i x i ) } {\displaystyle {\mathcal {A}}=\left\{f\,{\bigg |}\,\exists t\ \forall x_{1}\cdots \forall x_{n}:\ f(x_{1},\ldots ,x_{n})<A(t,\max _{i}x_{i})\right\}} and show that A {\displaystyle {\mathcal {A}}} contains all primitive recursive functions. The latter is achieved by showing that A {\displaystyle {\mathcal {A}}} contains the constant functions, the successor function, the projection functions and that it is closed under the operations of function composition and primitive recursion. == Inverse == Since the function f(n) = A(n, n) considered above grows very rapidly, its inverse function, f−1, grows very slowly. This inverse Ackermann function f−1 is usually denoted by α. In fact, α(n) is less than 5 for any practical input size n, since A(4, 4) is on the order of 2 2 2 2 16 {\displaystyle 2^{2^{2^{2^{16}}}}} . This inverse appears in the time complexity of some algorithms, such as the disjoint-set data structure and Chazelle's algorithm for minimum spanning trees. Sometimes Ackermann's original function or other variations are used in these settings, but they all grow at similarly high rates. In particular, some modified functions simplify the expression by eliminating the −3 and similar terms. A two-parameter variation of the inverse Ackermann function can be defined as follows, where ⌊ x ⌋ {\displaystyle \lfloor x\rfloor } is the floor function: α ( m , n ) = min { i ≥ 1 : A ( i , ⌊ m / n ⌋ ) ≥ log 2 ⁡ n } . {\displaystyle \alpha (m,n)=\min\{i\geq 1:A(i,\lfloor m/n\rfloor )\geq \log _{2}n\}.} This function arises in more precise analyses of the algorithms mentioned above, and gives a more refined time bound. 
In the disjoint-set data structure, m represents the number of operations while n represents the number of elements; in the minimum spanning tree algorithm, m represents the number of edges while n represents the number of vertices. Several slightly different definitions of α(m, n) exist; for example, log2 n is sometimes replaced by n, and the floor function is sometimes replaced by a ceiling. Other studies might define an inverse function of one where m is set to a constant, such that the inverse applies to a particular row. The inverse of the Ackermann function is primitive recursive, since it is graph primitive recursive, and it is upper bounded by a primitive recursive function. == Usage == === In computational complexity === The Ackermann function appears in the time complexity of some algorithms, such as vector addition systems and Petri net reachability, thus showing they are computationally infeasible for large instances. The inverse of the Ackermann function appears in some time complexity results. For instance, the disjoint-set data structure takes amortized time per operation proportional to the inverse Ackermann function, and cannot be made faster within the cell-probe model of computational complexity. === In discrete geometry === Certain problems in discrete geometry related to Davenport–Schinzel sequences have complexity bounds in which the inverse Ackermann function α ( n ) {\displaystyle \alpha (n)} appears. For instance, for n {\displaystyle n} line segments in the plane, the unbounded face of the arrangement of the segments has complexity O ( n α ( n ) ) {\displaystyle O(n\alpha (n))} , and some systems of n {\displaystyle n} line segments have an unbounded face of complexity Ω ( n α ( n ) ) {\displaystyle \Omega (n\alpha (n))} . === As a benchmark === The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. 
The first published use of Ackermann's function in this way was in 1970 by Dragoș Vaida and, almost simultaneously, in 1971, by Yngve Sundblad. Sundblad's seminal paper was taken up by Brian Wichmann (co-author of the Whetstone benchmark) in a trilogy of papers written between 1975 and 1982. == See also == Computability theory Double recursion Fast-growing hierarchy Goodstein function Primitive recursive function Recursion (computer science) == Notes == == References == == Bibliography == == External links == "Ackermann function". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Weisstein, Eric W. "Ackermann function". MathWorld. This article incorporates public domain material from Paul E. Black. "Ackermann's function". Dictionary of Algorithms and Data Structures. NIST. An animated Ackermann function calculator Aaronson, Scott (1999). "Who Can Name the Bigger Number?". Ackermann functions. Includes a table of some values. Brubaker, Ben (4 December 2023). "An Easy-Sounding Problem Yields Numbers Too Big for Our Universe". Munafo, Robert. "Large Numbers". Describes several variations on the definition of A. Nivasch, Gabriel (October 2021). "Inverse Ackermann without pain". Archived from the original on 21 August 2007. Retrieved 18 June 2023. Seidel, Raimund. "Understanding the inverse Ackermann function" (PDF). The Ackermann function written in different programming languages (on Rosetta Code) Smith, Harry J. "Ackermann's Function". Archived from the original on 26 October 2009. Some study and programming. Wiernik, Ady; Sharir, Micha (1988). "Planar realizations of nonlinear Davenport–Schinzel sequences by segments". Discrete & Computational Geometry. 3 (1): 15–47. doi:10.1007/BF02187894. MR 0918177.
Wikipedia/Ackermann_function
In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite data type, i.e., a data type formed by combining other types. Two common classes of algebraic types are product types (i.e., tuples and records) and sum types (i.e., tagged or disjoint unions, coproduct types or variant types). The values of a product type typically contain several values, called fields. All values of that type have the same combination of field types. The set of all possible values of a product type is the set-theoretic product, i.e., the Cartesian product, of the sets of all possible values of its field types. The values of a sum type are typically grouped into several classes, called variants. A value of a variant type is usually created with a quasi-functional entity called a constructor. Each variant has its own constructor, which takes a specified number of arguments with specified types. The set of all possible values of a sum type is the set-theoretic sum, i.e., the disjoint union, of the sets of all possible values of its variants. Enumerated types are a special case of sum types in which the constructors take no arguments, as exactly one value is defined for each constructor. Values of algebraic types are analyzed with pattern matching, which identifies a value by its constructor or field names and extracts the data it contains. == History == Algebraic data types were introduced in Hope, a small functional programming language developed in the 1970s at the University of Edinburgh. == Examples == === Singly linked list === One of the most common examples of an algebraic data type is the singly linked list. A list type is a sum type with two variants, Nil for an empty list and Cons x xs for the combination of a new element x with a list xs to create a new list. Here is an example of how a singly linked list would be declared in Haskell:

data List a = Nil | Cons a (List a)

Cons is an abbreviation of construct.
Many languages have special syntax for lists defined in this way. For example, Haskell and ML use [] for Nil, : or :: for Cons, respectively, and square brackets for entire lists. So Cons 1 (Cons 2 (Cons 3 Nil)) would normally be written as 1:2:3:[] or [1,2,3] in Haskell, or as 1::2::3::[] or [1,2,3] in ML. === Binary tree === For a slightly more complex example, binary trees may be implemented in Haskell as follows:

data Tree = Empty | Leaf Int | Node Tree Int Tree

Here, Empty represents an empty tree, Leaf represents a leaf node, and Node organizes the data into branches. In most languages that support algebraic data types, it is possible to define parametric types. Examples are given later in this article. Somewhat similar to a function, a data constructor is applied to arguments of an appropriate type, yielding an instance of the data type to which the type constructor belongs. For example, the data constructor Leaf is logically a function Int -> Tree, meaning that giving an integer as an argument to Leaf produces a value of the type Tree. As Node takes two arguments of the type Tree itself, the datatype is recursive. Operations on algebraic data types can be defined by using pattern matching to retrieve the arguments. For example, consider a function to find the depth of a Tree, given here in Haskell:

depth :: Tree -> Int
depth Empty = 0
depth (Leaf n) = 1
depth (Node l n r) = 1 + max (depth l) (depth r)

Thus, a Tree given to depth can be constructed using any of Empty, Leaf, or Node and must be matched for any of them respectively to deal with all cases. In case of Node, the pattern extracts the subtrees l and r for further processing. === Abstract syntax === Algebraic data types are highly suited to implementing abstract syntax. For example, the following algebraic data type describes a simple language representing numerical expressions:

data Expression = Number Int
                | Add Expression Expression
                | Minus Expression Expression
                | Mult Expression Expression

An element of such a data type would have a form such as Mult (Add (Number 4) (Minus (Number 0) (Number 1))) (Number 2). Writing an evaluation function for this language is a simple exercise; however, more complex transformations also become feasible.
For example, an optimization pass in a compiler might be written as a function taking an abstract expression as input and returning an optimized form. == Pattern matching == Algebraic data types are used to represent values that can be one of several types of things. Each type of thing is associated with an identifier called a constructor, which can be considered a tag for that kind of data. Each constructor can carry with it a different type of data. For example, considering the binary Tree example shown above, a constructor could carry no data (e.g., Empty), or one piece of data (e.g., Leaf has one Int value), or multiple pieces of data (e.g., Node has one Int value and two Tree values). To do something with a value of this Tree algebraic data type, it is deconstructed using a process called pattern matching. This involves matching the data with a series of patterns. The example function depth above pattern-matches its argument with three patterns. When the function is called, it finds the first pattern that matches its argument, performs any variable bindings that are found in the pattern, and evaluates the expression corresponding to the pattern. Each pattern above has a form that resembles the structure of some possible value of this datatype. The first pattern simply matches values of the constructor Empty. The second pattern matches values of the constructor Leaf. Patterns are recursive, so then the data that is associated with that constructor is matched with the pattern "n". In this case, a lowercase identifier represents a pattern that matches any value, which then is bound to a variable of that name — in this case, a variable "n" is bound to the integer value stored in the data type — to be used in the expression to evaluate. 
The recursion in patterns in this example is trivial, but a possible more complex recursive pattern would be something like: Node i (Node j (Leaf 4) x) (Node k y (Node Empty z)) Recursive patterns several layers deep are used for example in balancing red–black trees, which involve cases that require looking at colors several layers deep. The example above is operationally equivalent to the following pseudocode:

switch on (data.constructor)
    case Empty:
        return 0
    case Leaf:
        return 1
    case Node:
        let l := data.field1
        let r := data.field3
        return 1 + max(depth(l), depth(r))

The advantages of algebraic data types can be highlighted by comparison of the above pseudocode with a pattern matching equivalent. Firstly, there is type safety. In the pseudocode example above, programmer diligence is required to not access field2 when the constructor is a Leaf. The type system would have difficulties assigning a static type in a safe way for traditional record data structures. However, in pattern matching such problems are not faced. The type of each extracted value is based on the types declared by the relevant constructor. The number of values that can be extracted is known based on the constructor. Secondly, in pattern matching, the compiler performs exhaustiveness checking to ensure all cases are handled. If one of the cases of the depth function above were missing, the compiler would issue a warning. Exhaustiveness checking may seem easy for simple patterns, but with many complex recursive patterns, the task soon becomes difficult for the average human (or compiler, if it must check arbitrary nested if-else constructs). Similarly, there may be patterns which never match (i.e., are already covered by prior patterns). The compiler can also check and issue warnings for these, as they may indicate an error in reasoning. Algebraic data type pattern matching should not be confused with regular expression string pattern matching. The purpose of both is similar (to extract parts from a piece of data matching certain constraints); however, the implementation is very different.
Pattern matching on algebraic data types matches on the structural properties of an object rather than on the character sequence of strings. == Theory == A general algebraic data type is a possibly recursive sum type of product types. Each constructor tags a product type to separate it from others, or if there is only one constructor, the data type is a product type. Further, the parameter types of a constructor are the factors of the product type. A parameterless constructor corresponds to the empty product. If a datatype is recursive, the entire sum of products is wrapped in a recursive type, and each constructor also rolls the datatype into the recursive type. For example, the Haskell datatype data List a = Nil | Cons a (List a) is represented in type theory as λ α . μ β .1 + α × β {\displaystyle \lambda \alpha .\mu \beta .1+\alpha \times \beta } with constructors n i l α = r o l l ( i n l ⟨ ⟩ ) {\displaystyle \mathrm {nil} _{\alpha }=\mathrm {roll} \ (\mathrm {inl} \ \langle \rangle )} and c o n s α x l = r o l l ( i n r ⟨ x , l ⟩ ) {\displaystyle \mathrm {cons} _{\alpha }\ x\ l=\mathrm {roll} \ (\mathrm {inr} \ \langle x,l\rangle )} . The Haskell List datatype can also be represented in type theory in a slightly different form, thus: μ ϕ . λ α .1 + α × ϕ α {\displaystyle \mu \phi .\lambda \alpha .1+\alpha \times \phi \ \alpha } . (Note how the μ {\displaystyle \mu } and λ {\displaystyle \lambda } constructs are reversed relative to the original.) The original formulation specified a type function whose body was a recursive type. The revised version specifies a recursive function on types. (The type variable ϕ {\displaystyle \phi } is used to suggest a function rather than a base type like β {\displaystyle \beta } , since ϕ {\displaystyle \phi } is like a Greek f.) The function ϕ {\displaystyle \phi } must also now be applied to its argument type α {\displaystyle \alpha } in the body of the type.
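The constructors nil and cons of the first formulation can be imitated concretely with explicit tags (a hedged sketch in Python rather than type theory; the tuple encodings of roll, inl, and inr are assumptions of this illustration):

```python
# Sketch of the encoding List α ≅ μβ. 1 + α × β using tagged values.
# "inl"/"inr" tag the two summands of the sum type; roll is the
# (trivial here) wrapper injecting into the recursive type.

def roll(v):
    return ("roll", v)

def nil():
    return roll(("inl", ()))          # nil = roll (inl ⟨⟩)

def cons(x, l):
    return roll(("inr", (x, l)))      # cons x l = roll (inr ⟨x, l⟩)

def to_pylist(l):
    # Deconstruct by inspecting the tag, mirroring pattern matching.
    _, (tag, payload) = l
    if tag == "inl":
        return []
    x, rest = payload
    return [x] + to_pylist(rest)

print(to_pylist(cons(1, cons(2, nil()))))  # [1, 2]
```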
For the purposes of the List example, these two formulations are not significantly different; but the second form allows expressing so-called nested data types, i.e., those where the recursive type differs parametrically from the original. (For more information on nested data types, see the works of Richard Bird, Lambert Meertens, and Ross Paterson.) In set theory, the equivalent of a sum type is a disjoint union, a set whose elements are pairs consisting of a tag (equivalent to a constructor) and an object of a type corresponding to the tag (equivalent to the constructor arguments). == Programming languages with algebraic data types == Many programming languages incorporate algebraic data types as a first-class notion. == See also == Disjoint union Generalized algebraic data type Initial algebra Quotient type Tagged union Type theory Visitor pattern == References ==
Wikipedia/Algebraic_data_type
In category theory, a branch of mathematics, a monad is a triple ( T , η , μ ) {\displaystyle (T,\eta ,\mu )} consisting of a functor T from a category to itself and two natural transformations η , μ {\displaystyle \eta ,\mu } that satisfy conditions such as associativity. For example, if F , G {\displaystyle F,G} are functors adjoint to each other, then T = G ∘ F {\displaystyle T=G\circ F} together with η , μ {\displaystyle \eta ,\mu } determined by the adjoint relation is a monad. In concise terms, a monad is a monoid in the category of endofunctors of some fixed category (an endofunctor is a functor mapping a category to itself). According to John Baez, a monad can be considered in at least two ways: as a generalized monoid (this is clear, since a monad is a monoid in a certain category), and as a tool for studying algebraic gadgets (for example, a group can be described by a certain monad). Monads are used in the theory of pairs of adjoint functors, and they generalize closure operators on partially ordered sets to arbitrary categories. Monads are also useful in the theory of datatypes, the denotational semantics of imperative programming languages, and in functional programming languages, allowing languages without mutable state to do things such as simulate for-loops; see Monad (functional programming). A monad is also called, especially in old literature, a triple, triad, standard construction and fundamental construction. == Introduction and definition == A monad is a certain type of endofunctor. For example, if F {\displaystyle F} and G {\displaystyle G} are a pair of adjoint functors, with F {\displaystyle F} left adjoint to G {\displaystyle G} , then the composition G ∘ F {\displaystyle G\circ F} is a monad. If F {\displaystyle F} and G {\displaystyle G} are inverse to each other, the corresponding monad is the identity functor. In general, adjunctions are not equivalences—they relate categories of different natures.
The theory of monads matters as part of the effort to capture what it is that adjunctions 'preserve'. The other half of the theory, of what can be learned likewise from consideration of F ∘ G {\displaystyle F\circ G} , is discussed under the dual theory of comonads. === Formal definition === Throughout this article, C {\displaystyle C} denotes a category. A monad on C {\displaystyle C} consists of an endofunctor T : C → C {\displaystyle T\colon C\to C} together with two natural transformations: η : 1 C → T {\displaystyle \eta \colon 1_{C}\to T} (where 1 C {\displaystyle 1_{C}} denotes the identity functor on C {\displaystyle C} ) and μ : T 2 → T {\displaystyle \mu \colon T^{2}\to T} (where T 2 {\displaystyle T^{2}} is the functor T ∘ T {\displaystyle T\circ T} from C {\displaystyle C} to C {\displaystyle C} ). These are required to fulfill the following conditions (sometimes called coherence conditions): μ ∘ T μ = μ ∘ μ T {\displaystyle \mu \circ T\mu =\mu \circ \mu T} (as natural transformations T 3 → T {\displaystyle T^{3}\to T} ); here T μ {\displaystyle T\mu } and μ T {\displaystyle \mu T} are formed by "horizontal composition". μ ∘ T η = μ ∘ η T = 1 T {\displaystyle \mu \circ T\eta =\mu \circ \eta T=1_{T}} (as natural transformations T → T {\displaystyle T\to T} ; here 1 T {\displaystyle 1_{T}} denotes the identity transformation from T {\displaystyle T} to T {\displaystyle T} ). We can rewrite these conditions using the following commutative diagrams: See the article on natural transformations for the explanation of the notations T μ {\displaystyle T\mu } and μ T {\displaystyle \mu T} , or see below for the commutative diagrams that do not use these notions: The first axiom is akin to the associativity in monoids if we think of μ {\displaystyle \mu } as the monoid's binary operation, and the second axiom is akin to the existence of an identity element (which we think of as given by η {\displaystyle \eta } ).
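The two coherence conditions can be checked componentwise on a concrete example (an illustrative sketch, not part of the article): take the list monad on sets, with η the singleton map, μ concatenation, and the functor acting on morphisms by mapping.

```python
from itertools import chain

# List monad on sets: T(X) = lists over X, eta = singleton, mu = flatten.
def eta(x):
    return [x]

def mu(xss):
    return list(chain.from_iterable(xss))

def T(f):
    # Functor action on morphisms: map f over a list.
    return lambda xs: [f(x) for x in xs]

# Associativity mu . T(mu) = mu . mu_T, at a sample element of T^3(X):
xsss = [[[1, 2], [3]], [[], [4, 5]]]
assert mu(T(mu)(xsss)) == mu(mu(xsss))

# Unit laws mu . T(eta) = mu . eta_T = id, at a sample element of T(X):
xs = [1, 2, 3]
assert mu(T(eta)(xs)) == xs
assert mu(eta(xs)) == xs
print("monad laws hold on the samples")
```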
Indeed, a monad on C {\displaystyle C} can alternatively be defined as a monoid in the category E n d C {\displaystyle \mathbf {End} _{C}} whose objects are the endofunctors of C {\displaystyle C} and whose morphisms are the natural transformations between them, with the monoidal structure induced by the composition of endofunctors. === The power set monad === The power set monad is a monad P {\displaystyle {\mathcal {P}}} on the category S e t {\displaystyle \mathbf {Set} } : For a set A {\displaystyle A} let T ( A ) {\displaystyle T(A)} be the power set of A {\displaystyle A} and for a function f : A → B {\displaystyle f\colon A\to B} let T ( f ) {\displaystyle T(f)} be the function between the power sets induced by taking direct images under f {\displaystyle f} . For every set A {\displaystyle A} , we have a map η A : A → T ( A ) {\displaystyle \eta _{A}\colon A\to T(A)} , which assigns to every a ∈ A {\displaystyle a\in A} the singleton { a } {\displaystyle \{a\}} . The function μ A : T ( T ( A ) ) → T ( A ) {\displaystyle \mu _{A}\colon T(T(A))\to T(A)} takes a set of sets to its union. These data describe a monad. === Remarks === The axioms of a monad are formally similar to the monoid axioms. In fact, monads are special cases of monoids, namely they are precisely the monoids among endofunctors End ⁡ ( C ) {\displaystyle \operatorname {End} (C)} , which is equipped with the multiplication given by composition of endofunctors. Composition of monads is not, in general, a monad. For example, the double power set functor P ∘ P {\displaystyle {\mathcal {P}}\circ {\mathcal {P}}} does not admit any monad structure. === Comonads === The categorical dual definition is a formal definition of a comonad (or cotriple); this can be said quickly in the terms that a comonad for a category C {\displaystyle C} is a monad for the opposite category C o p {\displaystyle C^{\mathrm {op} }} . 
It is therefore a functor U {\displaystyle U} from C {\displaystyle C} to itself, with a set of axioms for counit and comultiplication that come from reversing the arrows everywhere in the definition just given. Monads are to monoids as comonads are to comonoids. Every set is a comonoid in a unique way, so comonoids are less familiar in abstract algebra than monoids; however, comonoids in the category of vector spaces with its usual tensor product are important and widely studied under the name of coalgebras. === Terminological history === The notion of monad was invented by Roger Godement in 1958 under the name "standard construction". The monad has also been called "dual standard construction", "triple", "monoid" and "triad". The term "monad" was in use by 1967, in the work of Jean Bénabou. == Examples == === Identity === The identity functor on a category C {\displaystyle C} is a monad. Its multiplication and unit are the identity natural transformation, whose components are the identity morphisms of the objects of C {\displaystyle C} . === Monads arising from adjunctions === Any adjunction F : C ⇄ D : G {\displaystyle F:C\rightleftarrows D:G} gives rise to a monad on C. This very widespread construction works as follows: the endofunctor is the composite T = G ∘ F . {\displaystyle T=G\circ F.} This endofunctor is quickly seen to be a monad, where the unit map stems from the unit map id C → G ∘ F {\displaystyle \operatorname {id} _{C}\to G\circ F} of the adjunction, and the multiplication map is constructed using the counit map of the adjunction: T 2 = G ∘ F ∘ G ∘ F → G ∘ counit ∘ F G ∘ F = T . {\displaystyle T^{2}=G\circ F\circ G\circ F\xrightarrow {G\circ {\text{counit}}\circ F} G\circ F=T.} In fact, any monad can be found as an explicit adjunction of functors using the Eilenberg–Moore category C T {\displaystyle C^{T}} (the category of T {\displaystyle T} -algebras).
==== Double dualization ==== The double dualization monad, for a fixed field k, arises from the adjunction ( − ) ∗ : V e c t k ⇄ V e c t k o p : ( − ) ∗ {\displaystyle (-)^{*}:\mathbf {Vect} _{k}\rightleftarrows \mathbf {Vect} _{k}^{op}:(-)^{*}} where both functors are given by sending a vector space V to its dual vector space V ∗ := Hom ⁡ ( V , k ) {\displaystyle V^{*}:=\operatorname {Hom} (V,k)} . The associated monad sends a vector space V to its double dual V ∗ ∗ {\displaystyle V^{**}} . This monad is discussed, in much greater generality, by Kock (1970). ==== Closure operators on partially ordered sets ==== For categories arising from partially ordered sets ( P , ≤ ) {\displaystyle (P,\leq )} (with a single morphism from x {\displaystyle x} to y {\displaystyle y} if and only if x ≤ y {\displaystyle x\leq y} ), the formalism becomes much simpler: adjoint pairs are Galois connections and monads are closure operators. ==== Free-forgetful adjunctions ==== For example, let G {\displaystyle G} be the forgetful functor from the category Grp of groups to the category Set of sets, and let F {\displaystyle F} be the free group functor from the category of sets to the category of groups. Then F {\displaystyle F} is left adjoint to G {\displaystyle G} . In this case, the associated monad T = G ∘ F {\displaystyle T=G\circ F} takes a set X {\displaystyle X} and returns the underlying set of the free group F r e e ( X ) {\displaystyle \mathrm {Free} (X)} . The unit map of this monad is given by the maps X → T ( X ) {\displaystyle X\to T(X)} including any set X {\displaystyle X} into the set F r e e ( X ) {\displaystyle \mathrm {Free} (X)} in the natural way, as strings of length 1. Further, the multiplication of this monad is the map T ( T ( X ) ) → T ( X ) {\displaystyle T(T(X))\to T(X)} made out of a natural concatenation or 'flattening' of 'strings of strings'. This amounts to two natural transformations.
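For the partially ordered set case described above, a monad is exactly a closure operator: functoriality is monotonicity, η says x ≤ T(x), and μ together with η gives idempotence. A small sketch (the divisor-closure example is hypothetical, chosen only for illustration):

```python
# A monad on a poset is a closure operator: extensive (eta),
# monotone (functoriality), and idempotent (mu together with eta).
# Illustrative closure: close a set of positive integers under divisors.
def cl(S):
    return frozenset(d for n in S for d in range(1, n + 1) if n % d == 0)

A = frozenset({12})
B = frozenset({12, 18})

assert A <= cl(A)              # extensive:  x <= T(x)        (eta)
assert cl(A) <= cl(B)          # monotone:   T on morphisms
assert cl(cl(A)) == cl(A)      # idempotent: T(T(x)) = T(x)   (mu)
print(sorted(cl(A)))  # [1, 2, 3, 4, 6, 12]
```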
The preceding example about free groups can be generalized to any type of algebra in the sense of a variety of algebras in universal algebra. Thus, every such type of algebra gives rise to a monad on the category of sets. Importantly, the algebra type can be recovered from the monad (as the category of Eilenberg–Moore algebras), so monads can also be seen as generalizing varieties of universal algebras. Another monad arising from an adjunction takes T {\displaystyle T} to be the endofunctor on the category of vector spaces which maps a vector space V {\displaystyle V} to its tensor algebra T ( V ) {\displaystyle T(V)} , and which maps linear maps to their tensor product. We then have a natural transformation corresponding to the embedding of V {\displaystyle V} into its tensor algebra, and a natural transformation corresponding to the map from T ( T ( V ) ) {\displaystyle T(T(V))} to T ( V ) {\displaystyle T(V)} obtained by simply expanding all tensor products. === Codensity monads === Under mild conditions, functors not admitting a left adjoint also give rise to a monad, the so-called codensity monad. For example, the inclusion F i n S e t ⊂ S e t {\displaystyle \mathbf {FinSet} \subset \mathbf {Set} } does not admit a left adjoint. Its codensity monad is the monad on sets sending any set X to the set of ultrafilters on X. This and similar examples are discussed in Leinster (2013). === Monads used in denotational semantics === The following monads over the category of sets are used in denotational semantics of imperative programming languages, and analogous constructions are used in functional programming.
==== The maybe monad ==== The endofunctor of the maybe or partiality monad adds a disjoint point: ( − ) ∗ : S e t → S e t {\displaystyle (-)_{*}:\mathbf {Set} \to \mathbf {Set} } X ↦ X ∪ { ∗ } {\displaystyle X\mapsto X\cup \{*\}} The unit is given by the inclusion of a set X {\displaystyle X} into X ∗ {\displaystyle X_{*}} : η X : X → X ∗ {\displaystyle \eta _{X}:X\to X_{*}} x ↦ x {\displaystyle x\mapsto x} The multiplication maps elements of X {\displaystyle X} to themselves, and the two disjoint points in ( X ∗ ) ∗ {\displaystyle (X_{*})_{*}} to the one in X ∗ {\displaystyle X_{*}} . In both functional programming and denotational semantics, the maybe monad models partial computations, that is, computations that may fail. ==== The state monad ==== Given a set S {\displaystyle S} , the endofunctor of the state monad maps each set X {\displaystyle X} to the set of functions S → S × X {\displaystyle S\to S\times X} . The component of the unit at X {\displaystyle X} maps each element x ∈ X {\displaystyle x\in X} to the function η X ( x ) : S → S × X {\displaystyle \eta _{X}(x):S\to S\times X} s ↦ ( s , x ) {\displaystyle s\mapsto (s,x)} The multiplication maps the function f : S → S × ( S → S × X ) , s ↦ ( s ′ , f ′ ) {\displaystyle f:S\to S\times (S\to S\times X),s\mapsto (s',f')} to the function μ X ( f ) : S → S × X {\displaystyle \mu _{X}(f):S\to S\times X} s ↦ f ′ ( s ′ ) {\displaystyle s\mapsto f'(s')} In functional programming and denotational semantics, the state monad models stateful computations. ==== The environment monad ==== Given a set E {\displaystyle E} , the endofunctor of the reader or environment monad maps each set X {\displaystyle X} to the set of functions E → X {\displaystyle E\to X} . Thus, the endofunctor of this monad is exactly the hom functor H o m ( E , − ) {\displaystyle \mathrm {Hom} (E,-)} . 
The component of the unit at X {\displaystyle X} maps each element x ∈ X {\displaystyle x\in X} to the constant function e ↦ x {\displaystyle e\mapsto x} . In functional programming and denotational semantics, the environment monad models computations with access to some read-only data. ==== The list and set monads ==== The list or nondeterminism monad maps a set X to the set of finite sequences (i.e., lists) with elements from X. The unit maps an element x in X to the singleton list [x]. The multiplication concatenates a list of lists into a single list. In functional programming, the list monad is used to model nondeterministic computations. The covariant powerset monad is also known as the set monad, and is also used to model nondeterministic computation. == Algebras for a monad == Given a monad ( T , η , μ ) {\displaystyle (T,\eta ,\mu )} on a category C {\displaystyle C} , it is natural to consider T {\displaystyle T} -algebras, i.e., objects of C {\displaystyle C} acted upon by T {\displaystyle T} in a way which is compatible with the unit and multiplication of the monad. More formally, a T {\displaystyle T} -algebra ( x , h ) {\displaystyle (x,h)} is an object x {\displaystyle x} of C {\displaystyle C} together with an arrow h : T x → x {\displaystyle h\colon Tx\to x} of C {\displaystyle C} called the structure map of the algebra such that the diagrams commute. A morphism f : ( x , h ) → ( x ′ , h ′ ) {\displaystyle f\colon (x,h)\to (x',h')} of T {\displaystyle T} -algebras is an arrow f : x → x ′ {\displaystyle f\colon x\to x'} of C {\displaystyle C} such that the diagram commutes. T {\displaystyle T} -algebras form a category called the Eilenberg–Moore category and denoted by C T {\displaystyle C^{T}} . 
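Concretely (an illustrative sketch, not from the article), an algebra for the list monad is precisely a monoid: the structure map h : Tx → x folds a list with the monoid operation, and the two commuting diagrams just described become checkable identities.

```python
from itertools import chain

# List monad pieces.
eta = lambda x: [x]
mu = lambda xss: list(chain.from_iterable(xss))
T = lambda f: (lambda xs: [f(x) for x in xs])

# Structure map of a T-algebra on the integers: fold with addition.
# (Algebras for the list monad are precisely monoids; here (Z, +, 0).)
h = lambda xs: sum(xs)

x = 7
xss = [[1, 2], [3, 4]]
assert h(eta(x)) == x               # unit diagram:  h . eta = id
assert h(mu(xss)) == h(T(h)(xss))   # assoc diagram: h . mu = h . T(h)
print("algebra diagrams commute on the samples")
```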
=== Examples === ==== Algebras over the free group monad ==== For example, for the free group monad discussed above, a T {\displaystyle T} -algebra is a set X {\displaystyle X} together with a map from the free group generated by X {\displaystyle X} towards X {\displaystyle X} subject to associativity and unitality conditions. Such a structure is equivalent to saying that X {\displaystyle X} is a group itself. ==== Algebras over the distribution monad ==== Another example is the distribution monad D {\displaystyle {\mathcal {D}}} on the category of sets. It is defined by sending a set X {\displaystyle X} to the set of functions f : X → [ 0 , 1 ] {\displaystyle f:X\to [0,1]} with finite support and such that their sum is equal to 1 {\displaystyle 1} . In set-builder notation, this is the set D ( X ) = { f : X → [ 0 , 1 ] : # supp ( f ) < + ∞ ∑ x ∈ X f ( x ) = 1 } {\displaystyle {\mathcal {D}}(X)=\left\{f:X\to [0,1]:{\begin{matrix}\#{\text{supp}}(f)<+\infty \\\sum _{x\in X}f(x)=1\end{matrix}}\right\}} By inspection of the definitions, it can be shown that algebras over the distribution monad are equivalent to convex sets, i.e., sets equipped with operations x + r y {\displaystyle x+_{r}y} for r ∈ [ 0 , 1 ] {\displaystyle r\in [0,1]} subject to axioms resembling the behavior of convex linear combinations r x + ( 1 − r ) y {\displaystyle rx+(1-r)y} in Euclidean space. ==== Algebras over the symmetric monad ==== Another useful example of a monad is the symmetric algebra functor on the category of R {\displaystyle R} -modules for a commutative ring R {\displaystyle R} . Sym ∙ ( − ) : Mod ( R ) → Mod ( R ) {\displaystyle {\text{Sym}}^{\bullet }(-):{\text{Mod}}(R)\to {\text{Mod}}(R)} sending an R {\displaystyle R} -module M {\displaystyle M} to the direct sum of symmetric tensor powers Sym ∙ ( M ) = ⨁ k = 0 ∞ Sym k ( M ) {\displaystyle {\text{Sym}}^{\bullet }(M)=\bigoplus _{k=0}^{\infty }{\text{Sym}}^{k}(M)} where Sym 0 ( M ) = R {\displaystyle {\text{Sym}}^{0}(M)=R} . 
For example, Sym ∙ ( R ⊕ n ) ≅ R [ x 1 , … , x n ] {\displaystyle {\text{Sym}}^{\bullet }(R^{\oplus n})\cong R[x_{1},\ldots ,x_{n}]} where the R {\displaystyle R} -algebra on the right is considered as a module. Then, the algebras over this monad are the commutative R {\displaystyle R} -algebras. There are also algebras over the monads for the alternating tensors Alt ∙ ( − ) {\displaystyle {\text{Alt}}^{\bullet }(-)} and total tensor functors T ∙ ( − ) {\displaystyle T^{\bullet }(-)} , giving anti-symmetric R {\displaystyle R} -algebras and free R {\displaystyle R} -algebras respectively, so Alt ∙ ( R ⊕ n ) = R ( x 1 , … , x n ) T ∙ ( R ⊕ n ) = R ⟨ x 1 , … , x n ⟩ {\displaystyle {\begin{aligned}{\text{Alt}}^{\bullet }(R^{\oplus n})&=R(x_{1},\ldots ,x_{n})\\{\text{T}}^{\bullet }(R^{\oplus n})&=R\langle x_{1},\ldots ,x_{n}\rangle \end{aligned}}} where the first ring is the free anti-symmetric algebra over R {\displaystyle R} in n {\displaystyle n} generators and the second ring is the free algebra over R {\displaystyle R} in n {\displaystyle n} generators. ==== Commutative algebras in E-infinity ring spectra ==== There is an analogous construction for commutative S {\displaystyle \mathbb {S} } -algebras (p. 113), which gives commutative A {\displaystyle A} -algebras for a commutative S {\displaystyle \mathbb {S} } -algebra A {\displaystyle A} . If M A {\displaystyle {\mathcal {M}}_{A}} is the category of A {\displaystyle A} -modules, then the functor P : M A → M A {\displaystyle \mathbb {P} :{\mathcal {M}}_{A}\to {\mathcal {M}}_{A}} is the monad given by P ( M ) = ⋁ j ≥ 0 M j / Σ j {\displaystyle \mathbb {P} (M)=\bigvee _{j\geq 0}M^{j}/\Sigma _{j}} where M j = M ∧ A ⋯ ∧ A M {\displaystyle M^{j}=M\wedge _{A}\cdots \wedge _{A}M} j {\displaystyle j} -times. Then there is an associated category C A {\displaystyle {\mathcal {C}}_{A}} of commutative A {\displaystyle A} -algebras from the category of algebras over this monad.
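The distribution monad D defined earlier can also be sketched concretely (illustrative Python; representing a finitely supported distribution as a list of (value, probability) pairs is an assumption of this sketch). The unit is the Dirac distribution and the multiplication is convex mixing, which is why its algebras are convex sets:

```python
# Distribution monad sketch: a finitely supported distribution on X is
# a list of (value, probability) pairs with probabilities summing to 1.
def eta(x):
    return [(x, 1.0)]            # Dirac distribution at x

def mu(dd):
    # dd is a distribution over distributions: a list of
    # (distribution, weight) pairs. Flattening is convex mixing.
    out = {}
    for dist, p in dd:
        for x, q in dist:
            out[x] = out.get(x, 0.0) + p * q
    return sorted(out.items())

fair = [("H", 0.5), ("T", 0.5)]
sure = [("H", 1.0)]
# Choose one of the two coins with equal probability, then flip it:
mixed = mu([(fair, 0.5), (sure, 0.5)])
assert mixed == [("H", 0.75), ("T", 0.25)]
assert abs(sum(q for _, q in mixed) - 1.0) < 1e-12  # still a distribution
```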
== Monads and adjunctions == As was mentioned above, any adjunction gives rise to a monad. Conversely, every monad arises from some adjunction, namely the free–forgetful adjunction T ( − ) : C ⇄ C T : forget {\displaystyle T(-):C\rightleftarrows C^{T}:{\text{forget}}} whose left adjoint sends an object X to the free T-algebra T(X). However, there are usually several distinct adjunctions giving rise to a monad: let A d j ( C , T ) {\displaystyle \mathbf {Adj} (C,T)} be the category whose objects are the adjunctions ( F , G , e , ε ) {\displaystyle (F,G,e,\varepsilon )} such that ( G F , e , G ε F ) = ( T , η , μ ) {\displaystyle (GF,e,G\varepsilon F)=(T,\eta ,\mu )} and whose arrows are the morphisms of adjunctions that are the identity on C {\displaystyle C} . Then the above free–forgetful adjunction involving the Eilenberg–Moore category C T {\displaystyle C^{T}} is a terminal object in A d j ( C , T ) {\displaystyle \mathbf {Adj} (C,T)} . An initial object is the Kleisli category, which is by definition the full subcategory of C T {\displaystyle C^{T}} consisting only of free T-algebras, i.e., T-algebras of the form T ( x ) {\displaystyle T(x)} for some object x of C. === Monadic adjunctions === Given any adjunction ( F : C → D , G : D → C , η , ε ) {\displaystyle (F:C\to D,G:D\to C,\eta ,\varepsilon )} with associated monad T, the functor G can be factored as D ⟶ G ~ C T → forget C , {\displaystyle D{\overset {\widetilde {G}}{\longrightarrow }}C^{T}\xrightarrow {\text{forget}} C,} i.e., G(Y) can be naturally endowed with a T-algebra structure for any Y in D. The adjunction is called a monadic adjunction if the first functor G ~ {\displaystyle {\tilde {G}}} yields an equivalence of categories between D and the Eilenberg–Moore category C T {\displaystyle C^{T}} . By extension, a functor G : D → C {\displaystyle G\colon D\to C} is said to be monadic if it has a left adjoint F forming a monadic adjunction. 
For example, the free–forgetful adjunction between groups and sets is monadic, since algebras over the associated monad are groups, as was mentioned above. In general, knowing that an adjunction is monadic allows one to reconstruct objects in D out of objects in C and the T-action. === Beck's monadicity theorem === Beck's monadicity theorem gives a necessary and sufficient condition for an adjunction to be monadic. A simplified version of this theorem states that G is monadic if it is conservative (or G reflects isomorphisms, i.e., a morphism in D is an isomorphism if and only if its image under G is an isomorphism in C) and C has, and G preserves, coequalizers. For example, the forgetful functor from the category of compact Hausdorff spaces to sets is monadic. However, the forgetful functor from all topological spaces to sets is not conservative, since there are continuous bijective maps (between non-compact or non-Hausdorff spaces) that fail to be homeomorphisms. Thus, this forgetful functor is not monadic. The dual version of Beck's theorem, characterizing comonadic adjunctions, is relevant in different fields such as topos theory and topics in algebraic geometry related to descent. A first example of a comonadic adjunction is the adjunction − ⊗ A B : M o d A ⇄ M o d B : forget {\displaystyle -\otimes _{A}B:\mathbf {Mod} _{A}\rightleftarrows \mathbf {Mod} _{B}:\operatorname {forget} } for a ring homomorphism A → B {\displaystyle A\to B} between commutative rings. This adjunction is comonadic, by Beck's theorem, if and only if B is faithfully flat as an A-module. It thus allows one to descend B-modules, equipped with a descent datum (i.e., an action of the comonad given by the adjunction) to A-modules. The resulting theory of faithfully flat descent is widely applied in algebraic geometry. == Uses == Monads are used in functional programming to express types of sequential computation (sometimes with side-effects).
See monads in functional programming, and the more mathematically oriented Wikibook module b:Haskell/Category theory. Monads are used in the denotational semantics of impure functional and imperative programming languages. In categorical logic, an analogy has been drawn between the monad-comonad theory, and modal logic via closure operators, interior algebras, and their relation to models of S4 and intuitionistic logics. == Generalization == It is possible to define monads in a 2-category C {\displaystyle C} . Monads described above are monads for C = C a t {\displaystyle C=\mathbf {Cat} } . == See also == Distributive law between monads Lawvere theory Monad (functional programming) Polyad Strong monad Giry monad Monoidal monad == References == == Further reading == Barr, Michael; Wells, Charles (1999), Category Theory for Computing Science (PDF) Godement, Roger (1958), Topologie Algébrique et Théorie des Faisceaux., Actualités Sci. Ind., Publ. Math. Univ. Strasbourg, vol. 1252, Paris: Hermann, pp. viii+283 pp Kock, Anders (1970), "On Double Dualization Monads", Mathematica Scandinavica, 27: 151, doi:10.7146/math.scand.a-10995 Leinster, Tom (2013), "Codensity and the ultrafilter monad" (PDF), Theory and Applications of Categories, 28: 332–370, arXiv:1209.3606, Bibcode:2012arXiv1209.3606L MacLane, Saunders (1978), Categories for the Working Mathematician, Graduate Texts in Mathematics, vol. 5, doi:10.1007/978-1-4757-4721-8, ISBN 978-1-4419-3123-8 Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical Foundations. Special Topics in Order, Topology, Algebra, and Sheaf Theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001. Perrone, Paolo (2024), "Chapter 5. 
Monads and Comonads", Starting Category Theory, World Scientific, doi:10.1142/9789811286018_0005, ISBN 978-981-12-8600-1 Riehl, Emily (2017), Category Theory in Context, Courier Dover Publications, ISBN 9780486820804 Turi, Daniele (1996–2001), Category Theory Lecture Notes (PDF) https://mathoverflow.net/questions/55182/what-is-known-about-the-category-of-monads-on-set Ross Street, The formal theory of monads. == External links == Monads, a YouTube video of five short lectures (with one appendix). John Baez's This Week's Finds in Mathematical Physics (Week 89) covers monads in 2-categories. Monads and comonads, video tutorial. https://medium.com/@felix.kuehl/a-monad-is-just-a-monoid-in-the-category-of-endofunctors-lets-actually-unravel-this-f5d4b7dbe5d6
Wikipedia/Monad_(category_theory)
In category theory, the product of two (or more) objects in a category is a notion designed to capture the essence behind constructions in other areas of mathematics such as the Cartesian product of sets, the direct product of groups or rings, and the product of topological spaces. Essentially, the product of a family of objects is the "most general" object which admits a morphism to each of the given objects. == Definition == === Product of two objects === Fix a category C . {\displaystyle C.} Let X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} be objects of C . {\displaystyle C.} A product of X 1 {\displaystyle X_{1}} and X 2 {\displaystyle X_{2}} is an object X , {\displaystyle X,} typically denoted X 1 × X 2 , {\displaystyle X_{1}\times X_{2},} equipped with a pair of morphisms π 1 : X → X 1 , {\displaystyle \pi _{1}:X\to X_{1},} π 2 : X → X 2 {\displaystyle \pi _{2}:X\to X_{2}} satisfying the following universal property: For every object Y {\displaystyle Y} and every pair of morphisms f 1 : Y → X 1 , {\displaystyle f_{1}:Y\to X_{1},} f 2 : Y → X 2 , {\displaystyle f_{2}:Y\to X_{2},} there exists a unique morphism f : Y → X 1 × X 2 {\displaystyle f:Y\to X_{1}\times X_{2}} such that the following diagram commutes: Whether a product exists may depend on C {\displaystyle C} or on X 1 {\displaystyle X_{1}} and X 2 . {\displaystyle X_{2}.} If it does exist, it is unique up to canonical isomorphism, because of the universal property, so one may speak of the product. This has the following meaning: if X ′ , π 1 ′ , π 2 ′ {\displaystyle X',\pi _{1}',\pi _{2}'} is another product, there exists a unique isomorphism h : X ′ → X 1 × X 2 {\displaystyle h:X'\to X_{1}\times X_{2}} such that π 1 ′ = π 1 ∘ h {\displaystyle \pi _{1}'=\pi _{1}\circ h} and π 2 ′ = π 2 ∘ h {\displaystyle \pi _{2}'=\pi _{2}\circ h} . 
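In the category of sets, the universal property just stated can be checked directly (an illustrative sketch with hypothetical sample functions): the mediating morphism sends y to (f1(y), f2(y)), and composing it with the projections recovers f1 and f2.

```python
# Binary product in Set: Cartesian product with canonical projections.
pi1 = lambda p: p[0]
pi2 = lambda p: p[1]

def pair(f1, f2):
    # The unique mediating morphism <f1, f2> : Y -> X1 x X2.
    return lambda y: (f1(y), f2(y))

Y = [0, 1, 2]
f1 = lambda y: y + 1        # a sample morphism Y -> X1
f2 = lambda y: str(y)       # a sample morphism Y -> X2

f = pair(f1, f2)
# The triangles commute: pi_i . <f1, f2> = f_i on all of Y.
assert all(pi1(f(y)) == f1(y) and pi2(f(y)) == f2(y) for y in Y)
# Uniqueness clause: any g : Y -> X1 x X2 equals <pi1 . g, pi2 . g>.
g = lambda y: (y * 2, -y)
assert all(pair(lambda y: pi1(g(y)), lambda y: pi2(g(y)))(y) == g(y) for y in Y)
print("universal property verified on the sample")
```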
The morphisms π 1 {\displaystyle \pi _{1}} and π 2 {\displaystyle \pi _{2}} are called the canonical projections or projection morphisms; the letter π {\displaystyle \pi } alliterates with projection. Given Y {\displaystyle Y} and f 1 , {\displaystyle f_{1},} f 2 , {\displaystyle f_{2},} the unique morphism f {\displaystyle f} is called the product of morphisms f 1 {\displaystyle f_{1}} and f 2 {\displaystyle f_{2}} and may be denoted ⟨ f 1 , f 2 ⟩ {\displaystyle \langle f_{1},f_{2}\rangle } , f 1 × f 2 {\displaystyle f_{1}\times f_{2}} , or f 1 ⊗ f 2 {\displaystyle f_{1}\otimes f_{2}} . === Product of an arbitrary family === Instead of two objects, we can start with an arbitrary family of objects indexed by a set I . {\displaystyle I.} Given a family ( X i ) i ∈ I {\displaystyle \left(X_{i}\right)_{i\in I}} of objects, a product of the family is an object X {\displaystyle X} equipped with morphisms π i : X → X i , {\displaystyle \pi _{i}:X\to X_{i},} satisfying the following universal property: For every object Y {\displaystyle Y} and every I {\displaystyle I} -indexed family of morphisms f i : Y → X i , {\displaystyle f_{i}:Y\to X_{i},} there exists a unique morphism f : Y → X {\displaystyle f:Y\to X} such that the following diagrams commute for all i ∈ I : {\displaystyle i\in I:} The product is denoted ∏ i ∈ I X i . {\displaystyle \prod _{i\in I}X_{i}.} If I = { 1 , … , n } , {\displaystyle I=\{1,\ldots ,n\},} then it is denoted X 1 × ⋯ × X n {\displaystyle X_{1}\times \cdots \times X_{n}} and the product of morphisms is denoted ⟨ f 1 , … , f n ⟩ . {\displaystyle \langle f_{1},\ldots ,f_{n}\rangle .} === Equational definition === Alternatively, the product may be defined through equations. So, for example, for the binary product: Existence of f {\displaystyle f} is guaranteed by existence of the operation ⟨ ⋅ , ⋅ ⟩ . 
{\displaystyle \langle \cdot ,\cdot \rangle .} Commutativity of the diagrams above is guaranteed by the equality: for all f 1 , f 2 {\displaystyle f_{1},f_{2}} and all i ∈ { 1 , 2 } , {\displaystyle i\in \{1,2\},} π i ∘ ⟨ f 1 , f 2 ⟩ = f i {\displaystyle \pi _{i}\circ \left\langle f_{1},f_{2}\right\rangle =f_{i}} Uniqueness of f {\displaystyle f} is guaranteed by the equality: for all g : Y → X 1 × X 2 , {\displaystyle g:Y\to X_{1}\times X_{2},} ⟨ π 1 ∘ g , π 2 ∘ g ⟩ = g . {\displaystyle \left\langle \pi _{1}\circ g,\pi _{2}\circ g\right\rangle =g.} === As a limit === The product is a special case of a limit. This may be seen by using a discrete category (a family of objects without any morphisms, other than their identity morphisms) as the diagram required for the definition of the limit. The discrete objects will serve as the index of the components and projections. If we regard this diagram as a functor, it is a functor from the index set I {\displaystyle I} considered as a discrete category. The definition of the product then coincides with the definition of the limit, { f } i {\displaystyle \{f\}_{i}} being a cone and projections being the limit (limiting cone). === Universal property === Just as the limit is a special case of the universal construction, so is the product. Starting with the definition given for the universal property of limits, take J {\displaystyle \mathbf {J} } as the discrete category with two objects, so that C J {\displaystyle \mathbf {C} ^{\mathbf {J} }} is simply the product category C × C . {\displaystyle \mathbf {C} \times \mathbf {C} .} The diagonal functor Δ : C → C × C {\displaystyle \Delta :\mathbf {C} \to \mathbf {C} \times \mathbf {C} } assigns to each object X {\displaystyle X} the ordered pair ( X , X ) {\displaystyle (X,X)} and to each morphism f {\displaystyle f} the pair ( f , f ) . 
{\displaystyle (f,f).} The product X 1 × X 2 {\displaystyle X_{1}\times X_{2}} in C {\displaystyle C} is given by a universal morphism from the functor Δ {\displaystyle \Delta } to the object ( X 1 , X 2 ) {\displaystyle \left(X_{1},X_{2}\right)} in C × C . {\displaystyle \mathbf {C} \times \mathbf {C} .} This universal morphism consists of an object X {\displaystyle X} of C {\displaystyle C} and a morphism ( X , X ) → ( X 1 , X 2 ) {\displaystyle (X,X)\to \left(X_{1},X_{2}\right)} which contains projections. == Examples == In the category of sets, the product (in the category theoretic sense) is the Cartesian product. Given a family of sets X i {\displaystyle X_{i}} the product is defined as ∏ i ∈ I X i := { ( x i ) i ∈ I : x i ∈ X i for all i ∈ I } {\displaystyle \prod _{i\in I}X_{i}:=\left\{\left(x_{i}\right)_{i\in I}:x_{i}\in X_{i}{\text{ for all }}i\in I\right\}} with the canonical projections π j : ∏ i ∈ I X i → X j , π j ( ( x i ) i ∈ I ) := x j . {\displaystyle \pi _{j}:\prod _{i\in I}X_{i}\to X_{j},\quad \pi _{j}\left(\left(x_{i}\right)_{i\in I}\right):=x_{j}.} Given any set Y {\displaystyle Y} with a family of functions f i : Y → X i , {\displaystyle f_{i}:Y\to X_{i},} the universal arrow f : Y → ∏ i ∈ I X i {\displaystyle f:Y\to \prod _{i\in I}X_{i}} is defined by f ( y ) := ( f i ( y ) ) i ∈ I . {\displaystyle f(y):=\left(f_{i}(y)\right)_{i\in I}.} Other examples: In the category of topological spaces, the product is the space whose underlying set is the Cartesian product and which carries the product topology. The product topology is the coarsest topology for which all the projections are continuous. In the category of modules over some ring R , {\displaystyle R,} the product is the Cartesian product with addition defined componentwise and distributive multiplication. In the category of groups, the product is the direct product of groups given by the Cartesian product with multiplication defined componentwise. 
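The Cartesian-product description in the category of sets can be sketched directly; a minimal Python illustration, assuming finite sets modelled as lists and an I-indexed family as a dict (the names `product`, `proj`, and `universal_arrow` are illustrative, not from any library):

```python
# Product of an I-indexed family of sets, with canonical projections
# and the universal arrow f(y) = (f_i(y))_{i in I}.
# Dicts keyed on the index set I stand in for tuples (x_i)_{i in I}.

def product(family):
    """Cartesian product of a dict {i: X_i}, as a list of dicts {i: x_i}."""
    result = [dict()]
    for i, X in family.items():
        result = [{**r, i: x} for r in result for x in X]
    return result

def proj(j):
    """Canonical projection pi_j : prod X_i -> X_j."""
    return lambda point: point[j]

def universal_arrow(fs):
    """Given f_i : Y -> X_i for each i, the unique f : Y -> prod X_i."""
    return lambda y: {i: fi(y) for i, fi in fs.items()}

family = {"a": [0, 1], "b": ["x", "y", "z"]}
assert len(product(family)) == 6          # |X_a| * |X_b|

fs = {"a": lambda y: y % 2, "b": lambda y: "xyz"[y % 3]}
f = universal_arrow(fs)
# pi_i . f = f_i: the defining commuting triangles
for y in range(5):
    for i in fs:
        assert proj(i)(f(y)) == fs[i](y)
```

The uniqueness of the mediating arrow is automatic here: any g with pi_i . g = f_i must send y to a point whose i-th component is f_i(y), which is exactly what `universal_arrow` builds.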
In the category of graphs, the product is the tensor product of graphs. In the category of relations, the product is given by the disjoint union. (This may come as a bit of a surprise given that the category of sets is a subcategory of the category of relations.) In the category of algebraic varieties, the product is given by the Segre embedding. In the category of semi-abelian monoids, the product is given by the history monoid. In the category of Banach spaces and short maps, the product carries the l∞ norm. A partially ordered set can be treated as a category, using the order relation as the morphisms. In this case the products and coproducts correspond to greatest lower bounds (meets) and least upper bounds (joins). == Discussion == An example in which the product does not exist: In the category of fields, the product Q × F p {\displaystyle \mathbb {Q} \times F_{p}} does not exist, since there is no field with homomorphisms to both Q {\displaystyle \mathbb {Q} } and F p . {\displaystyle F_{p}.} Another example: An empty product (that is, I {\displaystyle I} is the empty set) is the same as a terminal object, and some categories, such as the category of infinite groups, do not have a terminal object: given any infinite group G {\displaystyle G} there are infinitely many morphisms Z → G , {\displaystyle \mathbb {Z} \to G,} so G {\displaystyle G} cannot be terminal. If I {\displaystyle I} is a set such that all products for families indexed with I {\displaystyle I} exist, then one can treat each product as a functor C I → C . {\displaystyle \mathbf {C} ^{I}\to \mathbf {C} .} How this functor maps objects is obvious. Mapping of morphisms is subtle, because the product of morphisms defined above does not fit. First, consider the binary product functor, which is a bifunctor. For f 1 : X 1 → Y 1 , f 2 : X 2 → Y 2 {\displaystyle f_{1}:X_{1}\to Y_{1},f_{2}:X_{2}\to Y_{2}} we should find a morphism X 1 × X 2 → Y 1 × Y 2 . 
{\displaystyle X_{1}\times X_{2}\to Y_{1}\times Y_{2}.} We choose ⟨ f 1 ∘ π 1 , f 2 ∘ π 2 ⟩ . {\displaystyle \left\langle f_{1}\circ \pi _{1},f_{2}\circ \pi _{2}\right\rangle .} This operation on morphisms is called Cartesian product of morphisms. Second, consider the general product functor. For families { X } i , { Y } i , f i : X i → Y i {\displaystyle \left\{X\right\}_{i},\left\{Y\right\}_{i},f_{i}:X_{i}\to Y_{i}} we should find a morphism ∏ i ∈ I X i → ∏ i ∈ I Y i . {\displaystyle \prod _{i\in I}X_{i}\to \prod _{i\in I}Y_{i}.} We choose the product of morphisms { f i ∘ π i } i . {\displaystyle \left\{f_{i}\circ \pi _{i}\right\}_{i}.} A category where every finite set of objects has a product is sometimes called a Cartesian category (although some authors use this phrase to mean "a category with all finite limits"). The product is associative. Suppose C {\displaystyle C} is a Cartesian category, product functors have been chosen as above, and 1 {\displaystyle 1} denotes a terminal object of C . {\displaystyle C.} We then have natural isomorphisms X × ( Y × Z ) ≃ ( X × Y ) × Z ≃ X × Y × Z , {\displaystyle X\times (Y\times Z)\simeq (X\times Y)\times Z\simeq X\times Y\times Z,} X × 1 ≃ 1 × X ≃ X , {\displaystyle X\times 1\simeq 1\times X\simeq X,} X × Y ≃ Y × X . {\displaystyle X\times Y\simeq Y\times X.} These properties are formally similar to those of a commutative monoid; a Cartesian category with its finite products is an example of a symmetric monoidal category. == Distributivity == For any objects X , Y , and Z {\displaystyle X,Y,{\text{ and }}Z} of a category with finite products and coproducts, there is a canonical morphism X × Y + X × Z → X × ( Y + Z ) , {\displaystyle X\times Y+X\times Z\to X\times (Y+Z),} where the plus sign here denotes the coproduct. 
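In the category of sets this canonical morphism can be written out elementwise, and there it is a bijection, since Set is a distributive category; a small sketch, with the disjoint union modelled by tagging elements with 0 or 1 (the name `canonical` is illustrative):

```python
# The canonical map X x Y + X x Z -> X x (Y + Z) in Set.
# A tagged pair (tag, (x, w)) represents an element of the coproduct.

def canonical(tagged_pair):
    tag, (x, w) = tagged_pair
    return (x, (tag, w))

X, Y, Z = [0, 1], ["a"], ["b", "c"]
domain = ([(0, (x, y)) for x in X for y in Y]
          + [(1, (x, z)) for x in X for z in Z])
codomain = ([(x, (0, y)) for x in X for y in Y]
            + [(x, (1, z)) for x in X for z in Z])

image = [canonical(t) for t in domain]
assert sorted(image) == sorted(codomain)   # surjective onto X x (Y + Z)
assert len(set(image)) == len(domain)      # injective
```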
To see this, note that the universal property of the coproduct X × Y + X × Z {\displaystyle X\times Y+X\times Z} guarantees the existence of unique arrows filling out the following diagram (the induced arrows are dashed): The universal property of the product X × ( Y + Z ) {\displaystyle X\times (Y+Z)} then guarantees a unique morphism X × Y + X × Z → X × ( Y + Z ) {\displaystyle X\times Y+X\times Z\to X\times (Y+Z)} induced by the dashed arrows in the above diagram. A distributive category is one in which this morphism is actually an isomorphism. Thus in a distributive category, there is the canonical isomorphism X × ( Y + Z ) ≃ ( X × Y ) + ( X × Z ) . {\displaystyle X\times (Y+Z)\simeq (X\times Y)+(X\times Z).} == See also == Coproduct – the dual of the product Diagonal functor – the left adjoint of the product functor Limits and colimits – Mathematical concept Equalizer – Set of arguments where two or more functions have the same value Inverse limit – Construction in category theory Cartesian closed category – Type of category in category theory Categorical pullback – Most general completion of a commutative square given two morphisms with same codomain == References == Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990). Abstract and Concrete Categories (PDF). John Wiley & Sons. ISBN 0-471-60922-6. Barr, Michael; Wells, Charles (1999). Category Theory for Computing Science (PDF). Les Publications CRM Montreal (publication PM023). Archived from the original (PDF) on 2016-03-04. Retrieved 2016-03-21. Chapter 5. Mac Lane, Saunders (1998). Categories for the Working Mathematician. Graduate Texts in Mathematics 5 (2nd ed.). Springer. ISBN 0-387-98403-8. Definition 2.1.1 in Borceux, Francis (1994). Handbook of categorical algebra. Encyclopedia of mathematics and its applications 50–51, 53 [i.e. 52]. Vol. 1. Cambridge University Press. p. 39. ISBN 0-521-44178-1.
== External links == Interactive Web page which generates examples of products in the category of finite sets. Written by Jocelyn Paine. Product at the nLab
Wikipedia/Product_(category_theory)
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear". The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead. An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space. Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra. Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients. 
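The cross-product example above can be checked numerically; a minimal sketch for R^3, verifying that the cross product is non-associative yet satisfies the Jacobi identity (the helper names are illustrative):

```python
# The cross product makes R^3 a non-associative algebra over R.
# Vectors are 3-tuples of integers, so all checks below are exact.

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

x, y, z = (1, 0, 0), (0, 1, 0), (1, 2, 3)

# Non-associative: x x (y x z) != (x x y) x z in general
assert cross(x, cross(y, z)) != cross(cross(x, y), z)

# Jacobi identity: x x (y x z) + y x (z x x) + z x (x x y) = 0
jacobi = add(cross(x, cross(y, z)),
             add(cross(y, cross(z, x)), cross(z, cross(x, y))))
assert jacobi == (0, 0, 0)
```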
== Definition and motivation == === Motivating examples === === Definition === Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A , and all elements (often called scalars) a and b in K: Right distributivity: (x + y) · z = x · z + y · z Left distributivity: z · (x + y) = z · x + z · y Compatibility with scalars: (ax) · (by) = (ab) (x · y). These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra. When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs. == Basic concepts == === Algebra homomorphisms === Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as H o m K -alg ( A , B ) . {\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).} A K-algebra isomorphism is a bijective K-algebra homomorphism. 
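The three bilinearity axioms above can be verified directly for the motivating example of 2×2 real matrices; a minimal sketch (the helper names are illustrative):

```python
# Checking right distributivity, left distributivity, and scalar
# compatibility for the algebra of 2x2 matrices under matrix product.

def m_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def m_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def m_smul(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

x = [[1, 2], [3, 4]]
y = [[0, 1], [1, 0]]
z = [[2, 0], [0, 3]]
a, b = 2, -5

# Right distributivity: (x + y) . z = x . z + y . z
assert m_mul(m_add(x, y), z) == m_add(m_mul(x, z), m_mul(y, z))
# Left distributivity: z . (x + y) = z . x + z . y
assert m_mul(z, m_add(x, y)) == m_add(m_mul(z, x), m_mul(z, y))
# Compatibility with scalars: (ax) . (by) = (ab)(x . y)
assert m_mul(m_smul(a, x), m_smul(b, y)) == m_smul(a * b, m_mul(x, y))
```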
=== Subalgebras and ideals === A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L. In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra. A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements. x + y is in L (L is closed under addition), cx is in L (L is closed under scalar multiplication), z · x is in L (L is closed under left multiplication by arbitrary elements). If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra. This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2). 
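The three left-ideal conditions can be illustrated concretely: in the algebra of 2×2 real matrices, the matrices whose second column is zero form a left ideal but not a right ideal. A minimal sketch, with illustrative names:

```python
# L = { [[a, 0], [b, 0]] } inside the 2x2 matrices: closed under
# addition (1), scalar multiplication (2), and left multiplication by
# arbitrary matrices (3), but not under right multiplication.

def m_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def in_L(A):
    return A[0][1] == 0 and A[1][1] == 0

x = [[1, 0], [2, 0]]            # elements of L
y = [[3, 0], [-1, 0]]
z = [[5, 6], [7, 8]]            # arbitrary element of the algebra

assert in_L([[x[i][j] + y[i][j] for j in range(2)] for i in range(2)])  # (1)
assert in_L([[4 * x[i][j] for j in range(2)] for i in range(2)])        # (2)
assert in_L(m_mul(z, x))                                                # (3)
assert not in_L(m_mul(x, z))    # right multiplication can leave L
```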
=== Extension of scalars === If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product V F := V ⊗ K F {\displaystyle V_{F}:=V\otimes _{K}F} . So if A is an algebra over K, then A F {\displaystyle A_{F}} is an algebra over F. == Kinds of algebras and examples == Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different. === Unital algebra === An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra. === Zero algebra === An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative. A unital zero algebra is the direct sum ⁠ K ⊕ V {\displaystyle K\oplus V} ⁠ of a field ⁠ K {\displaystyle K} ⁠ and a ⁠ K {\displaystyle K} ⁠-vector space ⁠ V {\displaystyle V} ⁠, that is equipped by the only multiplication that is zero on the vector space (or module), and makes it an unital algebra. More precisely, every element of the algebra may be uniquely written as ⁠ k + v {\displaystyle k+v} ⁠ with ⁠ k ∈ K {\displaystyle k\in K} ⁠ and ⁠ v ∈ V {\displaystyle v\in V} ⁠, and the product is the only bilinear operation such that ⁠ v w = 0 {\displaystyle vw=0} ⁠ for every ⁠ v {\displaystyle v} ⁠ and ⁠ w {\displaystyle w} ⁠ in ⁠ V {\displaystyle V} ⁠. 
So, if ⁠ k 1 , k 2 ∈ K {\displaystyle k_{1},k_{2}\in K} ⁠ and ⁠ v 1 , v 2 ∈ V {\displaystyle v_{1},v_{2}\in V} ⁠, one has ( k 1 + v 1 ) ( k 2 + v 2 ) = k 1 k 2 + ( k 1 v 2 + k 2 v 1 ) . {\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).} A classical example of unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one dimensional real vector space. This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module". Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module ⁠ V {\displaystyle V} ⁠ correspond exactly to the ideals of ⁠ K ⊕ V {\displaystyle K\oplus V} ⁠ that are contained in ⁠ V {\displaystyle V} ⁠. For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module allows extending this theory as a Gröbner basis theory for submodules of a free module. This extension allows, for computing a Gröbner basis of a submodule, to use, without any modification, any algorithm and any software for computing Gröbner bases of ideals. Similarly, unital zero algebras allow to deduce straightforwardly the Lasker–Noether theorem for modules (over a commutative ring) from the original Lasker–Noether theorem for ideals. === Associative algebra === Examples of associative algebras include the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication. group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication. the commutative algebra K[x] of all polynomials over K (see polynomial ring). 
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative. Incidence algebras are built on certain partially ordered sets. algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis. === Non-associative algebra === A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map A × A → A {\displaystyle A\times A\rightarrow A} . The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative". Examples detailed in the main article include: Euclidean space R3 with multiplication given by the vector cross product Octonions Lie algebras Jordan algebras Alternative algebras Flexible algebras Power-associative algebras == Algebras and rings == The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism η : K → Z ( A ) , {\displaystyle \eta \colon K\to Z(A),} where Z(A) is the center of A. Since η is a ring homomorphism, then one must have either that A is the zero ring, or that η is injective. This definition is equivalent to that above, with scalar multiplication K × A → A {\displaystyle K\times A\to A} given by ( k , a ) ↦ η ( k ) a . 
{\displaystyle (k,a)\mapsto \eta (k)a.} Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as f ( k a ) = k f ( a ) {\displaystyle f(ka)=kf(a)} for all k ∈ K {\displaystyle k\in K} and a ∈ A {\displaystyle a\in A} . In other words, the following diagram commutes: K η A ↙ η B ↘ A f ⟶ B {\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}} == Structure coefficients == For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A. Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws. Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n3 structure coefficients ci,j,k, which are scalars. These structure coefficients determine the multiplication in A via the following rule: e i e j = ∑ k = 1 n c i , j , k e k {\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}} where e1,...,en form a basis of A. Note however that several different sets of structure coefficients can give rise to isomorphic algebras. In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. 
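The rule eiej = Σk ci,j,k ek turns into a small computation once structure coefficients are given; a sketch specifying the complex numbers as a 2-dimensional R-algebra (names illustrative):

```python
# C as an R-algebra with basis e1 = 1, e2 = i, specified entirely by
# structure coefficients c[i][j][k]:
# e1 e1 = e1, e1 e2 = e2 e1 = e2, e2 e2 = -e1.

n = 2
c = [[[0.0] * n for _ in range(n)] for _ in range(n)]
c[0][0][0] = 1.0    # e1 e1 = e1
c[0][1][1] = 1.0    # e1 e2 = e2
c[1][0][1] = 1.0    # e2 e1 = e2
c[1][1][0] = -1.0   # e2 e2 = -e1

def multiply(x, y):
    """Multiply coordinate vectors via e_i e_j = sum_k c[i][j][k] e_k."""
    return [sum(c[i][j][k] * x[i] * y[j] for i in range(n) for j in range(n))
            for k in range(n)]

# (1 + 2i)(3 + 4i) = 3 + 4i + 6i + 8 i^2 = -5 + 10i
assert multiply([1, 2], [3, 4]) == [-5, 10]
```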
Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as eiej = ci,jkek. Applied to vectors written in index notation, this becomes (xy)k = ci,jkxiyj. If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism. == Classification of low-dimensional unital associative algebras over the complex numbers == Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study. There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element, 1 ⋅ 1 = 1 , 1 ⋅ a = a , a ⋅ 1 = a . {\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.} It remains to specify a a = 1 {\displaystyle \textstyle aa=1} for the first algebra, a a = 0 {\displaystyle \textstyle aa=0} for the second algebra. There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b.
Taking into account the definition of an identity element, it is sufficient to specify a a = a , b b = b , a b = b a = 0 {\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0} for the first algebra, a a = a , b b = 0 , a b = b a = 0 {\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0} for the second algebra, a a = b , b b = 0 , a b = b a = 0 {\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0} for the third algebra, a a = 1 , b b = 0 , a b = − b a = b {\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b} for the fourth algebra, a a = 0 , b b = 0 , a b = b a = 0 {\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0} for the fifth algebra. The fourth of these algebras is non-commutative, and the others are commutative. == Generalization: algebra over a ring == In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space). === Associative algebras over rings === A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to H × H {\displaystyle \mathbb {H} \times \mathbb {H} } , the direct product of two quaternion algebras. The center of that ring is R × R {\displaystyle \mathbb {R} \times \mathbb {R} } , and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional R {\displaystyle \mathbb {R} } -algebra. In commutative algebra, if A is a commutative ring, then any unital ring homomorphism R → A {\displaystyle R\to A} defines an R-module structure on A, and this is what is known as the R-algebra structure. 
So a ring comes with a natural Z {\displaystyle \mathbb {Z} } -module structure, since one can take the unique homomorphism Z → A {\displaystyle \mathbb {Z} \to A} . On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field. == See also == Algebra over an operad Alternative algebra Clifford algebra Composition algebra Differential algebra Free algebra Geometric algebra Max-plus algebra Mutation (algebra) Operator algebra Zariski's lemma == Notes == == References == Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
Wikipedia/Algebras_over_a_field
In mathematics, an initial algebra is an initial object in the category of F-algebras for a given endofunctor F. This initiality provides a general framework for induction and recursion. == Examples == === Functor 1 + (−) === Consider the endofunctor 1 + (−), i.e. F : Set → Set sending X to 1 + X, where 1 is a one-point (singleton) set, a terminal object in the category. An algebra for this endofunctor is a set X (called the carrier of the algebra) together with a function f : (1 + X) → X. Defining such a function amounts to defining a point x ∈ X and a function X → X. Define zero : 1 ⟶ N ∗ ⟼ 0 {\displaystyle {\begin{aligned}\operatorname {zero} \colon 1&\longrightarrow \mathbf {N} \\*&\longmapsto 0\end{aligned}}} and succ : N ⟶ N n ⟼ n + 1. {\displaystyle {\begin{aligned}\operatorname {succ} \colon \mathbf {N} &\longrightarrow \mathbf {N} \\n&\longmapsto n+1.\end{aligned}}} Then the set N of natural numbers together with the function [zero,succ]: 1 + N → N is an initial F-algebra. The initiality (the universal property for this case) is not hard to establish; the unique homomorphism to an arbitrary F-algebra (A, [e, f]), for e: 1 → A an element of A and f: A → A a function on A, is the function sending the natural number n to fn(e), that is, f(f(…(f(e))…)), the n-fold application of f to e. The set of natural numbers is the carrier of an initial algebra for this functor: the point is zero and the function is the successor function. === Functor 1 + N × (−) === For a second example, consider the endofunctor 1 + N × (−) on the category of sets, where N is the set of natural numbers. An algebra for this endofunctor is a set X together with a function 1 + N × X → X. To define such a function, we need a point x ∈ X and a function N × X → X. The set of finite lists of natural numbers is an initial algebra for this functor. The point is the empty list, and the function is cons, taking a number and a finite list, and returning a new finite list with the number at the head. 
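The unique homomorphism n ↦ fn(e) described above is exactly a fold over the natural numbers; a minimal sketch (the name `fold_nat` is illustrative):

```python
# Initiality of (N, [zero, succ]) for F(X) = 1 + X: the unique
# F-algebra homomorphism into (A, [e, f]) sends n to f^n(e).

def fold_nat(e, f):
    """The unique F-algebra homomorphism N -> A determined by e and f."""
    def h(n):
        return e if n == 0 else f(h(n - 1))
    return h

# Target algebra A = N with e = 1 and f = doubling gives h(n) = 2^n
h = fold_nat(1, lambda a: 2 * a)
assert [h(n) for n in range(5)] == [1, 2, 4, 8, 16]

# The homomorphism conditions: h(0) = e and h(succ n) = f(h(n))
f = lambda a: a + 3
h2 = fold_nat(10, f)
assert h2(0) == 10
assert all(h2(n + 1) == f(h2(n)) for n in range(20))
```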
In categories with binary coproducts, the definitions just given are equivalent to the usual definitions of a natural number object and a list object, respectively. == Final coalgebra == Dually, a final coalgebra is a terminal object in the category of F-coalgebras. The finality provides a general framework for coinduction and corecursion. For example, using the same functor 1 + (−) as before, a coalgebra is defined as a set X together with a function f : X → (1 + X). Defining such a function amounts to defining a partial function f': X ⇸ X whose domain is formed by those x ∈ X {\displaystyle x\in X} for which f(x) does not belong to 1. Having such a structure, we can define a chain of sets: X0 being a subset of X on which f′ is not defined, X1 which elements map into X0 by f′, X2 which elements map into X1 by f′, etc., and Xω containing the remaining elements of X. With this in view, the set N ∪ { ω } {\displaystyle \mathbf {N} \cup \{\omega \}} , consisting of the set of natural numbers extended with a new element ω, is the carrier of the final coalgebra, where f ′ {\displaystyle f'} is the predecessor function (the inverse of the successor function) on the positive naturals, but acts like the identity on the new element ω: f(n + 1) = n, f(ω) = ω. This set N ∪ { ω } {\displaystyle \mathbf {N} \cup \{\omega \}} that is the carrier of the final coalgebra of 1 + (−) is known as the set of conatural numbers. For a second example, consider the same functor 1 + N × (−) as before. In this case the carrier of the final coalgebra consists of all lists of natural numbers, finite as well as infinite. The operations are a test function testing whether a list is empty, and a deconstruction function defined on non-empty lists returning a pair consisting of the head and the tail of the input list. == Theorems == Initial algebras are minimal (i.e., have no proper subalgebra). Final coalgebras are simple (i.e., have no proper quotients). 
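The final-coalgebra map for 1 + (−) described above can be approximated computationally: it sends x to the number of times the partial function f′ applies before becoming undefined, with ω for elements on which it never does. A sketch that caps the search, since genuine corecursion is infinite; the cap and the names are illustrative assumptions:

```python
# Unfold into the final coalgebra N u {omega} of F(X) = 1 + X,
# with omega modelled as math.inf and "f'(x) undefined" as None.

import math

def unfold(f_prime, x, cap=1000):
    """Count applications of the partial map f_prime before it is undefined."""
    n = 0
    while n < cap:
        x = f_prime(x)
        if x is None:
            return n
        n += 1
    return math.inf     # reached the cap: (approximately) omega

# X = integers, f'(x) = x - 2 for positive x, undefined otherwise
f_prime = lambda x: x - 2 if x > 0 else None
assert unfold(f_prime, 7) == 4        # 7 -> 5 -> 3 -> 1 -> -1 -> undefined
assert unfold(f_prime, 0) == 0        # f' undefined immediately

g = lambda x: x + 1                   # never undefined: maps to omega
assert unfold(g, 0) == math.inf
```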
== Use in computer science == Various finite data structures used in programming, such as lists and trees, can be obtained as initial algebras of specific endofunctors. While there may be several initial algebras for a given endofunctor, they are unique up to isomorphism, which informally means that the "observable" properties of a data structure can be adequately captured by defining it as an initial algebra. To obtain the type List(A) of lists whose elements are members of set A, consider that the list-forming operations are: n i l : 1 → L i s t ( A ) {\displaystyle \mathrm {nil} \colon 1\to \mathrm {List} (A)} c o n s : A × L i s t ( A ) → L i s t ( A ) {\displaystyle \mathrm {cons} \colon A\times \mathrm {List} (A)\to \mathrm {List} (A)} Combined into one function, they give: [ n i l , c o n s ] : ( 1 + A × L i s t ( A ) ) → L i s t ( A ) , {\displaystyle [\mathrm {nil} ,\mathrm {cons} ]\colon (1+A\times \mathrm {List} (A))\to \mathrm {List} (A),} which makes this an F-algebra for the endofunctor F sending X to 1 + (A × X). It is, in fact, the initial F-algebra. Initiality is established by the function known as foldr in functional programming languages such as Haskell and ML. Likewise, binary trees with elements at the leaves can be obtained as the initial algebra [ t i p , j o i n ] : A + ( T r e e ( A ) × T r e e ( A ) ) → T r e e ( A ) . {\displaystyle [\mathrm {tip} ,\mathrm {join} ]\colon A+(\mathrm {Tree} (A)\times \mathrm {Tree} (A))\to \mathrm {Tree} (A).} Types obtained this way are known as algebraic data types. Types defined by using the least fixed point construct with functor F can be regarded as an initial F-algebra, provided that parametricity holds for the type. In a dual way, a similar relationship exists between the notions of greatest fixed point and terminal F-coalgebra, with applications to coinductive types. These can be used for allowing potentially infinite objects while maintaining the strong normalization property.
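The initiality witnessed by foldr can be sketched with Python lists standing in for List(A); this mirrors, but is not, the Haskell definition:

```python
# foldr as the unique homomorphism out of the initial algebra
# [nil, cons] for F(X) = 1 + A x X.

def foldr(e, f, xs):
    """Replace nil by e and cons by f, combining from the right."""
    acc = e
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

# Target algebra (N, [0, +]) gives the sum of a list
assert foldr(0, lambda x, acc: x + acc, [1, 2, 3, 4]) == 10

# Folding with nil and cons themselves rebuilds the list; this is the
# uniqueness half of initiality
cons = lambda x, xs: [x] + xs
assert foldr([], cons, [1, 2, 3]) == [1, 2, 3]

# Homomorphism condition: foldr(e, f, cons(a, xs)) = f(a, foldr(e, f, xs))
f = lambda x, acc: 2 * x + acc
assert foldr(5, f, cons(7, [1, 2])) == f(7, foldr(5, f, [1, 2]))
```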
In the strongly normalizing (each program terminates) Charity programming language, coinductive data types can be used for achieving surprising results, e.g. defining lookup constructs to implement such “strong” functions like the Ackermann function. == See also == Algebraic data type Catamorphism Anamorphism == Notes == == External links == Categorical programming with inductive and coinductive types by Varmo Vene Recursive types for free! by Philip Wadler, University of Glasgow, 1990-2014. Initial Algebra and Final Coalgebra Semantics for Concurrency by J.J.M.M. Rutten and D. Turi Initiality and finality from CLiki Typed Tagless Final Interpreters by Oleg Kiselyov
Wikipedia/Initial_algebra
In mathematics, specifically in ring theory, a nilpotent algebra over a commutative ring is an algebra over a commutative ring, in which for some positive integer n every product containing at least n elements of the algebra is zero. The concept of a nilpotent Lie algebra has a different definition, which depends upon the Lie bracket. (There is no Lie bracket for many algebras over commutative rings; a Lie algebra involves its Lie bracket, whereas, there is no Lie bracket defined in the general case of an algebra over a commutative ring.) Another possible source of confusion in terminology is the quantum nilpotent algebra, a concept related to quantum groups and Hopf algebras. == Formal definition == An associative algebra A {\displaystyle A} over a commutative ring R {\displaystyle R} is defined to be a nilpotent algebra if and only if there exists some positive integer n {\displaystyle n} such that 0 = y 1 y 2 ⋯ y n {\displaystyle 0=y_{1}\ y_{2}\ \cdots \ y_{n}} for all y 1 , y 2 , … , y n {\displaystyle y_{1},\ y_{2},\ \ldots ,\ y_{n}} in the algebra A {\displaystyle A} . The smallest such n {\displaystyle n} is called the index of the algebra A {\displaystyle A} . In the case of a non-associative algebra, the definition is that every different multiplicative association of the n {\displaystyle n} elements is zero. == Nil algebra == A power associative algebra in which every element of the algebra is nilpotent is called a nil algebra. Nilpotent algebras are trivially nil, whereas nil algebras may not be nilpotent, as each element being nilpotent does not force products of distinct elements to vanish. == See also == Algebraic structure (a much more general term) nil-Coxeter algebra Lie algebra Example of a non-associative algebra == References == Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 == External links == Nilpotent algebra – Encyclopedia of Mathematics
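As a concrete illustration (our own example, not from the text above): the strictly upper triangular 3×3 matrices over a commutative ring form a nilpotent algebra of index 3, which a short Python check makes explicit:

```python
# The strictly upper triangular 3x3 matrices over the integers form a
# nilpotent algebra of index 3: every product of 3 elements is zero,
# while products of 2 elements need not be.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def strict_upper(p, q, r):
    # generic strictly upper triangular 3x3 matrix
    return [[0, p, q],
            [0, 0, r],
            [0, 0, 0]]

zero = [[0] * 3 for _ in range(3)]

x, y, z = strict_upper(1, 2, 3), strict_upper(4, 5, 6), strict_upper(7, 8, 9)

pair = matmul(x, y)        # a product of two elements, generally nonzero
triple = matmul(pair, z)   # a product of three elements, always zero here
```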
Wikipedia/Nil_algebra
In mathematical genetics, a genetic algebra is a (possibly non-associative) algebra used to model inheritance in genetics. Some variations of these algebras are called train algebras, special train algebras, gametic algebras, Bernstein algebras, copular algebras, zygotic algebras, and baric algebras (also called weighted algebra). The study of these algebras was started by Ivor Etherington (1939). In applications to genetics, these algebras often have a basis corresponding to the genetically different gametes, and the structure constants of the algebra encode the probabilities of producing offspring of various types. The laws of inheritance are then encoded as algebraic properties of the algebra. For surveys of genetic algebras see Bertrand (1966), Wörz-Busekros (1980) and Reed (1997). == Baric algebras == Baric algebras (or weighted algebras) were introduced by Etherington (1939). A baric algebra over a field K is a possibly non-associative algebra over K together with a homomorphism w, called the weight, from the algebra to K. == Bernstein algebras == A Bernstein algebra, based on the work of Sergei Natanovich Bernstein (1923) on the Hardy–Weinberg law in genetics, is a (possibly non-associative) baric algebra B over a field K with a weight homomorphism w from B to K satisfying ( x 2 ) 2 = w ( x ) 2 x 2 {\displaystyle (x^{2})^{2}=w(x)^{2}x^{2}} . Every such algebra has idempotents e of the form e = a 2 {\displaystyle e=a^{2}} with w ( a ) = 1 {\displaystyle w(a)=1} . The Peirce decomposition of B corresponding to e is B = K e ⊕ U e ⊕ Z e {\displaystyle B=Ke\oplus U_{e}\oplus Z_{e}} where U e = { a ∈ ker ⁡ w : e a = a / 2 } {\displaystyle U_{e}=\{a\in \ker w:ea=a/2\}} and Z e = { a ∈ ker ⁡ w : e a = 0 } {\displaystyle Z_{e}=\{a\in \ker w:ea=0\}} . Although these subspaces depend on e, their dimensions are invariant and constitute the type of B. An exceptional Bernstein algebra is one with U e 2 = 0 {\displaystyle U_{e}^{2}=0} . 
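To make the Bernstein identity concrete, here is a small Python check on the classical one-locus, two-allele gametic algebra (basis a, b with a·a = a, b·b = b, a·b = ½(a + b), and weight w = sum of coefficients); representing elements as coefficient pairs is our own illustrative choice:

```python
from fractions import Fraction as F

# Elements alpha*a + beta*b are stored as pairs (alpha, beta).
# Multiplication uses a*a = a, b*b = b, a*b = b*a = (a + b)/2,
# and the weight homomorphism is w(x) = alpha + beta.

def mul(x, y):
    (a1, b1), (a2, b2) = x, y
    cross = F(a1 * b2 + a2 * b1, 2)   # contribution of the mixed a*b terms
    return (a1 * a2 + cross, b1 * b2 + cross)

def w(x):
    return x[0] + x[1]

def scale(c, x):
    return (c * x[0], c * x[1])

x = (F(3), F(-1))          # an arbitrary element, with w(x) = 2
x2 = mul(x, x)
# Bernstein identity: (x^2)^2 = w(x)^2 * x^2
bernstein_holds = mul(x2, x2) == scale(w(x) ** 2, x2)
```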
== Copular algebras == Copular algebras were introduced by Etherington (1939, section 8) == Evolution algebras == An evolution algebra over a field is an algebra with a basis on which multiplication is defined by the product of distinct basis terms being zero and the square of each basis element being a linear form in basis elements. A real evolution algebra is one defined over the reals: it is non-negative if the structure constants in the linear form are all non-negative. An evolution algebra is necessarily commutative and flexible but not necessarily associative or power-associative. == Gametic algebras == A gametic algebra is a finite-dimensional real algebra for which all structure constants lie between 0 and 1. == Genetic algebras == Genetic algebras were introduced by Schafer (1949) who showed that special train algebras are genetic algebras and genetic algebras are train algebras. == Special train algebras == Special train algebras were introduced by Etherington (1939, section 4) as special cases of baric algebras. A special train algebra is a baric algebra in which the kernel N of the weight function is nilpotent and the principal powers of N are ideals. Etherington (1941) showed that special train algebras are train algebras. == Train algebras == Train algebras were introduced by Etherington (1939, section 4) as special cases of baric algebras. Let c 1 , … , c n {\displaystyle c_{1},\ldots ,c_{n}} be elements of the field K with 1 + c 1 + ⋯ + c n = 0 {\displaystyle 1+c_{1}+\cdots +c_{n}=0} . The formal polynomial x n + c 1 w ( x ) x n − 1 + ⋯ + c n w ( x ) n {\displaystyle x^{n}+c_{1}w(x)x^{n-1}+\cdots +c_{n}w(x)^{n}} is a train polynomial. The baric algebra B with weight w is a train algebra if a n + c 1 w ( a ) a n − 1 + ⋯ + c n w ( a ) n = 0 {\displaystyle a^{n}+c_{1}w(a)a^{n-1}+\cdots +c_{n}w(a)^{n}=0} for all elements a ∈ B {\displaystyle a\in B} , with a k {\displaystyle a^{k}} defined as principal powers, ( a k − 1 ) a {\displaystyle (a^{k-1})a} . 
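For the same one-locus, two-allele gametic algebra (a·a = a, b·b = b, a·b = ½(a + b), weight = sum of coefficients), a few lines of Python confirm that it is a train algebra of rank 2, with train polynomial x² − w(x)x (an illustrative sketch, not part of the article text):

```python
from fractions import Fraction as F

# Elements alpha*a + beta*b stored as pairs (alpha, beta);
# a*a = a, b*b = b, a*b = b*a = (a + b)/2; weight w(x) = alpha + beta.

def mul(x, y):
    (a1, b1), (a2, b2) = x, y
    cross = F(a1 * b2 + a2 * b1, 2)
    return (a1 * a2 + cross, b1 * b2 + cross)

def w(x):
    return x[0] + x[1]

def train_residual(x):
    """x^2 - w(x)*x, which vanishes identically in this algebra."""
    x2 = mul(x, x)
    return (x2[0] - w(x) * x[0], x2[1] - w(x) * x[1])

samples = [(F(1, 4), F(3, 4)), (F(5), F(-2)), (F(0), F(1))]
all_vanish = all(train_residual(x) == (F(0), F(0)) for x in samples)
```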
== Zygotic algebras == Zygotic algebras were introduced by Etherington (1939, section 7) == References == Bernstein, S. N. (1923), "Principe de stationarité et généralisation de la loi de Mendel", C. R. Acad. Sci. Paris, 177: 581–584. Bertrand, Monique (1966), Algèbres non associatives et algèbres génétiques, Mémorial des Sciences Mathématiques, Fasc. 162, Gauthier-Villars Éditeur, Paris, MR 0215885 Etherington, I. M. H. (1939), "Genetic algebras" (PDF), Proc. R. Soc. Edinburgh, 59: 242–258, doi:10.1017/S0370164600012323, MR 0000597, Zbl 0027.29402, archived from the original (PDF) on 2011-07-06 Etherington, I. M. H. (1941), "Special train algebras", The Quarterly Journal of Mathematics, Second Series, 12: 1–8, doi:10.1093/qmath/os-12.1.1, ISSN 0033-5606, JFM 67.0093.04, MR 0005111, Zbl 0027.29401 Lyubich, Yu.I. (2001) [1994], "Bernstein problem in mathematical genetics", Encyclopedia of Mathematics, EMS Press Micali, A. (2001) [1994], "Baric algebra", Encyclopedia of Mathematics, EMS Press Micali, A. (2001) [1994], "Bernstein algebra", Encyclopedia of Mathematics, EMS Press Reed, Mary Lynn (1997), "Algebraic structure of genetic inheritance", Bulletin of the American Mathematical Society, New Series, 34 (2): 107–130, doi:10.1090/S0273-0979-97-00712-X, ISSN 0002-9904, MR 1414973, Zbl 0876.17040 Schafer, Richard D. (1949), "Structure of genetic algebras", American Journal of Mathematics, 71 (1): 121–135, doi:10.2307/2372100, ISSN 0002-9327, JSTOR 2372100, MR 0027751 Tian, Jianjun Paul (2008), Evolution algebras and their applications, Lecture Notes in Mathematics, vol. 1921, Berlin: Springer-Verlag, ISBN 978-3-540-74283-8, Zbl 1136.17001 Wörz-Busekros, Angelika (1980), Algebras in genetics, Lecture Notes in Biomathematics, vol. 36, Berlin, New York: Springer-Verlag, ISBN 978-0-387-09978-1, MR 0599179 Wörz-Busekros, A. (2001) [1994], "Genetic algebra", Encyclopedia of Mathematics, EMS Press == Further reading == Lyubich, Yu.I. 
(1983), Mathematical structures in population genetics. (Matematicheskie struktury v populyatsionnoj genetike) (in Russian), Kiev: Naukova Dumka, Zbl 0593.92011
Wikipedia/Genetic_algebra
In mathematics, an Albert algebra is a 27-dimensional exceptional Jordan algebra. They are named after Abraham Adrian Albert, who pioneered the study of non-associative algebras, usually working over the real numbers. Over the real numbers, there are three such Jordan algebras up to isomorphism. One of them, which was first mentioned by Pascual Jordan, John von Neumann, and Eugene Wigner (1934) and studied by Albert (1934), is the set of 3×3 self-adjoint matrices over the octonions, equipped with the binary operation x ∘ y = 1 2 ( x ⋅ y + y ⋅ x ) , {\displaystyle x\circ y={\frac {1}{2}}(x\cdot y+y\cdot x),} where ⋅ {\displaystyle \cdot } denotes matrix multiplication. Another is defined the same way, but using split octonions instead of octonions. The final is constructed from the non-split octonions using a different standard involution. Over any algebraically closed field, there is just one Albert algebra, and its automorphism group G is the simple split group of type F4. (For example, the complexifications of the three Albert algebras over the real numbers are isomorphic Albert algebras over the complex numbers.) Because of this, for a general field F, the Albert algebras are classified by the Galois cohomology group H1(F,G). The Kantor–Koecher–Tits construction applied to an Albert algebra gives a form of the E7 Lie algebra. The split Albert algebra is used in a construction of a 56-dimensional structurable algebra whose automorphism group has identity component the simply-connected algebraic group of type E6. The space of cohomological invariants of Albert algebras a field F (of characteristic not 2) with coefficients in Z/2Z is a free module over the cohomology ring of F with a basis 1, f3, f5, of degrees 0, 3, 5. The cohomological invariants with 3-torsion coefficients have a basis 1, g3 of degrees 0, 3. The invariants f3 and g3 are the primary components of the Rost invariant. 
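The Jordan product above can be experimented with directly. The sketch below uses ordinary real symmetric 2×2 matrices as a stand-in for the 3×3 octonionic self-adjoint matrices (implementing octonion multiplication is beyond this illustration), since any associative algebra yields a Jordan algebra under x∘y = ½(x·y + y·x):

```python
from fractions import Fraction as F

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def jordan(a, b):
    """The Jordan product x∘y = (x·y + y·x)/2."""
    ab, ba = matmul(a, b), matmul(b, a)
    n = len(a)
    return [[F(1, 2) * (ab[i][j] + ba[i][j]) for j in range(n)]
            for i in range(n)]

x = [[F(1), F(2)], [F(2), F(0)]]
y = [[F(0), F(1)], [F(1), F(3)]]

commutative = jordan(x, y) == jordan(y, x)          # x∘y = y∘x
x2 = jordan(x, x)
# Jordan identity: (x∘y)∘x^2 = x∘(y∘x^2)
jordan_identity = jordan(jordan(x, y), x2) == jordan(x, jordan(y, x2))
```

The product is commutative and satisfies the Jordan identity, but it is not associative in general, which is what makes exceptional Jordan algebras such as the Albert algebra interesting.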
== See also == Euclidean Jordan algebra for the Jordan algebras considered by Jordan, von Neumann and Wigner Euclidean Hurwitz algebra for details of the construction of the Albert algebra for the octonions == Notes == == References == Albert, A. Adrian (1934), "On a Certain Algebra of Quantum Mechanics", Annals of Mathematics, Second Series, 35 (1): 65–73, doi:10.2307/1968118, ISSN 0003-486X, JSTOR 1968118 Garibaldi, Skip; Merkurjev, Alexander; Serre, Jean-Pierre (2003), Cohomological invariants in Galois cohomology, University Lecture Series, vol. 28, Providence, RI: American Mathematical Society, ISBN 978-0-8218-3287-5, MR 1999383 Garibaldi, Skip (2009). Cohomological invariants: exceptional groups and Spin groups. Memoirs of the American Mathematical Society. Vol. 200. doi:10.1090/memo/0937. ISBN 978-0-8218-4404-5. Garibaldi, Skip; Petersson, Holger P.; Racine, Michel L. (2024). Albert algebras over commutative rings. New Mathematical Monographs. Vol. 48. Cambridge University Press. doi:10.1017/9781009426862. ISBN 978-1-0094-2685-5. Jordan, Pascual; Neumann, John von; Wigner, Eugene (1934), "On an Algebraic Generalization of the Quantum Mechanical Formalism", Annals of Mathematics, 35 (1): 29–64, doi:10.2307/1968117, JSTOR 1968117 Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998), The book of involutions, Colloquium Publications, vol. 44, With a preface by J. Tits, Providence, RI: American Mathematical Society, ISBN 978-0-8218-0904-4, Zbl 0955.16001 McCrimmon, Kevin (2004), A taste of Jordan algebras, Universitext, Berlin, New York: Springer-Verlag, doi:10.1007/b97489, ISBN 978-0-387-95447-9, MR 2014924 Springer, Tonny A.; Veldkamp, Ferdinand D. (2000) [1963], Octonions, Jordan algebras and exceptional groups, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-66337-9, MR 1763974 == Further reading == Albert algebra at Encyclopedia of Mathematics.
Wikipedia/Albert_algebra
In mathematics, differential algebra is, broadly speaking, the area of mathematics consisting in the study of differential equations and differential operators as algebraic objects in view of deriving properties of differential equations and operators without computing the solutions, similarly as polynomial algebras are used for the study of algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra. More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations. A natural example of a differential field is the field of rational functions in one variable over the complex numbers, C ( t ) , {\displaystyle \mathbb {C} (t),} where the derivation is differentiation with respect to t . {\displaystyle t.} More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation. == History == Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. His efforts led to an initial paper Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations and 2 books, Differential Equations From The Algebraic Standpoint and Differential Algebra. Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups. 
== Differential rings == === Definition === A derivation ∂ {\textstyle \partial } on a ring R {\textstyle R} is a function ∂ : R → R {\displaystyle \partial :R\to R\,} such that ∂ ( r 1 + r 2 ) = ∂ r 1 + ∂ r 2 {\displaystyle \partial (r_{1}+r_{2})=\partial r_{1}+\partial r_{2}} and ∂ ( r 1 r 2 ) = ( ∂ r 1 ) r 2 + r 1 ( ∂ r 2 ) {\displaystyle \partial (r_{1}r_{2})=(\partial r_{1})r_{2}+r_{1}(\partial r_{2})\quad } (Leibniz product rule), for every r 1 {\displaystyle r_{1}} and r 2 {\displaystyle r_{2}} in R . {\displaystyle R.} A derivation is linear over the integers since these identities imply ∂ ( 0 ) = ∂ ( 1 ) = 0 {\displaystyle \partial (0)=\partial (1)=0} and ∂ ( − r ) = − ∂ ( r ) . {\displaystyle \partial (-r)=-\partial (r).} A differential ring is a commutative ring R {\displaystyle R} equipped with one or more derivations that commute pairwise; that is, ∂ 1 ( ∂ 2 ( r ) ) = ∂ 2 ( ∂ 1 ( r ) ) {\displaystyle \partial _{1}(\partial _{2}(r))=\partial _{2}(\partial _{1}(r))} for every pair of derivations and every r ∈ R . {\displaystyle r\in R.} When there is only one derivation one talks often of an ordinary differential ring; otherwise, one talks of a partial differential ring. A differential field is a differential ring that is also a field. A differential algebra A {\displaystyle A} over a differential field K {\displaystyle K} is a differential ring that contains K {\displaystyle K} as a subring such that the restriction to K {\displaystyle K} of the derivations of A {\displaystyle A} equal the derivations of K . {\displaystyle K.} (A more general definition is given below, which covers the case where K {\displaystyle K} is not a field, and is essentially equivalent when K {\displaystyle K} is a field.) A Witt algebra is a differential ring that contains the field Q {\displaystyle \mathbb {Q} } of the rational numbers. 
Equivalently, this is a differential algebra over Q , {\displaystyle \mathbb {Q} ,} since Q {\displaystyle \mathbb {Q} } can be considered as a differential field on which every derivation is the zero function. The constants of a differential ring are the elements r {\displaystyle r} such that ∂ r = 0 {\displaystyle \partial r=0} for every derivation ∂ . {\displaystyle \partial .} The constants of a differential ring form a subring and the constants of a differentiable field form a subfield. This meaning of "constant" generalizes the concept of a constant function, and must not be confused with the common meaning of a constant. === Basic formulas === In the following identities, δ {\displaystyle \delta } is a derivation of a differential ring R . {\displaystyle R.} If r ∈ R {\displaystyle r\in R} and c {\displaystyle c} is a constant in R {\displaystyle R} (that is, δ c = 0 {\displaystyle \delta c=0} ), then δ ( c r ) = c δ ( r ) . {\displaystyle \delta (cr)=c\delta (r).} If r ∈ R {\displaystyle r\in R} and u {\displaystyle u} is a unit in R , {\displaystyle R,} then δ ( r u ) = δ ( r ) u − r δ ( u ) u 2 {\displaystyle \delta \left({\frac {r}{u}}\right)={\frac {\delta (r)u-r\delta (u)}{u^{2}}}} If n {\displaystyle n} is a nonnegative integer and r ∈ R {\displaystyle r\in R} then δ ( r n ) = n r n − 1 δ ( r ) {\displaystyle \delta (r^{n})=nr^{n-1}\delta (r)} If u 1 , … , u n {\displaystyle u_{1},\ldots ,u_{n}} are units in R , {\displaystyle R,} and e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} are integers, one has the logarithmic derivative identity: δ ( u 1 e 1 … u n e n ) u 1 e 1 … u n e n = e 1 δ ( u 1 ) u 1 + ⋯ + e n δ ( u n ) u n . {\displaystyle {\frac {\delta (u_{1}^{e_{1}}\ldots u_{n}^{e_{n}})}{u_{1}^{e_{1}}\ldots u_{n}^{e_{n}}}}=e_{1}{\frac {\delta (u_{1})}{u_{1}}}+\dots +e_{n}{\frac {\delta (u_{n})}{u_{n}}}.} === Higher-order derivations === A derivation operator or higher-order derivation is the composition of several derivations. 
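The identities in this section can be sanity-checked on the simplest example, formal differentiation on Q[t]; the coefficient-list representation below is our own illustrative encoding:

```python
from fractions import Fraction as F

# Polynomials over Q as coefficient lists: p[k] is the coefficient of t^k.
# d is formal differentiation, a derivation on Q[t].

def d(p):
    return [F(k) * p[k] for k in range(1, len(p))] or [F(0)]

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [F(0)] * (n - len(p)), q + [F(0)] * (n - len(q))
    return [p[k] + q[k] for k in range(n)]

def mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

p = [F(1), F(2), F(3)]   # 1 + 2t + 3t^2
q = [F(-1), F(0), F(1)]  # -1 + t^2

# Leibniz rule: d(pq) = d(p)q + p d(q)
leibniz = trim(d(mul(p, q))) == trim(add(mul(d(p), q), mul(p, d(q))))

# Power rule: d(p^3) = 3 p^2 d(p)
p3 = mul(mul(p, p), p)
power = trim(d(p3)) == trim(mul([F(3)], mul(mul(p, p), d(p))))
```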
As the derivations of a differential ring are supposed to commute, the order of the derivations does not matter, and a derivation operator may be written as δ 1 e 1 ∘ ⋯ ∘ δ n e n , {\displaystyle \delta _{1}^{e_{1}}\circ \cdots \circ \delta _{n}^{e_{n}},} where δ 1 , … , δ n {\displaystyle \delta _{1},\ldots ,\delta _{n}} are the derivations under consideration, e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} are nonnegative integers, and the exponent of a derivation denotes the number of times this derivation is composed in the operator. The sum o = e 1 + ⋯ + e n {\displaystyle o=e_{1}+\cdots +e_{n}} is called the order of derivation. If o = 1 {\displaystyle o=1} the derivation operator is one of the original derivations. If o = 0 {\displaystyle o=0} , one has the identity function, which is generally considered as the unique derivation operator of order zero. With these conventions, the derivation operators form a free commutative monoid on the set of derivations under consideration. A derivative of an element x {\displaystyle x} of a differential ring is the application of a derivation operator to x , {\displaystyle x,} that is, with the above notation, δ 1 e 1 ∘ ⋯ ∘ δ n e n ( x ) . {\displaystyle \delta _{1}^{e_{1}}\circ \cdots \circ \delta _{n}^{e_{n}}(x).} A proper derivative is a derivative of positive order. === Differential ideals === A differential ideal I {\displaystyle I} of a differential ring R {\displaystyle R} is an ideal of the ring R {\displaystyle R} that is closed (stable) under the derivations of the ring; that is, ∂ x ∈ I , {\textstyle \partial x\in I,} for every derivation ∂ {\displaystyle \partial } and every x ∈ I . {\displaystyle x\in I.} A differential ideal is said to be proper if it is not the whole ring. For avoiding confusion, an ideal that is not a differential ideal is sometimes called an algebraic ideal. 
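The free commutative monoid of derivation operators can be modelled with exponent tuples, composition being addition of exponents (an illustrative sketch):

```python
# A derivation operator δ1^e1 ∘ ... ∘ δn^en is represented by its exponent
# tuple (e1, ..., en); since the derivations commute, composition just adds
# exponents, and the order of the operator is the sum of the exponents.

def compose(op1, op2):
    return tuple(a + b for a, b in zip(op1, op2))

def order(op):
    return sum(op)

identity = (0, 0)            # the unique derivation operator of order zero
d1, d2 = (1, 0), (0, 1)      # the generating derivations δ1 and δ2

op = compose(compose(d1, d1), d2)          # δ1^2 ∘ δ2, an operator of order 3
commutes = compose(d1, d2) == compose(d2, d1)
```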
The radical of a differential ideal is the same as its radical as an algebraic ideal, that is, the set of the ring elements that have a power in the ideal. The radical of a differential ideal is also a differential ideal. A radical or perfect differential ideal is a differential ideal that equals its radical. A prime differential ideal is a differential ideal that is prime in the usual sense; that is, if a product belongs to the ideal, at least one of the factors belongs to the ideal. A prime differential ideal is always a radical differential ideal. A discovery of Ritt is that, although the classical theory of algebraic ideals does not work for differential ideals, a large part of it can be extended to radical differential ideals, and this makes them fundamental in differential algebra. The intersection of any family of differential ideals is a differential ideal, and the intersection of any family of radical differential ideals is a radical differential ideal. It follows that, given a subset S {\displaystyle S} of a differential ring, there are three ideals generated by it, which are the intersections of, respectively, all algebraic ideals, all differential ideals, and all radical differential ideals that contain it. The algebraic ideal generated by S {\displaystyle S} is the set of finite linear combinations of elements of S , {\displaystyle S,} and is commonly denoted as ( S ) {\displaystyle (S)} or ⟨ S ⟩ . {\displaystyle \langle S\rangle .} The differential ideal generated by S {\displaystyle S} is the set of the finite linear combinations of elements of S {\displaystyle S} and of the derivatives of any order of these elements; it is commonly denoted as [ S ] . {\displaystyle [S].} When S {\displaystyle S} is finite, [ S ] {\displaystyle [S]} is generally not finitely generated as an algebraic ideal. The radical differential ideal generated by S {\displaystyle S} is commonly denoted as { S } . 
{\displaystyle \{S\}.} There is no known way to characterize its element in a similar way as for the two other cases. == Differential polynomials == A differential polynomial over a differential field K {\displaystyle K} is a formalization of the concept of differential equation such that the known functions appearing in the equation belong to K , {\displaystyle K,} and the indeterminates are symbols for the unknown functions. So, let K {\displaystyle K} be a differential field, which is typically (but not necessarily) a field of rational fractions K ( X ) = K ( x 1 , … , x n ) {\displaystyle K(X)=K(x_{1},\ldots ,x_{n})} (fractions of multivariate polynomials), equipped with derivations ∂ i {\displaystyle \partial _{i}} such that ∂ i x i = 1 {\displaystyle \partial _{i}x_{i}=1} and ∂ i x j = 0 {\displaystyle \partial _{i}x_{j}=0} if i ≠ j {\displaystyle i\neq j} (the usual partial derivatives). For defining the ring K { Y } = K { y 1 , … , y n } {\textstyle K\{Y\}=K\{y_{1},\ldots ,y_{n}\}} of differential polynomials over K {\displaystyle K} with indeterminates in Y = { y 1 , … , y n } {\displaystyle Y=\{y_{1},\ldots ,y_{n}\}} with derivations ∂ 1 , … , ∂ n , {\displaystyle \partial _{1},\ldots ,\partial _{n},} one introduces an infinity of new indeterminates of the form Δ y i , {\displaystyle \Delta y_{i},} where Δ {\displaystyle \Delta } is any derivation operator of order higher than 1. With this notation, K { Y } {\displaystyle K\{Y\}} is the set of polynomials in all these indeterminates, with the natural derivations (each polynomial involves only a finite number of indeterminates). In particular, if n = 1 , {\displaystyle n=1,} one has K { y } = K [ y , ∂ y , ∂ 2 y , ∂ 3 y , … ] . {\displaystyle K\{y\}=K\left[y,\partial y,\partial ^{2}y,\partial ^{3}y,\ldots \right].} Even when n = 1 , {\displaystyle n=1,} a ring of differential polynomials is not Noetherian. This makes the theory of this generalization of polynomial rings difficult. 
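The ring K{y} can be prototyped by indexing the indeterminates y, ∂y, ∂²y, … by their order. In the sketch below (our own encoding, not from the article) a monomial is a tuple of exponents and the derivation acts by the Leibniz rule, sending y_k to y_{k+1}:

```python
from fractions import Fraction as F

# K{y} = K[y, ∂y, ∂²y, ...]: a differential polynomial is stored as
# {monomial: coefficient}, where a monomial is a tuple of exponents
# (e0, e1, e2, ...) standing for y^e0 (∂y)^e1 (∂²y)^e2 ...

def d_monomial(m):
    """Apply the derivation to one monomial via the Leibniz rule."""
    out = {}
    for k, e in enumerate(m):
        if e == 0:
            continue
        new = list(m) + [0] * (k + 2 - len(m))
        new[k] -= 1          # lower y_k by one...
        new[k + 1] += 1      # ...and raise y_{k+1} by one
        key = tuple(new)
        out[key] = out.get(key, 0) + e
    return out

def d(p):
    out = {}
    for m, c in p.items():
        for m2, mult in d_monomial(m).items():
            out[m2] = out.get(m2, 0) + c * mult
    return {m: c for m, c in out.items() if c != 0}

p = {(2,): F(1)}   # the differential polynomial y^2
dp = d(p)          # 2·y·∂y
ddp = d(dp)        # 2·(∂y)^2 + 2·y·∂²y
```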
However, two facts allow such a generalization. Firstly, a finite number of differential polynomials together involve a finite number of indeterminates. It follows that every property of polynomials that involves a finite number of polynomials remains true for differential polynomials. In particular, greatest common divisors exist, and a ring of differential polynomials is a unique factorization domain. The second fact is that, if the field K {\displaystyle K} contains the field of rational numbers, the rings of differential polynomials over K {\displaystyle K} satisfy the ascending chain condition on radical differential ideals. This theorem of Ritt is implied by its generalization, sometimes called the Ritt–Raudenbush basis theorem, which asserts that if R {\displaystyle R} is a Ritt algebra (that is, a differential ring containing the field of rational numbers) that satisfies the ascending chain condition on radical differential ideals, then the ring of differential polynomials R { y } {\displaystyle R\{y\}} satisfies the same property (one passes from the univariate to the multivariate case by applying the theorem iteratively). This Noetherian property implies that, in a ring of differential polynomials, every radical differential ideal I is finitely generated as a radical differential ideal; this means that there exists a finite set S of differential polynomials such that I is the smallest radical differential ideal containing S. This allows representing a radical differential ideal by such a finite set of generators, and computing with these ideals. However, some usual computations of the algebraic case cannot be extended. In particular, no algorithm is known for testing membership of an element in a radical differential ideal or the equality of two radical differential ideals. 
Another consequence of the Noetherian property is that a radical differential ideal can be uniquely expressed as the intersection of a finite number of prime differential ideals, called essential prime components of the ideal. == Elimination methods == Elimination methods are algorithms that preferentially eliminate a specified set of derivatives from a set of differential equations, commonly done to better understand and solve sets of differential equations. Categories of elimination methods include characteristic set methods, differential Gröbner bases methods and resultant-based methods. Common operations used in elimination algorithms include 1) ranking derivatives, polynomials, and polynomial sets, 2) identifying a polynomial's leading derivative, initial and separant, 3) polynomial reduction, and 4) creating special polynomial sets. === Ranking derivatives === The ranking of derivatives is a total order and an admissible order, defined as: ∀ p ∈ Θ Y , ∀ θ μ ∈ Θ : θ μ p > p . {\textstyle \forall p\in \Theta Y,\ \forall \theta _{\mu }\in \Theta :\theta _{\mu }p>p.} ∀ p , q ∈ Θ Y , ∀ θ μ ∈ Θ : p ≥ q ⇒ θ μ p ≥ θ μ q . {\textstyle \forall p,q\in \Theta Y,\ \forall \theta _{\mu }\in \Theta :p\geq q\Rightarrow \theta _{\mu }p\geq \theta _{\mu }q.} Each derivative has an integer tuple, and a monomial order ranks the derivative by ranking the derivative's integer tuple. The integer tuple identifies the differential indeterminate, the derivative's multi-index and may identify the derivative's order. 
Types of ranking include: Orderly ranking: ∀ y i , y j ∈ Y , ∀ θ μ , θ ν ∈ Θ : ord ⁡ ( θ μ ) ≥ ord ⁡ ( θ ν ) ⇒ θ μ y i ≥ θ ν y j {\displaystyle \forall y_{i},y_{j}\in Y,\ \forall \theta _{\mu },\theta _{\nu }\in \Theta \ :\ \operatorname {ord} (\theta _{\mu })\geq \operatorname {ord} (\theta _{\nu })\Rightarrow \theta _{\mu }y_{i}\geq \theta _{\nu }y_{j}} Elimination ranking: ∀ y i , y j ∈ Y , ∀ θ μ , θ ν ∈ Θ : y i ≥ y j ⇒ θ μ y i ≥ θ ν y j {\displaystyle \forall y_{i},y_{j}\in Y,\ \forall \theta _{\mu },\theta _{\nu }\in \Theta \ :\ y_{i}\geq y_{j}\Rightarrow \theta _{\mu }y_{i}\geq \theta _{\nu }y_{j}} In this example, the integer tuple identifies the differential indeterminate and derivative's multi-index, and lexicographic monomial order, ≥ lex {\textstyle \geq _{\text{lex}}} , determines the derivative's rank. η ( δ 1 e 1 ∘ ⋯ ∘ δ n e n ( y j ) ) = ( j , e 1 , … , e n ) {\displaystyle \eta (\delta _{1}^{e_{1}}\circ \cdots \circ \delta _{n}^{e_{n}}(y_{j}))=(j,e_{1},\ldots ,e_{n})} . η ( θ μ y j ) ≥ lex η ( θ ν y k ) ⇒ θ μ y j ≥ θ ν y k . {\displaystyle \eta (\theta _{\mu }y_{j})\geq _{\text{lex}}\eta (\theta _{\nu }y_{k})\Rightarrow \theta _{\mu }y_{j}\geq \theta _{\nu }y_{k}.} === Leading derivative, initial and separant === This is the standard polynomial form: p = a d ⋅ u p d + a d − 1 ⋅ u p d − 1 + ⋯ + a 1 ⋅ u p + a 0 {\displaystyle p=a_{d}\cdot u_{p}^{d}+a_{d-1}\cdot u_{p}^{d-1}+\cdots +a_{1}\cdot u_{p}+a_{0}} . Leader or leading derivative is the polynomial's highest ranked derivative: u p {\displaystyle u_{p}} . Coefficients a d , … , a 0 {\displaystyle a_{d},\ldots ,a_{0}} do not contain the leading derivative u p {\textstyle u_{p}} . Degree of polynomial is the leading derivative's greatest exponent: deg u p ⁡ ( p ) = d {\displaystyle \deg _{u_{p}}(p)=d} . Initial is the coefficient: I p = a d {\displaystyle I_{p}=a_{d}} . Rank is the leading derivative raised to the polynomial's degree: u p d {\displaystyle u_{p}^{d}} . 
Separant is the derivative: S p = ∂ p ∂ u p {\displaystyle S_{p}={\frac {\partial p}{\partial u_{p}}}} . Separant set is S A = { S p ∣ p ∈ A } {\displaystyle S_{A}=\{S_{p}\mid p\in A\}} , initial set is I A = { I p ∣ p ∈ A } {\displaystyle I_{A}=\{I_{p}\mid p\in A\}} and combined set is H A = S A ∪ I A {\textstyle H_{A}=S_{A}\cup I_{A}} . === Reduction === Partially reduced (partial normal form) polynomial q {\textstyle q} with respect to polynomial p {\textstyle p} indicates these polynomials are non-ground field elements, p , q ∈ K { Y } ∖ K {\textstyle p,q\in {\mathcal {K}}\{Y\}\setminus {\mathcal {K}}} , and q {\displaystyle q} contains no proper derivative of u p {\displaystyle u_{p}} . Partially reduced polynomial q {\textstyle q} with respect to polynomial p {\textstyle p} becomes reduced (normal form) polynomial q {\textstyle q} with respect to p {\textstyle p} if the degree of u p {\textstyle u_{p}} in q {\textstyle q} is less than the degree of u p {\textstyle u_{p}} in p {\textstyle p} . An autoreduced polynomial set has every polynomial reduced with respect to every other polynomial of the set. Every autoreduced set is finite. An autoreduced set is triangular, meaning each polynomial element has a distinct leading derivative. Ritt's reduction algorithm identifies integers i A k , s A k {\textstyle i_{A_{k}},s_{A_{k}}} and transforms a differential polynomial f {\textstyle f} using pseudodivision to a lower or equally ranked remainder polynomial f red {\textstyle f_{\text{red}}} that is reduced with respect to the autoreduced polynomial set A {\textstyle A} . The algorithm's first step partially reduces the input polynomial and the algorithm's second step fully reduces the polynomial. The formula for reduction is: f red ≡ ∏ A k ∈ A I A k i A k ⋅ S A k s A k ⋅ f ( mod [ A ] ) with i A k , s A k ∈ N . 
{\displaystyle f_{\text{red}}\equiv \prod _{A_{k}\in A}I_{A_{k}}^{i_{A_{k}}}\cdot S_{A_{k}}^{s_{A_{k}}}\cdot f{\pmod {[A]}}{\text{ with }}i_{A_{k}},s_{A_{k}}\in \mathbb {N} .} === Ranking polynomial sets === Set A {\textstyle A} is a differential chain if the rank of the leading derivatives is u A 1 < ⋯ < u A m {\textstyle u_{A_{1}}<\dots <u_{A_{m}}} and ∀ i , A i {\textstyle \forall i,\ A_{i}} is reduced with respect to A i + 1 {\textstyle A_{i+1}} . Autoreduced sets A {\textstyle A} and B {\textstyle B} each contain ranked polynomial elements. This procedure ranks two autoreduced sets by comparing pairs of identically indexed polynomials from both autoreduced sets. A 1 < ⋯ < A m ∈ A {\displaystyle A_{1}<\cdots <A_{m}\in A} and B 1 < ⋯ < B n ∈ B {\displaystyle B_{1}<\cdots <B_{n}\in B} and i , j , k ∈ N {\displaystyle i,j,k\in \mathbb {N} } . rank A < rank B {\displaystyle {\text{rank }}A<{\text{rank }}B} if there is a k ≤ minimum ⁡ ( m , n ) {\displaystyle k\leq \operatorname {minimum} (m,n)} such that A i = B i {\displaystyle A_{i}=B_{i}} for 1 ≤ i < k {\textstyle 1\leq i<k} and A k < B k {\displaystyle A_{k}<B_{k}} . rank ⁡ A < rank ⁡ B {\displaystyle \operatorname {rank} A<\operatorname {rank} B} if n < m {\displaystyle n<m} and A i = B i {\displaystyle A_{i}=B_{i}} for 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} . rank ⁡ A = rank ⁡ B {\displaystyle \operatorname {rank} A=\operatorname {rank} B} if n = m {\displaystyle n=m} and A i = B i {\displaystyle A_{i}=B_{i}} for 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} . === Polynomial sets === A characteristic set C {\textstyle C} is the lowest ranked autoreduced subset among all the ideal's autoreduced subsets whose subset polynomial separants are non-members of the ideal I {\textstyle {\mathcal {I}}} . The delta polynomial applies to a polynomial pair p , q {\textstyle p,q} whose leaders share a common derivative, θ α u p = θ β u q {\textstyle \theta _{\alpha }u_{p}=\theta _{\beta }u_{q}} . 
The least common derivative operator for the polynomial pair's leading derivatives is θ p q {\textstyle \theta _{pq}} , and the delta polynomial is: Δ - p o l y ⁡ ( p , q ) = S q ⋅ θ p q p θ p − S p ⋅ θ p q q θ q {\displaystyle \operatorname {\Delta -poly} (p,q)=S_{q}\cdot {\frac {\theta _{pq}p}{\theta _{p}}}-S_{p}\cdot {\frac {\theta _{pq}q}{\theta _{q}}}} A coherent set is a polynomial set that reduces its delta polynomial pairs to zero. === Regular system and regular ideal === A regular system Ω {\textstyle \Omega } contains an autoreduced and coherent set of differential equations A {\textstyle A} and an inequation set H Ω ⊇ H A {\textstyle H_{\Omega }\supseteq H_{A}} with set H Ω {\textstyle H_{\Omega }} reduced with respect to the equation set. Regular differential ideal I dif {\textstyle {\mathcal {I}}_{\text{dif}}} and regular algebraic ideal I alg {\textstyle {\mathcal {I}}_{\text{alg}}} are saturation ideals that arise from a regular system. Lazard's lemma states that the regular differential and regular algebraic ideals are radical ideals. Regular differential ideal: I dif = [ A ] : H Ω ∞ . {\textstyle {\mathcal {I}}_{\text{dif}}=[A]:H_{\Omega }^{\infty }.} Regular algebraic ideal: I alg = ( A ) : H Ω ∞ . {\textstyle {\mathcal {I}}_{\text{alg}}=(A):H_{\Omega }^{\infty }.} === Rosenfeld–Gröbner algorithm === The Rosenfeld–Gröbner algorithm decomposes the radical differential ideal as a finite intersection of regular radical differential ideals. These regular differential radical ideals, represented by characteristic sets, are not necessarily prime ideals and the representation is not necessarily minimal. The membership problem is to determine if a differential polynomial p {\textstyle p} is a member of an ideal generated from a set of differential polynomials S {\textstyle S} . The Rosenfeld–Gröbner algorithm generates sets of Gröbner bases. 
The algorithm determines that a polynomial is a member of the ideal if and only if the partially reduced remainder polynomial is a member of the algebraic ideal generated by the Gröbner bases. The Rosenfeld–Gröbner algorithm facilitates creating Taylor series expansions of solutions to the differential equations. == Examples == === Differential fields === Example 1: ( Mer ⁡ ( f ⁡ ( y ) , ∂ y ) ) {\textstyle (\operatorname {Mer} (\operatorname {f} (y),\partial _{y}))} is the differential meromorphic function field with a single standard derivation. Example 2: ( C { y } , p ( y ) ⋅ ∂ y ) {\textstyle (\mathbb {C} \{y\},p(y)\cdot \partial _{y})} is a differential field with a linear differential operator as the derivation, for any polynomial p ( y ) {\displaystyle p(y)} . === Derivation === Define E a ( p ( y ) ) = p ( y + a ) {\textstyle E^{a}(p(y))=p(y+a)} as shift operator E a {\textstyle E^{a}} for polynomial p ( y ) {\textstyle p(y)} . A shift-invariant operator T {\textstyle T} commutes with the shift operator: E a ∘ T = T ∘ E a {\textstyle E^{a}\circ T=T\circ E^{a}} . The Pincherle derivative, a derivation of shift-invariant operator T {\textstyle T} , is T ′ = T ∘ y − y ∘ T {\textstyle T^{\prime }=T\circ y-y\circ T} . === Constants === Ring of integers is ( Z , δ ) {\displaystyle (\mathbb {Z} ,\delta )} , and every integer is a constant. The derivation of 1 is zero. δ ( 1 ) = δ ( 1 ⋅ 1 ) = δ ( 1 ) ⋅ 1 + 1 ⋅ δ ( 1 ) = 2 ⋅ δ ( 1 ) ⇒ δ ( 1 ) = 0 {\textstyle \delta (1)=\delta (1\cdot 1)=\delta (1)\cdot 1+1\cdot \delta (1)=2\cdot \delta (1)\Rightarrow \delta (1)=0} . Also, δ ( m + 1 ) = δ ( m ) + δ ( 1 ) = δ ( m ) ⇒ δ ( m + 1 ) = δ ( m ) {\displaystyle \delta (m+1)=\delta (m)+\delta (1)=\delta (m)\Rightarrow \delta (m+1)=\delta (m)} . By induction, δ ( 1 ) = 0 ∧ δ ( m + 1 ) = δ ( m ) ⇒ ∀ m ∈ Z , δ ( m ) = 0 {\displaystyle \delta (1)=0\ \wedge \ \delta (m+1)=\delta (m)\Rightarrow \forall \ m\in \mathbb {Z} ,\ \delta (m)=0} . Field of rational numbers is ( Q , 
δ ) {\displaystyle (\mathbb {Q} ,\delta )} , and every rational number is a constant. Every rational number is a quotient of integers. ∀ r ∈ Q , ∃ a ∈ Z , b ∈ Z ∖ { 0 } , r = a b {\displaystyle \forall r\in \mathbb {Q} ,\ \exists \ a\in \mathbb {Z} ,\ b\in \mathbb {Z} \setminus \{0\},\ r={\frac {a}{b}}} Apply the derivation formula for quotients, recognizing that derivations of integers are zero: δ ( r ) = δ ( a b ) = δ ( a ) ⋅ b − a ⋅ δ ( b ) b 2 = 0 {\displaystyle \delta (r)=\delta \left({\frac {a}{b}}\right)={\frac {\delta (a)\cdot b-a\cdot \delta (b)}{b^{2}}}=0} . === Differential subring === Constants form the subring of constants ( C , ∂ y ) ⊂ ( C { y } , ∂ y ) {\textstyle (\mathbb {C} ,\partial _{y})\subset (\mathbb {C} \{y\},\partial _{y})} . === Differential ideal === Element exp ⁡ ( y ) {\textstyle \exp(y)} simply generates differential ideal [ exp ⁡ ( y ) ] {\textstyle [\exp(y)]} in the differential ring ( C { y , exp ⁡ ( y ) } , ∂ y ) {\textstyle (\mathbb {C} \{y,\exp(y)\},\partial _{y})} . === Algebra over a differential ring === Any ring with identity is a Z - {\textstyle \operatorname {{\mathcal {Z}}-} } algebra. Thus a differential ring is a Z - {\textstyle \operatorname {{\mathcal {Z}}-} } algebra. If ring R {\textstyle {\mathcal {R}}} is a subring of the center of unital ring M {\textstyle {\mathcal {M}}} , then M {\textstyle {\mathcal {M}}} is an R - {\textstyle \operatorname {{\mathcal {R}}-} } algebra. Thus, a differential ring is an algebra over its differential subring. This is the natural structure of an algebra over its subring. === Special and normal polynomials === Ring ( Q { y , z } , ∂ y ) {\textstyle (\mathbb {Q} \{y,z\},\partial _{y})} has irreducible polynomials, p {\textstyle p} (normal, squarefree) and q {\textstyle q} (special, ideal generator). 
∂ y ( y ) = 1 , ∂ y ( z ) = 1 + z 2 , z = tan ⁡ ( y ) {\textstyle \partial _{y}(y)=1,\ \partial _{y}(z)=1+z^{2},\ z=\tan(y)} p ( y ) = 1 + y 2 , ∂ y ( p ) = 2 ⋅ y , gcd ( p , ∂ y ( p ) ) = 1 {\textstyle p(y)=1+y^{2},\ \partial _{y}(p)=2\cdot y,\ \gcd(p,\partial _{y}(p))=1} q ( z ) = 1 + z 2 , ∂ y ( q ) = 2 ⋅ z ⋅ ( 1 + z 2 ) , gcd ( q , ∂ y ( q ) ) = q {\textstyle q(z)=1+z^{2},\ \partial _{y}(q)=2\cdot z\cdot (1+z^{2}),\ \gcd(q,\partial _{y}(q))=q} === Polynomials === ==== Ranking ==== Ring ( Q { y 1 , y 2 } , δ ) {\textstyle (\mathbb {Q} \{y_{1},y_{2}\},\delta )} has derivatives δ ( y 1 ) = y 1 ′ {\textstyle \delta (y_{1})=y_{1}^{\prime }} and δ ( y 2 ) = y 2 ′ {\textstyle \delta (y_{2})=y_{2}^{\prime }} Map each derivative to an integer tuple: η ( δ ( i 2 ) ( y i 1 ) ) = ( i 1 , i 2 ) {\textstyle \eta (\delta ^{(i_{2})}(y_{i_{1}}))=(i_{1},i_{2})} . Rank derivatives and integer tuples: y 2 ′ ′ ( 2 , 2 ) > y 2 ′ ( 2 , 1 ) > y 2 ( 2 , 0 ) > y 1 ′ ′ ( 1 , 2 ) > y 1 ′ ( 1 , 1 ) > y 1 ( 1 , 0 ) {\textstyle y_{2}^{\prime \prime }\ (2,2)>y_{2}^{\prime }\ (2,1)>y_{2}\ (2,0)>y_{1}^{\prime \prime }\ (1,2)>y_{1}^{\prime }\ (1,1)>y_{1}\ (1,0)} . 
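The tuple encoding above is exactly lexicographic comparison of integer pairs, which can be mimicked directly in code (a toy sketch; the encoding function name is made up):

```python
# Encode delta^(order)(y_index) as the integer pair (index, order);
# comparing these pairs lexicographically reproduces the ranking shown above.
def eta(index, order):
    return (index, order)

# Derivatives y1, y1', y1'', y2, y2', y2'' in scrambled input order.
derivatives = [eta(2, 1), eta(1, 0), eta(2, 2), eta(1, 2), eta(2, 0), eta(1, 1)]

# Python compares tuples lexicographically, so sorting ranks the derivatives:
# y1 < y1' < y1'' < y2 < y2' < y2''.
ranked = sorted(derivatives)
print(ranked)  # [(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]
```

Because the language's built-in tuple comparison is already lexicographic, no custom comparator is needed.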
==== Leading derivative and initial ==== The leading derivatives and initials are: p = ( y 1 + y 1 ′ ) ⋅ ( y 2 ′ ′ ) 2 + 3 ⋅ y 1 2 ⋅ y 2 ′ ′ + ( y 1 ′ ) 2 {\textstyle p={\color {Blue}(y_{1}+y_{1}^{\prime })}\cdot ({\color {Red}y_{2}^{\prime \prime }})^{2}+3\cdot y_{1}^{2}\cdot {\color {Red}y_{2}^{\prime \prime }}+(y_{1}^{\prime })^{2}} q = ( y 1 + 3 ⋅ y 1 ′ ) ⋅ y 2 ′ ′ + y 1 ⋅ y 2 ′ + ( y 1 ′ ) 2 {\textstyle q={\color {Blue}(y_{1}+3\cdot y_{1}^{\prime })}\cdot {\color {Red}y_{2}^{\prime \prime }}+y_{1}\cdot y_{2}^{\prime }+(y_{1}^{\prime })^{2}} r = ( y 1 + 3 ) ⋅ ( y 1 ′ ′ ) 2 + y 1 2 ⋅ y 1 ′ ′ + 2 ⋅ y 1 {\textstyle r={\color {Blue}(y_{1}+3)}\cdot ({\color {Red}y_{1}^{\prime \prime }})^{2}+y_{1}^{2}\cdot {\color {Red}y_{1}^{\prime \prime }}+2\cdot y_{1}} ==== Separants ==== S p = 2 ⋅ ( y 1 + y 1 ′ ) ⋅ y 2 ′ ′ + 3 ⋅ y 1 2 {\textstyle S_{p}=2\cdot (y_{1}+y_{1}^{\prime })\cdot y_{2}^{\prime \prime }+3\cdot y_{1}^{2}} . S q = y 1 + 3 ⋅ y 1 ′ {\textstyle S_{q}=y_{1}+3\cdot y_{1}^{\prime }} S r = 2 ⋅ ( y 1 + 3 ) ⋅ y 1 ′ ′ + y 1 2 {\textstyle S_{r}=2\cdot (y_{1}+3)\cdot y_{1}^{\prime \prime }+y_{1}^{2}} ==== Autoreduced sets ==== Autoreduced sets are { p , r } {\textstyle \{p,r\}} and { q , r } {\textstyle \{q,r\}} . Each set is triangular with a distinct polynomial leading derivative. The set { p , q } {\textstyle \{p,q\}} is not autoreduced because p {\textstyle p} is only partially reduced with respect to q {\textstyle q} ; this set is also non-triangular because the polynomials have the same leading derivative. == Applications == === Symbolic integration === Symbolic integration applies algorithms involving polynomials and their derivatives, such as Hermite reduction, the Czichowski algorithm, the Lazard–Rioboo–Trager algorithm, the Horowitz–Ostrogradsky algorithm, squarefree factorization and splitting factorization, to special and normal polynomials. === Differential equations === Differential algebra can determine if a set of differential polynomial equations has a solution. 
A total order ranking may identify algebraic constraints. An elimination ranking may determine if one or a selected group of independent variables can express the differential equations. Using triangular decomposition and elimination order, it may be possible to solve the differential equations one differential indeterminate at a time in a step-wise method. Another approach is to create a class of differential equations with a known solution form; matching a differential equation to its class identifies the equation's solution. Methods are available to facilitate the numerical integration of a differential-algebraic system of equations. In a study of non-linear dynamical systems with chaos, researchers used differential elimination to reduce differential equations to ordinary differential equations involving a single state variable. They were successful in most cases, and this facilitated developing approximate solutions, efficiently evaluating chaos, and constructing Lyapunov functions. Researchers have applied differential elimination to understanding cellular biology, compartmental biochemical models, parameter estimation and quasi-steady state approximation (QSSA) for biochemical reactions. Using differential Gröbner bases, researchers have investigated non-classical symmetry properties of non-linear differential equations. Other applications include control theory, model theory, and algebraic geometry. Differential algebra also applies to differential-difference equations. == Algebras with derivations == === Differential graded vector space === A Z - g r a d e d {\textstyle \operatorname {\mathbb {Z} -graded} } vector space V ∙ {\textstyle V_{\bullet }} is a collection of vector spaces V m {\textstyle V_{m}} with integer degree | v | = m {\textstyle |v|=m} for v ∈ V m {\textstyle v\in V_{m}} . 
A direct sum can represent this graded vector space: V ∙ = ⨁ m ∈ Z V m {\displaystyle V_{\bullet }=\bigoplus _{m\in \mathbb {Z} }V_{m}} A differential graded vector space, or chain complex, is a graded vector space V ∙ {\textstyle V_{\bullet }} with a differential map or boundary map d m : V m → V m − 1 {\textstyle d_{m}:V_{m}\to V_{m-1}} with d m ∘ d m + 1 = 0 {\displaystyle d_{m}\circ d_{m+1}=0} . A cochain complex is a graded vector space V ∙ {\textstyle V^{\bullet }} with a differential map or coboundary map d m : V m → V m + 1 {\textstyle d_{m}:V_{m}\to V_{m+1}} with d m + 1 ∘ d m = 0 {\displaystyle d_{m+1}\circ d_{m}=0} . === Differential graded algebra === A differential graded algebra is a graded algebra A {\textstyle A} with a linear derivation d : A → A {\textstyle d:A\to A} with d ∘ d = 0 {\displaystyle d\circ d=0} that follows the graded Leibniz product rule. Graded Leibniz product rule: ∀ a , b ∈ A , d ( a ⋅ b ) = d ( a ) ⋅ b + ( − 1 ) | a | ⋅ a ⋅ d ( b ) {\displaystyle \forall a,b\in A,\ d(a\cdot b)=d(a)\cdot b+(-1)^{|a|}\cdot a\cdot d(b)} with | a | {\displaystyle |a|} the degree of vector a {\displaystyle a} . === Lie algebra === A Lie algebra is a finite-dimensional real or complex vector space g {\textstyle {\mathcal {g}}} with a bilinear bracket operator [ , ] : g × g → g {\textstyle [,]:{\mathcal {g}}\times {\mathcal {g}}\to {\mathcal {g}}} satisfying skew symmetry and the Jacobi identity property. Skew symmetry: [ X , Y ] = − [ Y , X ] {\displaystyle [X,Y]=-[Y,X]} Jacobi identity property: [ X , [ Y , Z ] ] + [ Y , [ Z , X ] ] + [ Z , [ X , Y ] ] = 0 {\displaystyle [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0} for all X , Y , Z ∈ g {\displaystyle X,Y,Z\in {\mathcal {g}}} . The adjoint operator, a d X ⁡ ( Y ) = [ Y , X ] {\textstyle \operatorname {ad_{X}} (Y)=[Y,X]} , is a derivation of the bracket because the adjoint's effect on the binary bracket operation is analogous to the derivation's effect on the binary product operation. 
This is the inner derivation determined by X {\textstyle X} . ad X ⁡ ( [ Y , Z ] ) = [ ad X ⁡ ( Y ) , Z ] + [ Y , ad X ⁡ ( Z ) ] {\displaystyle \operatorname {ad} _{X}([Y,Z])=[\operatorname {ad} _{X}(Y),Z]+[Y,\operatorname {ad} _{X}(Z)]} The universal enveloping algebra U ( g ) {\textstyle U({\mathcal {g}})} of Lie algebra g {\textstyle {\mathcal {g}}} is a maximal associative algebra with identity, generated by Lie algebra elements g {\textstyle {\mathcal {g}}} and containing products defined by the bracket operation. Maximal means that a linear homomorphism maps the universal algebra to any other algebra that otherwise has these properties. The adjoint operator is a derivation following the Leibniz product rule. Product in U ( g ) {\displaystyle U({\mathcal {g}})} : X ⋅ Y − Y ⋅ X = [ X , Y ] {\displaystyle X\cdot Y-Y\cdot X=[X,Y]} Leibniz product rule: ad X ⁡ ( Y ⋅ Z ) = ad X ⁡ ( Y ) ⋅ Z + Y ⋅ ad X ⁡ ( Z ) {\displaystyle \operatorname {ad} _{X}(Y\cdot Z)=\operatorname {ad} _{X}(Y)\cdot Z+Y\cdot \operatorname {ad} _{X}(Z)} for all X , Y , Z ∈ U ( g ) {\displaystyle X,Y,Z\in U({\mathcal {g}})} . === Weyl algebra === The Weyl algebra is an algebra A n ( K ) {\textstyle A_{n}(K)} over a ring K [ p 1 , q 1 , … , p n , q n ] {\textstyle K[p_{1},q_{1},\dots ,p_{n},q_{n}]} with a specific noncommutative product: p i ⋅ q i − q i ⋅ p i = 1 , : i ∈ { 1 , … , n } {\displaystyle p_{i}\cdot q_{i}-q_{i}\cdot p_{i}=1,\ :\ i\in \{1,\dots ,n\}} . All other indeterminate products are commutative for i , j ∈ { 1 , … , n } {\textstyle i,j\in \{1,\dots ,n\}} : p i ⋅ q j − q j ⋅ p i = 0 if i ≠ j , p i ⋅ p j − p j ⋅ p i = 0 , q i ⋅ q j − q j ⋅ q i = 0 {\displaystyle p_{i}\cdot q_{j}-q_{j}\cdot p_{i}=0{\text{ if }}i\neq j,\ p_{i}\cdot p_{j}-p_{j}\cdot p_{i}=0,\ q_{i}\cdot q_{j}-q_{j}\cdot q_{i}=0} . A Weyl algebra can represent the derivations for a commutative ring's polynomials f ∈ K [ y 1 , … , y n ] {\textstyle f\in K[y_{1},\ldots ,y_{n}]} . 
The Weyl algebra's elements are endomorphisms, the elements p 1 , … , p n {\textstyle p_{1},\ldots ,p_{n}} function as standard derivations, and map compositions generate linear differential operators. D-module is a related approach for understanding differential operators. The endomorphisms are: q j ( y k ) = y j ⋅ y k , q j ( c ) = c ⋅ y j with c ∈ K , p j ( y j ) = 1 , p j ( y k ) = 0 if j ≠ k , p j ( c ) = 0 with c ∈ K {\displaystyle q_{j}(y_{k})=y_{j}\cdot y_{k},\ q_{j}(c)=c\cdot y_{j}{\text{ with }}c\in K,\ p_{j}(y_{j})=1,\ p_{j}(y_{k})=0{\text{ if }}j\neq k,\ p_{j}(c)=0{\text{ with }}c\in K} === Pseudodifferential operator ring === The associative, possibly noncommutative ring A {\textstyle A} has derivation d : A → A {\textstyle d:A\to A} . The pseudo-differential operator ring A ( ( ∂ − 1 ) ) {\textstyle A((\partial ^{-1}))} is a left A - m o d u l e {\textstyle \operatorname {A-module} } containing ring elements L {\textstyle L} : a i ∈ A , i , i min ∈ Z , | i min | > 0 : L = ∑ i ≥ i min n a i ⋅ ∂ i {\displaystyle a_{i}\in A,\ i,i_{\min }\in \mathbb {Z} ,\ |i_{\min }|>0\ :\ L=\sum _{i\geq i_{\min }}^{n}a_{i}\cdot \partial ^{i}} The derivative operator is d ( a ) = ∂ ∘ a − a ∘ ∂ {\textstyle d(a)=\partial \circ a-a\circ \partial } . The binomial coefficient is ( i k ) {\displaystyle {\Bigl (}{i \atop k}{\Bigr )}} . Pseudo-differential operator multiplication is: ∑ i ≥ i min n a i ⋅ ∂ i ⋅ ∑ j ≥ j min m b j ⋅ ∂ j = ∑ i , j ; k ≥ 0 ( i k ) ⋅ a i ⋅ d k ( b j ) ⋅ ∂ i + j − k {\displaystyle \sum _{i\geq i_{\min }}^{n}a_{i}\cdot \partial ^{i}\cdot \sum _{j\geq j_{\min }}^{m}b_{j}\cdot \partial ^{j}=\sum _{i,j;k\geq 0}{\Bigl (}{i \atop k}{\Bigr )}\cdot a_{i}\cdot d^{k}(b_{j})\cdot \partial ^{i+j-k}} == Open problems == The Ritt problem asks whether there is an algorithm that determines if one prime differential ideal contains a second prime differential ideal when characteristic sets identify both ideals. 
The Kolchin catenary conjecture states that, given a d > 0 {\textstyle d>0} dimensional irreducible differential algebraic variety V {\textstyle V} and an arbitrary point p ∈ V {\textstyle p\in V} , a long gap chain of irreducible differential algebraic subvarieties occurs from p {\textstyle p} to V. The Jacobi bound conjecture concerns the upper bound for the order of a differential variety's irreducible component. The polynomial's orders determine a Jacobi number, and the conjecture is that the Jacobi number determines this bound. == See also == Arithmetic derivative – Function defined on integers in number theory Difference algebra Differential algebraic geometry Differential calculus over commutative algebras – part of commutative algebra Differential Galois theory – Study of Galois symmetry groups of differential fields Differentially closed field Differential graded algebra – Algebraic structure in homological algebra D-module – Module over a sheaf of differential operators Hardy field – Mathematical concept Kähler differential – Differential form in commutative algebra Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions Picard–Vessiot theory – Study of differential field extensions induced by linear differential equations Kolchin's problems == Citations == == References == == External links == David Marker's home page has several online surveys discussing differential fields.
Wikipedia/Derivation_algebra
In mathematics, a derivation is a function on an algebra that generalizes certain features of the derivative operator. Specifically, given an algebra A over a ring or a field K, a K-derivation is a K-linear map D : A → A that satisfies Leibniz's law: D ( a b ) = a D ( b ) + D ( a ) b . {\displaystyle D(ab)=aD(b)+D(a)b.} More generally, if M is an A-bimodule, a K-linear map D : A → M that satisfies the Leibniz law is also called a derivation. The collection of all K-derivations of A to itself is denoted by DerK(A). The collection of K-derivations of A into an A-module M is denoted by DerK(A, M). Derivations occur in many different contexts in diverse areas of mathematics. The partial derivative with respect to a variable is an R-derivation on the algebra of real-valued differentiable functions on Rn. The Lie derivative with respect to a vector field is an R-derivation on the algebra of differentiable functions on a differentiable manifold; more generally it is a derivation on the tensor algebra of a manifold. It follows that the adjoint representation of a Lie algebra is a derivation on that algebra. The Pincherle derivative is an example of a derivation in abstract algebra. If the algebra A is noncommutative, then the commutator with respect to an element of the algebra A defines a linear endomorphism of A to itself, which is a derivation over K. That is, [ F G , N ] = [ F , N ] G + F [ G , N ] , {\displaystyle [FG,N]=[F,N]G+F[G,N],} where [ ⋅ , N ] {\displaystyle [\cdot ,N]} is the commutator with respect to N {\displaystyle N} . An algebra A equipped with a distinguished derivation d forms a differential algebra, and is itself a significant object of study in areas such as differential Galois theory. == Properties == If A is a K-algebra, for K a ring, and D: A → A is a K-derivation, then If A has a unit 1, then D(1) = D(12) = 2D(1), so that D(1) = 0. Thus by K-linearity, D(k) = 0 for all k ∈ K. 
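The two facts above, Leibniz's law and the vanishing of a derivation on constants, can be checked mechanically. This sketch assumes the sympy library and uses d/dx on polynomials in x, which is an R-derivation; the name D is ours:

```python
import sympy as sp

x = sp.symbols("x")

def D(f):
    # d/dx is an R-derivation on the algebra of polynomials in x.
    return sp.diff(f, x)

a = x**2 + 1
b = 3 * x - 2

# Leibniz's law: D(ab) = a*D(b) + D(a)*b; the difference expands to 0.
print(sp.expand(D(a * b) - (a * D(b) + D(a) * b)))  # 0

# D(1) = 0, hence D(k) = 0 for every constant k by K-linearity.
print(D(sp.Integer(1)), D(sp.Integer(7)))  # 0 0
```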
If A is commutative, D(x2) = xD(x) + D(x)x = 2xD(x), and D(xn) = nxn−1D(x), by the Leibniz rule. More generally, for any x1, x2, …, xn ∈ A, it follows by induction that D ( x 1 x 2 ⋯ x n ) = ∑ i x 1 ⋯ x i − 1 D ( x i ) x i + 1 ⋯ x n {\displaystyle D(x_{1}x_{2}\cdots x_{n})=\sum _{i}x_{1}\cdots x_{i-1}D(x_{i})x_{i+1}\cdots x_{n}} which is ∑ i D ( x i ) ∏ j ≠ i x j {\textstyle \sum _{i}D(x_{i})\prod _{j\neq i}x_{j}} if for all i, D(xi) commutes with x 1 , x 2 , … , x i − 1 {\displaystyle x_{1},x_{2},\ldots ,x_{i-1}} . For n > 1, Dn is not a derivation, instead satisfying a higher-order Leibniz rule: D n ( u v ) = ∑ k = 0 n ( n k ) ⋅ D n − k ( u ) ⋅ D k ( v ) . {\displaystyle D^{n}(uv)=\sum _{k=0}^{n}{\binom {n}{k}}\cdot D^{n-k}(u)\cdot D^{k}(v).} Moreover, if M is an A-bimodule, write Der K ⁡ ( A , M ) {\displaystyle \operatorname {Der} _{K}(A,M)} for the set of K-derivations from A to M. DerK(A, M) is a module over K. DerK(A) is a Lie algebra with Lie bracket defined by the commutator: [ D 1 , D 2 ] = D 1 ∘ D 2 − D 2 ∘ D 1 . {\displaystyle [D_{1},D_{2}]=D_{1}\circ D_{2}-D_{2}\circ D_{1}.} since it is readily verified that the commutator of two derivations is again a derivation. There is an A-module ΩA/K (called the Kähler differentials) with a K-derivation d: A → ΩA/K through which any derivation D: A → M factors. 
That is, for any derivation D there is an A-module map φ with D : A ⟶ d Ω A / K ⟶ φ M {\displaystyle D:A{\stackrel {d}{\longrightarrow }}\Omega _{A/K}{\stackrel {\varphi }{\longrightarrow }}M} The correspondence D ↔ φ {\displaystyle D\leftrightarrow \varphi } is an isomorphism of A-modules: Der K ⁡ ( A , M ) ≃ Hom A ⁡ ( Ω A / K , M ) {\displaystyle \operatorname {Der} _{K}(A,M)\simeq \operatorname {Hom} _{A}(\Omega _{A/K},M)} If k ⊂ K is a subring, then A inherits a k-algebra structure, so there is an inclusion Der K ⁡ ( A , M ) ⊂ Der k ⁡ ( A , M ) , {\displaystyle \operatorname {Der} _{K}(A,M)\subset \operatorname {Der} _{k}(A,M),} since any K-derivation is a fortiori a k-derivation. == Graded derivations == Given a graded algebra A and a homogeneous linear map D of grade |D| on A, D is a homogeneous derivation if D ( a b ) = D ( a ) b + ε | a | | D | a D ( b ) {\displaystyle {D(ab)=D(a)b+\varepsilon ^{|a||D|}aD(b)}} for every homogeneous element a and every element b of A for a commutator factor ε = ±1. A graded derivation is a sum of homogeneous derivations with the same ε. If ε = 1, this definition reduces to the usual case. If ε = −1, however, then D ( a b ) = D ( a ) b + ( − 1 ) | a | | D | a D ( b ) {\displaystyle {D(ab)=D(a)b+(-1)^{|a||D|}aD(b)}} for odd |D|, and D is called an anti-derivation. Examples of anti-derivations include the exterior derivative and the interior product acting on differential forms. Graded derivations of superalgebras (i.e. Z2-graded algebras) are often called superderivations. == Related notions == Hasse–Schmidt derivations are K-algebra homomorphisms A → A [ [ t ] ] . {\displaystyle A\to A[[t]].} Composing further with the map that sends a formal power series ∑ a n t n {\displaystyle \sum a_{n}t^{n}} to the coefficient a 1 {\displaystyle a_{1}} gives a derivation. 
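The higher-order Leibniz rule for D^n quoted in the properties above can also be verified symbolically. The sketch below assumes sympy, with arbitrary illustrative choices of u, v and n:

```python
import sympy as sp

x = sp.symbols("x")
u = sp.sin(x)
v = sp.exp(2 * x)
n = 4

# Higher-order Leibniz rule: D^n(uv) = sum_{k=0}^{n} C(n,k) D^(n-k)(u) D^k(v)
lhs = sp.diff(u * v, x, n)
rhs = sum(sp.binomial(n, k) * sp.diff(u, x, n - k) * sp.diff(v, x, k)
          for k in range(n + 1))

print(sp.simplify(lhs - rhs))  # 0
```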
== See also == In differential geometry derivations are tangent vectors Kähler differential Hasse derivative p-derivation Wirtinger derivatives Derivative of the exponential map == References == Bourbaki, Nicolas (1989), Algebra I, Elements of mathematics, Springer-Verlag, ISBN 3-540-64243-9. Eisenbud, David (1999), Commutative algebra with a view toward algebraic geometry (3rd. ed.), Springer-Verlag, ISBN 978-0-387-94269-8. Matsumura, Hideyuki (1970), Commutative algebra, Mathematics lecture note series, W. A. Benjamin, ISBN 978-0-8053-7025-6. Kolář, Ivan; Slovák, Jan; Michor, Peter W. (1993), Natural operations in differential geometry, Springer-Verlag.
Wikipedia/Derivation_(abstract_algebra)
In mathematics, the Cayley–Dickson construction, sometimes also known as the Cayley–Dickson process or the Cayley–Dickson procedure, produces a sequence of algebras over the field of real numbers, each with twice the dimension of the previous one. It is named after Arthur Cayley and Leonard Eugene Dickson. The algebras produced by this process are known as Cayley–Dickson algebras, for example complex numbers, quaternions, and octonions. These examples are useful composition algebras frequently applied in mathematical physics. The Cayley–Dickson construction defines a new algebra as a Cartesian product of an algebra with itself, with multiplication defined in a specific way (different from the componentwise multiplication) and an involution known as conjugation. The product of an element and its conjugate (or sometimes the square root of this product) is called the norm. The symmetries of the real field disappear as the Cayley–Dickson construction is repeatedly applied: first losing order, then commutativity of multiplication, associativity of multiplication, and finally alternativity. More generally, the Cayley–Dickson construction takes any algebra with involution to another algebra with involution of twice the dimension.: 45  Hurwitz's theorem states that the reals, complex numbers, quaternions, and octonions are the only finite-dimensional normed division algebras over the real numbers, while Frobenius theorem states that the first three are the only finite-dimensional associative division algebras over the real numbers. == Synopsis == The Cayley–Dickson construction is due to Leonard Dickson in 1919 showing how the octonions can be constructed as a two-dimensional algebra over quaternions. In fact, starting with a field F, the construction yields a sequence of F-algebras of dimension 2^n. For n = 2 it is an associative algebra called a quaternion algebra, and for n = 3 it is an alternative algebra called an octonion algebra. 
These instances n = 1, 2 and 3 produce composition algebras as shown below. The case n = 1 starts with elements (a, b) in F × F and defines the conjugate (a, b)* to be (a*, –b), where a* = a in the case n = 1 and is subsequently determined by the formula. The essence of the F-algebra lies in the definition of the product of two elements (a, b) and (c, d): ( a , b ) × ( c , d ) = ( a c − d ∗ b , d a + b c ∗ ) . {\displaystyle (a,b)\times (c,d)=(ac-d^{*}b,da+bc^{*}).} Proposition 1: For z = ( a , b ) {\displaystyle z=(a,b)} and w = ( c , d ) , {\displaystyle w=(c,d),} the conjugate of the product is w ∗ z ∗ = ( z w ) ∗ . {\displaystyle w^{*}z^{*}=(zw)^{*}.} proof: ( c ∗ , − d ) ( a ∗ , − b ) = ( c ∗ a ∗ + b ∗ ( − d ) , − b c ∗ − d a ) = ( z w ) ∗ . {\displaystyle (c^{*},-d)(a^{*},-b)=(c^{*}a^{*}+b^{*}(-d),-bc^{*}-da)=(zw)^{*}.} Proposition 2: If the F-algebra is associative and N ( z ) = z z ∗ {\displaystyle N(z)=zz^{*}} , then N ( z w ) = N ( z ) N ( w ) . {\displaystyle N(zw)=N(z)N(w).} proof: N ( z w ) = ( a c − d ∗ b , d a + b c ∗ ) ( c ∗ a ∗ − b ∗ d , − d a − b c ∗ ) = ( a a ∗ + b b ∗ ) ( c c ∗ + d d ∗ ) {\displaystyle N(zw)=(ac-d^{*}b,da+bc^{*})(c^{*}a^{*}-b^{*}d,-da-bc^{*})=(aa^{*}+bb^{*})(cc^{*}+dd^{*})} + terms that cancel by the associative property. == Stages in construction of real algebras == Details of the construction of the classical real algebras are as follows: === Complex numbers as ordered pairs === The complex numbers can be written as ordered pairs (a, b) of real numbers a and b, with the addition operator being component-wise and with multiplication defined by ( a , b ) ( c , d ) = ( a c − b d , a d + b c ) . {\displaystyle (a,b)(c,d)=(ac-bd,ad+bc).\,} A complex number whose second component is zero is associated with a real number: the complex number (a, 0) is associated with the real number a. 
The complex conjugate (a, b)* of (a, b) is given by ( a , b ) ∗ = ( a ∗ , − b ) = ( a , − b ) {\displaystyle (a,b)^{*}=(a^{*},-b)=(a,-b)} since a is a real number and is its own conjugate. The conjugate has the property that ( a , b ) ∗ ( a , b ) = ( a a + b b , a b − b a ) = ( a 2 + b 2 , 0 ) , {\displaystyle (a,b)^{*}(a,b)=(aa+bb,ab-ba)=\left(a^{2}+b^{2},0\right),\,} which is a non-negative real number. In this way, conjugation defines a norm, making the complex numbers a normed vector space over the real numbers: the norm of a complex number z is | z | = ( z ∗ z ) 1 2 . {\displaystyle |z|=\left(z^{*}z\right)^{\frac {1}{2}}.\,} Furthermore, for any non-zero complex number z, conjugation gives a multiplicative inverse, z − 1 = z ∗ | z | 2 . {\displaystyle z^{-1}={\frac {z^{*}}{|z|^{2}}}.} As a complex number consists of two independent real numbers, they form a two-dimensional vector space over the real numbers. Besides being of higher dimension, the complex numbers can be said to lack one algebraic property of the real numbers: a real number is its own conjugate. === Quaternions === The next step in the construction is to generalize the multiplication and conjugation operations. Form ordered pairs (a, b) of complex numbers a and b, with multiplication defined by ( a , b ) ( c , d ) = ( a c − d ∗ b , d a + b c ∗ ) . {\displaystyle (a,b)(c,d)=(ac-d^{*}b,da+bc^{*}).\,} Slight variations on this formula are possible; the resulting constructions will yield structures identical up to the signs of bases. The order of the factors seems odd now, but will be important in the next step. Define the conjugate (a, b)* of (a, b) by ( a , b ) ∗ = ( a ∗ , − b ) . {\displaystyle (a,b)^{*}=(a^{*},-b).\,} These operators are direct extensions of their complex analogs: if a and b are taken from the real subset of complex numbers, the appearance of the conjugate in the formulas has no effect, so the operators are the same as those for the complex numbers. 
The product of a nonzero element with its conjugate is a non-negative real number: ( a , b ) ∗ ( a , b ) = ( a ∗ , − b ) ( a , b ) = ( a ∗ a + b ∗ b , b a ∗ − b a ∗ ) = ( | a | 2 + | b | 2 , 0 ) . {\displaystyle {\begin{aligned}(a,b)^{*}(a,b)&=(a^{*},-b)(a,b)\\&=(a^{*}a+b^{*}b,ba^{*}-ba^{*})\\&=\left(|a|^{2}+|b|^{2},0\right).\,\end{aligned}}} As before, the conjugate thus yields a norm and an inverse for any such ordered pair. So in the sense we explained above, these pairs constitute an algebra something like the real numbers. They are the quaternions, named by Hamilton in 1843. As a quaternion consists of two independent complex numbers, they form a four-dimensional vector space over the real numbers. The multiplication of quaternions is not quite like the multiplication of real numbers, though; it is not commutative – that is, if p and q are quaternions, it is not always true that pq = qp. === Octonions === All the steps to create further algebras are the same from octonions onwards. This time, form ordered pairs (p, q) of quaternions p and q, with multiplication and conjugation defined exactly as for the quaternions: ( p , q ) ( r , s ) = ( p r − s ∗ q , s p + q r ∗ ) . {\displaystyle (p,q)(r,s)=(pr-s^{*}q,sp+qr^{*}).\,} Note, however, that because the quaternions are not commutative, the order of the factors in the multiplication formula becomes important—if the last factor in the multiplication formula were r*q rather than qr*, the formula for multiplication of an element by its conjugate would not yield a real number. For exactly the same reasons as before, the conjugation operator yields a norm and a multiplicative inverse of any nonzero element. This algebra was discovered by John T. Graves in 1843, and is called the octonions or the "Cayley numbers". As an octonion consists of two independent quaternions, they form an eight-dimensional vector space over the real numbers. 
The multiplication of octonions is even stranger than that of quaternions; besides being non-commutative, it is not associative – that is, if p, q, and r are octonions, it is not always true that (pq)r = p(qr). Because of this non-associativity, octonions have no matrix representation. === Sedenions === The algebra immediately following the octonions is called the sedenions. It retains the algebraic property of power associativity, meaning that if s is a sedenion, s^n s^m = s^(n+m), but loses the property of being an alternative algebra and hence cannot be a composition algebra. === Trigintaduonions === The algebra immediately following the sedenions is the trigintaduonions, which form a 32-dimensional algebra over the real numbers and can be represented by blackboard bold T {\displaystyle \mathbb {T} } . === Further algebras === The Cayley–Dickson construction can be carried on ad infinitum, at each step producing a power-associative algebra whose dimension is double that of the algebra of the preceding step. These include the 64-dimensional sexagintaquatronions (or 64-nions), the 128-dimensional centumduodetrigintanions (or 128-nions), the 256-dimensional ducentiquinquagintasexions (or 256-nions), and so on ad infinitum. All the algebras generated in this way over a field are quadratic: that is, each element satisfies a quadratic equation with coefficients from the field.: 50  In 1954, R. D. Schafer proved that the algebras generated by the Cayley–Dickson process over a field F satisfy the flexible identity. He also proved that any derivation algebra of a Cayley–Dickson algebra is isomorphic to the derivation algebra of Cayley numbers, a 14-dimensional Lie algebra over F.
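The whole tower of algebras can also be generated at once. The sketch below (an illustrative encoding, not from the article) stores an element of the 2^n-dimensional algebra as a flat list of real coefficients, reads it as the ordered pair (first half, second half), and applies the same product and conjugation formulas recursively; the assertions exhibit exactly the failures described above.

```python
# Recursive Cayley-Dickson doubling on flat coefficient lists; a list of
# length 2^n is split in half to recover the ordered pair (a, b).

def conj(x):
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def mul(x, y):
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    # (a, b)(c, d) = (ac - d*b, da + bc*)
    left = [s - t for s, t in zip(mul(a, c), mul(conj(d), b))]
    right = [s + t for s, t in zip(mul(d, a), mul(b, conj(c)))]
    return left + right

def e(i, dim):                      # basis element e_i as a coefficient list
    v = [0.0] * dim
    v[i] = 1.0
    return v

# dim 4 (quaternions): multiplication is not commutative
assert mul(e(1, 4), e(2, 4)) != mul(e(2, 4), e(1, 4))
# dim 8 (octonions): multiplication is not associative
p, q, r = e(1, 8), e(2, 8), e(4, 8)
assert mul(mul(p, q), r) != mul(p, mul(q, r))
```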
== Modified Cayley–Dickson construction == The Cayley–Dickson construction, starting from the real numbers R {\displaystyle \mathbb {R} } , generates the composition algebras C {\displaystyle \mathbb {C} } (the complex numbers), H {\displaystyle \mathbb {H} } (the quaternions), and O {\displaystyle \mathbb {O} } (the octonions). There are also composition algebras whose norm is an isotropic quadratic form, which are obtained through a slight modification, by replacing the minus sign in the definition of the product of ordered pairs with a plus sign, as follows: ( a , b ) ( c , d ) = ( a c + d ∗ b , d a + b c ∗ ) . {\displaystyle (a,b)(c,d)=(ac+d^{*}b,da+bc^{*}).} When this modified construction is applied to R {\displaystyle \mathbb {R} } , one obtains the split-complex numbers, which are ring-isomorphic to the direct product R × R ; {\displaystyle \mathbb {R} \times \mathbb {R} ;} following that, one obtains the split-quaternions, an associative algebra isomorphic to that of the 2 × 2 real matrices; and the split-octonions, which are isomorphic to Zorn(R). Applying the original Cayley–Dickson construction to the split-complexes also results in the split-quaternions and then the split-octonions. == General Cayley–Dickson construction == Albert (1942, p. 171) gave a slight generalization, defining the product and involution on B = A ⊕ A for A an algebra with involution (with (xy)* = y*x*) to be ( p , q ) ( r , s ) = ( p r − γ s ∗ q , s p + q r ∗ ) ( p , q ) ∗ = ( p ∗ , − q ) {\displaystyle {\begin{aligned}(p,q)(r,s)&=(pr-\gamma s^{*}q,sp+qr^{*})\,\\(p,q)^{*}&=(p^{*},-q)\,\end{aligned}}} for γ an additive map that commutes with * and left and right multiplication by any element. (Over the reals all choices of γ are equivalent to −1, 0 or 1.) In this construction, A is an algebra with involution, meaning: A is an abelian group under +; A has a product that is left and right distributive over +; and A has an involution *, with (x*)* = x, (x + y)* = x* + y*, (xy)* = y*x*.
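Over the reals, where the base conjugation is trivial, Albert's one-parameter product is easy to sketch. The fragment below is an illustration of how the choice of γ selects the ordinary or the split algebra: γ = 1 reproduces the original minus-sign construction, while γ = −1 gives the plus-sign modification above.

```python
# Albert's doubling on pairs of reals: (p, q)(r, s) = (pr - g*s*q, sp + q*r),
# with conjugation on the base algebra (the reals) being the identity.

def dmul(x, y, g):
    p, q = x
    r, s = y
    return (p * r - g * s * q, s * p + q * r)

u = (0.0, 1.0)                          # the adjoined unit (0, 1)

assert dmul(u, u, 1.0) == (-1.0, 0.0)   # g = 1:  u^2 = -1, the complex numbers
assert dmul(u, u, -1.0) == (1.0, 0.0)   # g = -1: u^2 = +1, the split-complex numbers

# In the split case the norm is isotropic: (1 + u)(1 - u) = 0, a zero divisor.
assert dmul((1.0, 1.0), (1.0, -1.0), -1.0) == (0.0, 0.0)
```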
The algebra B = A ⊕ A produced by the Cayley–Dickson construction is also an algebra with involution. B inherits properties from A unchanged as follows. If A has an identity 1A, then B has an identity (1A, 0). If A has the property that x + x*, xx* associate and commute with all elements, then so does B. This property implies that any element generates a commutative associative *-algebra, so in particular the algebra is power associative. Other properties of A only induce weaker properties of B: If A is commutative and has trivial involution, then B is commutative. If A is commutative and associative then B is associative. If A is associative and x + x*, xx* associate and commute with everything, then B is an alternative algebra. == Notes == == References == Albert, A. A. (1942), "Quadratic forms permitting composition", Annals of Mathematics, Second Series, 43 (1): 161–177, doi:10.2307/1968887, JSTOR 1968887, MR 0006140 (see p. 171) Baez, John (2002), "The Octonions", Bulletin of the American Mathematical Society, 39 (2): 145–205, arXiv:math/0105155, doi:10.1090/S0273-0979-01-00934-X, S2CID 586512. (See "Section 2.2, The Cayley–Dickson Construction") Dickson, L. E. (1919), "On Quaternions and Their Generalization and the History of the Eight Square Theorem", Annals of Mathematics, Second Series, 20 (3), Annals of Mathematics: 155–171, doi:10.2307/1967865, JSTOR 1967865 Biss, Daniel K.; Christensen, J. Daniel; Dugger, Daniel; Isaksen, Daniel C. (2007). "Large annihilators in Cayley–Dickson algebras II". Boletin de la Sociedad Matematica Mexicana. 3: 269–292. arXiv:math/0702075. Bibcode:2007math......2075B. Hamilton, William Rowan (1847), "On Quaternions", Proceedings of the Royal Irish Academy, 3: 1–16, ISSN 1393-7197 Kantor, I. L.; Solodownikow, A. S. (1978), Hyperkomplexe Zahlen, Leipzig: B.G. Teubner (the following reference gives the English translation of this book) Kantor, I. L.; Solodovnikov, A. S. 
(1989), Hypercomplex numbers, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96980-0, MR 0996029 Roos, Guy (2008). "Exceptional symmetric domains §1: Cayley algebras". In Gilligan, Bruce; Roos, Guy (eds.). Symmetries in Complex Analysis. Contemporary Mathematics. Vol. 468. American Mathematical Society. ISBN 978-0-8218-4459-5. == Further reading == Daboul, Jamil; Delbourgo, Robert (1999). "Matrix representations of octonions and generalizations". Journal of Mathematical Physics. 40 (8): 4134–50. arXiv:hep-th/9906065. Bibcode:1999JMP....40.4134D. doi:10.1063/1.532950. S2CID 16932871.
Wikipedia/Cayley–Dickson_algebra
In mathematics, hypercomplex number is a traditional term for an element of a finite-dimensional unital algebra over the field of real numbers. The study of hypercomplex numbers in the late 19th century forms the basis of modern group representation theory. == History == In the nineteenth century, number systems called quaternions, tessarines, coquaternions, biquaternions, and octonions became established concepts in mathematical literature, extending the real and complex numbers. The concept of a hypercomplex number covered them all, and called for a discipline to explain and classify them. The cataloguing project began in 1872 when Benjamin Peirce first published his Linear Associative Algebra, and was carried forward by his son Charles Sanders Peirce. Most significantly, they identified the nilpotent and the idempotent elements as useful hypercomplex numbers for classifications. The Cayley–Dickson construction used involutions to generate complex numbers, quaternions, and octonions out of the real number system. Hurwitz and Frobenius proved theorems that put limits on hypercomplexity: Hurwitz's theorem says finite-dimensional real composition algebras are the reals R {\displaystyle \mathbb {R} } , the complexes C {\displaystyle \mathbb {C} } , the quaternions H {\displaystyle \mathbb {H} } , and the octonions O {\displaystyle \mathbb {O} } , and the Frobenius theorem says the only real associative division algebras are R {\displaystyle \mathbb {R} } , C {\displaystyle \mathbb {C} } , and H {\displaystyle \mathbb {H} } . In 1958 J. Frank Adams published a further generalization in terms of Hopf invariants on H-spaces which still limits the dimension to 1, 2, 4, or 8. It was matrix algebra that harnessed the hypercomplex systems. For instance, 2 × 2 real matrices were found isomorphic to coquaternions. Soon the matrix paradigm began to explain several others as they were represented by matrices and their operations.
In 1907 Joseph Wedderburn showed that associative hypercomplex systems could be represented by square matrices, or direct products of algebras of square matrices. From that date the preferred term for a hypercomplex system became associative algebra, as seen in the title of Wedderburn's thesis at University of Edinburgh. Note, however, that non-associative systems like octonions and hyperbolic quaternions represent another type of hypercomplex number. As Thomas Hawkins explains, the hypercomplex numbers are stepping stones to learning about Lie groups and group representation theory. For instance, in 1929 Emmy Noether wrote on "hypercomplex quantities and representation theory". In 1973 Kantor and Solodovnikov published a textbook on hypercomplex numbers which was translated in 1989. Karen Parshall has written a detailed exposition of the heyday of hypercomplex numbers, including the role of mathematicians including Theodor Molien and Eduard Study. For the transition to modern algebra, Bartel van der Waerden devotes thirty pages to hypercomplex numbers in his History of Algebra. == Definition == A definition of a hypercomplex number is given by Kantor & Solodovnikov (1989) as an element of a unital, but not necessarily associative or commutative, finite-dimensional algebra over the real numbers. Elements are generated with real number coefficients ( a 0 , … , a n ) {\displaystyle (a_{0},\dots ,a_{n})} for a basis { 1 , i 1 , … , i n } {\displaystyle \{1,i_{1},\dots ,i_{n}\}} . Where possible, it is conventional to choose the basis so that i k 2 ∈ { − 1 , 0 , + 1 } {\displaystyle i_{k}^{2}\in \{-1,0,+1\}} . A technical approach to hypercomplex numbers directs attention first to those of dimension two. == Two-dimensional real algebras == Theorem: 14, 15  Up to isomorphism, there are exactly three 2-dimensional unital algebras over the reals: the ordinary complex numbers, the split-complex numbers, and the dual numbers.
In particular, every 2-dimensional unital algebra over the reals is associative and commutative. Proof: Since the algebra is 2-dimensional, we can pick a basis {1, u}. Since the algebra is closed under squaring, the non-real basis element u squares to a linear combination of 1 and u: u 2 = a 0 + a 1 u {\displaystyle u^{2}=a_{0}+a_{1}u} for some real numbers a0 and a1. Using the common method of completing the square by subtracting a1u and adding the quadratic complement a12 / 4 to both sides yields u 2 − a 1 u + 1 4 a 1 2 = a 0 + 1 4 a 1 2 . {\displaystyle u^{2}-a_{1}u+{\frac {1}{4}}a_{1}^{2}=a_{0}+{\frac {1}{4}}a_{1}^{2}.} Thus ( u − 1 2 a 1 ) 2 = u ~ 2 {\textstyle \left(u-{\frac {1}{2}}a_{1}\right)^{2}={\tilde {u}}^{2}} where u ~ 2 = a 0 + 1 4 a 1 2 . {\textstyle {\tilde {u}}^{2}~=a_{0}+{\frac {1}{4}}a_{1}^{2}.} The three cases depend on this real value: If 4a0 = −a12, the above formula yields ũ2 = 0. Hence, ũ can directly be identified with the nilpotent element ε {\displaystyle \varepsilon } of the basis { 1 , ε } {\displaystyle \{1,~\varepsilon \}} of the dual numbers. If 4a0 > −a12, the above formula yields ũ2 > 0. This leads to the split-complex numbers which have normalized basis { 1 , j } {\displaystyle \{1,~j\}} with j 2 = + 1 {\displaystyle j^{2}=+1} . To obtain j from ũ, the latter must be divided by the positive real number a := a 0 + 1 4 a 1 2 {\textstyle a\mathrel {:=} {\sqrt {a_{0}+{\frac {1}{4}}a_{1}^{2}}}} which has the same square as ũ has. If 4a0 < −a12, the above formula yields ũ2 < 0. This leads to the complex numbers which have normalized basis { 1 , i } {\displaystyle \{1,~i\}} with i 2 = − 1 {\displaystyle i^{2}=-1} . To yield i from ũ, the latter has to be divided by a positive real number a := 1 4 a 1 2 − a 0 {\textstyle a\mathrel {:=} {\sqrt {{\frac {1}{4}}a_{1}^{2}-a_{0}}}} which squares to the negative of ũ2. The complex numbers are the only 2-dimensional hypercomplex algebra that is a field.
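The case split in the proof can be phrased as a small decision procedure. The sketch below (illustrative only) takes the structure constants a0, a1 from u² = a0 + a1u, forms ũ² = a0 + a1²/4, and reads off which of the three algebras results from the sign of that real number.

```python
def classify(a0, a1):
    t = a0 + a1 * a1 / 4.0             # the square of u~ = u - a1/2
    if t == 0:
        return "dual numbers"          # u~ is nilpotent
    if t > 0:
        return "split-complex numbers" # u~ / sqrt(t) squares to +1
    return "complex numbers"           # u~ / sqrt(-t) squares to -1

assert classify(0.0, 0.0) == "dual numbers"            # u^2 = 0
assert classify(1.0, 0.0) == "split-complex numbers"   # u^2 = +1
assert classify(-1.0, 0.0) == "complex numbers"        # u^2 = -1, i.e. u = i
assert classify(-1.0, 1.0) == "complex numbers"        # u^2 = u - 1
```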
Split algebras such as the split-complex numbers that include non-real roots of 1 also contain idempotents 1 2 ( 1 ± j ) {\textstyle {\frac {1}{2}}(1\pm j)} and zero divisors ( 1 + j ) ( 1 − j ) = 0 {\displaystyle (1+j)(1-j)=0} , so such algebras cannot be division algebras. However, these properties can turn out to be very meaningful, for instance in representing a light cone with a null cone. In a 2004 edition of Mathematics Magazine the 2-dimensional real algebras were styled the "generalized complex numbers". The idea of cross-ratio of four complex numbers can be extended to the 2-dimensional real algebras. == Higher-dimensional examples (more than one non-real axis) == === Clifford algebras === A Clifford algebra is the unital associative algebra generated over an underlying vector space equipped with a quadratic form. Over the real numbers this is equivalent to being able to define a symmetric scalar product, u ⋅ v = 1/2(uv + vu) that can be used to orthogonalise the quadratic form, to give a basis {e1, ..., ek} such that: 1 2 ( e i e j + e j e i ) = { − 1 , 0 , + 1 i = j , 0 i ≠ j . {\displaystyle {\frac {1}{2}}\left(e_{i}e_{j}+e_{j}e_{i}\right)={\begin{cases}-1,0,+1&i=j,\\0&i\not =j.\end{cases}}} Imposing closure under multiplication generates a multivector space spanned by a basis of 2^k elements, {1, e1, e2, e3, ..., e1e2, ..., e1e2e3, ...}. These can be interpreted as the basis of a hypercomplex number system. Unlike the basis {e1, ..., ek}, the remaining basis elements need not anti-commute, depending on how many simple exchanges must be carried out to swap the two factors. So e1e2 = −e2e1, but e1(e2e3) = +(e2e3)e1. Putting aside the bases which contain an element ei such that ei2 = 0 (i.e.
directions in the original space over which the quadratic form was degenerate), the remaining Clifford algebras can be identified by the label Clp,q( R {\displaystyle \mathbb {R} } ), indicating that the algebra is constructed from p simple basis elements with ei2 = +1, q with ei2 = −1, and where R {\displaystyle \mathbb {R} } indicates that this is to be a Clifford algebra over the reals—i.e. coefficients of elements of the algebra are to be real numbers. These algebras, called geometric algebras, form a systematic set, which turn out to be very useful in physics problems which involve rotations, phases, or spins, notably in classical and quantum mechanics, electromagnetic theory and relativity. Examples include: the complex numbers Cl0,1( R {\displaystyle \mathbb {R} } ), split-complex numbers Cl1,0( R {\displaystyle \mathbb {R} } ), quaternions Cl0,2( R {\displaystyle \mathbb {R} } ), split-biquaternions Cl0,3( R {\displaystyle \mathbb {R} } ), split-quaternions Cl1,1( R {\displaystyle \mathbb {R} } ) ≈ Cl2,0( R {\displaystyle \mathbb {R} } ) (the natural algebra of two-dimensional space); Cl3,0( R {\displaystyle \mathbb {R} } ) (the natural algebra of three-dimensional space, and the algebra of the Pauli matrices); and the spacetime algebra Cl1,3( R {\displaystyle \mathbb {R} } ). The elements of the algebra Clp,q( R {\displaystyle \mathbb {R} } ) form an even subalgebra Cl[0]q+1,p( R {\displaystyle \mathbb {R} } ) of the algebra Clq+1,p( R {\displaystyle \mathbb {R} } ), which can be used to parametrise rotations in the larger algebra. There is thus a close connection between complex numbers and rotations in two-dimensional space; between quaternions and rotations in three-dimensional space; between split-complex numbers and (hyperbolic) rotations (Lorentz transformations) in 1+1-dimensional space, and so on. 
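Products of basis blades in Clp,q(R) can be computed with a standard bitmask bookkeeping trick (a common implementation technique, not specific to this article): each blade is a set of bits over the generators, the sign counts the transpositions needed to reach canonical order, and shared generators contribute their metric squares. As a check, the sketch below reproduces the quaternions as Cl0,2(R), with the naming i = e1, j = e2, k = e1e2 assumed for the illustration.

```python
# Basis-blade product for a real Clifford algebra: a blade is a bitmask over
# the generators e_1..e_k; metric[i] is the square (+1 or -1) of e_{i+1}.

def reorder_sign(a, b):
    # count transpositions needed to interleave the generators of a and b
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def blade_mul(a, b, metric):
    sign = reorder_sign(a, b)
    for idx, square in enumerate(metric):
        if (a & b) >> idx & 1:        # each shared generator contributes e_i^2
            sign *= square
    return sign, a ^ b                # (coefficient, resulting blade)

# Cl(0,2): two generators squaring to -1 reproduces the quaternions.
metric = [-1, -1]
assert blade_mul(0b01, 0b01, metric) == (-1, 0)     # e1^2 = -1
assert blade_mul(0b01, 0b10, metric) == (1, 0b11)   # e1 e2 = e12   ("ij = k")
assert blade_mul(0b10, 0b01, metric) == (-1, 0b11)  # e2 e1 = -e12  ("ji = -k")
assert blade_mul(0b11, 0b11, metric) == (-1, 0)     # (e12)^2 = -1
```

Unlike the Cayley–Dickson doubling, this product is associative in every dimension, matching the remark that follows.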
Whereas Cayley–Dickson and split-complex constructs with eight or more dimensions are not associative with respect to multiplication, Clifford algebras retain associativity at any number of dimensions. In 1995 Ian R. Porteous wrote on "The recognition of subalgebras" in his book on Clifford algebras. His Proposition 11.4 summarizes the hypercomplex cases: Let A be a real associative algebra with unit element 1. Then 1 generates R {\displaystyle \mathbb {R} } (algebra of real numbers), any two-dimensional subalgebra generated by an element e0 of A such that e02 = −1 is isomorphic to C {\displaystyle \mathbb {C} } (algebra of complex numbers), any two-dimensional subalgebra generated by an element e0 of A such that e02 = 1 is isomorphic to R {\displaystyle \mathbb {R} } 2 (pairs of real numbers with component-wise product, isomorphic to the algebra of split-complex numbers), any four-dimensional subalgebra generated by a set {e0, e1} of mutually anti-commuting elements of A such that e 0 2 = e 1 2 = − 1 {\displaystyle e_{0}^{2}=e_{1}^{2}=-1} is isomorphic to H {\displaystyle \mathbb {H} } (algebra of quaternions), any four-dimensional subalgebra generated by a set {e0, e1} of mutually anti-commuting elements of A such that e 0 2 = e 1 2 = 1 {\displaystyle e_{0}^{2}=e_{1}^{2}=1} is isomorphic to M2( R {\displaystyle \mathbb {R} } ) (2 × 2 real matrices, coquaternions), any eight-dimensional subalgebra generated by a set {e0, e1, e2} of mutually anti-commuting elements of A such that e 0 2 = e 1 2 = e 2 2 = − 1 {\displaystyle e_{0}^{2}=e_{1}^{2}=e_{2}^{2}=-1} is isomorphic to 2 H {\displaystyle \mathbb {H} } (split-biquaternions), any eight-dimensional subalgebra generated by a set {e0, e1, e2} of mutually anti-commuting elements of A such that e 0 2 = e 1 2 = e 2 2 = 1 {\displaystyle e_{0}^{2}=e_{1}^{2}=e_{2}^{2}=1} is isomorphic to M2( C {\displaystyle \mathbb {C} } ) (2 × 2 complex matrices, biquaternions, Pauli algebra). 
=== Cayley–Dickson construction === All of the Clifford algebras Clp,q( R {\displaystyle \mathbb {R} } ) apart from the real numbers, complex numbers and the quaternions contain non-real elements that square to +1; and so cannot be division algebras. A different approach to extending the complex numbers is taken by the Cayley–Dickson construction. This generates number systems of dimension 2^n, n = 2, 3, 4, ..., with bases { 1 , i 1 , … , i 2 n − 1 } {\displaystyle \left\{1,i_{1},\dots ,i_{2^{n}-1}\right\}} , where all the non-real basis elements anti-commute and satisfy i m 2 = − 1 {\displaystyle i_{m}^{2}=-1} . In 8 or more dimensions (n ≥ 3) these algebras are non-associative. In 16 or more dimensions (n ≥ 4) these algebras also have zero-divisors. The first algebras in this sequence include the 4-dimensional quaternions, 8-dimensional octonions, and 16-dimensional sedenions. An algebraic symmetry is lost with each increase in dimensionality: quaternion multiplication is not commutative, octonion multiplication is non-associative, and the norm of sedenions is not multiplicative. After the sedenions are the 32-dimensional trigintaduonions (or 32-nions), the 64-dimensional sexagintaquatronions (or 64-nions), the 128-dimensional centumduodetrigintanions (or 128-nions), the 256-dimensional ducentiquinquagintasexions (or 256-nions), and so on ad infinitum, as summarized in the table below. The Cayley–Dickson construction can be modified by inserting an extra sign at some stages.
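The zero divisors that appear at dimension 16 can be exhibited concretely. The sketch below applies the recursive doubling product on flat coefficient lists and checks the classical example (e3 + e10)(e6 − e15) = 0; note that which index pairs annihilate each other depends on the sign convention chosen for the doubling, and the pair used here vanishes under the convention (a, b)(c, d) = (ac − d*b, da + bc*).

```python
# Sedenion zero divisors via recursive Cayley-Dickson doubling.

def conj(x):
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def mul(x, y):
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    # (a, b)(c, d) = (ac - d*b, da + bc*)
    left = [s - t for s, t in zip(mul(a, c), mul(conj(d), b))]
    right = [s + t for s, t in zip(mul(d, a), mul(b, conj(c)))]
    return left + right

def e(i, dim=16):
    v = [0.0] * dim
    v[i] = 1.0
    return v

x = [s + t for s, t in zip(e(3), e(10))]    # e3 + e10
y = [s - t for s, t in zip(e(6), e(15))]    # e6 - e15
assert mul(x, y) == [0.0] * 16              # nonzero factors, zero product
# Consequently the norm is not multiplicative: |x||y| = 2, while |xy| = 0.
```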
It then generates the "split algebras" in the collection of composition algebras instead of the division algebras: split-complex numbers with basis { 1 , i 1 } {\displaystyle \{1,\,i_{1}\}} satisfying i 1 2 = + 1 {\displaystyle \ i_{1}^{2}=+1} , split-quaternions with basis { 1 , i 1 , i 2 , i 3 } {\displaystyle \{1,\,i_{1},\,i_{2},\,i_{3}\}} satisfying i 1 2 = − 1 , i 2 2 = i 3 2 = + 1 {\displaystyle \ i_{1}^{2}=-1,\,i_{2}^{2}=i_{3}^{2}=+1} , and split-octonions with basis { 1 , i 1 , … , i 7 } {\displaystyle \{1,\,i_{1},\,\dots ,\,i_{7}\}} satisfying i 1 2 = i 2 2 = i 3 2 = − 1 {\displaystyle \ i_{1}^{2}=i_{2}^{2}=i_{3}^{2}=-1} , i 4 2 = i 5 2 = i 6 2 = i 7 2 = + 1. {\displaystyle \ i_{4}^{2}=i_{5}^{2}=i_{6}^{2}=i_{7}^{2}=+1.} Unlike the complex numbers, the split-complex numbers are not algebraically closed, and further contain nontrivial zero divisors and nontrivial idempotents. As with the quaternions, split-quaternions are not commutative, but further contain nilpotents; they are isomorphic to the square matrices of dimension two. Split-octonions are non-associative and contain nilpotents. === Tensor products === The tensor product of any two algebras is another algebra, which can be used to produce many more examples of hypercomplex number systems. In particular taking tensor products with the complex numbers (considered as algebras over the reals) leads to four-dimensional bicomplex numbers C ⊗ R C {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} } (isomorphic to tessarines C ⊗ R D {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }D} ), eight-dimensional biquaternions C ⊗ R H {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {H} } , and 16-dimensional complex octonions C ⊗ R O {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {O} } . === Further examples === bicomplex numbers: a 4-dimensional vector space over the reals, 2-dimensional over the complex numbers, isomorphic to tessarines. 
multicomplex numbers: 2^n-dimensional vector spaces over the reals, 2^(n−1)-dimensional over the complex numbers composition algebra: algebra with a quadratic form that composes with the product == See also == Thomas Kirkman Georg Scheffers Richard Brauer Hypercomplex analysis == References == == Further reading == == External links == "Hypercomplex number", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Hypercomplex number". MathWorld. Study, E., On systems of complex numbers and their application to the theory of transformation groups (PDF) (English translation) Frobenius, G., Theory of hypercomplex quantities (PDF) (English translation)
Wikipedia/Hypercomplex_algebra
In mathematics, a quadratic algebra is a filtered algebra generated by degree one elements, with defining relations of degree 2. It was pointed out by Yuri Manin that such algebras play an important role in the theory of quantum groups. The most important class of graded quadratic algebras is Koszul algebras. == Definition == A graded quadratic algebra A is determined by a vector space of generators V = A1 and a subspace of homogeneous quadratic relations S ⊂ V ⊗ V. Thus A = T ( V ) / ⟨ S ⟩ {\displaystyle A=T(V)/\langle S\rangle } and inherits its grading from the tensor algebra T(V). If the subspace of relations is instead allowed to also contain inhomogeneous degree 2 elements, i.e. S ⊂ k ⊕ V ⊕ (V ⊗ V), this construction results in a filtered quadratic algebra. A graded quadratic algebra A as above admits a quadratic dual: the quadratic algebra generated by V* and with quadratic relations forming the orthogonal complement of S in V* ⊗ V*. == Examples == The tensor algebra, symmetric algebra and exterior algebra of a finite-dimensional vector space are graded quadratic (in fact, Koszul) algebras. The universal enveloping algebra of a finite-dimensional Lie algebra is a filtered quadratic algebra. The Clifford algebra of a finite-dimensional vector space equipped with a quadratic form is a filtered quadratic algebra. The Weyl algebra of a finite-dimensional symplectic vector space is a filtered quadratic algebra. == References == == Further reading == Mazorchuk, Volodymyr; Ovsienko, Serge; Stroppel, Catharina (2009), "Quadratic duals, Koszul dual functors, and applications", Trans. Amer. Math. Soc., 361 (3): 1129–1172, arXiv:math.RT/0603475, doi:10.1090/S0002-9947-08-04539-X
Wikipedia/Quadratic_algebra
In mathematics, specifically category theory, a family of generators (or family of separators) of a category C {\displaystyle {\mathcal {C}}} is a collection G ⊆ O b ( C ) {\displaystyle {\mathcal {G}}\subseteq Ob({\mathcal {C}})} of objects in C {\displaystyle {\mathcal {C}}} , such that for any two distinct morphisms f , g : X → Y {\displaystyle f,g:X\to Y} in C {\displaystyle {\mathcal {C}}} , that is with f ≠ g {\displaystyle f\neq g} , there is some G {\displaystyle G} in G {\displaystyle {\mathcal {G}}} and some morphism h : G → X {\displaystyle h:G\to X} such that f ∘ h ≠ g ∘ h . {\displaystyle f\circ h\neq g\circ h.} If the collection consists of a single object G {\displaystyle G} , we say it is a generator (or separator). Generators are central to the definition of Grothendieck categories. The dual concept is called a cogenerator (or coseparator). == Examples == In the category of abelian groups, the group of integers Z {\displaystyle \mathbb {Z} } is a generator: If f and g are different, then there is an element x ∈ X {\displaystyle x\in X} , such that f ( x ) ≠ g ( x ) {\displaystyle f(x)\neq g(x)} . Hence the map Z → X , {\displaystyle \mathbb {Z} \rightarrow X,} n ↦ n ⋅ x {\displaystyle n\mapsto n\cdot x} suffices. Similarly, the one-point set is a generator for the category of sets. In fact, any nonempty set is a generator. In the category of sets, any set with at least two elements is a cogenerator. In the category of modules over a ring R, a module G is a generator if and only if some finite direct sum of copies of G contains an isomorphic copy of R as a direct summand. Consequently, a generator module is faithful, i.e. has zero annihilator. == References == Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2, p. 123, section V.7 == External links == separator at the nLab
Wikipedia/Generator_(category_theory)
In mathematics, an even function is a real function such that f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} for every x {\displaystyle x} in its domain. Similarly, an odd function is a function such that f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} for every x {\displaystyle x} in its domain. They are named for the parity of the powers of the power functions which satisfy each condition: the function f ( x ) = x n {\displaystyle f(x)=x^{n}} is even if n is an even integer, and it is odd if n is an odd integer. Even functions are those real functions whose graph is self-symmetric with respect to the y-axis, and odd functions are those whose graph is self-symmetric with respect to the origin. If the domain of a real function is self-symmetric with respect to the origin, then the function can be uniquely decomposed as the sum of an even function and an odd function. == Early history == The concept of even and odd functions appears to date back to the early 18th century, with Leonhard Euler playing a significant role in their formalization. Euler introduced the concepts of even and odd functions (using Latin terms pares and impares) in his work Traiectoriarum Reciprocarum Solutio from 1727. Before Euler, however, Isaac Newton had already developed geometric means of deriving coefficients of power series when writing the Principia (1687), and included algebraic techniques in an early draft of his Quadrature of Curves, though he removed it before publication in 1706. Though Newton did not explicitly name or focus on the even–odd decomposition, his work with power series would have involved understanding properties related to even and odd powers. == Definition and examples == Evenness and oddness are generally considered for real functions, that is real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse.
This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on. The given examples are real functions, to illustrate the symmetry of their graphs. === Even functions === A real function f is even if, for every x in its domain, −x is also in its domain and: p. 11  f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} or equivalently f ( x ) − f ( − x ) = 0. {\displaystyle f(x)-f(-x)=0.} Geometrically, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions are: The absolute value x ↦ | x | , {\displaystyle x\mapsto |x|,} x ↦ x 2 , {\displaystyle x\mapsto x^{2},} x ↦ x n {\displaystyle x\mapsto x^{n}} for any even integer n , {\displaystyle n,} cosine cos , {\displaystyle \cos ,} hyperbolic cosine cosh , {\displaystyle \cosh ,} Gaussian function x ↦ exp ⁡ ( − x 2 ) . {\displaystyle x\mapsto \exp(-x^{2}).} === Odd functions === A real function f is odd if, for every x in its domain, −x is also in its domain and: p. 72  f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} or equivalently f ( x ) + f ( − x ) = 0. {\displaystyle f(x)+f(-x)=0.} Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. If x = 0 {\displaystyle x=0} is in the domain of an odd function f ( x ) {\displaystyle f(x)} , then f ( 0 ) = 0 {\displaystyle f(0)=0} . 
Examples of odd functions are: The sign function x ↦ sgn ⁡ ( x ) , {\displaystyle x\mapsto \operatorname {sgn}(x),} The identity function x ↦ x , {\displaystyle x\mapsto x,} x ↦ x n {\displaystyle x\mapsto x^{n}} for any odd integer n , {\displaystyle n,} x ↦ x n {\displaystyle x\mapsto {\sqrt[{n}]{x}}} for any odd positive integer n , {\displaystyle n,} sine sin , {\displaystyle \sin ,} hyperbolic sine sinh , {\displaystyle \sinh ,} The error function erf . {\displaystyle \operatorname {erf} .} == Basic properties == === Uniqueness === If a function is both even and odd, it is equal to 0 everywhere it is defined. If a function is odd, the absolute value of that function is an even function. === Addition and subtraction === The sum of two even functions is even. The sum of two odd functions is odd. The difference between two odd functions is odd. The difference between two even functions is even. The sum of an even and an odd function is neither even nor odd, unless one of the functions is equal to zero over the given domain. === Multiplication and division === The product and quotient of two even functions is an even function. This implies that the product of any number of even functions is also even. This implies that the reciprocal of an even function is also even. The product and quotient of two odd functions is an even function. The product and both quotients of an even function and an odd function are odd functions. This implies that the reciprocal of an odd function is odd. === Composition === The composition of two even functions is even. The composition of two odd functions is odd. The composition of an even function and an odd function is even. The composition of any function with an even function is even (but not vice versa). === Inverse function === If an odd function is invertible, then its inverse is also odd.
== Even–odd decomposition == If a real function has a domain that is self-symmetric with respect to the origin, it may be uniquely decomposed as the sum of an even and an odd function, which are called respectively the even part (or the even component) and the odd part (or the odd component) of the function, and are defined by f even ( x ) = f ( x ) + f ( − x ) 2 , {\displaystyle f_{\text{even}}(x)={\frac {f(x)+f(-x)}{2}},} and f odd ( x ) = f ( x ) − f ( − x ) 2 . {\displaystyle f_{\text{odd}}(x)={\frac {f(x)-f(-x)}{2}}.} It is straightforward to verify that f even {\displaystyle f_{\text{even}}} is even, f odd {\displaystyle f_{\text{odd}}} is odd, and f = f even + f odd . {\displaystyle f=f_{\text{even}}+f_{\text{odd}}.} This decomposition is unique since, if f ( x ) = g ( x ) + h ( x ) , {\displaystyle f(x)=g(x)+h(x),} where g is even and h is odd, then g = f even {\displaystyle g=f_{\text{even}}} and h = f odd , {\displaystyle h=f_{\text{odd}},} since 2 f e ( x ) = f ( x ) + f ( − x ) = g ( x ) + g ( − x ) + h ( x ) + h ( − x ) = 2 g ( x ) , 2 f o ( x ) = f ( x ) − f ( − x ) = g ( x ) − g ( − x ) + h ( x ) − h ( − x ) = 2 h ( x ) . {\displaystyle {\begin{aligned}2f_{\text{e}}(x)&=f(x)+f(-x)=g(x)+g(-x)+h(x)+h(-x)=2g(x),\\2f_{\text{o}}(x)&=f(x)-f(-x)=g(x)-g(-x)+h(x)-h(-x)=2h(x).\end{aligned}}} For example, the hyperbolic cosine and the hyperbolic sine may be regarded as the even and odd parts of the exponential function, as the first one is an even function, the second one is odd, and e x = cosh ⁡ ( x ) ⏟ f even ( x ) + sinh ⁡ ( x ) ⏟ f odd ( x ) {\displaystyle e^{x}=\underbrace {\cosh(x)} _{f_{\text{even}}(x)}+\underbrace {\sinh(x)} _{f_{\text{odd}}(x)}} . Fourier's sine and cosine transforms also perform even–odd decomposition by representing a function's odd part with sine waves (an odd function) and the function's even part with cosine waves (an even function). 
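The two formulas translate directly into code. The sketch below (an illustration) builds the even and odd parts of an arbitrary real function and confirms numerically that the exponential splits into cosh and sinh as stated.

```python
import math

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

fe, fo = even_part(math.exp), odd_part(math.exp)
for x in (0.0, 0.5, -1.3, 2.0):
    assert math.isclose(fe(x), math.cosh(x))          # even part of exp is cosh
    assert math.isclose(fo(x), math.sinh(x))          # odd part of exp is sinh
    assert math.isclose(fe(x) + fo(x), math.exp(x))   # f = f_even + f_odd
```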
== Further algebraic properties == Any linear combination of even functions is even, and the even functions form a vector space over the reals. Similarly, any linear combination of odd functions is odd, and the odd functions also form a vector space over the reals. In fact, the vector space of all real functions is the direct sum of the subspaces of even and odd functions. This is a more abstract way of expressing the property in the preceding section. The space of functions can be considered a graded algebra over the real numbers by this property, as well as some of those above. The even functions form a commutative algebra over the reals. However, the odd functions do not form an algebra over the reals, as they are not closed under multiplication. == Analytic properties == A function's being odd or even does not imply differentiability, or even continuity. For example, the Dirichlet function is even, but is nowhere continuous. In the following, properties involving derivatives, Fourier series, and Taylor series are considered; these concepts are thus assumed to be defined for the functions under consideration. === Basic analytic properties === The derivative of an even function is odd. The derivative of an odd function is even. If an odd function is integrable over a bounded symmetric interval [ − A , A ] {\displaystyle [-A,A]} , the integral over that interval is zero; that is ∫ − A A f ( x ) d x = 0 {\displaystyle \int _{-A}^{A}f(x)\,dx=0} . This implies that the Cauchy principal value of an odd function over the entire real line is zero. If an even function is integrable over a bounded symmetric interval [ − A , A ] {\displaystyle [-A,A]} , the integral over that interval is twice the integral from 0 to A; that is ∫ − A A f ( x ) d x = 2 ∫ 0 A f ( x ) d x {\displaystyle \int _{-A}^{A}f(x)\,dx=2\int _{0}^{A}f(x)\,dx} .
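The two integral identities can be sanity-checked with a crude midpoint rule. This is a numerical illustration, not a proof; the tolerances reflect quadrature and rounding error, and the integrands are arbitrary examples.

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

A = 2.0
# Odd integrand: the integral over [-A, A] vanishes.
assert abs(midpoint_integral(lambda x: x**3 * math.cos(x), -A, A)) < 1e-6
# Even integrand: the integral over [-A, A] is twice that over [0, A].
whole = midpoint_integral(lambda x: x**2, -A, A)
half = midpoint_integral(lambda x: x**2, 0.0, A)
assert abs(whole - 2 * half) < 1e-6
```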
This property is also true for the improper integral when A = ∞ {\displaystyle A=\infty } , provided the integral from 0 to ∞ {\displaystyle \infty } converges. === Series === The Maclaurin series of an even function includes only even powers. The Maclaurin series of an odd function includes only odd powers. The Fourier series of a periodic even function includes only cosine terms. The Fourier series of a periodic odd function includes only sine terms. The Fourier transform of a purely real-valued even function is real and even. (see Fourier analysis § Symmetry properties) The Fourier transform of a purely real-valued odd function is imaginary and odd. (see Fourier analysis § Symmetry properties) == Harmonics == In signal processing, harmonic distortion occurs when a sine wave signal is sent through a memoryless nonlinear system, that is, a system whose output at time t only depends on the input at time t and does not depend on the input at any previous times. Such a system is described by a response function V out ( t ) = f ( V in ( t ) ) {\displaystyle V_{\text{out}}(t)=f(V_{\text{in}}(t))} . The type of harmonics produced depends on the response function f: When the response function is even, the resulting signal will consist of only even harmonics of the input sine wave; 0 f , 2 f , 4 f , 6 f , … {\displaystyle 0f,2f,4f,6f,\dots } The fundamental is also an odd harmonic, so it will not be present. A simple example is a full-wave rectifier. The 0 f {\displaystyle 0f} component represents the DC offset, due to the one-sided nature of even-symmetric transfer functions. When it is odd, the resulting signal will consist of only odd harmonics of the input sine wave; 1 f , 3 f , 5 f , … {\displaystyle 1f,3f,5f,\dots } The output signal will be half-wave symmetric. A simple example is clipping in a symmetric push-pull amplifier.
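The even-response rule can be illustrated with a tiny DFT in plain Python: full-wave rectification (the even response f(v) = |v|) of a sampled sine leaves only the DC term and even harmonics. The naive O(N²) DFT below is for illustration only, not an efficient FFT.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT; returns |X_k| / N for k = 0..N-1."""
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples))) / n
            for k in range(n)]

N = 64
sine = [math.sin(2 * math.pi * t / N) for t in range(N)]   # one period at 1f
rectified = [abs(s) for s in sine]                          # even response |v|

mags = dft_magnitudes(rectified)
assert mags[0] > 0.1                        # 0f: DC offset (about 2/pi)
assert mags[2] > 0.1                        # 2f harmonic present
assert mags[1] < 1e-9 and mags[3] < 1e-9    # odd harmonics, incl. 1f, absent
```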
When it is asymmetric, the resulting signal may contain either even or odd harmonics; 1 f , 2 f , 3 f , … {\displaystyle 1f,2f,3f,\dots } Simple examples are a half-wave rectifier, and clipping in an asymmetrical class-A amplifier. This does not hold true for more complex waveforms. A sawtooth wave contains both even and odd harmonics, for instance. After even-symmetric full-wave rectification, it becomes a triangle wave, which, other than the DC offset, contains only odd harmonics. == Generalizations == === Multivariate functions === Even symmetry: A function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is called even symmetric if: f ( x 1 , x 2 , … , x n ) = f ( − x 1 , − x 2 , … , − x n ) for all x 1 , … , x n ∈ R {\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=f(-x_{1},-x_{2},\ldots ,-x_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n}\in \mathbb {R} } Odd symmetry: A function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is called odd symmetric if: f ( x 1 , x 2 , … , x n ) = − f ( − x 1 , − x 2 , … , − x n ) for all x 1 , … , x n ∈ R {\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=-f(-x_{1},-x_{2},\ldots ,-x_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n}\in \mathbb {R} } === Complex-valued functions === The definitions for even and odd symmetry for complex-valued functions of a real argument are similar to the real case. In signal processing, a similar symmetry is sometimes considered, which involves complex conjugation. Conjugate symmetry: A complex-valued function of a real argument f : R → C {\displaystyle f:\mathbb {R} \to \mathbb {C} } is called conjugate symmetric if f ( x ) = f ( − x ) ¯ for all x ∈ R {\displaystyle f(x)={\overline {f(-x)}}\quad {\text{for all }}x\in \mathbb {R} } A complex valued function is conjugate symmetric if and only if its real part is an even function and its imaginary part is an odd function. 
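This equivalence between conjugate symmetry and the parity of the real and imaginary parts is easy to check numerically. The sketch below uses an arbitrarily chosen test function f(x) = cos x + i x³, whose real part is even and whose imaginary part is odd:

```python
import math

def f(x):
    # Arbitrary example: real part cos(x) is even, imaginary part x**3 is odd.
    return complex(math.cos(x), x ** 3)

xs = [0.25 * k for k in range(1, 12)]
# Conjugate symmetry: f(x) equals the complex conjugate of f(-x).
assert all(abs(f(x) - f(-x).conjugate()) < 1e-12 for x in xs)
```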
A typical example of a conjugate symmetric function is the cis function x → e i x = cos ⁡ x + i sin ⁡ x {\displaystyle x\to e^{ix}=\cos x+i\sin x} Conjugate antisymmetry: A complex-valued function of a real argument f : R → C {\displaystyle f:\mathbb {R} \to \mathbb {C} } is called conjugate antisymmetric if: f ( x ) = − f ( − x ) ¯ for all x ∈ R {\displaystyle f(x)=-{\overline {f(-x)}}\quad {\text{for all }}x\in \mathbb {R} } A complex-valued function is conjugate antisymmetric if and only if its real part is an odd function and its imaginary part is an even function. === Finite length sequences === The definitions of odd and even symmetry are extended to N-point sequences (i.e. functions of the form f : { 0 , 1 , … , N − 1 } → R {\displaystyle f:\left\{0,1,\ldots ,N-1\right\}\to \mathbb {R} } ) as follows:: p. 411  Even symmetry: An N-point sequence is called even symmetric if f ( n ) = f ( N − n ) for all n ∈ { 1 , … , N − 1 } . {\displaystyle f(n)=f(N-n)\quad {\text{for all }}n\in \left\{1,\ldots ,N-1\right\}.} Such a sequence is often called a palindromic sequence; see also Palindromic polynomial. Odd symmetry: An N-point sequence is called odd symmetric if f ( n ) = − f ( N − n ) for all n ∈ { 1 , … , N − 1 } . {\displaystyle f(n)=-f(N-n)\quad {\text{for all }}n\in \left\{1,\ldots ,N-1\right\}.} Such a sequence is sometimes called an anti-palindromic sequence; see also Antipalindromic polynomial. == See also == Hermitian function for a generalization in complex numbers Taylor series Fourier series Holstein–Herring method Parity (physics) == Notes == == References == Gelfand, I. M.; Glagoleva, E. G.; Shnol, E. E. (2002) [1969], Functions and Graphs, Mineola, N.Y: Dover Publications
Wikipedia/Even_and_odd_functions
In algebra, a transformation semigroup (or composition semigroup) is a collection of transformations (functions from a set to itself) that is closed under function composition. If it includes the identity function, it is a monoid, called a transformation (or composition) monoid. This is the semigroup analogue of a permutation group. A transformation semigroup of a set has a tautological semigroup action on that set. Such actions are characterized by being faithful, i.e., if two elements of the semigroup have the same action, then they are equal. An analogue of Cayley's theorem shows that any semigroup can be realized as a transformation semigroup of some set. In automata theory, some authors use the term transformation semigroup to refer to a semigroup acting faithfully on a set of "states" different from the semigroup's base set. There is a correspondence between the two notions. == Transformation semigroups and monoids == A transformation semigroup is a pair (X,S), where X is a set and S is a semigroup of transformations of X. Here a transformation of X is just a function from a subset of X to X, not necessarily invertible, and therefore S is simply a set of transformations of X which is closed under composition of functions. The set of all partial functions on a given base set, X, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by P T X {\displaystyle {\mathcal {PT}}_{X}} . If S includes the identity transformation of X, then it is called a transformation monoid. Any transformation semigroup S determines a transformation monoid M by taking the union of S with the identity transformation. A transformation monoid whose elements are invertible is a permutation group. The set of all transformations of X is a transformation monoid called the full transformation monoid (or semigroup) of X. It is also called the symmetric semigroup of X and is denoted by TX. 
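Both the full transformation monoid and the Cayley-style embedding mentioned above can be made concrete for tiny examples. The sketch below (pure Python, with illustrative names) builds T_X for a three-element set, checks closure under composition, and then faithfully represents the monoid {0, 1, 2, 3} under multiplication mod 4 by left multiplication on itself:

```python
from itertools import product

X = (0, 1, 2)
# Encode a transformation of X as a tuple t with t[i] the image of i.
T_X = set(product(X, repeat=len(X)))        # full transformation monoid of X
assert len(T_X) == 3 ** 3                   # |T_X| = n^n

def compose(s, t):
    """(s . t)(i) = s(t(i)); composition of encoded transformations."""
    return tuple(s[t[i]] for i in range(len(t)))

assert all(compose(s, t) in T_X for s in T_X for t in T_X)   # closure

# Cayley-style representation: M = {0,1,2,3} under multiplication mod 4
# acts on itself by left multiplication x -> a*x.
M = (0, 1, 2, 3)
rep = {a: tuple((a * x) % 4 for x in M) for a in M}
assert len(set(rep.values())) == len(M)     # faithful: distinct elements act differently
for a in M:
    for b in M:
        # Homomorphism: the transformation of a*b is the composition.
        assert compose(rep[a], rep[b]) == rep[(a * b) % 4]
```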
Thus a transformation semigroup (or monoid) is just a subsemigroup (or submonoid) of the full transformation monoid of X. If (X,S) is a transformation semigroup then X can be made into a semigroup action of S by evaluation: s ⋅ x = s ( x ) for s ∈ S , x ∈ X . {\displaystyle s\cdot x=s(x){\text{ for }}s\in S,x\in X.} This is a monoid action if S is a transformation monoid. The characteristic feature of transformation semigroups, as actions, is that they are faithful, i.e., if s ⋅ x = t ⋅ x for all x ∈ X , {\displaystyle s\cdot x=t\cdot x{\text{ for all }}x\in X,} then s = t. Conversely if a semigroup S acts on a set X by T(s,x) = s • x then we can define, for s ∈ S, a transformation Ts of X by T s ( x ) = T ( s , x ) . {\displaystyle T_{s}(x)=T(s,x).\,} The map sending s to Ts is injective if and only if (X, T) is faithful, in which case the image of this map is a transformation semigroup isomorphic to S. == Cayley representation == In group theory, Cayley's theorem asserts that any group G is isomorphic to a subgroup of the symmetric group of G (regarded as a set), so that G is a permutation group. This theorem generalizes straightforwardly to monoids: any monoid M is a transformation monoid of its underlying set, via the action given by left (or right) multiplication. This action is faithful because if ax = bx for all x in M, then by taking x equal to the identity element, we have a = b. For a semigroup S without a (left or right) identity element, we take X to be the underlying set of the monoid corresponding to S to realise S as a transformation semigroup of X. In particular any finite semigroup can be represented as a subsemigroup of transformations of a set X with |X| ≤ |S| + 1, and if S is a monoid, we have the sharper bound |X| ≤ |S|, as in the case of finite groups.: 21  === In computer science === In computer science, Cayley representations can be applied to improve the asymptotic efficiency of semigroups by reassociating multiple composed multiplications. 
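One concrete instance of this reassociation idea is the functional difference list: a sequence is represented by the function that prepends it to an argument, so appending becomes function composition, and a left-nested chain of appends is evaluated as right-nested concatenations. A minimal Python transliteration (in Haskell this structure is commonly packaged as `DList`):

```python
# Represent a list by the function rest -> prefix + rest.
def singleton(x):
    return lambda rest: [x] + rest

def append(d1, d2):
    # O(1): appending is just function composition.
    return lambda rest: d1(d2(rest))

def to_list(d):
    # Materialize by applying the representation to the empty list.
    return d([])

d = singleton(1)
for x in (2, 3, 4):
    d = append(d, singleton(x))     # left-nested appends...
assert to_list(d) == [1, 2, 3, 4]   # ...evaluate as right-nested prepends
```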
The action given by left multiplication results in right-associated multiplication, and vice versa for the action given by right multiplication. Despite having the same results for any semigroup, the asymptotic efficiency will differ. Two examples of useful transformation monoids given by an action of left multiplication are the functional variation of the difference list data structure, and the monadic Codensity transformation (a Cayley representation of a monad, which is a monoid in a particular monoidal functor category). == Transformation monoid of an automaton == Let M be a deterministic automaton with state space S and alphabet A. The words in the free monoid A∗ induce transformations of S giving rise to a monoid morphism from A∗ to the full transformation monoid TS. The image of this morphism is the transformation semigroup of M.: 78  For a regular language, the syntactic monoid is isomorphic to the transformation monoid of the minimal automaton of the language.: 81  == See also == Semiautomaton Krohn–Rhodes theory Symmetric inverse semigroup Biordered set Special classes of semigroups Composition ring == References == Clifford, A.H.; Preston, G.B. (1961). The algebraic theory of semigroups. Vol. I. Mathematical Surveys. Vol. 7. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0272-4. Zbl 0111.03403. {{cite book}}: ISBN / Date incompatibility (help) Howie, John M. (1995). Fundamentals of Semigroup Theory. London Mathematical Society Monographs. New Series. Vol. 12. Oxford: Clarendon Press. ISBN 978-0-19-851194-6. Zbl 0835.20077. Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev (2000), Monoids, Acts and Categories: with Applications to Wreath Products and Graphs, Expositions in Mathematics 29, Walter de Gruyter, Berlin, ISBN 978-3-11-015248-7.
Wikipedia/Full_transformation_monoid
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity. Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology. A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity. As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn. == History == A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f ( x ) {\displaystyle y=f(x)} as follows: an infinitely small increment α {\displaystyle \alpha } of the independent variable x always produces an infinitely small change f ( x + α ) − f ( x ) {\displaystyle f(x+\alpha )-f(x)} of the dependent variable y (see e.g. Cours d'Analyse, p. 34). 
Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854. == Real functions == === Definition === A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below. Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of f ( x ) , {\displaystyle f(x),} as x tends to c, is equal to f ( c ) . {\displaystyle f(c).} There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain. A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. 
A function that is continuous on the interval ( − ∞ , + ∞ ) {\displaystyle (-\infty ,+\infty )} (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere. A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} is continuous on its whole domain, which is the closed interval [ 0 , + ∞ ) . {\displaystyle [0,+\infty ).} Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} and the tangent function x ↦ tan ⁡ x . {\displaystyle x\mapsto \tan x.} When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous. A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} and x ↦ sin ⁡ ( 1 x ) {\textstyle x\mapsto \sin({\frac {1}{x}})} are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity.
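The endpoint continuity of the square root and the genuinely bad behavior of sin(1/x) near 0 can both be observed by sampling. The Python sketch below uses arbitrary step sizes and is an illustration, not a proof:

```python
import math

def right_limit_samples(f, c, steps=(1e-3, 1e-6, 1e-9)):
    """Sample f at points approaching c from the right."""
    return [f(c + h) for h in steps]

# sqrt is continuous at the endpoint 0 of its domain [0, +inf):
# sampled values approach f(0) = 0.
assert all(abs(v) < 0.05 for v in right_limit_samples(math.sqrt, 0.0))

# sin(1/x) keeps oscillating as x -> 0, so no value chosen at 0 can make it
# continuous there: nearby samples stay far apart.
samples = [math.sin(1.0 / h) for h in (1e-3, 1e-6, 1e-9)]
assert max(samples) - min(samples) > 0.5
```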
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above. Let f : D → R {\textstyle f:D\to \mathbb {R} } be a function whose domain D {\displaystyle D} is contained in the set R {\displaystyle \mathbb {R} } of real numbers. Some (but not all) possibilities for D {\displaystyle D} are: D {\displaystyle D} is the whole real line; that is, D = R {\displaystyle D=\mathbb {R} } D {\displaystyle D} is a closed interval of the form D = [ a , b ] = { x ∈ R ∣ a ≤ x ≤ b } , {\displaystyle D=[a,b]=\{x\in \mathbb {R} \mid a\leq x\leq b\},} where a and b are real numbers D {\displaystyle D} is an open interval of the form D = ( a , b ) = { x ∈ R ∣ a < x < b } , {\displaystyle D=(a,b)=\{x\in \mathbb {R} \mid a<x<b\},} where a and b are real numbers In the case of an open interval, a {\displaystyle a} and b {\displaystyle b} do not belong to D {\displaystyle D} , and the values f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} are not defined; even if they are, they do not matter for continuity on D {\displaystyle D} . ==== Definition in terms of limits of functions ==== The function f is continuous at some point c of its domain if the limit of f ( x ) , {\displaystyle f(x),} as x approaches c through the domain of f, exists and is equal to f ( c ) . {\displaystyle f(c).} In mathematical notation, this is written as lim x → c f ( x ) = f ( c ) . {\displaystyle \lim _{x\to c}{f(x)}=f(c).} In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit in that equation has to exist. Third, the value of this limit must equal f ( c ) . {\displaystyle f(c).} (Here, we have assumed that the domain of f does not have any isolated points.) ==== Definition in terms of neighborhoods ==== A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c.
Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f ( c ) {\displaystyle f(c)} as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N 1 ( f ( c ) ) {\displaystyle N_{1}(f(c))} there is a neighborhood N 2 ( c ) {\displaystyle N_{2}(c)} in its domain such that f ( x ) ∈ N 1 ( f ( c ) ) {\displaystyle f(x)\in N_{1}(f(c))} whenever x ∈ N 2 ( c ) . {\displaystyle x\in N_{2}(c).} As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous. ==== Definition in terms of limits of sequences ==== One can instead require that for any sequence ( x n ) n ∈ N {\displaystyle (x_{n})_{n\in \mathbb {N} }} of points in the domain which converges to c, the corresponding sequence ( f ( x n ) ) n ∈ N {\displaystyle \left(f(x_{n})\right)_{n\in \mathbb {N} }} converges to f ( c ) . {\displaystyle f(c).} In mathematical notation, ∀ ( x n ) n ∈ N ⊂ D : lim n → ∞ x n = c ⇒ lim n → ∞ f ( x n ) = f ( c ) . 
{\displaystyle \forall (x_{n})_{n\in \mathbb {N} }\subset D:\lim _{n\to \infty }x_{n}=c\Rightarrow \lim _{n\to \infty }f(x_{n})=f(c)\,.} ==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ==== Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → R {\displaystyle f:D\to \mathbb {R} } as above and an element x 0 {\displaystyle x_{0}} of the domain D {\displaystyle D} , f {\displaystyle f} is said to be continuous at the point x 0 {\displaystyle x_{0}} when the following holds: For any positive real number ε > 0 , {\displaystyle \varepsilon >0,} however small, there exists some positive real number δ > 0 {\displaystyle \delta >0} such that for all x {\displaystyle x} in the domain of f {\displaystyle f} with x 0 − δ < x < x 0 + δ , {\displaystyle x_{0}-\delta <x<x_{0}+\delta ,} the value of f ( x ) {\displaystyle f(x)} satisfies f ( x 0 ) − ε < f ( x ) < f ( x 0 ) + ε . {\displaystyle f\left(x_{0}\right)-\varepsilon <f(x)<f(x_{0})+\varepsilon .} Alternatively written, continuity of f : D → R {\displaystyle f:D\to \mathbb {R} } at x 0 ∈ D {\displaystyle x_{0}\in D} means that for every ε > 0 , {\displaystyle \varepsilon >0,} there exists a δ > 0 {\displaystyle \delta >0} such that for all x ∈ D {\displaystyle x\in D} : | x − x 0 | < δ implies | f ( x ) − f ( x 0 ) | < ε . {\displaystyle \left|x-x_{0}\right|<\delta ~~{\text{ implies }}~~|f(x)-f(x_{0})|<\varepsilon .} More intuitively, we can say that if we want to get all the f ( x ) {\displaystyle f(x)} values to stay in some small neighborhood around f ( x 0 ) , {\displaystyle f\left(x_{0}\right),} we need to choose a small enough neighborhood for the x {\displaystyle x} values around x 0 . {\displaystyle x_{0}.} If we can do that no matter how small the f ( x 0 ) {\displaystyle f(x_{0})} neighborhood is, then f {\displaystyle f} is continuous at x 0 . 
{\displaystyle x_{0}.} In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology. Weierstrass had required that the interval x 0 − δ < x < x 0 + δ {\displaystyle x_{0}-\delta <x<x_{0}+\delta } be entirely within the domain D {\displaystyle D} , but Jordan removed that restriction. ==== Definition in terms of control of the remainder ==== In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity. A function C : [ 0 , ∞ ) → [ 0 , ∞ ] {\displaystyle C:[0,\infty )\to [0,\infty ]} is called a control function if C is non-decreasing and inf δ > 0 C ( δ ) = 0 {\displaystyle \inf _{\delta >0}C(\delta )=0} A function f : D → R {\displaystyle f:D\to R} is C-continuous at x 0 {\displaystyle x_{0}} if there exists a neighbourhood N ( x 0 ) {\textstyle N(x_{0})} such that | f ( x ) − f ( x 0 ) | ≤ C ( | x − x 0 | ) for all x ∈ D ∩ N ( x 0 ) {\displaystyle |f(x)-f(x_{0})|\leq C\left(\left|x-x_{0}\right|\right){\text{ for all }}x\in D\cap N(x_{0})} A function is continuous at x 0 {\displaystyle x_{0}} if it is C-continuous for some control function C. This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions C {\displaystyle {\mathcal {C}}} a function is C {\displaystyle {\mathcal {C}}} -continuous if it is C {\displaystyle C} -continuous for some C ∈ C .
{\displaystyle C\in {\mathcal {C}}.} For example, the Lipschitz, the Hölder continuous functions of exponent α and the uniformly continuous functions below are defined by the set of control functions C L i p s c h i t z = { C : C ( δ ) = K | δ | , K > 0 } {\displaystyle {\mathcal {C}}_{\mathrm {Lipschitz} }=\{C:C(\delta )=K|\delta |,\ K>0\}} C Hölder − α = { C : C ( δ ) = K | δ | α , K > 0 } {\displaystyle {\mathcal {C}}_{{\text{Hölder}}-\alpha }=\{C:C(\delta )=K|\delta |^{\alpha },\ K>0\}} C uniform cont. = { C : C ( 0 ) = 0 } {\displaystyle {\mathcal {C}}_{\text{uniform cont.}}=\{C:C(0)=0\}} respectively. ==== Definition using oscillation ==== Continuity can also be defined in terms of oscillation: a function f is continuous at a point x 0 {\displaystyle x_{0}} if and only if its oscillation at that point is zero; in symbols, ω f ( x 0 ) = 0. {\displaystyle \omega _{f}(x_{0})=0.} A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point. This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε {\displaystyle \varepsilon } (hence a G δ {\displaystyle G_{\delta }} set) – and gives a rapid proof of one direction of the Lebesgue integrability condition. The oscillation is equivalent to the ε − δ {\displaystyle \varepsilon -\delta } definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε 0 {\displaystyle \varepsilon _{0}} there is no δ {\displaystyle \delta } that satisfies the ε − δ {\displaystyle \varepsilon -\delta } definition, then the oscillation is at least ε 0 , {\displaystyle \varepsilon _{0},} and conversely if for every ε {\displaystyle \varepsilon } there is a desired δ , {\displaystyle \delta ,} the oscillation is 0. 
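The control-function classes above can be spot-checked numerically. In the sketch below the constants and sample grid are arbitrary, and a small slack factor on the Hölder constant absorbs floating-point rounding:

```python
import math

def satisfies_control(f, x0, C, points):
    """Sampled check of |f(x) - f(x0)| <= C(|x - x0|)."""
    return all(abs(f(x) - f(x0)) <= C(abs(x - x0)) for x in points)

points = [0.01 * k for k in range(-300, 301)]

# sin is 1-Lipschitz: |sin x - sin y| <= |x - y|.
lipschitz = lambda d: 1.0 * d
assert satisfies_control(math.sin, 0.0, lipschitz, points)

# x -> sqrt(|x|) is Hölder continuous of exponent 1/2 at 0, but not Lipschitz
# there: its difference quotients blow up near 0.
hoelder_half = lambda d: 1.01 * d ** 0.5    # slack factor 1.01 for rounding
assert satisfies_control(lambda x: math.sqrt(abs(x)), 0.0, hoelder_half, points)
assert not satisfies_control(lambda x: math.sqrt(abs(x)), 0.0, lipschitz, points)
```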
The oscillation definition can be naturally generalized to maps from a topological space to a metric space. ==== Definition using the hyperreals ==== Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows: a real-valued function f is continuous at x if its natural extension to the hyperreals has the property that, for every infinitesimal dx, the difference f ( x + d x ) − f ( x ) {\displaystyle f(x+dx)-f(x)} is infinitesimal (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity. === Rules for continuity === Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules: Every constant function is continuous The identity function f ( x ) = x {\displaystyle f(x)=x} is continuous Addition and multiplication: If the functions f {\displaystyle f} and g {\displaystyle g} are continuous on their respective domains D f {\displaystyle D_{f}} and D g {\displaystyle D_{g}} , then their sum f + g {\displaystyle f+g} and their product f ⋅ g {\displaystyle f\cdot g} are continuous on the intersection D f ∩ D g {\displaystyle D_{f}\cap D_{g}} , where f + g {\displaystyle f+g} and f g {\displaystyle fg} are defined by ( f + g ) ( x ) = f ( x ) + g ( x ) {\displaystyle (f+g)(x)=f(x)+g(x)} and ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) {\displaystyle (f\cdot g)(x)=f(x)\cdot g(x)} .
Reciprocal: If the function f {\displaystyle f} is continuous on the domain D f {\displaystyle D_{f}} , then its reciprocal 1 f {\displaystyle {\tfrac {1}{f}}} , defined by ( 1 f ) ( x ) = 1 f ( x ) {\displaystyle ({\tfrac {1}{f}})(x)={\tfrac {1}{f(x)}}} is continuous on the domain D f ∖ f − 1 ( 0 ) {\displaystyle D_{f}\setminus f^{-1}(0)} , that is, the domain D f {\displaystyle D_{f}} from which the points x {\displaystyle x} such that f ( x ) = 0 {\displaystyle f(x)=0} are removed. Function composition: If the functions f {\displaystyle f} and g {\displaystyle g} are continuous on their respective domains D f {\displaystyle D_{f}} and D g {\displaystyle D_{g}} , then the composition g ∘ f {\displaystyle g\circ f} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)=g(f(x))} is continuous on D f ∩ f − 1 ( D g ) {\displaystyle D_{f}\cap f^{-1}(D_{g})} , that is, the part of D f {\displaystyle D_{f}} that is mapped by f {\displaystyle f} inside D g {\displaystyle D_{g}} . The sine and cosine functions ( sin ⁡ x {\displaystyle \sin x} and cos ⁡ x {\displaystyle \cos x} ) are continuous everywhere. The exponential function e x {\displaystyle e^{x}} is continuous everywhere. The natural logarithm ln ⁡ x {\displaystyle \ln x} is continuous on the domain formed by all positive real numbers { x ∣ x > 0 } {\displaystyle \{x\mid x>0\}} . These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator.
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by sinc ⁡ ( 0 ) = 1 {\displaystyle \operatorname {sinc} (0)=1} and sinc ⁡ ( x ) = sin ⁡ x x {\displaystyle \operatorname {sinc} (x)={\tfrac {\sin x}{x}}} for x ≠ 0 {\displaystyle x\neq 0} . The above rules show immediately that the function is continuous for x ≠ 0 {\displaystyle x\neq 0} , but, for proving the continuity at 0 {\displaystyle 0} , one has to prove lim x → 0 sin ⁡ x x = 1. {\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=1.} As this is true, it follows that the sinc function is continuous on all real numbers. === Examples of discontinuous functions === An example of a discontinuous function is the Heaviside step function H {\displaystyle H} , defined by H ( x ) = { 1 if x ≥ 0 0 if x < 0 {\displaystyle H(x)={\begin{cases}1&{\text{ if }}x\geq 0\\0&{\text{ if }}x<0\end{cases}}} Pick for instance ε = 1 / 2 {\displaystyle \varepsilon =1/2} . Then there is no δ {\displaystyle \delta } -neighborhood around x = 0 {\displaystyle x=0} , i.e. no open interval ( − δ , δ ) {\displaystyle (-\delta ,\;\delta )} with δ > 0 , {\displaystyle \delta >0,} that will force all the H ( x ) {\displaystyle H(x)} values to be within the ε {\displaystyle \varepsilon } -neighborhood of H ( 0 ) {\displaystyle H(0)} , i.e. within ( 1 / 2 , 3 / 2 ) {\displaystyle (1/2,\;3/2)} . Intuitively, we can think of this type of discontinuity as a sudden jump in function values. Similarly, the signum or sign function sgn ⁡ ( x ) = { 1 if x > 0 0 if x = 0 − 1 if x < 0 {\displaystyle \operatorname {sgn}(x)={\begin{cases}\;\;\ 1&{\text{ if }}x>0\\\;\;\ 0&{\text{ if }}x=0\\-1&{\text{ if }}x<0\end{cases}}} is discontinuous at x = 0 {\displaystyle x=0} but continuous everywhere else.
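The failure of the ε–δ condition for the Heaviside function at 0 can be demonstrated directly: with ε = 1/2, every candidate δ admits a point in (−δ, δ) whose image is not within ε of H(0). The loop below checks a few shrinking values of δ; it is an illustration, not a proof over all δ:

```python
def heaviside(x):
    """Heaviside step function: 1 for x >= 0, else 0."""
    return 1 if x >= 0 else 0

eps = 0.5
for delta in (1.0, 0.1, 1e-6, 1e-12):
    x = -delta / 2          # a point inside (-delta, delta) with x < 0
    # |H(x) - H(0)| = |0 - 1| = 1, which is not within eps of H(0).
    assert abs(heaviside(x) - heaviside(0)) >= eps
```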
Yet another example: the function f ( x ) = { sin ⁡ ( x − 2 ) if x ≠ 0 0 if x = 0 {\displaystyle f(x)={\begin{cases}\sin \left(x^{-2}\right)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}} is continuous everywhere apart from x = 0 {\displaystyle x=0} . Besides plausible continuities and discontinuities like the above, there are also functions with behavior that is often called pathological, for example, Thomae's function, f ( x ) = { 1 if x = 0 1 q if x = p q (in lowest terms) is a rational number 0 if x is irrational . {\displaystyle f(x)={\begin{cases}1&{\text{ if }}x=0\\{\frac {1}{q}}&{\text{ if }}x={\frac {p}{q}}{\text{(in lowest terms) is a rational number}}\\0&{\text{ if }}x{\text{ is irrational}}.\end{cases}}} which is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers, D ( x ) = { 0 if x is irrational ( ∈ R ∖ Q ) 1 if x is rational ( ∈ Q ) {\displaystyle D(x)={\begin{cases}0&{\text{ if }}x{\text{ is irrational }}(\in \mathbb {R} \setminus \mathbb {Q} )\\1&{\text{ if }}x{\text{ is rational }}(\in \mathbb {Q} )\end{cases}}} is nowhere continuous. === Properties === ==== A useful lemma ==== Let f ( x ) {\displaystyle f(x)} be a function that is continuous at a point x 0 , {\displaystyle x_{0},} and y 0 {\displaystyle y_{0}} be a value such that f ( x 0 ) ≠ y 0 . {\displaystyle f\left(x_{0}\right)\neq y_{0}.} Then f ( x ) ≠ y 0 {\displaystyle f(x)\neq y_{0}} throughout some neighbourhood of x 0 . 
{\displaystyle x_{0}.} Proof: By the definition of continuity, take ε = | y 0 − f ( x 0 ) | 2 > 0 {\displaystyle \varepsilon ={\frac {|y_{0}-f(x_{0})|}{2}}>0} , then there exists δ > 0 {\displaystyle \delta >0} such that | f ( x ) − f ( x 0 ) | < | y 0 − f ( x 0 ) | 2 whenever | x − x 0 | < δ {\displaystyle \left|f(x)-f(x_{0})\right|<{\frac {\left|y_{0}-f(x_{0})\right|}{2}}\quad {\text{ whenever }}\quad |x-x_{0}|<\delta } Suppose there is a point in the neighbourhood | x − x 0 | < δ {\displaystyle |x-x_{0}|<\delta } for which f ( x ) = y 0 ; {\displaystyle f(x)=y_{0};} then we have the contradiction | f ( x 0 ) − y 0 | < | f ( x 0 ) − y 0 | 2 . {\displaystyle \left|f(x_{0})-y_{0}\right|<{\frac {\left|f(x_{0})-y_{0}\right|}{2}}.} ==== Intermediate value theorem ==== The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states: If the real-valued function f is continuous on the closed interval [ a , b ] , {\displaystyle [a,b],} and k is some number between f ( a ) {\displaystyle f(a)} and f ( b ) , {\displaystyle f(b),} then there is some number c ∈ [ a , b ] , {\displaystyle c\in [a,b],} such that f ( c ) = k . {\displaystyle f(c)=k.} For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m. As a consequence, if f is continuous on [ a , b ] {\displaystyle [a,b]} and f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} differ in sign, then, at some point c ∈ [ a , b ] , {\displaystyle c\in [a,b],} f ( c ) {\displaystyle f(c)} must equal zero. ==== Extreme value theorem ==== The extreme value theorem states that if a function f is defined on a closed interval [ a , b ] {\displaystyle [a,b]} (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. 
there exists c ∈ [ a , b ] {\displaystyle c\in [a,b]} with f ( c ) ≥ f ( x ) {\displaystyle f(c)\geq f(x)} for all x ∈ [ a , b ] . {\displaystyle x\in [a,b].} The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval ( a , b ) {\displaystyle (a,b)} (or any set that is not both closed and bounded), as, for example, the continuous function f ( x ) = 1 x , {\displaystyle f(x)={\frac {1}{x}},} defined on the open interval (0,1), does not attain a maximum, being unbounded above. ==== Relation to differentiability and integrability ==== Every differentiable function f : ( a , b ) → R {\displaystyle f:(a,b)\to \mathbb {R} } is continuous, as can be shown. The converse does not hold: for example, the absolute value function f ( x ) = | x | = { x if x ≥ 0 − x if x < 0 {\displaystyle f(x)=|x|={\begin{cases}\;\;\ x&{\text{ if }}x\geq 0\\-x&{\text{ if }}x<0\end{cases}}} is everywhere continuous. However, it is not differentiable at x = 0 {\displaystyle x=0} (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable. The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C 1 ( ( a , b ) ) . {\displaystyle C^{1}((a,b)).} More generally, the set of functions f : Ω → R {\displaystyle f:\Omega \to \mathbb {R} } (from an open interval (or open subset of R {\displaystyle \mathbb {R} } ) Ω {\displaystyle \Omega } to the reals) such that f is n {\displaystyle n} times differentiable and such that the n {\displaystyle n} -th derivative of f is continuous is denoted C n ( Ω ) . {\displaystyle C^{n}(\Omega ).} See differentiability class. 
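The failure of differentiability of |x| at 0 can be seen from one-sided difference quotients; a small numerical sketch (the step size h is an illustrative choice):

```python
# The absolute value function is continuous everywhere but has no
# derivative at 0: its one-sided difference quotients there disagree.
def abs_val(x):
    return x if x >= 0 else -x

h = 1e-8  # illustrative step size
right_quotient = (abs_val(0 + h) - abs_val(0)) / h    # approaches +1
left_quotient = (abs_val(0 - h) - abs_val(0)) / (-h)  # approaches -1
```

Since the two quotients tend to different values, no single derivative exists at 0, even though abs_val is continuous there.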
In the field of computer graphics, properties related (but not identical) to C 0 , C 1 , C 2 {\displaystyle C^{0},C^{1},C^{2}} are sometimes called G 0 {\displaystyle G^{0}} (continuity of position), G 1 {\displaystyle G^{1}} (continuity of tangency), and G 2 {\displaystyle G^{2}} (continuity of curvature); see Smoothness of curves and surfaces. Every continuous function f : [ a , b ] → R {\displaystyle f:[a,b]\to \mathbb {R} } is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows. ==== Pointwise and uniform limits ==== Given a sequence f 1 , f 2 , … : I → R {\displaystyle f_{1},f_{2},\dotsc :I\to \mathbb {R} } of functions such that the limit f ( x ) := lim n → ∞ f n ( x ) {\displaystyle f(x):=\lim _{n\to \infty }f_{n}(x)} exists for all x ∈ I , {\displaystyle x\in I,} the resulting function f ( x ) {\displaystyle f(x)} is referred to as the pointwise limit of the sequence of functions ( f n ) n ∈ N . {\displaystyle \left(f_{n}\right)_{n\in N}.} The pointwise limit function need not be continuous, even if all functions f n {\displaystyle f_{n}} are continuous, as the animation at the right shows. However, f is continuous if all functions f n {\displaystyle f_{n}} are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous. === Directional continuity === Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. 
Formally, f is said to be right-continuous at the point c if the following holds: For any number ε > 0 {\displaystyle \varepsilon >0} however small, there exists some number δ > 0 {\displaystyle \delta >0} such that for all x in the domain with c < x < c + δ , {\displaystyle c<x<c+\delta ,} the value of f ( x ) {\displaystyle f(x)} will satisfy | f ( x ) − f ( c ) | < ε . {\displaystyle |f(x)-f(c)|<\varepsilon .} This is the same condition as continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c {\displaystyle c-\delta <x<c} yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous. === Semicontinuity === A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0 , {\displaystyle \varepsilon >0,} there exists some number δ > 0 {\displaystyle \delta >0} such that for all x in the domain with | x − c | < δ , {\displaystyle |x-c|<\delta ,} the value of f ( x ) {\displaystyle f(x)} satisfies f ( x ) ≥ f ( c ) − ϵ . {\displaystyle f(x)\geq f(c)-\epsilon .} The reverse condition is upper semi-continuity. == Continuous functions between metric spaces == The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X {\displaystyle X} equipped with a function (called metric) d X , {\displaystyle d_{X},} that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function d X : X × X → R {\displaystyle d_{X}:X\times X\to \mathbb {R} } that satisfies a number of requirements, notably the triangle inequality. 
Given two metric spaces ( X , d X ) {\displaystyle \left(X,d_{X}\right)} and ( Y , d Y ) {\displaystyle \left(Y,d_{Y}\right)} and a function f : X → Y {\displaystyle f:X\to Y} then f {\displaystyle f} is continuous at the point c ∈ X {\displaystyle c\in X} (with respect to the given metrics) if for any positive real number ε > 0 , {\displaystyle \varepsilon >0,} there exists a positive real number δ > 0 {\displaystyle \delta >0} such that all x ∈ X {\displaystyle x\in X} satisfying d X ( x , c ) < δ {\displaystyle d_{X}(x,c)<\delta } will also satisfy d Y ( f ( x ) , f ( c ) ) < ε . {\displaystyle d_{Y}(f(x),f(c))<\varepsilon .} As in the case of real functions above, this is equivalent to the condition that for every sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} with limit lim x n = c , {\displaystyle \lim x_{n}=c,} we have lim f ( x n ) = f ( c ) . {\displaystyle \lim f\left(x_{n}\right)=f(c).} The latter condition can be weakened as follows: f {\displaystyle f} is continuous at the point c {\displaystyle c} if and only if for every convergent sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} with limit c {\displaystyle c} , the sequence ( f ( x n ) ) {\displaystyle \left(f\left(x_{n}\right)\right)} is a Cauchy sequence, and c {\displaystyle c} is in the domain of f {\displaystyle f} . The set of points at which a function between metric spaces is continuous is a G δ {\displaystyle G_{\delta }} set – this follows from the ε − δ {\displaystyle \varepsilon -\delta } definition of continuity. This notion of continuity is applied, for example, in functional analysis. 
A key statement in this area says that a linear operator T : V → W {\displaystyle T:V\to W} between normed vector spaces V {\displaystyle V} and W {\displaystyle W} (which are vector spaces equipped with a compatible norm, denoted ‖ x ‖ {\displaystyle \|x\|} ) is continuous if and only if it is bounded, that is, there is a constant K {\displaystyle K} such that ‖ T ( x ) ‖ ≤ K ‖ x ‖ {\displaystyle \|T(x)\|\leq K\|x\|} for all x ∈ V . {\displaystyle x\in V.} === Uniform, Hölder and Lipschitz continuity === The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ {\displaystyle \delta } depends on ε {\displaystyle \varepsilon } and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ {\displaystyle \delta } does not depend on the point c. More precisely, it is required that for every real number ε > 0 {\displaystyle \varepsilon >0} there exists δ > 0 {\displaystyle \delta >0} such that for every c , b ∈ X {\displaystyle c,b\in X} with d X ( b , c ) < δ , {\displaystyle d_{X}(b,c)<\delta ,} we have that d Y ( f ( b ) , f ( c ) ) < ε . {\displaystyle d_{Y}(f(b),f(c))<\varepsilon .} Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces. A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b , c ∈ X , {\displaystyle b,c\in X,} the inequality d Y ( f ( b ) , f ( c ) ) ≤ K ⋅ ( d X ( b , c ) ) α {\displaystyle d_{Y}(f(b),f(c))\leq K\cdot (d_{X}(b,c))^{\alpha }} holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 {\displaystyle \alpha =1} is referred to as Lipschitz continuity. 
That is, a function is Lipschitz continuous if there is a constant K such that the inequality d Y ( f ( b ) , f ( c ) ) ≤ K ⋅ d X ( b , c ) {\displaystyle d_{Y}(f(b),f(c))\leq K\cdot d_{X}(b,c)} holds for any b , c ∈ X . {\displaystyle b,c\in X.} The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations. == Continuous functions between topological spaces == Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology). A function f : X → Y {\displaystyle f:X\to Y} between two topological spaces X and Y is continuous if for every open set V ⊆ Y , {\displaystyle V\subseteq Y,} the inverse image f − 1 ( V ) = { x ∈ X | f ( x ) ∈ V } {\displaystyle f^{-1}(V)=\{x\in X\;|\;f(x)\in V\}} is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology T X {\displaystyle T_{X}} ), but the continuity of f depends on the topologies used on X and Y. This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X. An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions f : X → T {\displaystyle f:X\to T} to any topological space T are continuous. 
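The open-set definition of continuity just given can be checked mechanically on finite topological spaces; a toy Python sketch (the two-point spaces below, including the Sierpiński space, are illustrative choices, not from the article):

```python
# Toy check of the open-set definition of continuity on finite spaces:
# f : X -> Y is continuous iff the preimage of every open set is open.
def preimage(f, domain, V):
    return frozenset(x for x in domain if f[x] in V)

def is_continuous(f, domain, topology_X, topology_Y):
    return all(preimage(f, domain, V) in topology_X for V in topology_Y)

X = {1, 2}
discrete = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
sierpinski = {frozenset(), frozenset({1}), frozenset({1, 2})}

identity = {1: 1, 2: 2}
swap = {1: 2, 2: 1}

# Every map out of a discrete space is continuous, as the text states;
# swapping the points of the Sierpinski space is not, since the preimage
# of the open set {1} is {2}, which is not open.
from_discrete = is_continuous(identity, X, discrete, sierpinski)
on_sierpinski = is_continuous(swap, X, sierpinski, sierpinski)
```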
On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous. === Continuity at a point === The translation in the language of neighborhoods of the ( ε , δ ) {\displaystyle (\varepsilon ,\delta )} -definition of continuity leads to the following definition of continuity at a point: a function f : X → Y {\displaystyle f:X\to Y} is continuous at a point x ∈ X {\displaystyle x\in X} if and only if, for every neighborhood V of f ( x ) {\displaystyle f(x)} in Y, there is a neighborhood U of x such that f ( U ) ⊆ V . {\displaystyle f(U)\subseteq V.} This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images. Also, as every set that contains a neighborhood is also a neighborhood, and f − 1 ( V ) {\displaystyle f^{-1}(V)} is the largest subset U of X such that f ( U ) ⊆ V , {\displaystyle f(U)\subseteq V,} this definition may be simplified into: f is continuous at x if and only if f − 1 ( V ) {\displaystyle f^{-1}(V)} is a neighborhood of x for every neighborhood V of f ( x ) {\displaystyle f(x)} in Y. As an open set is a set that is a neighborhood of all its points, a function f : X → Y {\displaystyle f:X\to Y} is continuous at every point of X if and only if it is a continuous function. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε − δ {\displaystyle \varepsilon -\delta } definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous. 
Given x ∈ X , {\displaystyle x\in X,} a map f : X → Y {\displaystyle f:X\to Y} is continuous at x {\displaystyle x} if and only if whenever B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} that converges to x {\displaystyle x} in X , {\displaystyle X,} which is expressed by writing B → x , {\displaystyle {\mathcal {B}}\to x,} then necessarily f ( B ) → f ( x ) {\displaystyle f({\mathcal {B}})\to f(x)} in Y . {\displaystyle Y.} If N ( x ) {\displaystyle {\mathcal {N}}(x)} denotes the neighborhood filter at x {\displaystyle x} then f : X → Y {\displaystyle f:X\to Y} is continuous at x {\displaystyle x} if and only if f ( N ( x ) ) → f ( x ) {\displaystyle f({\mathcal {N}}(x))\to f(x)} in Y . {\displaystyle Y.} Moreover, this happens if and only if the prefilter f ( N ( x ) ) {\displaystyle f({\mathcal {N}}(x))} is a filter base for the neighborhood filter of f ( x ) {\displaystyle f(x)} in Y . {\displaystyle Y.} === Alternative definitions === Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function. ==== Sequences and nets ==== In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. 
In detail, a function f : X → Y {\displaystyle f:X\to Y} is sequentially continuous if whenever a sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} converges to a limit x , {\displaystyle x,} the sequence ( f ( x n ) ) {\displaystyle \left(f\left(x_{n}\right)\right)} converges to f ( x ) . {\displaystyle f(x).} Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If X {\displaystyle X} is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X {\displaystyle X} is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions. For instance, consider the case of real-valued functions of one real variable: ==== Closure operator and interior operator definitions ==== In terms of the interior and closure operators, we have the following equivalences, If we declare that a point x {\displaystyle x} is close to a subset A ⊆ X {\displaystyle A\subseteq X} if x ∈ cl X ⁡ A , {\displaystyle x\in \operatorname {cl} _{X}A,} then this terminology allows for a plain English description of continuity: f {\displaystyle f} is continuous if and only if for every subset A ⊆ X , {\displaystyle A\subseteq X,} f {\displaystyle f} maps points that are close to A {\displaystyle A} to points that are close to f ( A ) . 
{\displaystyle f(A).} Similarly, f {\displaystyle f} is continuous at a fixed given point x ∈ X {\displaystyle x\in X} if and only if whenever x {\displaystyle x} is close to a subset A ⊆ X , {\displaystyle A\subseteq X,} then f ( x ) {\displaystyle f(x)} is close to f ( A ) . {\displaystyle f(A).} Instead of specifying topological spaces by their open subsets, any topology on X {\displaystyle X} can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset A {\displaystyle A} of a topological space X {\displaystyle X} to its topological closure cl X ⁡ A {\displaystyle \operatorname {cl} _{X}A} satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl ⁡ A {\displaystyle A\mapsto \operatorname {cl} A} there exists a unique topology τ {\displaystyle \tau } on X {\displaystyle X} (specifically, τ := { X ∖ cl ⁡ A : A ⊆ X } {\displaystyle \tau :=\{X\setminus \operatorname {cl} A:A\subseteq X\}} ) such that for every subset A ⊆ X , {\displaystyle A\subseteq X,} cl ⁡ A {\displaystyle \operatorname {cl} A} is equal to the topological closure cl ( X , τ ) ⁡ A {\displaystyle \operatorname {cl} _{(X,\tau )}A} of A {\displaystyle A} in ( X , τ ) . {\displaystyle (X,\tau ).} If the sets X {\displaystyle X} and Y {\displaystyle Y} are each associated with closure operators (both denoted by cl {\displaystyle \operatorname {cl} } ) then a map f : X → Y {\displaystyle f:X\to Y} is continuous if and only if f ( cl ⁡ A ) ⊆ cl ⁡ ( f ( A ) ) {\displaystyle f(\operatorname {cl} A)\subseteq \operatorname {cl} (f(A))} for every subset A ⊆ X . {\displaystyle A\subseteq X.} Similarly, the map that sends a subset A {\displaystyle A} of X {\displaystyle X} to its topological interior int X ⁡ A {\displaystyle \operatorname {int} _{X}A} defines an interior operator. 
Conversely, any interior operator A ↦ int ⁡ A {\displaystyle A\mapsto \operatorname {int} A} induces a unique topology τ {\displaystyle \tau } on X {\displaystyle X} (specifically, τ := { int ⁡ A : A ⊆ X } {\displaystyle \tau :=\{\operatorname {int} A:A\subseteq X\}} ) such that for every A ⊆ X , {\displaystyle A\subseteq X,} int ⁡ A {\displaystyle \operatorname {int} A} is equal to the topological interior int ( X , τ ) ⁡ A {\displaystyle \operatorname {int} _{(X,\tau )}A} of A {\displaystyle A} in ( X , τ ) . {\displaystyle (X,\tau ).} If the sets X {\displaystyle X} and Y {\displaystyle Y} are each associated with interior operators (both denoted by int {\displaystyle \operatorname {int} } ) then a map f : X → Y {\displaystyle f:X\to Y} is continuous if and only if f − 1 ( int ⁡ B ) ⊆ int ⁡ ( f − 1 ( B ) ) {\displaystyle f^{-1}(\operatorname {int} B)\subseteq \operatorname {int} \left(f^{-1}(B)\right)} for every subset B ⊆ Y . {\displaystyle B\subseteq Y.} ==== Filters and prefilters ==== Continuity can also be characterized in terms of filters. A function f : X → Y {\displaystyle f:X\to Y} is continuous if and only if whenever a filter B {\displaystyle {\mathcal {B}}} on X {\displaystyle X} converges in X {\displaystyle X} to a point x ∈ X , {\displaystyle x\in X,} then the prefilter f ( B ) {\displaystyle f({\mathcal {B}})} converges in Y {\displaystyle Y} to f ( x ) . {\displaystyle f(x).} This characterization remains true if the word "filter" is replaced by "prefilter." === Properties === If f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} are continuous, then so is the composition g ∘ f : X → Z . {\displaystyle g\circ f:X\to Z.} If f : X → Y {\displaystyle f:X\to Y} is continuous, then: if X is compact, then f(X) is compact; if X is connected, then f(X) is connected; if X is path-connected, then f(X) is path-connected; if X is Lindelöf, then f(X) is Lindelöf; and if X is separable, then f(X) is separable. 
The possible topologies on a fixed set X are partially ordered: a topology τ 1 {\displaystyle \tau _{1}} is said to be coarser than another topology τ 2 {\displaystyle \tau _{2}} (notation: τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} ) if every open subset with respect to τ 1 {\displaystyle \tau _{1}} is also open with respect to τ 2 . {\displaystyle \tau _{2}.} Then, the identity map id X : ( X , τ 2 ) → ( X , τ 1 ) {\displaystyle \operatorname {id} _{X}:\left(X,\tau _{2}\right)\to \left(X,\tau _{1}\right)} is continuous if and only if τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} (see also comparison of topologies). More generally, a continuous function ( X , τ X ) → ( Y , τ Y ) {\displaystyle \left(X,\tau _{X}\right)\to \left(Y,\tau _{Y}\right)} stays continuous if the topology τ Y {\displaystyle \tau _{Y}} is replaced by a coarser topology and/or τ X {\displaystyle \tau _{X}} is replaced by a finer topology. === Homeomorphisms === Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f − 1 {\displaystyle f^{-1}} need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. === Defining topologies via continuous functions === Given a function f : X → S , {\displaystyle f:X\to S,} where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f − 1 ( A ) {\displaystyle f^{-1}(A)} is open in X. 
If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f − 1 ( U ) {\displaystyle A=f^{-1}(U)} for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. A topology on a set S is uniquely determined by the class of all continuous functions S → X {\displaystyle S\to X} into all topological spaces X. Dually, a similar idea can be applied to maps X → S . {\displaystyle X\to S.} == Related notions == If f : S → Y {\displaystyle f:S\to Y} is a continuous function from some subset S {\displaystyle S} of a topological space X {\displaystyle X} then a continuous extension of f {\displaystyle f} to X {\displaystyle X} is any continuous function F : X → Y {\displaystyle F:X\to Y} such that F ( s ) = f ( s ) {\displaystyle F(s)=f(s)} for every s ∈ S , {\displaystyle s\in S,} a condition that is often written as f = F | S . {\displaystyle f=F{\big \vert }_{S}.} In words, it is any continuous function F : X → Y {\displaystyle F:X\to Y} that restricts to f {\displaystyle f} on S . {\displaystyle S.} This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem. 
If f : S → Y {\displaystyle f:S\to Y} is not continuous, then it could not possibly have a continuous extension. If Y {\displaystyle Y} is a Hausdorff space and S {\displaystyle S} is a dense subset of X {\displaystyle X} then a continuous extension of f : S → Y {\displaystyle f:S\to Y} to X , {\displaystyle X,} if one exists, will be unique. The Blumberg theorem states that if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is an arbitrary function then there exists a dense subset D {\displaystyle D} of R {\displaystyle \mathbb {R} } such that the restriction f | D : D → R {\displaystyle f{\big \vert }_{D}:D\to \mathbb {R} } is continuous; in other words, every function R → R {\displaystyle \mathbb {R} \to \mathbb {R} } can be restricted to some dense subset on which it is continuous. Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y {\displaystyle f:X\to Y} between particular types of partially ordered sets X {\displaystyle X} and Y {\displaystyle Y} is continuous if for each directed subset A {\displaystyle A} of X , {\displaystyle X,} we have sup f ( A ) = f ( sup A ) . {\displaystyle \sup f(A)=f(\sup A).} Here sup {\displaystyle \,\sup \,} is the supremum with respect to the orderings in X {\displaystyle X} and Y , {\displaystyle Y,} respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology. In category theory, a functor F : C → D {\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}} between two categories is called continuous if it commutes with small limits. That is to say, lim ← i ∈ I ⁡ F ( C i ) ≅ F ( lim ← i ∈ I ⁡ C i ) {\displaystyle \varprojlim _{i\in I}F(C_{i})\cong F\left(\varprojlim _{i\in I}C_{i}\right)} for any small (that is, indexed by a set I , {\displaystyle I,} as opposed to a class) diagram of objects in C {\displaystyle {\mathcal {C}}} . 
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains. In measure theory, a function f : E → R k {\displaystyle f:E\to \mathbb {R} ^{k}} defined on a Lebesgue measurable set E ⊆ R n {\displaystyle E\subseteq \mathbb {R} ^{n}} is called approximately continuous at a point x 0 ∈ E {\displaystyle x_{0}\in E} if the approximate limit of f {\displaystyle f} at x 0 {\displaystyle x_{0}} exists and equals f ( x 0 ) {\displaystyle f(x_{0})} . This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov-Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere. == See also == Direction-preserving function - an analog of a continuous function in discrete spaces. == References == == Bibliography == Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485. "Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Continuous_function_(topology)
In computer programming, an anonymous function (function literal, expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church, who invented the lambda calculus in 1936, before electronic computers; in the lambda calculus, all functions are anonymous. In several programming languages, anonymous functions are introduced using the keyword lambda, and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support anonymous functions. == Names == The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual function f(x) = M would be written (λx.M), and where M is an expression that uses x. Compare to the Python syntax of lambda x: M. The name "arrow function" refers to the mathematical "maps to" symbol, x ↦ M. Compare to the JavaScript syntax of x => M. == Uses == Anonymous functions can be used for containing functionality that need not be named and possibly for short-term use. Some notable examples include closures and currying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by name. 
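The correspondence between the lambda-calculus term (λx.M) and Python's lambda syntax, mentioned above, can be shown with a one-line example (the squaring function is an arbitrary choice):

```python
# (λx. x*x) written as a Python lambda expression:
square = lambda x: x * x

# the equivalent named definition
def square_named(x):
    return x * x
```

Both square(7) and square_named(7) evaluate to 49; the lambda form simply omits the name.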
Anonymous functions often provide a briefer notation than defining named functions. In languages that do not permit the definition of named functions in local scopes, anonymous functions may provide encapsulation via localized scope; however, the code in the body of such an anonymous function may not be reusable or amenable to separate testing. Short, simple anonymous functions used in expressions may be easier to read and understand than separately defined named functions, though without a descriptive name they may be more difficult to understand. In some programming languages, anonymous functions are commonly implemented for very specific purposes such as binding events to callbacks or instantiating the function for particular values, which may be more efficient in a dynamic language, more readable, and less error-prone than calling a named function. The following examples are written in Python 3. === Sorting === When attempting to sort in a non-standard way, it may be easier to contain the sorting logic as an anonymous function instead of creating a named function. Most languages provide a generic sort function that implements a sort algorithm that will sort arbitrary objects. This function usually accepts an arbitrary function that determines how to compare whether two elements are equal or if one is greater or less than the other. Consider Python code that sorts a list of strings by their length: the anonymous function is a lambda expression that accepts one argument, x, and returns the length of its argument, which the sort() method then uses as the criterion for sorting. The basic syntax of a lambda function in Python is lambda parameters: expression. A lambda expression can also be assigned to a variable and used in the code at multiple places.
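The sorting code the passage describes is not reproduced in this extract; a minimal Python 3 sketch of it (the list contents are illustrative):

```python
# Sort a list of strings by length with sort() and a lambda key function.
words = ["banana", "fig", "cherry", "kiwi"]
words.sort(key=lambda x: len(x))
print(words)  # ['fig', 'kiwi', 'banana', 'cherry']

# The general form is `lambda parameters: expression`; a lambda can also
# be bound to a name and reused at multiple places in the code.
length = lambda x: len(x)
print(length("spam"))  # 4
```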
Another example would be sorting items in a list by the name of their class (in Python, everything has a class): Note that 11.2 has class name "float", 10 has class name "int", and 'number' has class name "str". The sorted order is "float", "int", then "str". === Closures === Closures are functions evaluated in an environment containing bound variables. The following example binds the variable "threshold" in an anonymous function that compares the input to the threshold. This can be used as a sort of generator of comparison functions: It would be impractical to create a function for every possible comparison function and may be too inconvenient to keep the threshold around for further use. Regardless of the reason why a closure is used, the anonymous function is the entity that contains the functionality that does the comparing. === Currying === Currying is the process of changing a function so that rather than taking multiple inputs, it takes a single input and returns a function which accepts the second input, and so forth. In this example, a function that performs division by any integer is transformed into one that performs division by a set integer. While the use of anonymous functions is perhaps not common with currying, it still can be used. In the above example, the function divisor generates functions with a specified divisor. The functions half and third curry the divide function with a fixed divisor. The divisor function also forms a closure by binding the variable d. === Higher-order functions === A higher-order function is a function that takes a function as an argument or returns one as a result. This is commonly used to customize the behavior of a generically defined function, often a looping construct or recursion scheme. Anonymous functions are a convenient way to specify such function arguments. The following examples are in Python 3. ==== Map ==== The map function performs a function call on each element of a list. 
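The code for the closure, currying, and map examples discussed here is likewise not reproduced in this extract; a minimal Python 3 sketch of all three (comparator and over_10 are illustrative names; divisor, half, and third follow the text):

```python
# Closure: bind 'threshold' in an anonymous function that compares
# its input to the threshold ('comparator' is an illustrative name).
def comparator(threshold):
    return lambda value: value > threshold

over_10 = comparator(10)
print(over_10(42), over_10(3))  # True False

# Currying-style specialization: 'divisor' generates functions with a
# fixed divisor d; 'half' and 'third' curry division as in the text.
def divisor(d):
    return lambda x: x / d

half, third = divisor(2), divisor(3)
print(half(9), third(9))  # 4.5 3.0

# Map: apply an anonymous squaring function to each element of a list.
print(list(map(lambda x: x * x, [1, 2, 3])))  # [1, 4, 9]
```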
The following example squares every element in an array with an anonymous function. The anonymous function accepts an argument and multiplies it by itself (squares it). The above form is discouraged by the creators of the language, who maintain that the form presented below has the same meaning and is more aligned with the philosophy of the language: ==== Filter ==== The filter function returns all elements from a list that evaluate True when passed to a certain function. The anonymous function checks if the argument passed to it is even. The same as with map, the form below is considered more appropriate: ==== Fold ==== A fold function runs over all elements in a structure (for lists usually left-to-right, a "left fold", called reduce in Python), accumulating a value as it goes. This can be used to combine all elements of a structure into one value, for example: This performs ( ( ( 1 × 2 ) × 3 ) × 4 ) × 5 = 120. {\displaystyle \left(\left(\left(1\times 2\right)\times 3\right)\times 4\right)\times 5=120.} The anonymous function here is the multiplication of the two arguments. The result of a fold need not be one value. Instead, both map and filter can be created using fold. In map, the value that is accumulated is a new list, containing the results of applying a function to each element of the original list. In filter, the value that is accumulated is a new list containing only those elements that match the given condition. == List of languages == The following is a list of programming languages that support unnamed anonymous functions fully, or partly as some variant, or not at all. This table shows some general trends. First, the languages that do not support anonymous functions (C, Pascal, Object Pascal) are all statically typed languages. However, statically typed languages can support anonymous functions. 
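Returning to the higher-order examples above, a Python 3 sketch of the filter and fold uses the text describes, together with the comprehension forms it says the language's creators prefer (the list nums is illustrative):

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]  # illustrative data

# The comprehension form preferred over map for squaring:
squares = [x * x for x in nums]

# Filter: keep the even elements, in lambda style and comprehension style.
evens_lambda = list(filter(lambda x: x % 2 == 0, nums))
evens = [x for x in nums if x % 2 == 0]

# Fold: Python's left fold is functools.reduce; with an anonymous
# multiplication this computes ((((1*2)*3)*4)*5) = 120.
product = reduce(lambda a, b: a * b, nums)
print(squares, evens, product)  # [1, 4, 9, 16, 25] [2, 4] 120
```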
For example, the ML languages are statically typed and fundamentally include anonymous functions, and Delphi, a dialect of Object Pascal, has been extended to support anonymous functions, as has C++ (by the C++11 standard). Second, the languages that treat functions as first-class functions (Dylan, Haskell, JavaScript, Lisp, ML, Perl, Python, Ruby, Scheme) generally have anonymous function support so that functions can be defined and passed around as easily as other data types. == Examples of anonymous functions == == See also == First-class function Lambda calculus definition == References == == External links == Anonymous Methods - When Should They Be Used? (blog about anonymous function in Delphi) Compiling Lambda Expressions: Scala vs. Java 8 php anonymous functions Lambda functions in various programming languages Functions in Go
Wikipedia/Function_constant
In formal ontology, a branch of metaphysics, and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts. == History and motivation == Mereotopology begins in philosophy with theories articulated by A. N. Whitehead in several books and articles he published between 1916 and 1929, drawing in part on the mereogeometry of De Laguna (1922). The first to have proposed the idea of a point-free definition of the concept of topological space in mathematics was Karl Menger in his book Dimensionstheorie (1928) -- see also his (1940). The early historical background of mereotopology is documented in Bélanger and Marquis (2013) and Whitehead's early work is discussed in Kneebone (1963: ch. 13.5) and Simons (1987: 2.9.1). The theory of Whitehead's 1929 Process and Reality augmented the part-whole relation with topological notions such as contiguity and connection. Despite Whitehead's acumen as a mathematician, his theories were insufficiently formal, even flawed. By showing how Whitehead's theories could be fully formalized and repaired, Clarke (1981, 1985) founded contemporary mereotopology. The theories of Clarke and Whitehead are discussed in Simons (1987: 2.10.2), and Lucas (2000: ch. 10). The entry Whitehead's point-free geometry includes two contemporary treatments of Whitehead's theories, due to Giangiacomo Gerla, each different from the theory set out in the next section. Although mereotopology is a mathematical theory, we owe its subsequent development to logicians and theoretical computer scientists. Lucas (2000: ch. 10) and Casati and Varzi (1999: ch. 4,5) are introductions to mereotopology that can be read by anyone having done a course in first-order logic. More advanced treatments of mereotopology include Cohn and Varzi (2003) and, for the mathematically sophisticated, Roeper (1997). 
For a mathematical treatment of point-free geometry, see Gerla (1995). Lattice-theoretic (algebraic) treatments of mereotopology as contact algebras have been applied to separate the topological from the mereological structure, see Stell (2000), Düntsch and Winter (2004). == Applications == Barry Smith, Anthony Cohn, Achille Varzi and their co-authors have shown that mereotopology can be useful in formal ontology and computer science, by allowing the formalization of relations such as contact, connection, boundaries, interiors, holes, and so on. Mereotopology has been applied also as a tool for qualitative spatial-temporal reasoning, with constraint calculi such as the Region Connection Calculus (RCC). It provides the starting point for the theory of fiat boundaries developed by Smith and Varzi, which grew out of the attempt to distinguish formally between boundaries (in geography, geopolitics, and other domains) which reflect more or less arbitrary human demarcations and boundaries which reflect bona fide physical discontinuities (Smith 1995, 2001). Mereotopology is being applied by Salustri in the domain of digital manufacturing (Salustri, 2002) and by Smith and Varzi to the formalization of basic notions of ecology and environmental biology (Smith and Varzi, 1999, 2002). It has been applied also to deal with vague boundaries in geography (Smith and Mark, 2003), and in the study of vagueness and granularity (Smith and Brogaard, 2002, Bittner and Smith, 2001, 2001a). == Preferred approach of Casati & Varzi == Casati and Varzi (1999: ch.4) set out a variety of mereotopological theories in a consistent notation. This section sets out several nested theories that culminate in their preferred theory GEMTC, and follows their exposition closely. The mereological part of GEMTC is the conventional theory GEM. Casati and Varzi do not say if the models of GEMTC include any conventional topological spaces. 
We begin with some domain of discourse, whose elements are called individuals (a synonym for mereology is "the calculus of individuals"). Casati and Varzi prefer limiting the ontology to physical objects, but others freely employ mereotopology to reason about geometric figures and events, and to solve problems posed by research in machine intelligence. An upper case Latin letter denotes both a relation and the predicate letter referring to that relation in first-order logic. Lower case letters from the end of the alphabet denote variables ranging over the domain; letters from the start of the alphabet are names of arbitrary individuals. If a formula begins with an atomic formula followed by the biconditional, the subformula to the right of the biconditional is a definition of the atomic formula, whose variables are unbound. Otherwise, variables not explicitly quantified are tacitly universally quantified. The axiom Cn below corresponds to axiom C.n in Casati and Varzi (1999: ch. 4). We begin with a topological primitive, a binary relation called connection; the atomic formula Cxy denotes that "x is connected to y." Connection is governed, at minimum, by the axioms: C1. C x x . {\displaystyle \ Cxx.} (reflexive) C2. C x y → C y x . {\displaystyle Cxy\rightarrow Cyx.} (symmetric) Let E, the binary relation of enclosure, be defined as: E x y ↔ [ C z x → C z y ] . {\displaystyle Exy\leftrightarrow [Czx\rightarrow Czy].} Exy is read as "y encloses x" and is also topological in nature. A consequence of C1-2 is that E is reflexive and transitive, and hence a preorder. If E is also assumed extensional, so that: ( E x a ↔ E x b ) ↔ ( a = b ) , {\displaystyle (Exa\leftrightarrow Exb)\leftrightarrow (a=b),} then E can be proved antisymmetric and thus becomes a partial order. Enclosure, notated xKy, is the single primitive relation of the theories in Whitehead (1919, 1920), the starting point of mereotopology. 
Let parthood be the defining primitive binary relation of the underlying mereology, and let the atomic formula Pxy denote that "x is part of y". We assume that P is a partial order. Call the resulting minimalist mereological theory M. If x is part of y, we postulate that y encloses x: C3. P x y → E x y . {\displaystyle \ Pxy\rightarrow Exy.} C3 nicely connects mereological parthood to topological enclosure. Let O, the binary relation of mereological overlap, be defined as: O x y ↔ ∃ z [ P z x ∧ P z y ] . {\displaystyle Oxy\leftrightarrow \exists z[Pzx\land \ Pzy].} Let Oxy denote that "x and y overlap." With O in hand, a consequence of C3 is: O x y → C x y . {\displaystyle Oxy\rightarrow Cxy.} Note that the converse does not necessarily hold. While things that overlap are necessarily connected, connected things do not necessarily overlap. If this were not the case, topology would merely be a model of mereology (in which "overlap" is always either primitive or defined). Ground mereotopology (MT) is the theory consisting of primitive C and P, defined E and O, the axioms C1-3, and axioms assuring that P is a partial order. Replacing the M in MT with the standard extensional mereology GEM results in the theory GEMT. Let IPxy denote that "x is an internal part of y." IP is defined as: I P x y ↔ ( P x y ∧ ( C z x → O z y ) ) . {\displaystyle IPxy\leftrightarrow (Pxy\land (Czx\rightarrow Ozy)).} Let σx φ(x) denote the mereological sum (fusion) of all individuals in the domain satisfying φ(x). σ is a variable binding prefix operator. The axioms of GEM assure that this sum exists if φ(x) is a first-order formula. With σ and the relation IP in hand, we can define the interior of x, i x , {\displaystyle \mathbf {i} x,} as the mereological sum of all interior parts z of x, or: i x = {\displaystyle \mathbf {i} x=} df σ z [ I P z x ] . 
{\displaystyle \sigma z[IPzx].} Two easy consequences of this definition are: i W = W , {\displaystyle \mathbf {i} W=W,} where W is the universal individual, and C5. P ( i x ) x . {\displaystyle \ P(\mathbf {i} x)x.} (Inclusion) The operator i has two more axiomatic properties: C6. i ( i x ) = i x . {\displaystyle \mathbf {i} (\mathbf {i} x)=\mathbf {i} x.} (Idempotence) C7. i ( x × y ) = i x × i y , {\displaystyle \mathbf {i} (x\times y)=\mathbf {i} x\times \mathbf {i} y,} where a×b is the mereological product of a and b, not defined when Oab is false. i distributes over product. It can now be seen that i is isomorphic to the interior operator of topology. Hence the dual of i, the topological closure operator c, can be defined in terms of i, and Kuratowski's axioms for c are theorems. Likewise, given an axiomatization of c that is analogous to C5-7, i may be defined in terms of c, and C5-7 become theorems. Adding C5-7 to GEMT results in Casati and Varzi's preferred mereotopological theory, GEMTC. x is self-connected if it satisfies the following predicate: S C x ↔ ( ( O w x ↔ ( O w y ∨ O w z ) ) → C y z ) . {\displaystyle SCx\leftrightarrow ((Owx\leftrightarrow (Owy\lor Owz))\rightarrow Cyz).} Note that the primitive and defined predicates of MT alone suffice for this definition. The predicate SC enables formalizing the necessary condition given in Whitehead's Process and Reality for the mereological sum of two individuals to exist: they must be connected. Formally: C8. C x y → ∃ z [ S C z ∧ O z x ∧ ( P w z → ( O w x ∨ O w y ) ) ] . {\displaystyle Cxy\rightarrow \exists z[SCz\land Ozx\land (Pwz\rightarrow (Owx\lor Owy))].} Given some mereotopology X, adding C8 to X results in what Casati and Varzi call the Whiteheadian extension of X, denoted WX. Hence the theory whose axioms are C1-8 is WGEMTC. The converse of C8 is a GEMTC theorem. Hence given the axioms of GEMTC, C is a defined predicate if O and SC are taken as primitive predicates.
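The duality noted above, that the closure operator c can be defined in terms of the interior operator i, can be made explicit; this is a sketch assuming the standard GEM complement ∼x (defined whenever x differs from the universal individual W), as in Casati and Varzi's setting:

```latex
% Closure as the dual of interior (sketch; \sim x is the mereological
% complement of x, defined when x differs from the universal individual W):
\mathbf{c}x \;=_{\mathrm{df}}\; {\sim}\,\mathbf{i}({\sim}x)
% Under C5--C7 this c satisfies the Kuratowski closure axioms, e.g.
% P\,x\,(\mathbf{c}x) \quad\text{and}\quad \mathbf{c}(\mathbf{c}x)=\mathbf{c}x .
```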
If the underlying mereology is atomless and weaker than GEM, the axiom that assures the absence of atoms (P9 in Casati and Varzi 1999) may be replaced by C9, which postulates that no individual has a topological boundary: C9. ∀ x ∃ y [ P y x ∧ ( C z y → O z x ) ∧ ¬ ( P x y ∧ ( C z x → O z y ) ) ] . {\displaystyle \forall x\exists y[Pyx\land (Czy\rightarrow Ozx)\land \lnot (Pxy\land (Czx\rightarrow Ozy))].} When the domain consists of geometric figures, the boundaries can be points, curves, and surfaces. What boundaries could mean, given other ontologies, is not an easy matter and is discussed in Casati and Varzi (1999: ch. 5). == See also == Mereology Pointless topology Point-set topology Topology Topological space (with links to T0 through T6) Whitehead's point-free geometry == Notes == == References == Biacino L., and Gerla G., 1991, "Connection Structures," Notre Dame Journal of Formal Logic 32: 242–47. Casati, Roberto, and Varzi, Achille, 1999. Parts and places: the structures of spatial representation. MIT Press. Stell J. G., 2000, "Boolean connection algebras: A new approach to the Region-Connection Calculus," Artificial Intelligence 122: 111–136. == External links == Stanford Encyclopedia of Philosophy: Boundary—by Achille Varzi. With many references.
Wikipedia/Mereotopology
In general topology and related areas of mathematics, the final topology (or coinduced, weak, colimit, or inductive topology) on a set X , {\displaystyle X,} with respect to a family of functions from topological spaces into X , {\displaystyle X,} is the finest topology on X {\displaystyle X} that makes all those functions continuous. The quotient topology on a quotient space is a final topology, with respect to a single surjective function, namely the quotient map. The disjoint union topology is the final topology with respect to the inclusion maps. The final topology is also the topology that every direct limit in the category of topological spaces is endowed with, and it is in the context of direct limits that the final topology often appears. A topology is coherent with some collection of subspaces if and only if it is the final topology induced by the natural inclusions. The dual notion is the initial topology, which for a given family of functions from a set X {\displaystyle X} into topological spaces is the coarsest topology on X {\displaystyle X} that makes those functions continuous. == Definition == Given a set X {\displaystyle X} and an I {\displaystyle I} -indexed family of topological spaces ( Y i , υ i ) {\displaystyle \left(Y_{i},\upsilon _{i}\right)} with associated functions f i : Y i → X , {\displaystyle f_{i}:Y_{i}\to X,} the final topology on X {\displaystyle X} induced by the family of functions F := { f i : i ∈ I } {\displaystyle {\mathcal {F}}:=\left\{f_{i}:i\in I\right\}} is the finest topology τ F {\displaystyle \tau _{\mathcal {F}}} on X {\displaystyle X} such that f i : ( Y i , υ i ) → ( X , τ F ) {\displaystyle f_{i}:\left(Y_{i},\upsilon _{i}\right)\to \left(X,\tau _{\mathcal {F}}\right)} is continuous for each i ∈ I {\displaystyle i\in I} . The final topology always exists, and is unique. 
Explicitly, the final topology may be described as follows: a subset U {\displaystyle U} of X {\displaystyle X} is open in the final topology ( X , τ F ) {\displaystyle \left(X,\tau _{\mathcal {F}}\right)} (that is, U ∈ τ F {\displaystyle U\in \tau _{\mathcal {F}}} ) if and only if f i − 1 ( U ) {\displaystyle f_{i}^{-1}(U)} is open in ( Y i , υ i ) {\displaystyle \left(Y_{i},\upsilon _{i}\right)} for each i ∈ I {\displaystyle i\in I} . The closed subsets have an analogous characterization: a subset C {\displaystyle C} of X {\displaystyle X} is closed in the final topology ( X , τ F ) {\displaystyle \left(X,\tau _{\mathcal {F}}\right)} if and only if f i − 1 ( C ) {\displaystyle f_{i}^{-1}(C)} is closed in ( Y i , υ i ) {\displaystyle \left(Y_{i},\upsilon _{i}\right)} for each i ∈ I {\displaystyle i\in I} . The family F {\displaystyle {\mathcal {F}}} of functions that induces the final topology on X {\displaystyle X} is usually a set of functions. But the same construction can be performed if F {\displaystyle {\mathcal {F}}} is a proper class of functions, and the result is still well-defined in Zermelo–Fraenkel set theory. In that case there is always a subfamily G {\displaystyle {\mathcal {G}}} of F {\displaystyle {\mathcal {F}}} with G {\displaystyle {\mathcal {G}}} a set, such that the final topologies on X {\displaystyle X} induced by F {\displaystyle {\mathcal {F}}} and by G {\displaystyle {\mathcal {G}}} coincide. As an example, a commonly used variant of the notion of compactly generated space is defined as the final topology with respect to a proper class of functions. == Examples == The important special case where the family of maps F {\displaystyle {\mathcal {F}}} consists of a single surjective map can be completely characterized using the notion of quotient map.
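For finite spaces the open-set characterization just given can be checked by brute force, and the sketch below also illustrates the quotient-map case that follows; all names and the example spaces are illustrative:

```python
from itertools import chain, combinations

def powerset(points):
    """All subsets of a finite set, as frozensets."""
    pts = list(points)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(pts, r)
                                         for r in range(len(pts) + 1))]

def final_topology(X, spaces):
    """Final topology on a finite set X induced by maps f_i : Y_i -> X.

    'spaces' is a list of (Y, tau_Y, f) triples, where tau_Y is a topology
    on Y given as a set of frozensets and f is a dict from Y into X.
    U is open iff every preimage f^{-1}(U) is open in its Y_i.
    """
    return {U for U in powerset(X)
            if all(frozenset(y for y in Y if f[y] in U) in tau_Y
                   for (Y, tau_Y, f) in spaces)}

# Quotient example: f collapses points 1 and 2 of Y = {0, 1, 2}.
Y = {0, 1, 2}
tau_Y = {frozenset(), frozenset({0}), frozenset({0, 1, 2})}
f = {0: "a", 1: "b", 2: "b"}
opens = final_topology({"a", "b"}, [(Y, tau_Y, f)])
# {"b"} is not open: its preimage {1, 2} is not open in Y.
print(sorted(map(sorted, opens)))  # [[], ['a'], ['a', 'b']]
```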
A surjective function f : ( Y , υ ) → ( X , τ ) {\displaystyle f:(Y,\upsilon )\to \left(X,\tau \right)} between topological spaces is a quotient map if and only if the topology τ {\displaystyle \tau } on X {\displaystyle X} coincides with the final topology τ F {\displaystyle \tau _{\mathcal {F}}} induced by the family F = { f } {\displaystyle {\mathcal {F}}=\{f\}} . In particular: the quotient topology is the final topology on the quotient space induced by the quotient map. The final topology on a set X {\displaystyle X} induced by a family of X {\displaystyle X} -valued maps can be viewed as a far reaching generalization of the quotient topology, where multiple maps may be used instead of just one and where these maps are not required to be surjections. Given topological spaces X i {\displaystyle X_{i}} , the disjoint union topology on the disjoint union ∐ i X i {\displaystyle \coprod _{i}X_{i}} is the final topology on the disjoint union induced by the natural injections. Given a family of topologies ( τ i ) i ∈ I {\displaystyle \left(\tau _{i}\right)_{i\in I}} on a fixed set X , {\displaystyle X,} the final topology on X {\displaystyle X} with respect to the identity maps id τ i : ( X , τ i ) → X {\displaystyle \operatorname {id} _{\tau _{i}}:\left(X,\tau _{i}\right)\to X} as i {\displaystyle i} ranges over I , {\displaystyle I,} call it τ , {\displaystyle \tau ,} is the infimum (or meet) of these topologies ( τ i ) i ∈ I {\displaystyle \left(\tau _{i}\right)_{i\in I}} in the lattice of topologies on X . {\displaystyle X.} That is, the final topology τ {\displaystyle \tau } is equal to the intersection τ = ⋂ i ∈ I τ i . 
{\textstyle \tau =\bigcap _{i\in I}\tau _{i}.} Given a topological space ( X , τ ) {\displaystyle (X,\tau )} and a family C = { C i : i ∈ I } {\displaystyle {\mathcal {C}}=\{C_{i}:i\in I\}} of subsets of X {\displaystyle X} each having the subspace topology, the final topology τ C {\displaystyle \tau _{\mathcal {C}}} induced by all the inclusion maps of the C i {\displaystyle C_{i}} into X {\displaystyle X} is finer than (or equal to) the original topology τ {\displaystyle \tau } on X . {\displaystyle X.} The space X {\displaystyle X} is called coherent with the family C {\displaystyle {\mathcal {C}}} of subspaces if the final topology τ C {\displaystyle \tau _{\mathcal {C}}} coincides with the original topology τ . {\displaystyle \tau .} In that case, a subset U ⊆ X {\displaystyle U\subseteq X} will be open in X {\displaystyle X} exactly when the intersection U ∩ C i {\displaystyle U\cap C_{i}} is open in C i {\displaystyle C_{i}} for each i ∈ I . {\displaystyle i\in I.} (See the coherent topology article for more details on this notion and more examples.) As a particular case, one of the notions of compactly generated space can be characterized as a certain coherent topology. The direct limit of any direct system of spaces and continuous maps is the set-theoretic direct limit together with the final topology determined by the canonical morphisms. 
Explicitly, this means that if Sys Y = ( Y i , f j i , I ) {\displaystyle \operatorname {Sys} _{Y}=\left(Y_{i},f_{ji},I\right)} is a direct system in the category Top of topological spaces and if ( X , ( f i ) i ∈ I ) {\displaystyle \left(X,\left(f_{i}\right)_{i\in I}\right)} is a direct limit of Sys Y {\displaystyle \operatorname {Sys} _{Y}} in the category Set of all sets, then by endowing X {\displaystyle X} with the final topology τ F {\displaystyle \tau _{\mathcal {F}}} induced by F := { f i : i ∈ I } , {\displaystyle {\mathcal {F}}:=\left\{f_{i}:i\in I\right\},} ( ( X , τ F ) , ( f i ) i ∈ I ) {\displaystyle \left(\left(X,\tau _{\mathcal {F}}\right),\left(f_{i}\right)_{i\in I}\right)} becomes the direct limit of Sys Y {\displaystyle \operatorname {Sys} _{Y}} in the category Top. The étalé space of a sheaf is topologized by a final topology. A first-countable Hausdorff space ( X , τ ) {\displaystyle (X,\tau )} is locally path-connected if and only if τ {\displaystyle \tau } is equal to the final topology on X {\displaystyle X} induced by the set C ( [ 0 , 1 ] ; X ) {\displaystyle C\left([0,1];X\right)} of all continuous maps [ 0 , 1 ] → ( X , τ ) , {\displaystyle [0,1]\to (X,\tau ),} where any such map is called a path in ( X , τ ) . {\displaystyle (X,\tau ).} If a Hausdorff locally convex topological vector space ( X , τ ) {\displaystyle (X,\tau )} is a Fréchet-Urysohn space then τ {\displaystyle \tau } is equal to the final topology on X {\displaystyle X} induced by the set Arc ⁡ ( [ 0 , 1 ] ; X ) {\displaystyle \operatorname {Arc} \left([0,1];X\right)} of all arcs in ( X , τ ) , {\displaystyle (X,\tau ),} which by definition are continuous paths [ 0 , 1 ] → ( X , τ ) {\displaystyle [0,1]\to (X,\tau )} that are also topological embeddings. 
== Properties == === Characterization via continuous maps === Given functions f i : Y i → X , {\displaystyle f_{i}:Y_{i}\to X,} from topological spaces Y i {\displaystyle Y_{i}} to the set X {\displaystyle X} , the final topology on X {\displaystyle X} with respect to these functions f i {\displaystyle f_{i}} satisfies the following property: a function g {\displaystyle g} from X {\displaystyle X} to some space Z {\displaystyle Z} is continuous if and only if g ∘ f i {\displaystyle g\circ f_{i}} is continuous for each i ∈ I . {\displaystyle i\in I.} This property characterizes the final topology in the sense that if a topology on X {\displaystyle X} satisfies the property above for all spaces Z {\displaystyle Z} and all functions g : X → Z {\displaystyle g:X\to Z} , then the topology on X {\displaystyle X} is the final topology with respect to the f i . {\displaystyle f_{i}.} === Behavior under composition === Suppose F := { f i : Y i → X ∣ i ∈ I } {\displaystyle {\mathcal {F}}:=\left\{f_{i}:Y_{i}\to X\mid i\in I\right\}} is a family of maps, and for every i ∈ I , {\displaystyle i\in I,} the topology υ i {\displaystyle \upsilon _{i}} on Y i {\displaystyle Y_{i}} is the final topology induced by some family G i {\displaystyle {\mathcal {G}}_{i}} of maps valued in Y i {\displaystyle Y_{i}} . Then the final topology on X {\displaystyle X} induced by F {\displaystyle {\mathcal {F}}} is equal to the final topology on X {\displaystyle X} induced by the maps { f i ∘ g : i ∈ I and g ∈ G i } . 
{\displaystyle \left\{f_{i}\circ g~:~i\in I{\text{ and }}g\in {\cal {G_{i}}}\right\}.} As a consequence: if τ F {\displaystyle \tau _{\mathcal {F}}} is the final topology on X {\displaystyle X} induced by the family F := { f i : i ∈ I } {\displaystyle {\mathcal {F}}:=\left\{f_{i}:i\in I\right\}} and if π : X → ( S , σ ) {\displaystyle \pi :X\to (S,\sigma )} is any surjective map valued in some topological space ( S , σ ) , {\displaystyle (S,\sigma ),} then π : ( X , τ F ) → ( S , σ ) {\displaystyle \pi :\left(X,\tau _{\mathcal {F}}\right)\to (S,\sigma )} is a quotient map if and only if ( S , σ ) {\displaystyle (S,\sigma )} has the final topology induced by the maps { π ∘ f i : i ∈ I } . {\displaystyle \left\{\pi \circ f_{i}~:~i\in I\right\}.} By the universal property of the disjoint union topology we know that given any family of continuous maps f i : Y i → X , {\displaystyle f_{i}:Y_{i}\to X,} there is a unique continuous map f : ∐ i Y i → X {\displaystyle f:\coprod _{i}Y_{i}\to X} that is compatible with the natural injections. If the family of maps f i {\displaystyle f_{i}} covers X {\displaystyle X} (i.e. each x ∈ X {\displaystyle x\in X} lies in the image of some f i {\displaystyle f_{i}} ) then the map f {\displaystyle f} will be a quotient map if and only if X {\displaystyle X} has the final topology induced by the maps f i . {\displaystyle f_{i}.} === Effects of changing the family of maps === Throughout, let F := { f i : i ∈ I } {\displaystyle {\mathcal {F}}:=\left\{f_{i}:i\in I\right\}} be a family of X {\displaystyle X} -valued maps with each map being of the form f i : ( Y i , υ i ) → X {\displaystyle f_{i}:\left(Y_{i},\upsilon _{i}\right)\to X} and let τ F {\displaystyle \tau _{\mathcal {F}}} denote the final topology on X {\displaystyle X} induced by F . 
{\displaystyle {\mathcal {F}}.} The definition of the final topology guarantees that for every index i , {\displaystyle i,} the map f i : ( Y i , υ i ) → ( X , τ F ) {\displaystyle f_{i}:\left(Y_{i},\upsilon _{i}\right)\to \left(X,\tau _{\mathcal {F}}\right)} is continuous. For any subset S ⊆ F , {\displaystyle {\mathcal {S}}\subseteq {\mathcal {F}},} the final topology τ S {\displaystyle \tau _{\mathcal {S}}} on X {\displaystyle X} will be finer than (and possibly equal to) the topology τ F {\displaystyle \tau _{\mathcal {F}}} ; that is, S ⊆ F {\displaystyle {\mathcal {S}}\subseteq {\mathcal {F}}} implies τ F ⊆ τ S , {\displaystyle \tau _{\mathcal {F}}\subseteq \tau _{\mathcal {S}},} where set equality might hold even if S {\displaystyle {\mathcal {S}}} is a proper subset of F . {\displaystyle {\mathcal {F}}.} If τ {\displaystyle \tau } is any topology on X {\displaystyle X} such that τ ≠ τ F {\displaystyle \tau \neq \tau _{\mathcal {F}}} and f i : ( Y i , υ i ) → ( X , τ ) {\displaystyle f_{i}:\left(Y_{i},\upsilon _{i}\right)\to (X,\tau )} is continuous for every index i ∈ I , {\displaystyle i\in I,} then τ {\displaystyle \tau } must be strictly coarser than τ F {\displaystyle \tau _{\mathcal {F}}} (meaning that τ ⊆ τ F {\displaystyle \tau \subseteq \tau _{\mathcal {F}}} and τ ≠ τ F ; {\displaystyle \tau \neq \tau _{\mathcal {F}};} this will be written τ ⊊ τ F {\displaystyle \tau \subsetneq \tau _{\mathcal {F}}} ) and moreover, for any subset S ⊆ F {\displaystyle {\mathcal {S}}\subseteq {\mathcal {F}}} the topology τ {\displaystyle \tau } will also be strictly coarser than the final topology τ S {\displaystyle \tau _{\mathcal {S}}} that S {\displaystyle {\mathcal {S}}} induces on X {\displaystyle X} (because τ F ⊆ τ S {\displaystyle \tau _{\mathcal {F}}\subseteq \tau _{\mathcal {S}}} ); that is, τ ⊊ τ S . 
{\displaystyle \tau \subsetneq \tau _{\mathcal {S}}.} Suppose that in addition, G := { g a : a ∈ A } {\displaystyle {\mathcal {G}}:=\left\{g_{a}:a\in A\right\}} is an A {\displaystyle A} -indexed family of X {\displaystyle X} -valued maps g a : Z a → X {\displaystyle g_{a}:Z_{a}\to X} whose domains are topological spaces ( Z a , ζ a ) . {\displaystyle \left(Z_{a},\zeta _{a}\right).} If every g a : ( Z a , ζ a ) → ( X , τ F ) {\displaystyle g_{a}:\left(Z_{a},\zeta _{a}\right)\to \left(X,\tau _{\mathcal {F}}\right)} is continuous then adding these maps to the family F {\displaystyle {\mathcal {F}}} will not change the final topology on X ; {\displaystyle X;} that is, τ F ∪ G = τ F . {\displaystyle \tau _{{\mathcal {F}}\cup {\mathcal {G}}}=\tau _{\mathcal {F}}.} Explicitly, this means that the final topology on X {\displaystyle X} induced by the "extended family" F ∪ G {\displaystyle {\mathcal {F}}\cup {\mathcal {G}}} is equal to the final topology τ F {\displaystyle \tau _{\mathcal {F}}} induced by the original family F = { f i : i ∈ I } . {\displaystyle {\mathcal {F}}=\left\{f_{i}:i\in I\right\}.} However, had there instead existed even just one map g a 0 {\displaystyle g_{a_{0}}} such that g a 0 : ( Z a 0 , ζ a 0 ) → ( X , τ F ) {\displaystyle g_{a_{0}}:\left(Z_{a_{0}},\zeta _{a_{0}}\right)\to \left(X,\tau _{\mathcal {F}}\right)} was not continuous, then the final topology τ F ∪ G {\displaystyle \tau _{{\mathcal {F}}\cup {\mathcal {G}}}} on X {\displaystyle X} induced by the "extended family" F ∪ G {\displaystyle {\mathcal {F}}\cup {\mathcal {G}}} would necessarily be strictly coarser than the final topology τ F {\displaystyle \tau _{\mathcal {F}}} induced by F ; {\displaystyle {\mathcal {F}};} that is, τ F ∪ G ⊊ τ F {\displaystyle \tau _{{\mathcal {F}}\cup {\mathcal {G}}}\subsetneq \tau _{\mathcal {F}}} (see this footnote for an explanation). 
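The characterization of the final topology via continuous maps given above (g out of X is continuous if and only if every composite g ∘ f_i is) can also be checked by enumeration on finite spaces; in this sketch, is_continuous and the example topologies are illustrative, and the map f and its quotient topology are as in the earlier finite example:

```python
def is_continuous(g, tau_dom, tau_cod):
    """g is a dict from the domain's points; continuity means every
    open set of the codomain has an open preimage."""
    dom = set(g)
    return all(frozenset(p for p in dom if g[p] in V) in tau_dom
               for V in tau_cod)

# f : Y -> X collapses 1 and 2; tau_X is the final (quotient) topology
# that f induces on X = {"a", "b"} from tau_Y.
tau_Y = {frozenset(), frozenset({0}), frozenset({0, 1, 2})}
f = {0: "a", 1: "b", 2: "b"}
tau_X = {frozenset(), frozenset({"a"}), frozenset({"a", "b"})}

# Codomain Z = {"u", "v"} with a Sierpinski-like topology.
tau_Z = {frozenset(), frozenset({"u"}), frozenset({"u", "v"})}

# g is continuous from (X, tau_X) exactly when g o f is continuous from Y.
for g in ({"a": "u", "b": "v"}, {"a": "v", "b": "u"}):
    g_after_f = {y: g[f[y]] for y in f}  # the composite g o f on Y
    assert is_continuous(g, tau_X, tau_Z) == is_continuous(g_after_f, tau_Y, tau_Z)
print("characterization holds for both test maps")
```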
== Final topology on the direct limit of finite-dimensional Euclidean spaces == Let R ∞ := { ( x 1 , x 2 , … ) ∈ R N : all but finitely many x i are equal to 0 } , {\displaystyle \mathbb {R} ^{\infty }~:=~\left\{\left(x_{1},x_{2},\ldots \right)\in \mathbb {R} ^{\mathbb {N} }~:~{\text{ all but finitely many }}x_{i}{\text{ are equal to }}0\right\},} denote the space of finite sequences, where R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} denotes the space of all real sequences. For every natural number n ∈ N , {\displaystyle n\in \mathbb {N} ,} let R n {\displaystyle \mathbb {R} ^{n}} denote the usual Euclidean space endowed with the Euclidean topology and let In R n : R n → R ∞ {\displaystyle \operatorname {In} _{\mathbb {R} ^{n}}:\mathbb {R} ^{n}\to \mathbb {R} ^{\infty }} denote the inclusion map defined by In R n ⁡ ( x 1 , … , x n ) := ( x 1 , … , x n , 0 , 0 , … ) {\displaystyle \operatorname {In} _{\mathbb {R} ^{n}}\left(x_{1},\ldots ,x_{n}\right):=\left(x_{1},\ldots ,x_{n},0,0,\ldots \right)} so that its image is Im ⁡ ( In R n ) = { ( x 1 , … , x n , 0 , 0 , … ) : x 1 , … , x n ∈ R } = R n × { ( 0 , 0 , … ) } {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)=\left\{\left(x_{1},\ldots ,x_{n},0,0,\ldots \right)~:~x_{1},\ldots ,x_{n}\in \mathbb {R} \right\}=\mathbb {R} ^{n}\times \left\{(0,0,\ldots )\right\}} and consequently, R ∞ = ⋃ n ∈ N Im ⁡ ( In R n ) . {\displaystyle \mathbb {R} ^{\infty }=\bigcup _{n\in \mathbb {N} }\operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right).} Endow the set R ∞ {\displaystyle \mathbb {R} ^{\infty }} with the final topology τ ∞ {\displaystyle \tau ^{\infty }} induced by the family F := { In R n ⁡ : n ∈ N } {\displaystyle {\mathcal {F}}:=\left\{\;\operatorname {In} _{\mathbb {R} ^{n}}~:~n\in \mathbb {N} \;\right\}} of all inclusion maps. 
With this topology, R ∞ {\displaystyle \mathbb {R} ^{\infty }} becomes a complete Hausdorff locally convex sequential topological vector space that is not a Fréchet–Urysohn space. The topology τ ∞ {\displaystyle \tau ^{\infty }} is strictly finer than the subspace topology induced on R ∞ {\displaystyle \mathbb {R} ^{\infty }} by R N , {\displaystyle \mathbb {R} ^{\mathbb {N} },} where R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} is endowed with its usual product topology. Endow the image Im ⁡ ( In R n ) {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)} with the final topology induced on it by the bijection In R n : R n → Im ⁡ ( In R n ) ; {\displaystyle \operatorname {In} _{\mathbb {R} ^{n}}:\mathbb {R} ^{n}\to \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right);} that is, it is endowed with the Euclidean topology transferred to it from R n {\displaystyle \mathbb {R} ^{n}} via In R n . {\displaystyle \operatorname {In} _{\mathbb {R} ^{n}}.} This topology on Im ⁡ ( In R n ) {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)} is equal to the subspace topology induced on it by ( R ∞ , τ ∞ ) . {\displaystyle \left(\mathbb {R} ^{\infty },\tau ^{\infty }\right).} A subset S ⊆ R ∞ {\displaystyle S\subseteq \mathbb {R} ^{\infty }} is open (respectively, closed) in ( R ∞ , τ ∞ ) {\displaystyle \left(\mathbb {R} ^{\infty },\tau ^{\infty }\right)} if and only if for every n ∈ N , {\displaystyle n\in \mathbb {N} ,} the set S ∩ Im ⁡ ( In R n ) {\displaystyle S\cap \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)} is an open (respectively, closed) subset of Im ⁡ ( In R n ) . {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right).} The topology τ ∞ {\displaystyle \tau ^{\infty }} is coherent with the family of subspaces S := { Im ⁡ ( In R n ) : n ∈ N } . 
{\displaystyle \mathbb {S} :=\left\{\;\operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)~:~n\in \mathbb {N} \;\right\}.} This makes ( R ∞ , τ ∞ ) {\displaystyle \left(\mathbb {R} ^{\infty },\tau ^{\infty }\right)} into an LB-space. Consequently, if v ∈ R ∞ {\displaystyle v\in \mathbb {R} ^{\infty }} and v ∙ {\displaystyle v_{\bullet }} is a sequence in R ∞ {\displaystyle \mathbb {R} ^{\infty }} then v ∙ → v {\displaystyle v_{\bullet }\to v} in ( R ∞ , τ ∞ ) {\displaystyle \left(\mathbb {R} ^{\infty },\tau ^{\infty }\right)} if and only if there exists some n ∈ N {\displaystyle n\in \mathbb {N} } such that both v {\displaystyle v} and v ∙ {\displaystyle v_{\bullet }} are contained in Im ⁡ ( In R n ) {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)} and v ∙ → v {\displaystyle v_{\bullet }\to v} in Im ⁡ ( In R n ) . {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right).} Often, for every n ∈ N , {\displaystyle n\in \mathbb {N} ,} the inclusion map In R n {\displaystyle \operatorname {In} _{\mathbb {R} ^{n}}} is used to identify R n {\displaystyle \mathbb {R} ^{n}} with its image Im ⁡ ( In R n ) {\displaystyle \operatorname {Im} \left(\operatorname {In} _{\mathbb {R} ^{n}}\right)} in R ∞ ; {\displaystyle \mathbb {R} ^{\infty };} explicitly, the elements ( x 1 , … , x n ) ∈ R n {\displaystyle \left(x_{1},\ldots ,x_{n}\right)\in \mathbb {R} ^{n}} and ( x 1 , … , x n , 0 , 0 , 0 , … ) {\displaystyle \left(x_{1},\ldots ,x_{n},0,0,0,\ldots \right)} are identified together. 
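The convergence criterion above is quite concrete: a sequence can converge in (R^∞, τ^∞) only if all of its terms sit inside a single Im(In_{R^n}). The sketch below (hypothetical helper names; finite sequences modeled as tuples) illustrates why the standard basis vectors e_k, although they tend to 0 in every fixed coordinate, fail this requirement: their supports are unbounded.

```python
def support(v):
    """Indices of the nonzero coordinates of a finite sequence (a tuple)."""
    return {i for i, x in enumerate(v) if x != 0}

def common_ambient_dimension(terms):
    """Smallest n such that every given term lies in Im(In_{R^n})."""
    return 1 + max((max(support(v), default=-1) for v in terms), default=-1)

# Standard basis vectors e_k = (0, ..., 0, 1) with the 1 in coordinate k.
e = lambda k: (0,) * (k - 1) + (1,)

# Any R^n containing the first k terms must have n >= k, so no single
# Im(In_{R^n}) contains the whole sequence: by the criterion above, (e_k)
# does not converge in tau^infinity, even though each fixed coordinate is
# eventually 0 (so e_k -> 0 in the product topology of R^N).
dims = [common_ambient_dimension([e(j) for j in range(1, k + 1)]) for k in (1, 5, 50)]
```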
Under this identification, ( ( R ∞ , τ ∞ ) , ( In R n ) n ∈ N ) {\displaystyle \left(\left(\mathbb {R} ^{\infty },\tau ^{\infty }\right),\left(\operatorname {In} _{\mathbb {R} ^{n}}\right)_{n\in \mathbb {N} }\right)} becomes a direct limit of the direct system ( ( R n ) n ∈ N , ( In R m R n ) m ≤ n in N , N ) , {\displaystyle \left(\left(\mathbb {R} ^{n}\right)_{n\in \mathbb {N} },\left(\operatorname {In} _{\mathbb {R} ^{m}}^{\mathbb {R} ^{n}}\right)_{m\leq n{\text{ in }}\mathbb {N} },\mathbb {N} \right),} where for every m ≤ n , {\displaystyle m\leq n,} the map In R m R n : R m → R n {\displaystyle \operatorname {In} _{\mathbb {R} ^{m}}^{\mathbb {R} ^{n}}:\mathbb {R} ^{m}\to \mathbb {R} ^{n}} is the inclusion map defined by In R m R n ⁡ ( x 1 , … , x m ) := ( x 1 , … , x m , 0 , … , 0 ) , {\displaystyle \operatorname {In} _{\mathbb {R} ^{m}}^{\mathbb {R} ^{n}}\left(x_{1},\ldots ,x_{m}\right):=\left(x_{1},\ldots ,x_{m},0,\ldots ,0\right),} where there are n − m {\displaystyle n-m} trailing zeros. == Categorical description == In the language of category theory, the final topology construction can be described as follows. Let Y {\displaystyle Y} be a functor from a discrete category J {\displaystyle J} to the category of topological spaces Top that selects the spaces Y i {\displaystyle Y_{i}} for i ∈ J . {\displaystyle i\in J.} Let Δ {\displaystyle \Delta } be the diagonal functor from Top to the functor category TopJ (this functor sends each space X {\displaystyle X} to the constant functor to X {\displaystyle X} ). The comma category ( Y ↓ Δ ) {\displaystyle (Y\,\downarrow \,\Delta )} is then the category of co-cones from Y , {\displaystyle Y,} i.e. objects in ( Y ↓ Δ ) {\displaystyle (Y\,\downarrow \,\Delta )} are pairs ( X , f ) {\displaystyle (X,f)} where f = ( f i : Y i → X ) i ∈ J {\displaystyle f=(f_{i}:Y_{i}\to X)_{i\in J}} is a family of continuous maps to X . 
{\displaystyle X.} If U {\displaystyle U} is the forgetful functor from Top to Set and Δ′ is the diagonal functor from Set to SetJ then the comma category ( U Y ↓ Δ ′ ) {\displaystyle \left(UY\,\downarrow \,\Delta ^{\prime }\right)} is the category of all co-cones from U Y . {\displaystyle UY.} The final topology construction can then be described as a functor from ( U Y ↓ Δ ′ ) {\displaystyle \left(UY\,\downarrow \,\Delta ^{\prime }\right)} to ( Y ↓ Δ ) . {\displaystyle (Y\,\downarrow \,\Delta ).} This functor is left adjoint to the corresponding forgetful functor. == See also == Direct limit – Special case of colimit in category theory Induced topology – Inherited topology Initial topology – Coarsest topology making certain functions continuous LB-space LF-space – Topological vector space == Notes == == Citations == == References == Brown, Ronald (June 2006). Topology and Groupoids. North Charleston: CreateSpace. ISBN 1-4196-2722-8. Willard, Stephen (1970). General Topology. Addison-Wesley Series in Mathematics. Reading, MA: Addison-Wesley. ISBN 9780201087079. Zbl 0205.26601. (Provides a short, general introduction in section 9 and Exercise 9H) Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
Wikipedia/Final_topology
In mathematics, a topological algebra A {\displaystyle A} is an algebra and at the same time a topological space, where the algebraic and the topological structures are coherent in a specified sense. == Definition == A topological algebra A {\displaystyle A} over a topological field K {\displaystyle K} is a topological vector space together with a bilinear multiplication ⋅ : A × A → A {\displaystyle \cdot :A\times A\to A} , ( a , b ) ↦ a ⋅ b {\displaystyle (a,b)\mapsto a\cdot b} that turns A {\displaystyle A} into an algebra over K {\displaystyle K} and is continuous in some definite sense. Usually the continuity of the multiplication is expressed by one of the following (non-equivalent) requirements: joint continuity: for each neighbourhood of zero U ⊆ A {\displaystyle U\subseteq A} there are neighbourhoods of zero V ⊆ A {\displaystyle V\subseteq A} and W ⊆ A {\displaystyle W\subseteq A} such that V ⋅ W ⊆ U {\displaystyle V\cdot W\subseteq U} (in other words, this condition means that the multiplication is continuous as a map between topological spaces A × A → A {\displaystyle A\times A\to A} ), or stereotype continuity: for each totally bounded set S ⊆ A {\displaystyle S\subseteq A} and for each neighbourhood of zero U ⊆ A {\displaystyle U\subseteq A} there is a neighbourhood of zero V ⊆ A {\displaystyle V\subseteq A} such that S ⋅ V ⊆ U {\displaystyle S\cdot V\subseteq U} and V ⋅ S ⊆ U {\displaystyle V\cdot S\subseteq U} , or separate continuity: for each element a ∈ A {\displaystyle a\in A} and for each neighbourhood of zero U ⊆ A {\displaystyle U\subseteq A} there is a neighbourhood of zero V ⊆ A {\displaystyle V\subseteq A} such that a ⋅ V ⊆ U {\displaystyle a\cdot V\subseteq U} and V ⋅ a ⊆ U {\displaystyle V\cdot a\subseteq U} . (Certainly, joint continuity implies stereotype continuity, and stereotype continuity implies separate continuity.) 
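The neighbourhood-of-zero formulation of joint continuity can be checked by brute force on a toy example. The sketch below uses an assumed toy algebra, the field F_2 = {0, 1} with multiplication mod 2 and the discrete topology, and verifies that for every open U containing 0 there exist open V, W containing 0 with V · W ⊆ U:

```python
from itertools import product

# Assumed toy example: F_2 = {0, 1} with multiplication mod 2 and the
# discrete topology, in which every subset is open.
opens = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def mul(a, b):
    return (a * b) % 2

def jointly_continuous_at_zero():
    """For each neighbourhood U of zero, search for neighbourhoods V, W of
    zero with V . W (the set of elementwise products) contained in U."""
    nbhds = [O for O in opens if 0 in O]
    return all(
        any({mul(v, w) for v, w in product(V, W)} <= U
            for V in nbhds for W in nbhds)
        for U in nbhds)

# In the discrete topology the check always succeeds: take V = W = {0}.
```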
In the first case A {\displaystyle A} is called a "topological algebra with jointly continuous multiplication", and in the last, "with separately continuous multiplication". A unital associative topological algebra is (sometimes) called a topological ring. == History == The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931). == Examples == 1. Fréchet algebras are examples of associative topological algebras with jointly continuous multiplication. 2. Banach algebras are special cases of Fréchet algebras. 3. Stereotype algebras are examples of associative topological algebras with stereotype continuous multiplication. == Notes == == External links == Topological algebra at the nLab == References == Beckenstein, E.; Narici, L.; Suffel, C. (1977). Topological Algebras. Amsterdam: North Holland. ISBN 9780080871356. Akbarov, S.S. (2003). "Pontryagin duality in the theory of topological vector spaces and in topological algebra". Journal of Mathematical Sciences. 113 (2): 179–349. doi:10.1023/A:1020929201133. S2CID 115297067. Mallios, A. (1986). Topological Algebras. Amsterdam: North Holland. ISBN 9780080872353. Balachandran, V.K. (2000). Topological Algebras. Amsterdam: North Holland. ISBN 9780080543086. Fragoulopoulou, M. (2005). Topological Algebras with Involution. Amsterdam: North Holland. ISBN 9780444520258.
Wikipedia/Topological_algebra
The cocountable topology, also known as the countable complement topology, is a topology that can be defined on any infinite set X {\displaystyle X} . In this topology, a set is open if its complement in X {\displaystyle X} is either countable or equal to the entire set. Equivalently, the open sets consist of the empty set and all subsets of X {\displaystyle X} whose complements are countable, a property known as cocountability. The only closed sets in this topology are X {\displaystyle X} itself and the countable subsets of X {\displaystyle X} . == Definitions == Let X {\displaystyle X} be an infinite set and let T {\displaystyle {\mathcal {T}}} be the set of subsets of X {\displaystyle X} such that H ∈ T ⟺ X ∖ H is countable, or H = ∅ {\displaystyle H\in {\mathcal {T}}\iff X\setminus H{\mbox{ is countable, or}}\,H=\varnothing } then T {\displaystyle {\mathcal {T}}} is the countable complement topology on X {\displaystyle X} , and the topological space T = ( X , T ) {\displaystyle T=(X,{\mathcal {T}})} is a countable complement space. Symbolically, the topology is typically written as T = { H ⊆ X : H = ∅ or X ∖ H is countable } . {\displaystyle {\mathcal {T}}=\{H\subseteq X:H=\varnothing {\mbox{ or }}X\setminus H{\mbox{ is countable}}\}.} === Double pointed cocountable topology === Let X {\displaystyle X} be an uncountable set carrying the cocountable topology, and let D = { 0 , 1 } {\displaystyle D=\{0,1\}} be a two-point set with the indiscrete topology. The double pointed cocountable topology is the product topology on X × D {\displaystyle X\times D} : its nonempty open sets are exactly the sets U × D {\displaystyle U\times D} where U {\displaystyle U} is open in the cocountable topology, so each point is topologically indistinguishable from its "double". === Cocountable extension topology === Let X {\displaystyle X} be the real line. Now let T 1 {\displaystyle {\mathcal {T}}_{1}} be the Euclidean topology and T 2 {\displaystyle {\mathcal {T}}_{2}} be the cocountable topology on X {\displaystyle X} . The cocountable extension topology is the smallest topology containing T 1 ∪ T 2 {\displaystyle {\mathcal {T}}_{1}\cup {\mathcal {T}}_{2}} .
== Proof that cocountable topology is a topology == By definition, the empty set ∅ {\displaystyle \varnothing } is an element of T {\displaystyle {\mathcal {T}}} . Similarly, the entire set X ∈ T {\displaystyle X\in {\mathcal {T}}} , since the complement of X {\displaystyle X} relative to itself is the empty set, which is vacuously countable. Suppose A , B ∈ T {\displaystyle A,B\in {\mathcal {T}}} and let H = A ∩ B {\displaystyle H=A\cap B} . If A {\displaystyle A} or B {\displaystyle B} is empty, then H = ∅ ∈ T {\displaystyle H=\varnothing \in {\mathcal {T}}} . Otherwise X ∖ A {\displaystyle X\setminus A} and X ∖ B {\displaystyle X\setminus B} are both countable, and X ∖ H = X ∖ ( A ∩ B ) = ( X ∖ A ) ∪ ( X ∖ B ) {\displaystyle X\setminus H=X\setminus (A\cap B)=(X\setminus A)\cup (X\setminus B)} by De Morgan's laws. Because the union of two countable sets is countable, X ∖ H {\displaystyle X\setminus H} is also countable. Therefore, H = A ∩ B ∈ T {\displaystyle H=A\cap B\in {\mathcal {T}}} , as its complement is countable. Now let U ⊆ T {\displaystyle {\mathcal {U}}\subseteq {\mathcal {T}}} . If every member of U {\displaystyle {\mathcal {U}}} is empty, then ⋃ U = ∅ ∈ T {\displaystyle \bigcup {\mathcal {U}}=\varnothing \in {\mathcal {T}}} . Otherwise choose some nonempty U 0 ∈ U {\displaystyle U_{0}\in {\mathcal {U}}} , so that X ∖ U 0 {\displaystyle X\setminus U_{0}} is countable. Then X ∖ ( ⋃ U ) = ⋂ U ∈ U ( X ∖ U ) {\displaystyle X\setminus \left(\bigcup {\mathcal {U}}\right)=\bigcap _{U\in {\mathcal {U}}}(X\setminus U)} again by De Morgan's laws. This intersection is contained in X ∖ U 0 {\displaystyle X\setminus U_{0}} , and a subset of a countable set is countable, so X ∖ ( ⋃ U ) {\displaystyle X\setminus \left(\bigcup {\mathcal {U}}\right)} is countable. Thus, ⋃ U ∈ T {\displaystyle \bigcup {\mathcal {U}}\in {\mathcal {T}}} . Since all three open set axioms are met, T {\displaystyle {\mathcal {T}}} is a topology on X {\displaystyle X} . == Properties == Every set X {\displaystyle X} with the cocountable topology is Lindelöf, since every nonempty open set omits only countably many points of X {\displaystyle X} . It is also T1, as all singletons are closed.
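The three axioms verified in this proof can be machine-checked for finite collections of sets. Below is a sketch of a generic checker (helper names are illustrative), applied to the discrete topology on a three-point set, which is what the cocountable topology reduces to on a countable set, and to a collection that fails closure under unions:

```python
from itertools import chain, combinations

def is_topology(X, T):
    """Check the open-set axioms for a finite collection T of frozensets."""
    T = set(T)
    if frozenset() not in T or frozenset(X) not in T:
        return False
    if any(A & B not in T for A in T for B in T):
        return False  # not closed under (finite) intersections
    # For a finite collection, closure under arbitrary unions reduces to
    # checking the union of every subfamily.
    subfamilies = chain.from_iterable(
        combinations(list(T), r) for r in range(1, len(T) + 1))
    return all(frozenset().union(*fam) in T for fam in subfamilies)

X = {1, 2, 3}
# The discrete topology (all 8 subsets).
discrete = [frozenset(s) for r in range(4) for s in combinations(X, r)]
assert is_topology(X, discrete)
# Dropping {1, 2} breaks closure under unions: {1} U {2} is missing.
assert not is_topology(X, [frozenset(), frozenset({1}), frozenset({2}), frozenset(X)])
```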
If X {\displaystyle X} is an uncountable set, then any two nonempty open sets intersect; hence the space is not Hausdorff. However, in the cocountable topology all convergent sequences are eventually constant, so limits are unique. Since the compact subsets of X {\displaystyle X} are precisely the finite subsets, all compact subsets are closed, another condition usually related to the Hausdorff separation axiom. The cocountable topology on a countable set is the discrete topology. The cocountable topology on an uncountable set is hyperconnected, thus connected, locally connected and pseudocompact, but neither weakly countably compact nor countably metacompact, hence not compact. == Examples == Uncountable set: On any uncountable set, such as the real numbers R {\displaystyle \mathbb {R} } , the cocountable topology is incomparable with the standard topology: the set of irrational numbers is open in the cocountable topology but not in the Euclidean topology, while a bounded open interval is open in the Euclidean topology but not in the cocountable topology. The cocountable topology on an uncountable set is T1 but neither Hausdorff, first-countable, nor metrizable. Countable set: If X {\displaystyle X} is countable, then every subset of X {\displaystyle X} has a countable complement. In this case, the cocountable topology is just the discrete topology. Finite sets: A finite set is in particular countable, so the previous case applies: every subset of a finite set has a finite (hence countable) complement, and the cocountable topology on a finite set is again the discrete topology. Subspace topology: If Y ⊆ X {\displaystyle Y\subseteq X} and X {\displaystyle X} carries the cocountable topology, then Y {\displaystyle Y} inherits the subspace topology. This topology on Y {\displaystyle Y} consists of the empty set, all of Y {\displaystyle Y} , and all subsets U ⊆ Y {\displaystyle U\subseteq Y} such that Y ∖ U {\displaystyle Y\setminus U} is countable. == See also == Cofinite topology List of topologies == References ==
Wikipedia/Cocountable_topology
In mathematics, an order topology is a specific topology that can be defined on any totally ordered set. It is a natural generalization of the topology of the real numbers to arbitrary totally ordered sets. If X is a totally ordered set, the order topology on X is generated by the subbase of "open rays" { x ∣ a < x } {\displaystyle \{x\mid a<x\}} { x ∣ x < b } {\displaystyle \{x\mid x<b\}} for all a, b in X. Provided X has at least two elements, this is equivalent to saying that the open intervals ( a , b ) = { x ∣ a < x < b } {\displaystyle (a,b)=\{x\mid a<x<b\}} together with the above rays form a base for the order topology. The open sets in X are the sets that are a union of (possibly infinitely many) such open intervals and rays. A topological space X is called orderable or linearly orderable if there exists a total order on its elements such that the order topology induced by that order and the given topology on X coincide. The order topology makes X into a completely normal Hausdorff space. The standard topologies on R, Q, Z, and N are the order topologies. == Induced order topology == If Y is a subset of X, X a totally ordered set, then Y inherits a total order from X. The set Y therefore has an order topology, the induced order topology. As a subset of X, Y also has a subspace topology. The subspace topology is always at least as fine as the induced order topology, but they are not in general the same. For example, consider the subset Y = {−1} ∪ {1/n}n∈N of the rationals. Under the subspace topology, the singleton set {−1} is open in Y, but under the induced order topology, any open set containing −1 must contain all but finitely many members of the space. 
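The subbase of open rays can be turned into a topology mechanically: close under finite intersections to obtain a base, then under unions. On a finite chain every singleton {x} is an intersection of two rays, so the order topology comes out discrete, matching the remark about finite ordinals later in the article. A sketch (helper names are illustrative):

```python
from itertools import combinations

def generate_topology(X, subbase):
    """Topology generated by a subbase: finite intersections give a base,
    and unions of base elements give the open sets."""
    base = {frozenset(X)}  # the empty intersection
    for r in range(1, len(subbase) + 1):
        for fam in combinations(subbase, r):
            base.add(frozenset.intersection(*fam))
    opens = {frozenset()}
    members = list(base)
    for r in range(1, len(members) + 1):
        for fam in combinations(members, r):
            opens.add(frozenset().union(*fam))
    return opens

points = [1, 2, 3, 4]  # a finite chain with its usual order
rays = [frozenset(x for x in points if x > a) for a in points] + \
       [frozenset(x for x in points if x < b) for b in points]
tau = generate_topology(set(points), rays)
# Each singleton {x} is an intersection of two rays, so the order topology
# on a finite chain is discrete: all 16 subsets are open.
assert all(frozenset({x}) in tau for x in points)
assert len(tau) == 16
```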
== Example of a subspace of a linearly ordered space whose topology is not an order topology == Though the subspace topology of Y = {−1} ∪ {1/n}n∈N in the section above is shown not to be generated by the induced order on Y, it is nonetheless an order topology on Y; indeed, in the subspace topology every point is isolated (i.e., the singleton {y} is open in Y for every y in Y), so the subspace topology is the discrete topology on Y (the topology in which every subset of Y is open), and the discrete topology on any set is an order topology. To define a total order on Y that generates the discrete topology on Y, simply modify the induced order on Y by defining −1 to be the greatest element of Y and otherwise keeping the same order for the other points, so that in this new order (call it, say, <1) we have 1/n <1 −1 for all n ∈ N. Then, in the order topology on Y generated by <1, every point of Y is isolated in Y. We wish to define here a subset Z of a linearly ordered topological space X such that no total order on Z generates the subspace topology on Z, so that the subspace topology will not be an order topology even though it is the subspace topology of a space whose topology is an order topology. Let Z = { − 1 } ∪ ( 0 , 1 ) {\displaystyle Z=\{-1\}\cup (0,1)} in the real line. The same argument as before shows that the subspace topology on Z is not equal to the induced order topology on Z, but one can show that the subspace topology on Z cannot be equal to any order topology on Z. An argument follows. Suppose by way of contradiction that there is some strict total order < on Z such that the order topology generated by < is equal to the subspace topology on Z (note that we are not assuming that < is the induced order on Z, but rather an arbitrarily given total order on Z that generates the subspace topology). Let M = Z \ {−1} = (0,1). Then M is connected, so M is dense in itself and has no gaps with respect to <.
If −1 is not the smallest or the largest element of Z, then ( − ∞ , − 1 ) {\displaystyle (-\infty ,-1)} and ( − 1 , ∞ ) {\displaystyle (-1,\infty )} separate M, a contradiction. Assume without loss of generality that −1 is the smallest element of Z. Since {−1} is open in Z, there is some point p in M such that the interval (−1,p) is empty, so p is the minimum of M. Then M \ {p} = (0,p) ∪ (p,1) is not connected with respect to the subspace topology inherited from R. On the other hand, the subspace topology of M \ {p} inherited from the order topology of Z coincides with the order topology of M \ {p} induced by <, which is connected since there are no gaps in M \ {p} and it is dense. This is a contradiction. == Left and right order topologies == Several variants of the order topology can be given: The right order topology on X is the topology having as a base all intervals of the form ( a , ∞ ) = { x ∈ X ∣ x > a } {\displaystyle (a,\infty )=\{x\in X\mid x>a\}} , together with the set X. The left order topology on X is the topology having as a base all intervals of the form ( − ∞ , a ) = { x ∈ X ∣ x < a } {\displaystyle (-\infty ,a)=\{x\in X\mid x<a\}} , together with the set X. These topologies naturally arise when working with semicontinuous functions, in that a real-valued function on a topological space is lower semicontinuous if and only if it is continuous when the reals are equipped with the right order. The (natural) compact open topology on the resulting set of continuous functions is sometimes referred to as the semicontinuous topology. Additionally, these topologies can be used to give counterexamples in general topology. For example, the left or right order topology on a bounded set provides an example of a compact space that is not Hausdorff. The left order topology is the standard topology used for many set-theoretic purposes on a Boolean algebra. 
== Ordinal space == For any ordinal number λ one can consider the spaces of ordinal numbers [ 0 , λ ) = { α ∣ α < λ } {\displaystyle [0,\lambda )=\{\alpha \mid \alpha <\lambda \}} [ 0 , λ ] = { α ∣ α ≤ λ } {\displaystyle [0,\lambda ]=\{\alpha \mid \alpha \leq \lambda \}} together with the natural order topology. These spaces are called ordinal spaces. (Note that in the usual set-theoretic construction of ordinal numbers we have λ = [0, λ) and λ + 1 = [0, λ].) Obviously, these spaces are mostly of interest when λ is an infinite ordinal; for finite ordinals, the order topology is simply the discrete topology. When λ = ω (the first infinite ordinal), the space [0,ω) is just N with the usual (still discrete) topology, while [0,ω] is the one-point compactification of N. Of particular interest is the case when λ = ω1, the set of all countable ordinals, which is the first uncountable ordinal. The element ω1 is a limit point of the subset [0,ω1) even though no sequence of elements in [0,ω1) has the element ω1 as its limit. In particular, [0,ω1] is not first-countable. The subspace [0,ω1) is first-countable, however, since the only point in [0,ω1] without a countable local base is ω1. Some further properties include: neither [0,ω1) nor [0,ω1] is separable or second-countable; [0,ω1] is compact, while [0,ω1) is sequentially compact and countably compact, but not compact or paracompact. == Topology and ordinals == === Ordinals as topological spaces === Any ordinal number can be viewed as a topological space by endowing it with the order topology (indeed, ordinals are well-ordered, so in particular totally ordered). Unless otherwise specified, this is the usual topology given to ordinals. Moreover, if we are willing to accept a proper class as a topological space, then we may similarly view the class of all ordinals as a topological space with the order topology. The set of limit points of an ordinal α is precisely the set of limit ordinals less than α.
Successor ordinals (and zero) less than α are isolated points in α. In particular, the finite ordinals and ω are discrete topological spaces, and no ordinal beyond that is discrete. The ordinal α is compact as a topological space if and only if α is either a successor ordinal or zero. The closed sets of a limit ordinal α are just the closed sets in the sense that we have already defined, namely, those that contain a limit ordinal whenever they contain all sufficiently large ordinals below it. Any ordinal is, of course, an open subset of any larger ordinal. We can also define the topology on the ordinals in the following inductive way: 0 is the empty topological space, α+1 is obtained by taking the one-point compactification of α, and for δ a limit ordinal, δ is equipped with the inductive limit topology. Note that if α is a successor ordinal, then α is compact, in which case its one-point compactification α+1 is the disjoint union of α and a point. As topological spaces, all the ordinals are Hausdorff and even normal. They are also totally disconnected (connected components are points), scattered (every non-empty subspace has an isolated point; in this case, just take the smallest element), zero-dimensional (the topology has a clopen basis: here, write an open interval (β,γ) as the union of the clopen intervals (β,γ'+1) = [β+1,γ'] for γ'<γ). However, they are not extremally disconnected in general (there are open sets, for example the even numbers from ω, whose closure is not open). The topological spaces ω1 and its successor ω1+1 are frequently used as textbook examples of uncountable topological spaces. 
For example, in the topological space ω1+1, the element ω1 is in the closure of the subset ω1 even though no sequence of elements in ω1 has the element ω1 as its limit: an element in ω1 is a countable set; for any sequence of such sets, the union of these sets is the union of countably many countable sets, so still countable; this union is an upper bound of the elements of the sequence, and therefore of the limit of the sequence, if it has one. The space ω1 is first-countable but not second-countable, and ω1+1 has neither of these two properties, despite being compact. It is also worthy of note that any continuous function from ω1 to R (the real line) is eventually constant: so the Stone–Čech compactification of ω1 is ω1+1, just as its one-point compactification (in sharp contrast to ω, whose Stone–Čech compactification is much larger than ω). === Ordinal-indexed sequences === If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X merely means a function from α to X. This concept, a transfinite sequence or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω. If X is a topological space, we say that an α-indexed sequence of elements of X converges to a limit x when it converges as a net, in other words, when given any neighborhood U of x there is an ordinal β < α such that xι is in U for all ι ≥ β. Ordinal-indexed sequences are more powerful than ordinary (ω-indexed) sequences to determine limits in topology: for example, ω1 is a limit point of ω1+1 (because it is a limit ordinal), and, indeed, it is the limit of the ω1-indexed sequence which maps any ordinal less than ω1 to itself: however, it is not the limit of any ordinary (ω-indexed) sequence in ω1, since any such limit is less than or equal to the union of its elements, which is a countable union of countable sets, hence itself countable. 
However, ordinal-indexed sequences are not powerful enough to replace nets (or filters) in general: for example, on the Tychonoff plank (the product space ( ω 1 + 1 ) × ( ω + 1 ) {\displaystyle (\omega _{1}+1)\times (\omega +1)} ), the corner point ( ω 1 , ω ) {\displaystyle (\omega _{1},\omega )} is a limit point (it is in the closure) of the open subset ω 1 × ω {\displaystyle \omega _{1}\times \omega } , but it is not the limit of an ordinal-indexed sequence. == See also == List of topologies Lower limit topology Long line (topology) Linear continuum Order topology (functional analysis) Partially ordered space == Notes == == References == This article incorporates material from Order topology on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Order_topology
In topology and related areas of mathematics, the set of all possible topologies on a given set forms a partially ordered set. This order relation can be used for comparison of the topologies. == Definition == A topology on a set may be defined as the collection of subsets which are considered to be "open". (An alternative definition is that it is the collection of subsets which are considered "closed". These two ways of defining the topology are essentially equivalent because the complement of an open set is closed and vice versa. In the following, it doesn't matter which definition is used.) For definiteness the reader should think of a topology as the family of open sets of a topological space, since that is the standard meaning of the word "topology". Let τ1 and τ2 be two topologies on a set X such that τ1 is contained in τ2: τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} . That is, every element of τ1 is also an element of τ2. Then the topology τ1 is said to be a coarser (weaker or smaller) topology than τ2, and τ2 is said to be a finer (stronger or larger) topology than τ1. If additionally τ 1 ≠ τ 2 {\displaystyle \tau _{1}\neq \tau _{2}} we say τ1 is strictly coarser than τ2 and τ2 is strictly finer than τ1. The binary relation ⊆ defines a partial ordering relation on the set of all possible topologies on X. == Examples == The finest topology on X is the discrete topology; this topology makes all subsets open. The coarsest topology on X is the trivial topology; this topology only admits the empty set and the whole space as open sets. In function spaces and spaces of measures there are often a number of possible topologies. See topologies on the set of operators on a Hilbert space for some intricate relationships. All possible polar topologies on a dual pair are finer than the weak topology and coarser than the strong topology. The complex vector space Cn may be equipped with either its usual (Euclidean) topology, or its Zariski topology. 
In the latter, a subset V of Cn is closed if and only if it consists of all solutions to some system of polynomial equations. Since any such V also is a closed set in the ordinary sense, but not vice versa, the Zariski topology is strictly weaker than the ordinary one. == Properties == Let τ1 and τ2 be two topologies on a set X. Then the following statements are equivalent: (1) τ1 ⊆ τ2; (2) the identity map idX : (X, τ2) → (X, τ1) is a continuous map; (3) the identity map idX : (X, τ1) → (X, τ2) is a strongly/relatively open map. (The identity map idX is surjective and therefore it is strongly open if and only if it is relatively open.) Two immediate corollaries of these equivalences are: a continuous map f : X → Y remains continuous if the topology on Y becomes coarser or the topology on X finer; an open (resp. closed) map f : X → Y remains open (resp. closed) if the topology on Y becomes finer or the topology on X coarser. One can also compare topologies using neighborhood bases. Let τ1 and τ2 be two topologies on a set X and let Bi(x) be a local base for the topology τi at x ∈ X for i = 1,2. Then τ1 ⊆ τ2 if and only if for all x ∈ X, each open set U1 in B1(x) contains some open set U2 in B2(x). Intuitively, this makes sense: a finer topology should have smaller neighborhoods. == Lattice of topologies == The set of all topologies on a set X together with the partial ordering relation ⊆ forms a complete lattice that is also closed under arbitrary intersections. That is, any collection of topologies on X has a meet (or infimum) and a join (or supremum). The meet of a collection of topologies is the intersection of those topologies. The join, however, is not generally the union of those topologies (the union of two topologies need not be a topology) but rather the topology generated by the union. Every complete lattice is also a bounded lattice, which is to say that it has a greatest and least element.
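The meet and join just described can be computed for finite topologies: the meet is the literal intersection, while the join must close the union under intersections and then unions. A sketch on a three-point set (an illustrative example, not from the article), showing that the plain union of two topologies need not be a topology:

```python
from itertools import combinations

def join(X, tau1, tau2):
    """Join of two finite topologies: the topology generated by their union."""
    sets = set(tau1) | set(tau2)
    changed = True
    while changed:  # close under pairwise intersections
        changed = False
        for A in list(sets):
            for B in list(sets):
                if A & B not in sets:
                    sets.add(A & B)
                    changed = True
    opens = {frozenset()}
    members = list(sets)
    for r in range(1, len(members) + 1):
        for fam in combinations(members, r):  # close under unions
            opens.add(frozenset().union(*fam))
    return opens

X = {1, 2, 3}
tau1 = {frozenset(), frozenset({1}), frozenset(X)}
tau2 = {frozenset(), frozenset({2}), frozenset(X)}
meet = tau1 & tau2  # the meet is the literal intersection: here {∅, X}
tau = join(X, tau1, tau2)
# The plain union is not a topology: it lacks {1} ∪ {2} = {1, 2};
# the join adds it.
assert frozenset({1, 2}) in tau
assert meet == {frozenset(), frozenset(X)}
```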
In the case of topologies, the greatest element is the discrete topology and the least element is the trivial topology. The lattice of topologies on a set X {\displaystyle X} is a complemented lattice; that is, given a topology τ {\displaystyle \tau } on X {\displaystyle X} there exists a topology τ ′ {\displaystyle \tau '} on X {\displaystyle X} such that the intersection τ ∩ τ ′ {\displaystyle \tau \cap \tau '} is the trivial topology and the topology generated by the union τ ∪ τ ′ {\displaystyle \tau \cup \tau '} is the discrete topology. If the set X {\displaystyle X} has at least three elements, the lattice of topologies on X {\displaystyle X} is not modular, and hence not distributive either. == See also == Initial topology, the coarsest topology on a set to make a family of mappings from that set continuous Final topology, the finest topology on a set to make a family of mappings into that set continuous == Notes == == References == Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
Wikipedia/Coarsest_topology
In topology and related areas of mathematics, a product space is the Cartesian product of a family of topological spaces equipped with a natural topology called the product topology. This topology differs from another, perhaps more natural-seeming, topology called the box topology, which can also be given to a product space and which agrees with the product topology when the product is over only finitely many spaces. However, the product topology is "correct" in that it makes the product space a categorical product of its factors, whereas the box topology is too fine; in that sense the product topology is the natural topology on the Cartesian product. == Definition == Throughout, I {\displaystyle I} will be some non-empty index set and for every index i ∈ I , {\displaystyle i\in I,} let X i {\displaystyle X_{i}} be a topological space. Denote the Cartesian product of the sets X i {\displaystyle X_{i}} by X := ∏ X ∙ := ∏ i ∈ I X i {\displaystyle X:=\prod X_{\bullet }:=\prod _{i\in I}X_{i}} and for every index i ∈ I , {\displaystyle i\in I,} denote the i {\displaystyle i} -th canonical projection by p i : ∏ j ∈ I X j → X i , ( x j ) j ∈ I ↦ x i . {\displaystyle {\begin{aligned}p_{i}:\ \prod _{j\in I}X_{j}&\to X_{i},\\[3mu](x_{j})_{j\in I}&\mapsto x_{i}.\\\end{aligned}}} The product topology, sometimes called the Tychonoff topology, on ∏ i ∈ I X i {\textstyle \prod _{i\in I}X_{i}} is defined to be the coarsest topology (that is, the topology with the fewest open sets) for which all the projections p i : ∏ X ∙ → X i {\textstyle p_{i}:\prod X_{\bullet }\to X_{i}} are continuous. It is the initial topology on ∏ i ∈ I X i {\textstyle \prod _{i\in I}X_{i}} with respect to the family of projections { p i | i ∈ I } {\displaystyle \left\{p_{i}\mathbin {\big \vert } i\in I\right\}} . The Cartesian product X := ∏ i ∈ I X i {\textstyle X:=\prod _{i\in I}X_{i}} endowed with the product topology is called the product space. 
The open sets in the product topology are arbitrary unions (finite or infinite) of sets of the form ∏ i ∈ I U i , {\textstyle \prod _{i\in I}U_{i},} where each U i {\displaystyle U_{i}} is open in X i {\displaystyle X_{i}} and U i ≠ X i {\displaystyle U_{i}\neq X_{i}} for only finitely many i . {\displaystyle i.} In particular, for a finite product (in particular, for the product of two topological spaces), the set of all Cartesian products between one basis element from each X i {\displaystyle X_{i}} gives a basis for the product topology of ∏ i ∈ I X i . {\textstyle \prod _{i\in I}X_{i}.} That is, for a finite product, the set of all ∏ i ∈ I U i , {\textstyle \prod _{i\in I}U_{i},} where U i {\displaystyle U_{i}} is an element of the (chosen) basis of X i , {\displaystyle X_{i},} is a basis for the product topology of ∏ i ∈ I X i . {\textstyle \prod _{i\in I}X_{i}.} The product topology on ∏ i ∈ I X i {\textstyle \prod _{i\in I}X_{i}} is the topology generated by sets of the form p i − 1 ( U i ) , {\displaystyle p_{i}^{-1}\left(U_{i}\right),} where i ∈ I {\displaystyle i\in I} and U i {\displaystyle U_{i}} is an open subset of X i . {\displaystyle X_{i}.} In other words, the sets { p i − 1 ( U i ) | i ∈ I and U i ⊆ X i is open in X i } {\displaystyle \left\{p_{i}^{-1}\left(U_{i}\right)\mathbin {\big \vert } i\in I{\text{ and }}U_{i}\subseteq X_{i}{\text{ is open in }}X_{i}\right\}} form a subbase for the topology on X . {\displaystyle X.} A subset of X {\displaystyle X} is open if and only if it is a (possibly infinite) union of intersections of finitely many sets of the form p i − 1 ( U i ) . {\displaystyle p_{i}^{-1}\left(U_{i}\right).} The p i − 1 ( U i ) {\displaystyle p_{i}^{-1}\left(U_{i}\right)} are sometimes called open cylinders, and their intersections are cylinder sets. 
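For a product of two finite spaces these sets can be enumerated outright. A small Python sketch (the spaces and helper names are hypothetical, chosen for illustration): the open cylinders are the projection preimages, and for a finite product the finite intersections of cylinders are exactly the boxes U × V with U, V open, which form a basis:

```python
# Two small topological spaces (a Sierpinski-like space and a trivial one):
X, tau_X = {0, 1}, [set(), {0}, {0, 1}]
Y, tau_Y = {'a', 'b'}, [set(), {'a', 'b'}]
points = {(x, y) for x in X for y in Y}

def cylinder_x(U):
    """Open cylinder p_x^{-1}(U): constrain only the first coordinate."""
    return frozenset((x, y) for (x, y) in points if x in U)

def cylinder_y(V):
    """Open cylinder p_y^{-1}(V): constrain only the second coordinate."""
    return frozenset((x, y) for (x, y) in points if y in V)

# Subbase: all open cylinders.  Intersecting one cylinder from each factor
# yields the basic open boxes U x V:
basis = {cylinder_x(U) & cylinder_y(V) for U in tau_X for V in tau_Y}
assert frozenset({(0, 'a'), (0, 'b')}) in basis        # {0} x Y is basic open
# each cylinder itself is basic (take the other factor to be the whole space):
assert all(cylinder_x(U) in basis for U in tau_X)
assert all(cylinder_y(V) in basis for V in tau_Y)
```

Because each cylinder is itself a basic set, every projection preimage of an open set is open, which is the continuity of the projections that the product topology is defined to guarantee with as few open sets as possible.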
The product topology is also called the topology of pointwise convergence because a sequence (or more generally, a net) in ∏ i ∈ I X i {\textstyle \prod _{i\in I}X_{i}} converges if and only if all its projections to the spaces X i {\displaystyle X_{i}} converge. Explicitly, a sequence s ∙ = ( s n ) n = 1 ∞ {\textstyle s_{\bullet }=\left(s_{n}\right)_{n=1}^{\infty }} (respectively, a net s ∙ = ( s a ) a ∈ A {\textstyle s_{\bullet }=\left(s_{a}\right)_{a\in A}} ) converges to a given point x ∈ ∏ i ∈ I X i {\textstyle x\in \prod _{i\in I}X_{i}} if and only if p i ( s ∙ ) → p i ( x ) {\displaystyle p_{i}\left(s_{\bullet }\right)\to p_{i}(x)} in X i {\displaystyle X_{i}} for every index i ∈ I , {\displaystyle i\in I,} where p i ( s ∙ ) := p i ∘ s ∙ {\displaystyle p_{i}\left(s_{\bullet }\right):=p_{i}\circ s_{\bullet }} denotes ( p i ( s n ) ) n = 1 ∞ {\displaystyle \left(p_{i}\left(s_{n}\right)\right)_{n=1}^{\infty }} (respectively, denotes ( p i ( s a ) ) a ∈ A {\displaystyle \left(p_{i}\left(s_{a}\right)\right)_{a\in A}} ). In particular, if X i = R {\displaystyle X_{i}=\mathbb {R} } is used for all i {\displaystyle i} then the Cartesian product is the space ∏ i ∈ I R = R I {\textstyle \prod _{i\in I}\mathbb {R} =\mathbb {R} ^{I}} of all real-valued functions on I , {\displaystyle I,} and convergence in the product topology is the same as pointwise convergence of functions. == Examples == If the real line R {\displaystyle \mathbb {R} } is endowed with its standard topology then the product topology on the product of n {\displaystyle n} copies of R {\displaystyle \mathbb {R} } is equal to the ordinary Euclidean topology on R n . {\displaystyle \mathbb {R} ^{n}.} (Because n {\displaystyle n} is finite, this is also equivalent to the box topology on R n . 
{\displaystyle \mathbb {R} ^{n}.} ) The Cantor set is homeomorphic to the product of countably many copies of the discrete space { 0 , 1 } {\displaystyle \{0,1\}} and the space of irrational numbers is homeomorphic to the product of countably many copies of the natural numbers, where again each copy carries the discrete topology. Several additional examples are given in the article on the initial topology. == Properties == The set of Cartesian products between the open sets of the topologies of each X i {\displaystyle X_{i}} forms a basis for what is called the box topology on X . {\displaystyle X.} In general, the box topology is finer than the product topology, but for finite products they coincide. The product space X , {\displaystyle X,} together with the canonical projections, can be characterized by the following universal property: if Y {\displaystyle Y} is a topological space, and for every i ∈ I , {\displaystyle i\in I,} f i : Y → X i {\displaystyle f_{i}:Y\to X_{i}} is a continuous map, then there exists precisely one continuous map f : Y → X {\displaystyle f:Y\to X} such that for each i ∈ I {\displaystyle i\in I} the following diagram commutes: This shows that the product space is a product in the category of topological spaces. It follows from the above universal property that a map f : Y → X {\displaystyle f:Y\to X} is continuous if and only if f i = p i ∘ f {\displaystyle f_{i}=p_{i}\circ f} is continuous for all i ∈ I . {\displaystyle i\in I.} In many cases it is easier to check that the component functions f i {\displaystyle f_{i}} are continuous. Checking whether a map X → Y {\displaystyle X\to Y} is continuous is usually more difficult; one tries to use the fact that the p i {\displaystyle p_{i}} are continuous in some way. In addition to being continuous, the canonical projections p i : X → X i {\displaystyle p_{i}:X\to X_{i}} are open maps. This means that any open subset of the product space remains open when projected down to the X i . 
{\displaystyle X_{i}.} The converse is not true: if W {\displaystyle W} is a subspace of the product space whose projections down to all the X i {\displaystyle X_{i}} are open, then W {\displaystyle W} need not be open in X {\displaystyle X} (consider for instance W = R 2 ∖ ( 0 , 1 ) 2 . {\textstyle W=\mathbb {R} ^{2}\setminus (0,1)^{2}.} ) The canonical projections are not generally closed maps (consider for example the closed set { ( x , y ) ∈ R 2 : x y = 1 } , {\textstyle \left\{(x,y)\in \mathbb {R} ^{2}:xy=1\right\},} whose projections onto both axes are R ∖ { 0 } {\displaystyle \mathbb {R} \setminus \{0\}} ). Suppose ∏ i ∈ I S i {\textstyle \prod _{i\in I}S_{i}} is a product of arbitrary subsets, where S i ⊆ X i {\displaystyle S_{i}\subseteq X_{i}} for every i ∈ I . {\displaystyle i\in I.} If all S i {\displaystyle S_{i}} are non-empty then ∏ i ∈ I S i {\textstyle \prod _{i\in I}S_{i}} is a closed subset of the product space X {\displaystyle X} if and only if every S i {\displaystyle S_{i}} is a closed subset of X i . {\displaystyle X_{i}.} More generally, the closure of the product ∏ i ∈ I S i {\textstyle \prod _{i\in I}S_{i}} of arbitrary subsets in the product space X {\displaystyle X} is equal to the product of the closures: Cl X ( ∏ i ∈ I S i ) = ∏ i ∈ I ( Cl X i S i ) . {\displaystyle {\operatorname {Cl} _{X}}{\Bigl (}\prod _{i\in I}S_{i}{\Bigr )}=\prod _{i\in I}{\bigl (}{\operatorname {Cl} _{X_{i}}}S_{i}{\bigr )}.} Any product of Hausdorff spaces is again a Hausdorff space. Tychonoff's theorem, which is equivalent to the axiom of choice, states that any product of compact spaces is a compact space. A specialization of Tychonoff's theorem that requires only the ultrafilter lemma (and not the full strength of the axiom of choice) states that any product of compact Hausdorff spaces is a compact space. 
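The hyperbola example can be probed numerically. This is a sketch for intuition, not a proof: points of the closed set xy = 1 project to x-values that approach 0 without ever attaining it, so the image of a closed set under a projection need not be closed.

```python
# Closed set {(x, y) : x*y = 1} in R^2; its projection onto the x-axis is
# R \ {0}, which is not closed.  Powers of two keep the arithmetic exact.
hyperbola = [(x, 1.0 / x) for x in (2.0 ** -k for k in range(1, 41))]
projected = [x for (x, _) in hyperbola]        # apply the projection p_1 pointwise
assert all(x * y == 1.0 for (x, y) in hyperbola)   # the points really satisfy xy = 1
assert min(projected) > 0.0                    # 0 is never attained ...
assert min(projected) < 1e-10                  # ... yet is approached arbitrarily closely
```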
If z = ( z i ) i ∈ I ∈ X {\textstyle z=\left(z_{i}\right)_{i\in I}\in X} is fixed then the set { x = ( x i ) i ∈ I ∈ X | x i = z i for all but finitely many i } {\displaystyle \left\{x=\left(x_{i}\right)_{i\in I}\in X\mathbin {\big \vert } x_{i}=z_{i}{\text{ for all but finitely many }}i\right\}} is a dense subset of the product space X {\displaystyle X} . == Relation to other topological notions == Separation Every product of T0 spaces is T0. Every product of T1 spaces is T1. Every product of Hausdorff spaces is Hausdorff. Every product of regular spaces is regular. Every product of Tychonoff spaces is Tychonoff. A product of normal spaces need not be normal. Compactness Every product of compact spaces is compact (Tychonoff's theorem). A product of locally compact spaces need not be locally compact. However, an arbitrary product of locally compact spaces where all but finitely many are compact is locally compact (This condition is sufficient and necessary). Connectedness Every product of connected (resp. path-connected) spaces is connected (resp. path-connected). Every product of hereditarily disconnected spaces is hereditarily disconnected. Metric spaces Countable products of metric spaces are metrizable spaces. == Axiom of choice == One of many ways to express the axiom of choice is to say that it is equivalent to the statement that the Cartesian product of a collection of non-empty sets is non-empty. The proof that this is equivalent to the statement of the axiom in terms of choice functions is immediate: one needs only to pick an element from each set to find a representative in the product. Conversely, a representative of the product is a set which contains exactly one element from each component. 
The axiom of choice occurs again in the study of (topological) product spaces; for example, Tychonoff's theorem on compact sets is a more complex and subtle example of a statement that requires the axiom of choice and is equivalent to it in its most general formulation, and shows why the product topology may be considered the more useful topology to put on a Cartesian product. == See also == Disjoint union (topology) – Mathematical term Final topology – Finest topology making some functions continuous Initial topology – Coarsest topology making certain functions continuous - Sometimes called the projective limit topology Inverse limit – Construction in category theory Pointwise convergence – A notion of convergence in mathematics Quotient space (topology) – Topological space construction Subspace (topology) – Inherited topology Weak topology – Mathematical term == Notes == == References == Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129. Willard, Stephen (1970). General Topology. Reading, Mass.: Addison-Wesley Pub. Co. ISBN 0486434796. Retrieved 13 February 2013.
Wikipedia/Product_topology
In mathematics, more specifically point-set topology, a Moore space is a developable regular Hausdorff space. That is, a topological space X is a Moore space if the following conditions hold: Any two distinct points can be separated by neighbourhoods, and any closed set and any point in its complement can be separated by neighbourhoods. (X is a regular Hausdorff space.) There is a countable collection of open covers of X, such that for any closed set C and any point p in its complement there exists a cover in the collection such that every neighbourhood of p in the cover is disjoint from C. (X is a developable space.) Moore spaces are generally interesting in mathematics because they may be applied to prove interesting metrization theorems. The concept of a Moore space was formulated by R. L. Moore in the earlier part of the 20th century. == Examples and properties == Every metrizable space, X, is a Moore space. If {A(n)x} is the open cover of X (indexed by x in X) by all balls of radius 1/n, then the collection of all such open covers as n varies over the positive integers is a development of X. Since all metrizable spaces are normal, all metric spaces are Moore spaces. Moore spaces behave like regular spaces, and unlike normal spaces, in that every subspace of a Moore space is also a Moore space. The image of a Moore space under an injective, continuous open map is always a Moore space. (The image of a regular space under an injective, continuous open map is always regular.) Both examples 2 and 3 suggest that Moore spaces are similar to regular spaces. Neither the Sorgenfrey line nor the Sorgenfrey plane is a Moore space, because they are normal and not second countable. The Moore plane (also known as the Niemytski plane) is an example of a non-metrizable Moore space. Every metacompact, separable, normal Moore space is metrizable. This theorem is known as Traylor’s theorem. 
Every locally compact, locally connected normal Moore space is metrizable. This theorem was proved by Reed and Zenor. If 2 ℵ 0 < 2 ℵ 1 {\displaystyle 2^{\aleph _{0}}<2^{\aleph _{1}}} , then every separable normal Moore space is metrizable. This theorem is known as Jones’ theorem. == Normal Moore space conjecture == For a long time, topologists were trying to prove the so-called normal Moore space conjecture: every normal Moore space is metrizable. This was inspired by the fact that all known Moore spaces that were not metrizable were also not normal. This would have been a nice metrization theorem. There were some nice partial results at first, namely the three metrization theorems stated at the end of the previous section (Traylor's theorem, the Reed–Zenor theorem, and Jones' theorem). With Jones' theorem, we see that we can drop metacompactness from Traylor's theorem, but at the cost of a set-theoretic assumption. Another example of this is Fleissner's theorem that the axiom of constructibility implies that locally compact, normal Moore spaces are metrizable. On the other hand, under the continuum hypothesis (CH) and also under Martin's axiom and not CH, there are several examples of non-metrizable normal Moore spaces. Nyikos proved that, under the so-called PMEA (Product Measure Extension Axiom), which needs a large cardinal, all normal Moore spaces are metrizable. Finally, it was later shown that the existence of a model of ZFC in which the conjecture holds implies the existence of a model of ZFC with a large cardinal, so large-cardinal assumptions are essentially necessary. Jones (1937) gave an example of a pseudonormal Moore space that is not metrizable, so the conjecture cannot be strengthened in this way. Moore himself proved the theorem that a collectionwise normal Moore space is metrizable, so strengthening normality is another way to settle the matter. == References == Lynn Arthur Steen and J. Arthur Seebach, Counterexamples in Topology, Dover Books, 1995. ISBN 0-486-68735-X Jones, F. B. 
(1937), "Concerning normal and completely normal spaces" (PDF), Bulletin of the American Mathematical Society, 43 (10): 671–677, doi:10.1090/S0002-9904-1937-06622-5, MR 1563615. Nyikos, Peter J. (2001), "A history of the normal Moore space problem", Handbook of the History of General Topology, Hist. Topol., vol. 3, Dordrecht: Kluwer Academic Publishers, pp. 1179–1212, ISBN 9780792369707, MR 1900271. The original definition by R.L. Moore appears here: MR0150722 (27 #709) Moore, R. L. Foundations of point set theory. Revised edition. American Mathematical Society Colloquium Publications, Vol. XIII American Mathematical Society, Providence, R.I. 1962 xi+419 pp. (Reviewer: F. Burton Jones) Historical information can be found here: MR0199840 (33 #7980) Jones, F. Burton "Metrization". American Mathematical Monthly 73 1966 571–576. (Reviewer: R. W. Bagley) Historical information can be found here: MR0203661 (34 #3510) Bing, R. H. "Challenging conjectures". American Mathematical Monthly 74 1967 no. 1, part II, 56–64; Vickery's theorem may be found here: MR0001909 (1,317f) Vickery, C. W. "Axioms for Moore spaces and metric spaces". Bulletin of the American Mathematical Society 46, (1940). 560–564 This article incorporates material from Moore space on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Moore_space_(topology)
In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set. == Definitions == === Neighbourhood of a point === If X {\displaystyle X} is a topological space and p {\displaystyle p} is a point in X , {\displaystyle X,} then a neighbourhood of p {\displaystyle p} is a subset V {\displaystyle V} of X {\displaystyle X} that includes an open set U {\displaystyle U} containing p {\displaystyle p} , p ∈ U ⊆ V ⊆ X . {\displaystyle p\in U\subseteq V\subseteq X.} This is equivalent to the point p ∈ X {\displaystyle p\in X} belonging to the topological interior of V {\displaystyle V} in X . {\displaystyle X.} The neighbourhood V {\displaystyle V} need not be an open subset of X . {\displaystyle X.} When V {\displaystyle V} is open (resp. closed, compact, etc.) in X , {\displaystyle X,} it is called an open neighbourhood (resp. closed neighbourhood, compact neighbourhood, etc.). Some authors require neighbourhoods to be open, so it is important to note their conventions. A set that is a neighbourhood of each of its points is open since it can be expressed as the union of open sets containing each of its points. A closed rectangle, as illustrated in the figure, is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle. The collection of all neighbourhoods of a point is called the neighbourhood system at the point. 
=== Neighbourhood of a set === If S {\displaystyle S} is a subset of a topological space X {\displaystyle X} , then a neighbourhood of S {\displaystyle S} is a set V {\displaystyle V} that includes an open set U {\displaystyle U} containing S {\displaystyle S} , S ⊆ U ⊆ V ⊆ X . {\displaystyle S\subseteq U\subseteq V\subseteq X.} It follows that a set V {\displaystyle V} is a neighbourhood of S {\displaystyle S} if and only if it is a neighbourhood of all the points in S . {\displaystyle S.} Furthermore, V {\displaystyle V} is a neighbourhood of S {\displaystyle S} if and only if S {\displaystyle S} is a subset of the interior of V . {\displaystyle V.} A neighbourhood of S {\displaystyle S} that is also an open subset of X {\displaystyle X} is called an open neighbourhood of S . {\displaystyle S.} The neighbourhood of a point is just a special case of this definition. == In a metric space == In a metric space M = ( X , d ) , {\displaystyle M=(X,d),} a set V {\displaystyle V} is a neighbourhood of a point p {\displaystyle p} if there exists an open ball with center p {\displaystyle p} and radius r > 0 , {\displaystyle r>0,} such that B r ( p ) = B ( p ; r ) = { x ∈ X : d ( x , p ) < r } {\displaystyle B_{r}(p)=B(p;r)=\{x\in X:d(x,p)<r\}} is contained in V . {\displaystyle V.} V {\displaystyle V} is called a uniform neighbourhood of a set S {\displaystyle S} if there exists a positive number r {\displaystyle r} such that for all elements p {\displaystyle p} of S , {\displaystyle S,} B r ( p ) = { x ∈ X : d ( x , p ) < r } {\displaystyle B_{r}(p)=\{x\in X:d(x,p)<r\}} is contained in V . 
{\displaystyle V.} Under the same condition, for r > 0 , {\displaystyle r>0,} the r {\displaystyle r} -neighbourhood S r {\displaystyle S_{r}} of a set S {\displaystyle S} is the set of all points in X {\displaystyle X} that are at distance less than r {\displaystyle r} from S {\displaystyle S} (or equivalently, S r {\displaystyle S_{r}} is the union of all the open balls of radius r {\displaystyle r} that are centered at a point in S {\displaystyle S} ): S r = ⋃ p ∈ S B r ( p ) . {\displaystyle S_{r}=\bigcup \limits _{p\in {}S}B_{r}(p).} It directly follows that an r {\displaystyle r} -neighbourhood is a uniform neighbourhood, and that a set is a uniform neighbourhood if and only if it contains an r {\displaystyle r} -neighbourhood for some value of r . {\displaystyle r.} == Examples == Given the set of real numbers R {\displaystyle \mathbb {R} } with the usual Euclidean metric and a subset V {\displaystyle V} defined as V := ⋃ n ∈ N B ( n ; 1 / n ) , {\displaystyle V:=\bigcup _{n\in \mathbb {N} }B\left(n\,;\,1/n\right),} then V {\displaystyle V} is a neighbourhood for the set N {\displaystyle \mathbb {N} } of natural numbers, but is not a uniform neighbourhood of this set. == Topology from neighbourhoods == The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then open sets as those sets containing a neighbourhood of each of their points. 
A neighbourhood system on X {\displaystyle X} is the assignment of a filter N ( x ) {\displaystyle N(x)} of subsets of X {\displaystyle X} to each x {\displaystyle x} in X , {\displaystyle X,} such that the point x {\displaystyle x} is an element of each U {\displaystyle U} in N ( x ) {\displaystyle N(x)} each U {\displaystyle U} in N ( x ) {\displaystyle N(x)} contains some V {\displaystyle V} in N ( x ) {\displaystyle N(x)} such that for each y {\displaystyle y} in V , {\displaystyle V,} U {\displaystyle U} is in N ( y ) . {\displaystyle N(y).} One can show that both definitions are compatible, that is, the topology obtained from the neighbourhood system defined using open sets is the original one, and vice versa when starting out from a neighbourhood system. == Uniform neighbourhoods == In a uniform space S = ( X , Φ ) , {\displaystyle S=(X,\Phi ),} V {\displaystyle V} is called a uniform neighbourhood of P {\displaystyle P} if there exists an entourage U ∈ Φ {\displaystyle U\in \Phi } such that V {\displaystyle V} contains all points of X {\displaystyle X} that are U {\displaystyle U} -close to some point of P ; {\displaystyle P;} that is, U [ x ] ⊆ V {\displaystyle U[x]\subseteq V} for all x ∈ P . {\displaystyle x\in P.} == Deleted neighbourhood == A deleted neighbourhood of a point p {\displaystyle p} (sometimes called a punctured neighbourhood) is a neighbourhood of p , {\displaystyle p,} without { p } . {\displaystyle \{p\}.} For instance, the interval ( − 1 , 1 ) = { y : − 1 < y < 1 } {\displaystyle (-1,1)=\{y:-1<y<1\}} is a neighbourhood of p = 0 {\displaystyle p=0} in the real line, so the set ( − 1 , 0 ) ∪ ( 0 , 1 ) = ( − 1 , 1 ) ∖ { 0 } {\displaystyle (-1,0)\cup (0,1)=(-1,1)\setminus \{0\}} is a deleted neighbourhood of 0. {\displaystyle 0.} A deleted neighbourhood of a given point is not in fact a neighbourhood of the point. 
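Returning to the metric example above, the set V = ⋃ₙ B(n; 1/n) can be tested pointwise. The following Python sketch (the helper name and sample points are chosen for illustration) shows that V is a neighbourhood of each natural number while no single radius r works uniformly, because the balls shrink:

```python
import math

def in_V(x):
    """x lies in V = union of B(n; 1/n) iff |x - n| < 1/n for some positive
    integer n; only the integers adjacent to x can qualify, since 1/n <= 1."""
    return any(n >= 1 and abs(x - n) < 1.0 / n
               for n in (math.floor(x), math.ceil(x)))

# V is a neighbourhood of each n in N: points well inside B(n; 1/n) belong to V
assert all(in_V(n + 0.4 / n) for n in (1, 2, 10, 1000))

# but not a uniform neighbourhood: for any fixed r > 0, once 1/n < r/2 the
# point n + r/2 is within r of n yet escapes V
r = 0.01
n = 1000                     # here 1/n = 0.001 < r/2 = 0.005
assert not in_V(n + r / 2)   # 1000.005 is r/2-close to N but outside V
```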
The concept of deleted neighbourhood occurs in the definition of the limit of a function and in the definition of limit points (among other things). == See also == Isolated point – Point of a subset S around which there are no other points of S Neighbourhood system – Concept in mathematics Region (mathematics) – Connected open subset of a topological space Tubular neighbourhood – neighborhood of a submanifold homeomorphic to that submanifold’s normal bundle == Notes == == References == Bredon, Glen E. (1993). Topology and geometry. New York: Springer-Verlag. ISBN 0-387-97926-3. Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4. Kaplansky, Irving (2001). Set Theory and Metric Spaces. American Mathematical Society. ISBN 0-8218-2694-8. Kelley, John L. (1975). General topology. New York: Springer-Verlag. ISBN 0-387-90125-6. Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
Wikipedia/Neighborhood_(topology)
In mathematics, a continuous function is a function such that a small variation of the argument induces a small variation of the value of the function. This implies there are no abrupt changes in value, known as discontinuities. More precisely, a function is continuous if arbitrarily small changes in its value can be assured by restricting to sufficiently small changes of its argument. A discontinuous function is a function that is not continuous. Until the 19th century, mathematicians largely relied on intuitive notions of continuity and considered only continuous functions. The epsilon–delta definition of a limit was introduced to formalize the definition of continuity. Continuity is one of the core concepts of calculus and mathematical analysis, where arguments and values of functions are real and complex numbers. The concept has been generalized to functions between metric spaces and between topological spaces. The latter are the most general continuous functions, and their definition is the basis of topology. A stronger form of continuity is uniform continuity. In order theory, especially in domain theory, a related concept of continuity is Scott continuity. As an example, the function H(t) denoting the height of a growing flower at time t would be considered continuous. In contrast, the function M(t) denoting the amount of money in a bank account at time t would be considered discontinuous since it "jumps" at each point in time when money is deposited or withdrawn. == History == A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f ( x ) {\displaystyle y=f(x)} as follows: an infinitely small increment α {\displaystyle \alpha } of the independent variable x always produces an infinitely small change f ( x + α ) − f ( x ) {\displaystyle f(x+\alpha )-f(x)} of the dependent variable y (see e.g. Cours d'Analyse, p. 34). 
Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. Like Bolzano, Karl Weierstrass denied continuity of a function at a point c unless it was defined at and on both sides of c, but Édouard Goursat allowed the function to be defined only at and on one side of c, and Camille Jordan allowed it even if the function was defined only at c. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854. == Real functions == === Definition === A real function that is a function from real numbers to real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve whose domain is the entire real line. A more mathematically rigorous definition is given below. Continuity of real functions is usually defined in terms of limits. A function f with variable x is continuous at the real number c, if the limit of f ( x ) , {\displaystyle f(x),} as x tends to c, is equal to f ( c ) . {\displaystyle f(c).} There are several different definitions of the (global) continuity of a function, which depend on the nature of its domain. A function is continuous on an open interval if the interval is contained in the function's domain and the function is continuous at every interval point. 
A function that is continuous on the interval ( − ∞ , + ∞ ) {\displaystyle (-\infty ,+\infty )} (the whole real line) is often called simply a continuous function; one also says that such a function is continuous everywhere. For example, all polynomial functions are continuous everywhere. A function is continuous on a semi-open or a closed interval if the interval is contained in the domain of the function, the function is continuous at every interior point of the interval, and the value of the function at each endpoint that belongs to the interval is the limit of the values of the function when the variable tends to the endpoint from the interior of the interval. For example, the function f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} is continuous on its whole domain, which is the closed interval [ 0 , + ∞ ) . {\displaystyle [0,+\infty ).} Many commonly encountered functions are partial functions that have a domain formed by all real numbers, except some isolated points. Examples include the reciprocal function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} and the tangent function x ↦ tan ⁡ x . {\displaystyle x\mapsto \tan x.} When they are continuous on their domain, one says, in some contexts, that they are continuous, although they are not continuous everywhere. In other contexts, mainly when one is interested in their behavior near the exceptional points, one says they are discontinuous. A partial function is discontinuous at a point if the point belongs to the topological closure of its domain, and either the point does not belong to the domain of the function or the function is not continuous at the point. For example, the functions x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} and x ↦ sin ⁡ ( 1 x ) {\textstyle x\mapsto \sin({\frac {1}{x}})} are discontinuous at 0, and remain discontinuous whichever value is chosen for defining them at 0. A point where a function is discontinuous is called a discontinuity. 
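The discontinuity of x ↦ sin(1/x) at 0 can be seen numerically. This is a sketch for intuition only — finitely many samples cannot prove that a limit fails to exist, but they exhibit the persistent oscillation that rules out continuity at 0:

```python
import math

g = lambda x: math.sin(1.0 / x)    # defined for x != 0

# Along x_k = 1/(k*pi + pi/2) -> 0, g alternates between values near +1 and
# -1, so g(x) approaches no single value as x -> 0: no choice of g(0) helps.
xs = [1.0 / (k * math.pi + math.pi / 2) for k in range(1, 9)]
values = [g(x) for x in xs]
assert max(values) > 0.99 and min(values) < -0.99   # both +1 and -1 are approached

# Contrast: h(x) = x**2 is continuous at 0, i.e. h(x) -> h(0) = 0 as x -> 0:
h = lambda x: x * x
assert all(abs(h(x) - h(0.0)) < 1e-6 for x in (10.0 ** -j for j in range(4, 10)))
```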
Using mathematical notation, several ways exist to define continuous functions in the three senses mentioned above. Let f : D → R {\textstyle f:D\to \mathbb {R} } be a function whose domain D {\displaystyle D} is contained in the set R {\displaystyle \mathbb {R} } of real numbers. Some (but not all) possibilities for D {\displaystyle D} are: D {\displaystyle D} is the whole real line; that is, D = R {\displaystyle D=\mathbb {R} } D {\displaystyle D} is a closed interval of the form D = [ a , b ] = { x ∈ R ∣ a ≤ x ≤ b } , {\displaystyle D=[a,b]=\{x\in \mathbb {R} \mid a\leq x\leq b\},} where a and b are real numbers D {\displaystyle D} is an open interval of the form D = ( a , b ) = { x ∈ R ∣ a < x < b } , {\displaystyle D=(a,b)=\{x\in \mathbb {R} \mid a<x<b\},} where a and b are real numbers In the case of an open interval, a {\displaystyle a} and b {\displaystyle b} do not belong to D {\displaystyle D} , and the values f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} are not defined, and, if they are defined, they do not matter for continuity on D {\displaystyle D} . ==== Definition in terms of limits of functions ==== The function f is continuous at some point c of its domain if the limit of f ( x ) , {\displaystyle f(x),} as x approaches c through the domain of f, exists and is equal to f ( c ) . {\displaystyle f(c).} In mathematical notation, this is written as lim x → c f ( x ) = f ( c ) . {\displaystyle \lim _{x\to c}{f(x)}=f(c).} In detail this means three conditions: first, f has to be defined at c (guaranteed by the requirement that c is in the domain of f). Second, the limit on the left-hand side of that equation has to exist. Third, the value of this limit must equal f ( c ) . {\displaystyle f(c).} (Here, we have assumed that the domain of f does not have any isolated points.) ==== Definition in terms of neighborhoods ==== A neighborhood of a point c is a set that contains, at least, all points within some fixed distance of c. 
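The limit-based definition can be probed numerically. The following sketch (an illustration under the assumption that sampling a few points suffices to suggest the limit, not a proof) contrasts f(x) = x², which satisfies lim f(x) = f(c) as x tends to c, with the sign function at 0:

```python
# Sample points tending to c from both sides and compare with f(c).
f = lambda x: x * x
c = 2.0
for h in (1e-1, 1e-3, 1e-6):
    assert abs(f(c + h) - f(c)) < 5 * h   # |f(c+h) - f(c)| = 4h + h^2 -> 0
    assert abs(f(c - h) - f(c)) < 5 * h   # same from the left

# Contrast: the sign function has no limit at 0 equal to sgn(0) = 0,
# since values from either side stay at distance 1 from sgn(0).
sgn = lambda x: (x > 0) - (x < 0)
for h in (1e-1, 1e-6):
    assert abs(sgn(0 + h) - sgn(0)) == 1
    assert abs(sgn(0 - h) - sgn(0)) == 1
```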
Intuitively, a function is continuous at a point c if the range of f over the neighborhood of c shrinks to a single point f ( c ) {\displaystyle f(c)} as the width of the neighborhood around c shrinks to zero. More precisely, a function f is continuous at a point c of its domain if, for any neighborhood N 1 ( f ( c ) ) {\displaystyle N_{1}(f(c))} there is a neighborhood N 2 ( c ) {\displaystyle N_{2}(c)} in its domain such that f ( x ) ∈ N 1 ( f ( c ) ) {\displaystyle f(x)\in N_{1}(f(c))} whenever x ∈ N 2 ( c ) . {\displaystyle x\in N_{2}(c).} As neighborhoods are defined in any topological space, this definition of a continuous function applies not only for real functions but also when the domain and the codomain are topological spaces and is thus the most general definition. It follows that a function is automatically continuous at every isolated point of its domain. For example, every real-valued function on the integers is continuous. ==== Definition in terms of limits of sequences ==== One can instead require that for any sequence ( x n ) n ∈ N {\displaystyle (x_{n})_{n\in \mathbb {N} }} of points in the domain which converges to c, the corresponding sequence ( f ( x n ) ) n ∈ N {\displaystyle \left(f(x_{n})\right)_{n\in \mathbb {N} }} converges to f ( c ) . {\displaystyle f(c).} In mathematical notation, ∀ ( x n ) n ∈ N ⊂ D : lim n → ∞ x n = c ⇒ lim n → ∞ f ( x n ) = f ( c ) . 
{\displaystyle \forall (x_{n})_{n\in \mathbb {N} }\subset D:\lim _{n\to \infty }x_{n}=c\Rightarrow \lim _{n\to \infty }f(x_{n})=f(c)\,.} ==== Weierstrass and Jordan definitions (epsilon–delta) of continuous functions ==== Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f : D → R {\displaystyle f:D\to \mathbb {R} } as above and an element x 0 {\displaystyle x_{0}} of the domain D {\displaystyle D} , f {\displaystyle f} is said to be continuous at the point x 0 {\displaystyle x_{0}} when the following holds: For any positive real number ε > 0 , {\displaystyle \varepsilon >0,} however small, there exists some positive real number δ > 0 {\displaystyle \delta >0} such that for all x {\displaystyle x} in the domain of f {\displaystyle f} with x 0 − δ < x < x 0 + δ , {\displaystyle x_{0}-\delta <x<x_{0}+\delta ,} the value of f ( x ) {\displaystyle f(x)} satisfies f ( x 0 ) − ε < f ( x ) < f ( x 0 ) + ε . {\displaystyle f\left(x_{0}\right)-\varepsilon <f(x)<f(x_{0})+\varepsilon .} Alternatively written, continuity of f : D → R {\displaystyle f:D\to \mathbb {R} } at x 0 ∈ D {\displaystyle x_{0}\in D} means that for every ε > 0 , {\displaystyle \varepsilon >0,} there exists a δ > 0 {\displaystyle \delta >0} such that for all x ∈ D {\displaystyle x\in D} : | x − x 0 | < δ implies | f ( x ) − f ( x 0 ) | < ε . {\displaystyle \left|x-x_{0}\right|<\delta ~~{\text{ implies }}~~|f(x)-f(x_{0})|<\varepsilon .} More intuitively, we can say that if we want to get all the f ( x ) {\displaystyle f(x)} values to stay in some small neighborhood around f ( x 0 ) , {\displaystyle f\left(x_{0}\right),} we need to choose a small enough neighborhood for the x {\displaystyle x} values around x 0 . {\displaystyle x_{0}.} If we can do that no matter how small the f ( x 0 ) {\displaystyle f(x_{0})} neighborhood is, then f {\displaystyle f} is continuous at x 0 . 
{\displaystyle x_{0}.} In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology. Weierstrass had required that the interval x 0 − δ < x < x 0 + δ {\displaystyle x_{0}-\delta <x<x_{0}+\delta } be entirely within the domain D {\displaystyle D} , but Jordan removed that restriction. ==== Definition in terms of control of the remainder ==== In proofs and numerical analysis, we often need to know how fast limits are converging, or in other words, control of the remainder. We can formalize this to a definition of continuity. A function C : [ 0 , ∞ ) → [ 0 , ∞ ] {\displaystyle C:[0,\infty )\to [0,\infty ]} is called a control function if C is non-decreasing and inf δ > 0 C ( δ ) = 0 {\displaystyle \inf _{\delta >0}C(\delta )=0} A function f : D → R {\displaystyle f:D\to R} is C-continuous at x 0 {\displaystyle x_{0}} if there exists a neighbourhood N ( x 0 ) {\textstyle N(x_{0})} such that | f ( x ) − f ( x 0 ) | ≤ C ( | x − x 0 | ) for all x ∈ D ∩ N ( x 0 ) {\displaystyle |f(x)-f(x_{0})|\leq C\left(\left|x-x_{0}\right|\right){\text{ for all }}x\in D\cap N(x_{0})} A function is continuous at x 0 {\displaystyle x_{0}} if it is C-continuous for some control function C. This approach leads naturally to refining the notion of continuity by restricting the set of admissible control functions. For a given set of control functions C {\displaystyle {\mathcal {C}}} a function is C {\displaystyle {\mathcal {C}}} -continuous if it is C {\displaystyle C} -continuous for some C ∈ C . 
{\displaystyle C\in {\mathcal {C}}.} For example, the Lipschitz, the Hölder continuous functions of exponent α and the uniformly continuous functions below are defined by the set of control functions C L i p s c h i t z = { C : C ( δ ) = K | δ | , K > 0 } {\displaystyle {\mathcal {C}}_{\mathrm {Lipschitz} }=\{C:C(\delta )=K|\delta |,\ K>0\}} C Hölder − α = { C : C ( δ ) = K | δ | α , K > 0 } {\displaystyle {\mathcal {C}}_{{\text{Hölder}}-\alpha }=\{C:C(\delta )=K|\delta |^{\alpha },\ K>0\}} C uniform cont. = { C : C ( 0 ) = 0 } {\displaystyle {\mathcal {C}}_{\text{uniform cont.}}=\{C:C(0)=0\}} respectively. ==== Definition using oscillation ==== Continuity can also be defined in terms of oscillation: a function f is continuous at a point x 0 {\displaystyle x_{0}} if and only if its oscillation at that point is zero; in symbols, ω f ( x 0 ) = 0. {\displaystyle \omega _{f}(x_{0})=0.} A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point. This definition is helpful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε {\displaystyle \varepsilon } (hence a G δ {\displaystyle G_{\delta }} set) – and gives a rapid proof of one direction of the Lebesgue integrability condition. The oscillation is equivalent to the ε − δ {\displaystyle \varepsilon -\delta } definition by a simple re-arrangement and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε 0 {\displaystyle \varepsilon _{0}} there is no δ {\displaystyle \delta } that satisfies the ε − δ {\displaystyle \varepsilon -\delta } definition, then the oscillation is at least ε 0 , {\displaystyle \varepsilon _{0},} and conversely if for every ε {\displaystyle \varepsilon } there is a desired δ , {\displaystyle \delta ,} the oscillation is 0. 
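The oscillation criterion lends itself to a direct numerical experiment. The sketch below (sampling-based, so only suggestive; `oscillation` is an ad hoc helper, not a standard library routine) estimates sup f − inf f over shrinking neighborhoods:

```python
# Estimate omega_f(x0) = lim_{d->0} (sup f - inf f over (x0-d, x0+d))
# by sampling shrinking neighborhoods.  For sgn at 0 the oscillation
# stays 2 (the jump size); for x**2 at 0 it shrinks to 0.
def oscillation(f, x0, d, n=10001):
    pts = [x0 - d + 2 * d * k / (n - 1) for k in range(n)]
    vals = [f(p) for p in pts]
    return max(vals) - min(vals)

sgn = lambda x: (x > 0) - (x < 0)
sq = lambda x: x * x

for d in (1e-1, 1e-3, 1e-6):
    assert oscillation(sgn, 0.0, d) == 2      # stays 2: discontinuous at 0
assert oscillation(sq, 0.0, 1e-6) < 1e-11     # tends to 0: continuous at 0
```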
The oscillation definition can be naturally generalized to maps from a topological space to a metric space. ==== Definition using the hyperreals ==== Cauchy defined the continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Non-standard analysis is a way of making this mathematically rigorous. The real line is augmented by adding infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows: a real-valued function f is continuous at x if it is microcontinuous at x, that is, if the increment f ( x + d x ) − f ( x ) {\displaystyle f(x+dx)-f(x)} (computed with the natural extension of f to the hyperreals) is infinitesimal for every infinitesimal d x {\displaystyle dx} (see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity. === Rules for continuity === Proving the continuity of a function by a direct application of the definition is generally not an easy task. Fortunately, in practice, most functions are built from simpler functions, and their continuity can be deduced immediately from the way they are defined, by applying the following rules: Every constant function is continuous The identity function ⁠ f ( x ) = x {\displaystyle f(x)=x} ⁠ is continuous Addition and multiplication: If the functions ⁠ f {\displaystyle f} ⁠ and ⁠ g {\displaystyle g} ⁠ are continuous on their respective domains ⁠ D f {\displaystyle D_{f}} ⁠ and ⁠ D g {\displaystyle D_{g}} ⁠, then their sum ⁠ f + g {\displaystyle f+g} ⁠ and their product ⁠ f ⋅ g {\displaystyle f\cdot g} ⁠ are continuous on the intersection ⁠ D f ∩ D g {\displaystyle D_{f}\cap D_{g}} ⁠, where ⁠ f + g {\displaystyle f+g} ⁠ and ⁠ f g {\displaystyle fg} ⁠ are defined by ⁠ ( f + g ) ( x ) = f ( x ) + g ( x ) {\displaystyle (f+g)(x)=f(x)+g(x)} ⁠ and ⁠ ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) {\displaystyle (f\cdot g)(x)=f(x)\cdot g(x)} ⁠. 
Reciprocal: If the function ⁠ f {\displaystyle f} ⁠ is continuous on the domain ⁠ D f {\displaystyle D_{f}} ⁠, then its reciprocal ⁠ 1 f {\displaystyle {\tfrac {1}{f}}} ⁠, defined by ⁠ ( 1 f ) ( x ) = 1 f ( x ) {\displaystyle ({\tfrac {1}{f}})(x)={\tfrac {1}{f(x)}}} ⁠ is continuous on the domain ⁠ D f ∖ f − 1 ( 0 ) {\displaystyle D_{f}\setminus f^{-1}(0)} ⁠, that is, the domain ⁠ D f {\displaystyle D_{f}} ⁠ from which the points ⁠ x {\displaystyle x} ⁠ such that ⁠ f ( x ) = 0 {\displaystyle f(x)=0} ⁠ are removed. Function composition: If the functions ⁠ f {\displaystyle f} ⁠ and ⁠ g {\displaystyle g} ⁠ are continuous on their respective domains ⁠ D f {\displaystyle D_{f}} ⁠ and ⁠ D g {\displaystyle D_{g}} ⁠, then the composition ⁠ g ∘ f {\displaystyle g\circ f} ⁠, defined by ⁠ ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)=g(f(x))} ⁠, is continuous on ⁠ D f ∩ f − 1 ( D g ) {\displaystyle D_{f}\cap f^{-1}(D_{g})} ⁠, that is, the part of ⁠ D f {\displaystyle D_{f}} ⁠ that is mapped by ⁠ f {\displaystyle f} ⁠ inside ⁠ D g {\displaystyle D_{g}} ⁠. The sine and cosine functions (⁠ sin ⁡ x {\displaystyle \sin x} ⁠ and ⁠ cos ⁡ x {\displaystyle \cos x} ⁠) are continuous everywhere. The exponential function ⁠ e x {\displaystyle e^{x}} ⁠ is continuous everywhere. The natural logarithm ⁠ ln ⁡ x {\displaystyle \ln x} ⁠ is continuous on the domain formed by all positive real numbers ⁠ { x ∣ x > 0 } {\displaystyle \{x\mid x>0\}} ⁠. These rules imply that every polynomial function is continuous everywhere and that a rational function is continuous everywhere where it is defined, if the numerator and the denominator have no common zeros. More generally, the quotient of two continuous functions is continuous outside the zeros of the denominator. 
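The composition rule can be illustrated concretely. In the sketch below (an illustration; the choice g = ln and f = sin is arbitrary), g∘f is continuous exactly on the part of the real line that sin maps into the domain of ln, that is, where sin x > 0:

```python
import math

# g∘f with f = sin and g = ln is continuous on {x : sin x > 0},
# e.g. on (0, pi), the part of R mapped by sin into the domain of ln.
h = lambda x: math.log(math.sin(x))

for x in (0.1, 1.0, 3.0):
    assert math.isfinite(h(x))       # inside (0, pi): defined and continuous

try:
    h(4.0)                           # sin(4) < 0: outside the domain of ln
    raised = False
except ValueError:
    raised = True
assert raised
```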
An example of a function for which the above rules are not sufficient is the sinc function, which is defined by ⁠ sinc ⁡ ( 0 ) = 1 {\displaystyle \operatorname {sinc} (0)=1} ⁠ and ⁠ sinc ⁡ ( x ) = sin ⁡ x x {\displaystyle \operatorname {sinc} (x)={\tfrac {\sin x}{x}}} ⁠ for ⁠ x ≠ 0 {\displaystyle x\neq 0} ⁠. The above rules show immediately that the function is continuous for ⁠ x ≠ 0 {\displaystyle x\neq 0} ⁠, but, for proving the continuity at ⁠ 0 {\displaystyle 0} ⁠, one has to prove lim x → 0 sin ⁡ x x = 1. {\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=1.} As this is true, one gets that the sinc function is a continuous function on all real numbers. === Examples of discontinuous functions === An example of a discontinuous function is the Heaviside step function H {\displaystyle H} , defined by H ( x ) = { 1 if x ≥ 0 0 if x < 0 {\displaystyle H(x)={\begin{cases}1&{\text{ if }}x\geq 0\\0&{\text{ if }}x<0\end{cases}}} Pick for instance ε = 1 / 2 {\displaystyle \varepsilon =1/2} . Then there is no δ {\displaystyle \delta } -neighborhood around x = 0 {\displaystyle x=0} , i.e. no open interval ( − δ , δ ) {\displaystyle (-\delta ,\;\delta )} with δ > 0 , {\displaystyle \delta >0,} that will force all the H ( x ) {\displaystyle H(x)} values to be within the ε {\displaystyle \varepsilon } -neighborhood of H ( 0 ) {\displaystyle H(0)} , i.e. within ( 1 / 2 , 3 / 2 ) {\displaystyle (1/2,\;3/2)} . Intuitively, we can think of this type of discontinuity as a sudden jump in function values. Similarly, the signum or sign function sgn ⁡ ( x ) = { 1 if x > 0 0 if x = 0 − 1 if x < 0 {\displaystyle \operatorname {sgn}(x)={\begin{cases}\;\;\ 1&{\text{ if }}x>0\\\;\;\ 0&{\text{ if }}x=0\\-1&{\text{ if }}x<0\end{cases}}} is discontinuous at x = 0 {\displaystyle x=0} but continuous everywhere else. 
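Both claims above, the ε–δ failure for the Heaviside step and the sinc limit at 0, can be checked numerically (a suggestive sketch, not a proof):

```python
import math

# Continuity of the Heaviside step H fails at 0 for eps = 1/2: every
# candidate delta-neighborhood of 0 contains a negative x with
# |H(x) - H(0)| = 1 >= eps.
H = lambda x: 1 if x >= 0 else 0
eps = 0.5
for delta in (1.0, 1e-3, 1e-9):
    x = -delta / 2                    # witness inside (-delta, delta)
    assert abs(H(x) - H(0)) >= eps

# By contrast, sinc is continuous at 0: sin(x)/x -> 1 = sinc(0),
# with error on the order of x**2 / 6.
for x in (1e-2, 1e-4, 1e-6):
    assert abs(math.sin(x) / x - 1.0) < x * x
```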
Yet another example: the function f ( x ) = { sin ⁡ ( x − 2 ) if x ≠ 0 0 if x = 0 {\displaystyle f(x)={\begin{cases}\sin \left(x^{-2}\right)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}} is continuous everywhere apart from x = 0 {\displaystyle x=0} . Besides plausible continuities and discontinuities like those above, there are also functions with behavior that is often called pathological, for example, Thomae's function, f ( x ) = { 1 if x = 0 1 q if x = p q (in lowest terms) is a rational number 0 if x is irrational . {\displaystyle f(x)={\begin{cases}1&{\text{ if }}x=0\\{\frac {1}{q}}&{\text{ if }}x={\frac {p}{q}}{\text{(in lowest terms) is a rational number}}\\0&{\text{ if }}x{\text{ is irrational}}.\end{cases}}} is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet's function, the indicator function for the set of rational numbers, D ( x ) = { 0 if x is irrational ( ∈ R ∖ Q ) 1 if x is rational ( ∈ Q ) {\displaystyle D(x)={\begin{cases}0&{\text{ if }}x{\text{ is irrational }}(\in \mathbb {R} \setminus \mathbb {Q} )\\1&{\text{ if }}x{\text{ is rational }}(\in \mathbb {Q} )\end{cases}}} is nowhere continuous. === Properties === ==== A useful lemma ==== Let f ( x ) {\displaystyle f(x)} be a function that is continuous at a point x 0 , {\displaystyle x_{0},} and y 0 {\displaystyle y_{0}} be a value such that f ( x 0 ) ≠ y 0 . {\displaystyle f\left(x_{0}\right)\neq y_{0}.} Then f ( x ) ≠ y 0 {\displaystyle f(x)\neq y_{0}} throughout some neighbourhood of x 0 . 
{\displaystyle x_{0}.} Proof: By the definition of continuity, take ε = | y 0 − f ( x 0 ) | 2 > 0 {\displaystyle \varepsilon ={\frac {|y_{0}-f(x_{0})|}{2}}>0} , then there exists δ > 0 {\displaystyle \delta >0} such that | f ( x ) − f ( x 0 ) | < | y 0 − f ( x 0 ) | 2 whenever | x − x 0 | < δ {\displaystyle \left|f(x)-f(x_{0})\right|<{\frac {\left|y_{0}-f(x_{0})\right|}{2}}\quad {\text{ whenever }}\quad |x-x_{0}|<\delta } Suppose there is a point in the neighbourhood | x − x 0 | < δ {\displaystyle |x-x_{0}|<\delta } for which f ( x ) = y 0 ; {\displaystyle f(x)=y_{0};} then we have the contradiction | f ( x 0 ) − y 0 | < | f ( x 0 ) − y 0 | 2 . {\displaystyle \left|f(x_{0})-y_{0}\right|<{\frac {\left|f(x_{0})-y_{0}\right|}{2}}.} ==== Intermediate value theorem ==== The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states: If the real-valued function f is continuous on the closed interval [ a , b ] , {\displaystyle [a,b],} and k is some number between f ( a ) {\displaystyle f(a)} and f ( b ) , {\displaystyle f(b),} then there is some number c ∈ [ a , b ] , {\displaystyle c\in [a,b],} such that f ( c ) = k . {\displaystyle f(c)=k.} For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child's height must have been 1.25 m. As a consequence, if f is continuous on [ a , b ] {\displaystyle [a,b]} and f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} differ in sign, then, at some point c ∈ [ a , b ] , {\displaystyle c\in [a,b],} f ( c ) {\displaystyle f(c)} must equal zero. ==== Extreme value theorem ==== The extreme value theorem states that if a function f is defined on a closed interval [ a , b ] {\displaystyle [a,b]} (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. 
there exists c ∈ [ a , b ] {\displaystyle c\in [a,b]} with f ( c ) ≥ f ( x ) {\displaystyle f(c)\geq f(x)} for all x ∈ [ a , b ] . {\displaystyle x\in [a,b].} The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval ( a , b ) {\displaystyle (a,b)} (or any set that is not both closed and bounded), as, for example, the continuous function f ( x ) = 1 x , {\displaystyle f(x)={\frac {1}{x}},} defined on the open interval (0,1), does not attain a maximum, being unbounded above. ==== Relation to differentiability and integrability ==== Every differentiable function f : ( a , b ) → R {\displaystyle f:(a,b)\to \mathbb {R} } is continuous, as can be shown. The converse does not hold: for example, the absolute value function f ( x ) = | x | = { x if x ≥ 0 − x if x < 0 {\displaystyle f(x)=|x|={\begin{cases}\;\;\ x&{\text{ if }}x\geq 0\\-x&{\text{ if }}x<0\end{cases}}} is everywhere continuous. However, it is not differentiable at x = 0 {\displaystyle x=0} (but is so everywhere else). Weierstrass's function is also everywhere continuous but nowhere differentiable. The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C 1 ( ( a , b ) ) . {\displaystyle C^{1}((a,b)).} More generally, the set of functions f : Ω → R {\displaystyle f:\Omega \to \mathbb {R} } (from an open interval (or open subset of R {\displaystyle \mathbb {R} } ) Ω {\displaystyle \Omega } to the reals) such that f is n {\displaystyle n} times differentiable and such that the n {\displaystyle n} -th derivative of f is continuous is denoted C n ( Ω ) . {\displaystyle C^{n}(\Omega ).} See differentiability class. 
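The absolute value example above can be made concrete: its one-sided difference quotients at 0 are exactly +1 and −1 for every step size, so no single derivative value can exist there (a numerical illustration of the statement, not a proof):

```python
# Continuity does not imply differentiability: for f(x) = |x| the
# one-sided difference quotients at 0 take different constant values
# (+1 from the right, -1 from the left), so f'(0) does not exist.
f = abs
for h in (1e-2, 1e-5, 1e-8):
    right = (f(0 + h) - f(0)) / h    # difference quotient from the right
    left = (f(0 - h) - f(0)) / (-h)  # difference quotient from the left
    assert right == 1.0 and left == -1.0
```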
In the field of computer graphics, properties related (but not identical) to C 0 , C 1 , C 2 {\displaystyle C^{0},C^{1},C^{2}} are sometimes called G 0 {\displaystyle G^{0}} (continuity of position), G 1 {\displaystyle G^{1}} (continuity of tangency), and G 2 {\displaystyle G^{2}} (continuity of curvature); see Smoothness of curves and surfaces. Every continuous function f : [ a , b ] → R {\displaystyle f:[a,b]\to \mathbb {R} } is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable but discontinuous) sign function shows. ==== Pointwise and uniform limits ==== Given a sequence f 1 , f 2 , … : D → R {\displaystyle f_{1},f_{2},\dotsc :D\to \mathbb {R} } of functions such that the limit f ( x ) := lim n → ∞ f n ( x ) {\displaystyle f(x):=\lim _{n\to \infty }f_{n}(x)} exists for all x ∈ D , {\displaystyle x\in D,} the resulting function f ( x ) {\displaystyle f(x)} is referred to as the pointwise limit of the sequence of functions ( f n ) n ∈ N . {\displaystyle \left(f_{n}\right)_{n\in \mathbb {N} }.} The pointwise limit function need not be continuous, even if all functions f n {\displaystyle f_{n}} are continuous. However, f is continuous if all functions f n {\displaystyle f_{n}} are continuous and the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, square root function, and trigonometric functions are continuous. === Directional continuity === Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. 
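For instance, the floor function is right-continuous at every integer but not left-continuous there, which the following sketch checks numerically (an illustration, not a proof):

```python
import math

# The floor function is right-continuous at the integers: approaching 1
# from the right, floor stays at 1 = floor(1); from the left it stays
# at 0, so there is a jump when 1 is approached from the left.
for h in (1e-1, 1e-4, 1e-8):
    assert math.floor(1 + h) == 1      # right-hand values match floor(1)
    assert math.floor(1 - h) == 0      # left-hand values miss floor(1) by 1
```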
Formally, f is said to be right-continuous at the point c if the following holds: For any number ε > 0 {\displaystyle \varepsilon >0} however small, there exists some number δ > 0 {\displaystyle \delta >0} such that for all x in the domain with c < x < c + δ , {\displaystyle c<x<c+\delta ,} the value of f ( x ) {\displaystyle f(x)} will satisfy | f ( x ) − f ( c ) | < ε . {\displaystyle |f(x)-f(c)|<\varepsilon .} This is the same condition as for continuous functions, except it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c {\displaystyle c-\delta <x<c} yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous. === Semicontinuity === A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0 , {\displaystyle \varepsilon >0,} there exists some number δ > 0 {\displaystyle \delta >0} such that for all x in the domain with | x − c | < δ , {\displaystyle |x-c|<\delta ,} the value of f ( x ) {\displaystyle f(x)} satisfies f ( x ) ≥ f ( c ) − ε . {\displaystyle f(x)\geq f(c)-\varepsilon .} The reverse condition is upper semi-continuity. == Continuous functions between metric spaces == The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X {\displaystyle X} equipped with a function (called a metric) d X , {\displaystyle d_{X},} that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function d X : X × X → R {\displaystyle d_{X}:X\times X\to \mathbb {R} } that satisfies a number of requirements, notably the triangle inequality. 
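As a concrete instance of the pointwise-limit phenomenon described earlier (with f_n and f_limit as ad hoc names), the sequence f_n(x) = xⁿ on [0, 1] consists of continuous functions whose pointwise limit is discontinuous, and a gap of 1/2 survives for every n, so the convergence cannot be uniform:

```python
# f_n(x) = x**n is continuous on [0, 1], but the pointwise limit is
# 0 on [0, 1) and 1 at x = 1, hence discontinuous at 1.
def f_n(n, x):
    return x ** n

def f_limit(x):
    return 1.0 if x == 1.0 else 0.0

assert f_n(5000, 0.9) < 1e-100            # pointwise: x**n -> 0 for x < 1
for n in (10, 100, 1000):
    x = 0.5 ** (1.0 / n)                  # a point with f_n(x) = 1/2
    assert x < 1.0 and f_limit(x) == 0.0
    assert abs(f_n(n, x) - 0.5) < 1e-9    # sup |f_n - f_limit| >= ~1/2
```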
Given two metric spaces ( X , d X ) {\displaystyle \left(X,d_{X}\right)} and ( Y , d Y ) {\displaystyle \left(Y,d_{Y}\right)} and a function f : X → Y {\displaystyle f:X\to Y} then f {\displaystyle f} is continuous at the point c ∈ X {\displaystyle c\in X} (with respect to the given metrics) if for any positive real number ε > 0 , {\displaystyle \varepsilon >0,} there exists a positive real number δ > 0 {\displaystyle \delta >0} such that all x ∈ X {\displaystyle x\in X} satisfying d X ( x , c ) < δ {\displaystyle d_{X}(x,c)<\delta } will also satisfy d Y ( f ( x ) , f ( c ) ) < ε . {\displaystyle d_{Y}(f(x),f(c))<\varepsilon .} As in the case of real functions above, this is equivalent to the condition that for every sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} with limit lim x n = c , {\displaystyle \lim x_{n}=c,} we have lim f ( x n ) = f ( c ) . {\displaystyle \lim f\left(x_{n}\right)=f(c).} The latter condition can be weakened as follows: f {\displaystyle f} is continuous at the point c {\displaystyle c} if and only if for every convergent sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} with limit c {\displaystyle c} , the sequence ( f ( x n ) ) {\displaystyle \left(f\left(x_{n}\right)\right)} is a Cauchy sequence, and c {\displaystyle c} is in the domain of f {\displaystyle f} . The set of points at which a function between metric spaces is continuous is a G δ {\displaystyle G_{\delta }} set – this follows from the ε − δ {\displaystyle \varepsilon -\delta } definition of continuity. This notion of continuity is applied, for example, in functional analysis. 
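As a small sketch of the metric-space definition (illustrative only; `norm` and `dist` are ad hoc helpers), the Euclidean norm on R² satisfies the ε–δ condition with δ = ε, by the reverse triangle inequality:

```python
import math
import random

# For f(v) = ||v|| from (R^2, Euclidean metric) to (R, |.|), continuity
# holds with delta = epsilon, since | ||x|| - ||c|| | <= ||x - c||.
def norm(v):
    return math.hypot(v[0], v[1])

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

random.seed(1)
for _ in range(10_000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    c = (random.uniform(-5, 5), random.uniform(-5, 5))
    # d_Y(f(x), f(c)) <= d_X(x, c), up to float rounding
    assert abs(norm(x) - norm(c)) <= dist(x, c) + 1e-12
```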
A key statement in this area says that a linear operator T : V → W {\displaystyle T:V\to W} between normed vector spaces V {\displaystyle V} and W {\displaystyle W} (which are vector spaces equipped with a compatible norm, denoted ‖ x ‖ {\displaystyle \|x\|} ) is continuous if and only if it is bounded, that is, there is a constant K {\displaystyle K} such that ‖ T ( x ) ‖ ≤ K ‖ x ‖ {\displaystyle \|T(x)\|\leq K\|x\|} for all x ∈ V . {\displaystyle x\in V.} === Uniform, Hölder and Lipschitz continuity === The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ {\displaystyle \delta } depends on ε {\displaystyle \varepsilon } and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ {\displaystyle \delta } does not depend on the point c. More precisely, it is required that for every real number ε > 0 {\displaystyle \varepsilon >0} there exists δ > 0 {\displaystyle \delta >0} such that for every c , b ∈ X {\displaystyle c,b\in X} with d X ( b , c ) < δ , {\displaystyle d_{X}(b,c)<\delta ,} we have that d Y ( f ( b ) , f ( c ) ) < ε . {\displaystyle d_{Y}(f(b),f(c))<\varepsilon .} Thus, any uniformly continuous function is continuous. The converse does not generally hold but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces. A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b , c ∈ X , {\displaystyle b,c\in X,} the inequality d Y ( f ( b ) , f ( c ) ) ≤ K ⋅ ( d X ( b , c ) ) α {\displaystyle d_{Y}(f(b),f(c))\leq K\cdot (d_{X}(b,c))^{\alpha }} holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 {\displaystyle \alpha =1} is referred to as Lipschitz continuity. 
That is, a function is Lipschitz continuous if there is a constant K such that the inequality d Y ( f ( b ) , f ( c ) ) ≤ K ⋅ d X ( b , c ) {\displaystyle d_{Y}(f(b),f(c))\leq K\cdot d_{X}(b,c)} holds for any b , c ∈ X . {\displaystyle b,c\in X.} The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations. == Continuous functions between topological spaces == Another, more abstract, notion of continuity is the continuity of functions between topological spaces in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about the neighborhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology). A function f : X → Y {\displaystyle f:X\to Y} between two topological spaces X and Y is continuous if for every open set V ⊆ Y , {\displaystyle V\subseteq Y,} the inverse image f − 1 ( V ) = { x ∈ X | f ( x ) ∈ V } {\displaystyle f^{-1}(V)=\{x\in X\;|\;f(x)\in V\}} is an open subset of X. That is, f is a function between the sets X and Y (not on the elements of the topology T X {\displaystyle T_{X}} ), but the continuity of f depends on the topologies used on X and Y. This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X. An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions f : X → T {\displaystyle f:X\to T} to any topological space T are continuous. 
On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous. === Continuity at a point === The translation in the language of neighborhoods of the ( ε , δ ) {\displaystyle (\varepsilon ,\delta )} -definition of continuity leads to the following definition of the continuity at a point: a function f : X → Y {\displaystyle f:X\to Y} is continuous at a point x ∈ X {\displaystyle x\in X} if and only if for every neighborhood V of f ( x ) {\displaystyle f(x)} in Y, there is a neighborhood U of x such that f ( U ) ⊆ V . {\displaystyle f(U)\subseteq V.} This definition is equivalent to the same statement with neighborhoods restricted to open neighborhoods and can be restated in several ways by using preimages rather than images. Also, as every set that contains a neighborhood is also a neighborhood, and f − 1 ( V ) {\displaystyle f^{-1}(V)} is the largest subset U of X such that f ( U ) ⊆ V , {\displaystyle f(U)\subseteq V,} this definition may be simplified into: f is continuous at a point x ∈ X {\displaystyle x\in X} if and only if f − 1 ( V ) {\displaystyle f^{-1}(V)} is a neighborhood of x for every neighborhood V of f ( x ) {\displaystyle f(x)} in Y. As an open set is a set that is a neighborhood of all its points, a function f : X → Y {\displaystyle f:X\to Y} is continuous at every point of X if and only if it is a continuous function. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε − δ {\displaystyle \varepsilon -\delta } definition of continuity in the context of metric spaces. In general topological spaces, there is no notion of nearness or distance. If, however, the target space is a Hausdorff space, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous. 
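Because the preimage criterion is purely set-theoretic, it can be tested exhaustively on a finite example (the spaces and maps below are ad hoc illustrations):

```python
# f is continuous iff the preimage of every open set is open.
# X = {1, 2, 3} carries the topology tau_X; Y is the Sierpinski space.
X = {1, 2, 3}
tau_X = [set(), {1}, {1, 2}, {1, 2, 3}]          # a topology on X
tau_Y = [set(), {'a'}, {'a', 'b'}]               # Sierpinski space on {'a','b'}

def preimage(f, V):
    return {x for x in X if f[x] in V}

def is_continuous(f):
    # check that every open set of Y pulls back to an open set of X
    return all(preimage(f, V) in tau_X for V in tau_Y)

f_good = {1: 'a', 2: 'a', 3: 'b'}   # preimage of {'a'} is {1, 2}: open
f_bad = {1: 'b', 2: 'a', 3: 'a'}    # preimage of {'a'} is {2, 3}: not open
assert is_continuous(f_good)
assert not is_continuous(f_bad)
```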
Given x ∈ X , {\displaystyle x\in X,} a map f : X → Y {\displaystyle f:X\to Y} is continuous at x {\displaystyle x} if and only if whenever B {\displaystyle {\mathcal {B}}} is a filter on X {\displaystyle X} that converges to x {\displaystyle x} in X , {\displaystyle X,} which is expressed by writing B → x , {\displaystyle {\mathcal {B}}\to x,} then necessarily f ( B ) → f ( x ) {\displaystyle f({\mathcal {B}})\to f(x)} in Y . {\displaystyle Y.} If N ( x ) {\displaystyle {\mathcal {N}}(x)} denotes the neighborhood filter at x {\displaystyle x} then f : X → Y {\displaystyle f:X\to Y} is continuous at x {\displaystyle x} if and only if f ( N ( x ) ) → f ( x ) {\displaystyle f({\mathcal {N}}(x))\to f(x)} in Y . {\displaystyle Y.} Moreover, this happens if and only if the prefilter f ( N ( x ) ) {\displaystyle f({\mathcal {N}}(x))} is a filter base for the neighborhood filter of f ( x ) {\displaystyle f(x)} in Y . {\displaystyle Y.} === Alternative definitions === Several equivalent definitions for a topological structure exist; thus, several equivalent ways exist to define a continuous function. ==== Sequences and nets ==== In several contexts, the topology of a space is conveniently specified in terms of limit points. This is often accomplished by specifying when a point is the limit of a sequence. Still, for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. 
In detail, a function f : X → Y {\displaystyle f:X\to Y} is sequentially continuous if whenever a sequence ( x n ) {\displaystyle \left(x_{n}\right)} in X {\displaystyle X} converges to a limit x , {\displaystyle x,} the sequence ( f ( x n ) ) {\displaystyle \left(f\left(x_{n}\right)\right)} converges to f ( x ) . {\displaystyle f(x).} Thus, sequentially continuous functions "preserve sequential limits." Every continuous function is sequentially continuous. If X {\displaystyle X} is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X {\displaystyle X} is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve the limits of nets, and this property characterizes continuous functions. For instance, in the case of real-valued functions of one real variable, a function defined on a subset of the real numbers is continuous at a point x of its domain if and only if it is sequentially continuous at x. ==== Closure operator and interior operator definitions ==== In terms of the interior and closure operators, continuity admits equivalent characterizations, given below. If we declare that a point x {\displaystyle x} is close to a subset A ⊆ X {\displaystyle A\subseteq X} if x ∈ cl X ⁡ A , {\displaystyle x\in \operatorname {cl} _{X}A,} then this terminology allows for a plain English description of continuity: f {\displaystyle f} is continuous if and only if for every subset A ⊆ X , {\displaystyle A\subseteq X,} f {\displaystyle f} maps points that are close to A {\displaystyle A} to points that are close to f ( A ) . 
{\displaystyle f(A).} Similarly, f {\displaystyle f} is continuous at a fixed given point x ∈ X {\displaystyle x\in X} if and only if whenever x {\displaystyle x} is close to a subset A ⊆ X , {\displaystyle A\subseteq X,} then f ( x ) {\displaystyle f(x)} is close to f ( A ) . {\displaystyle f(A).} Instead of specifying topological spaces by their open subsets, any topology on X {\displaystyle X} can alternatively be determined by a closure operator or by an interior operator. Specifically, the map that sends a subset A {\displaystyle A} of a topological space X {\displaystyle X} to its topological closure cl X ⁡ A {\displaystyle \operatorname {cl} _{X}A} satisfies the Kuratowski closure axioms. Conversely, for any closure operator A ↦ cl ⁡ A {\displaystyle A\mapsto \operatorname {cl} A} there exists a unique topology τ {\displaystyle \tau } on X {\displaystyle X} (specifically, τ := { X ∖ cl ⁡ A : A ⊆ X } {\displaystyle \tau :=\{X\setminus \operatorname {cl} A:A\subseteq X\}} ) such that for every subset A ⊆ X , {\displaystyle A\subseteq X,} cl ⁡ A {\displaystyle \operatorname {cl} A} is equal to the topological closure cl ( X , τ ) ⁡ A {\displaystyle \operatorname {cl} _{(X,\tau )}A} of A {\displaystyle A} in ( X , τ ) . {\displaystyle (X,\tau ).} If the sets X {\displaystyle X} and Y {\displaystyle Y} are each associated with closure operators (both denoted by cl {\displaystyle \operatorname {cl} } ) then a map f : X → Y {\displaystyle f:X\to Y} is continuous if and only if f ( cl ⁡ A ) ⊆ cl ⁡ ( f ( A ) ) {\displaystyle f(\operatorname {cl} A)\subseteq \operatorname {cl} (f(A))} for every subset A ⊆ X . {\displaystyle A\subseteq X.} Similarly, the map that sends a subset A {\displaystyle A} of X {\displaystyle X} to its topological interior int X ⁡ A {\displaystyle \operatorname {int} _{X}A} defines an interior operator. 
Conversely, any interior operator A ↦ int ⁡ A {\displaystyle A\mapsto \operatorname {int} A} induces a unique topology τ {\displaystyle \tau } on X {\displaystyle X} (specifically, τ := { int ⁡ A : A ⊆ X } {\displaystyle \tau :=\{\operatorname {int} A:A\subseteq X\}} ) such that for every A ⊆ X , {\displaystyle A\subseteq X,} int ⁡ A {\displaystyle \operatorname {int} A} is equal to the topological interior int ( X , τ ) ⁡ A {\displaystyle \operatorname {int} _{(X,\tau )}A} of A {\displaystyle A} in ( X , τ ) . {\displaystyle (X,\tau ).} If the sets X {\displaystyle X} and Y {\displaystyle Y} are each associated with interior operators (both denoted by int {\displaystyle \operatorname {int} } ) then a map f : X → Y {\displaystyle f:X\to Y} is continuous if and only if f − 1 ( int ⁡ B ) ⊆ int ⁡ ( f − 1 ( B ) ) {\displaystyle f^{-1}(\operatorname {int} B)\subseteq \operatorname {int} \left(f^{-1}(B)\right)} for every subset B ⊆ Y . {\displaystyle B\subseteq Y.} ==== Filters and prefilters ==== Continuity can also be characterized in terms of filters. A function f : X → Y {\displaystyle f:X\to Y} is continuous if and only if whenever a filter B {\displaystyle {\mathcal {B}}} on X {\displaystyle X} converges in X {\displaystyle X} to a point x ∈ X , {\displaystyle x\in X,} then the prefilter f ( B ) {\displaystyle f({\mathcal {B}})} converges in Y {\displaystyle Y} to f ( x ) . {\displaystyle f(x).} This characterization remains true if the word "filter" is replaced by "prefilter." === Properties === If f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} are continuous, then so is the composition g ∘ f : X → Z . {\displaystyle g\circ f:X\to Z.} If f : X → Y {\displaystyle f:X\to Y} is continuous, then: if X is compact, then f(X) is compact; if X is connected, then f(X) is connected; if X is path-connected, then f(X) is path-connected; if X is Lindelöf, then f(X) is Lindelöf; and if X is separable, then f(X) is separable.
The possible topologies on a fixed set X are partially ordered: a topology τ 1 {\displaystyle \tau _{1}} is said to be coarser than another topology τ 2 {\displaystyle \tau _{2}} (notation: τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} ) if every open subset with respect to τ 1 {\displaystyle \tau _{1}} is also open with respect to τ 2 . {\displaystyle \tau _{2}.} Then, the identity map id X : ( X , τ 2 ) → ( X , τ 1 ) {\displaystyle \operatorname {id} _{X}:\left(X,\tau _{2}\right)\to \left(X,\tau _{1}\right)} is continuous if and only if τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} (see also comparison of topologies). More generally, a continuous function ( X , τ X ) → ( Y , τ Y ) {\displaystyle \left(X,\tau _{X}\right)\to \left(Y,\tau _{Y}\right)} stays continuous if the topology τ Y {\displaystyle \tau _{Y}} is replaced by a coarser topology and/or τ X {\displaystyle \tau _{X}} is replaced by a finer topology. === Homeomorphisms === Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. If an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f − 1 {\displaystyle f^{-1}} need not be continuous. A bijective continuous function with a continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism. === Defining topologies via continuous functions === Given a function f : X → S , {\displaystyle f:X\to S,} where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f − 1 ( A ) {\displaystyle f^{-1}(A)} is open in X. 
If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus, the final topology is the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by designating as an open set every subset A of S such that A = f − 1 ( U ) {\displaystyle A=f^{-1}(U)} for some open subset U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus, the initial topology is the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. A topology on a set S is uniquely determined by the class of all continuous functions S → X {\displaystyle S\to X} into all topological spaces X. Dually, a similar idea can be applied to maps X → S . {\displaystyle X\to S.} == Related notions == If f : S → Y {\displaystyle f:S\to Y} is a continuous function from some subset S {\displaystyle S} of a topological space X {\displaystyle X} then a continuous extension of f {\displaystyle f} to X {\displaystyle X} is any continuous function F : X → Y {\displaystyle F:X\to Y} such that F ( s ) = f ( s ) {\displaystyle F(s)=f(s)} for every s ∈ S , {\displaystyle s\in S,} which is a condition that is often written as f = F | S . {\displaystyle f=F{\big \vert }_{S}.} In words, it is any continuous function F : X → Y {\displaystyle F:X\to Y} that restricts to f {\displaystyle f} on S . {\displaystyle S.} This notion is used, for example, in the Tietze extension theorem and the Hahn–Banach theorem.
If f : S → Y {\displaystyle f:S\to Y} is not continuous, then it could not possibly have a continuous extension. If Y {\displaystyle Y} is a Hausdorff space and S {\displaystyle S} is a dense subset of X {\displaystyle X} then a continuous extension of f : S → Y {\displaystyle f:S\to Y} to X , {\displaystyle X,} if one exists, will be unique. The Blumberg theorem states that if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is an arbitrary function then there exists a dense subset D {\displaystyle D} of R {\displaystyle \mathbb {R} } such that the restriction f | D : D → R {\displaystyle f{\big \vert }_{D}:D\to \mathbb {R} } is continuous; in other words, every function R → R {\displaystyle \mathbb {R} \to \mathbb {R} } can be restricted to some dense subset on which it is continuous. Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f : X → Y {\displaystyle f:X\to Y} between particular types of partially ordered sets X {\displaystyle X} and Y {\displaystyle Y} is continuous if for each directed subset A {\displaystyle A} of X , {\displaystyle X,} we have sup f ( A ) = f ( sup A ) . {\displaystyle \sup f(A)=f(\sup A).} Here sup {\displaystyle \,\sup \,} is the supremum with respect to the orderings in X {\displaystyle X} and Y , {\displaystyle Y,} respectively. This notion of continuity is the same as topological continuity when the partially ordered sets are given the Scott topology. In category theory, a functor F : C → D {\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}} between two categories is called continuous if it commutes with small limits. That is to say, lim ← i ∈ I ⁡ F ( C i ) ≅ F ( lim ← i ∈ I ⁡ C i ) {\displaystyle \varprojlim _{i\in I}F(C_{i})\cong F\left(\varprojlim _{i\in I}C_{i}\right)} for any small (that is, indexed by a set I , {\displaystyle I,} as opposed to a class) diagram of objects in C {\displaystyle {\mathcal {C}}} . 
A continuity space is a generalization of metric spaces and posets, which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains. In measure theory, a function f : E → R k {\displaystyle f:E\to \mathbb {R} ^{k}} defined on a Lebesgue measurable set E ⊆ R n {\displaystyle E\subseteq \mathbb {R} ^{n}} is called approximately continuous at a point x 0 ∈ E {\displaystyle x_{0}\in E} if the approximate limit of f {\displaystyle f} at x 0 {\displaystyle x_{0}} exists and equals f ( x 0 ) {\displaystyle f(x_{0})} . This generalizes the notion of continuity by replacing the ordinary limit with the approximate limit. A fundamental result known as the Stepanov-Denjoy theorem states that a function is measurable if and only if it is approximately continuous almost everywhere. == See also == Direction-preserving function - an analog of a continuous function in discrete spaces. == References == == Bibliography == Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485. "Continuous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Continuous_(topology)
In topology, a discrete space is a particularly simple example of a topological space or similar structure, one in which the points form a discontinuous sequence, meaning they are isolated from each other in a certain sense. The discrete topology is the finest topology that can be given on a set. Every subset is open in the discrete topology so that in particular, every singleton subset is an open set in the discrete topology. == Definitions == Given a set X {\displaystyle X} : the discrete topology on X {\displaystyle X} is defined by letting every subset of X {\displaystyle X} be open (and hence also closed); the discrete uniformity on X {\displaystyle X} is defined by letting every superset of the diagonal { ( x , x ) : x ∈ X } {\displaystyle \{(x,x):x\in X\}} in X × X {\displaystyle X\times X} be an entourage; and the discrete metric ρ {\displaystyle \rho } on X {\displaystyle X} is defined by ρ ( x , y ) = 1 {\displaystyle \rho (x,y)=1} if x ≠ y {\displaystyle x\neq y} and ρ ( x , y ) = 0 {\displaystyle \rho (x,y)=0} otherwise. A metric space ( E , d ) {\displaystyle (E,d)} is said to be uniformly discrete if there exists a packing radius r > 0 {\displaystyle r>0} such that, for any x , y ∈ E , {\displaystyle x,y\in E,} one has either x = y {\displaystyle x=y} or d ( x , y ) > r . {\displaystyle d(x,y)>r.} The topology underlying a metric space can be discrete, without the metric being uniformly discrete: for example the usual metric on the set { 2 − n : n ∈ N 0 } . {\displaystyle \left\{2^{-n}:n\in \mathbb {N} _{0}\right\}.} == Properties == The underlying uniformity on a discrete metric space is the discrete uniformity, and the underlying topology on a discrete uniform space is the discrete topology. Thus, the different notions of discrete space are compatible with one another. On the other hand, the underlying topology of a non-discrete uniform or metric space can be discrete; an example is the metric space X = { n − 1 : n ∈ N } {\displaystyle X=\{n^{-1}:n\in \mathbb {N} \}} (with metric inherited from the real line and given by d ( x , y ) = | x − y | {\displaystyle d(x,y)=\left|x-y\right|} ). This is not the discrete metric; also, this space is not complete and hence not discrete as a uniform space. Nevertheless, it is discrete as a topological space. We say that X {\displaystyle X} is topologically discrete but not uniformly discrete or metrically discrete. Additionally: The topological dimension of a discrete space is equal to 0.
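The example above can be checked numerically: each point of { 2 − n } {\displaystyle \left\{2^{-n}\right\}} sits at a positive distance from its nearest neighbour (so every singleton is open in the subspace topology, which is therefore discrete), yet those gaps shrink to zero, so no single packing radius r > 0 {\displaystyle r>0} works. A short sketch:

```python
# consecutive points of {2^-n}: positive gaps, but no uniform lower bound
pts = [2.0 ** -n for n in range(12)]
gaps = [a - b for a, b in zip(pts, pts[1:])]   # pts is strictly decreasing

# every point is isolated: its nearest neighbour lies at positive distance,
# so the subspace topology is discrete
assert all(gap > 0 for gap in gaps)

# but the gaps tend to 0, so no packing radius r > 0 works uniformly
assert min(gaps) == 2.0 ** -11                 # the gap between 2^-10 and 2^-11
```

Taking larger n makes the smallest gap as small as desired, which is exactly the failure of uniform discreteness.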
A topological space is discrete if and only if its singletons are open, which is the case if and only if it does not contain any accumulation points. The singletons form a basis for the discrete topology. A uniform space X {\displaystyle X} is discrete if and only if the diagonal { ( x , x ) : x ∈ X } {\displaystyle \{(x,x):x\in X\}} is an entourage. Every discrete topological space satisfies each of the separation axioms; in particular, every discrete space is Hausdorff, that is, separated. A discrete space is compact if and only if it is finite. Every discrete uniform or metric space is complete. Combining the above two facts, every discrete uniform or metric space is totally bounded if and only if it is finite. Every discrete metric space is bounded. Every discrete space is first-countable; it is moreover second-countable if and only if it is countable. Every discrete space is totally disconnected. Every non-empty discrete space is of second category. Any two discrete spaces with the same cardinality are homeomorphic. Every discrete space is metrizable (by the discrete metric). A finite space is metrizable only if it is discrete. If X {\displaystyle X} is a topological space and Y {\displaystyle Y} is a set carrying the discrete topology, then X {\displaystyle X} is evenly covered by X × Y {\displaystyle X\times Y} (the projection map is the desired covering). The subspace topology on the integers as a subspace of the real line is the discrete topology. A discrete space is separable if and only if it is countable. Any topological subspace of R {\displaystyle \mathbb {R} } (with its usual Euclidean topology) that is discrete is necessarily countable. Any function from a discrete topological space to another topological space is continuous, and any function from a discrete uniform space to another uniform space is uniformly continuous.
That is, the discrete space X {\displaystyle X} is free on the set X {\displaystyle X} in the category of topological spaces and continuous maps or in the category of uniform spaces and uniformly continuous maps. These facts are examples of a much broader phenomenon, in which discrete structures are usually free on sets. With metric spaces, things are more complicated, because there are several categories of metric spaces, depending on what is chosen for the morphisms. Certainly the discrete metric space is free when the morphisms are all uniformly continuous maps or all continuous maps, but this says nothing interesting about the metric structure, only the uniform or topological structure. Categories more relevant to the metric structure can be found by limiting the morphisms to Lipschitz continuous maps or to short maps; however, these categories don't have free objects (on more than one element). However, the discrete metric space is free in the category of bounded metric spaces and Lipschitz continuous maps, and it is free in the category of metric spaces bounded by 1 and short maps. That is, any function from a discrete metric space to another bounded metric space is Lipschitz continuous, and any function from a discrete metric space to another metric space bounded by 1 is short. Going the other direction, a function f {\displaystyle f} from a topological space Y {\displaystyle Y} to a discrete space X {\displaystyle X} is continuous if and only if it is locally constant in the sense that every point in Y {\displaystyle Y} has a neighborhood on which f {\displaystyle f} is constant. 
Every ultrafilter U {\displaystyle {\mathcal {U}}} on a non-empty set X {\displaystyle X} can be associated with a topology τ = U ∪ { ∅ } {\displaystyle \tau ={\mathcal {U}}\cup \left\{\varnothing \right\}} on X {\displaystyle X} with the property that every non-empty proper subset S {\displaystyle S} of X {\displaystyle X} is either an open subset or else a closed subset, but never both. Said differently, every subset is open or closed but (in contrast to the discrete topology) the only subsets that are both open and closed (i.e. clopen) are ∅ {\displaystyle \varnothing } and X {\displaystyle X} . In comparison, every subset of X {\displaystyle X} is open and closed in the discrete topology. == Examples and uses == A discrete structure is often used as the "default structure" on a set that doesn't carry any other natural topology, uniformity, or metric; discrete structures can often be used as "extreme" examples to test particular suppositions. For example, any group can be considered as a topological group by giving it the discrete topology, implying that theorems about topological groups apply to all groups. Indeed, analysts may refer to the ordinary, non-topological groups studied by algebraists as "discrete groups". In some cases, this can be usefully applied, for example in combination with Pontryagin duality. A 0-dimensional manifold (or differentiable or analytic manifold) is nothing but a discrete and countable topological space (an uncountable discrete space is not second-countable). We can therefore view any discrete countable group as a 0-dimensional Lie group. A product of countably infinite copies of the discrete space of natural numbers is homeomorphic to the space of irrational numbers, with the homeomorphism given by the continued fraction expansion. 
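The homeomorphism just mentioned sends a sequence of natural numbers to the value of the corresponding continued fraction. The sketch below (the helper is my own; it evaluates finite truncations, whose convergence to the infinite continued fraction is the standard fact) illustrates the map on two familiar sequences:

```python
def continued_fraction(terms):
    """Evaluate the truncation [a1; a2, ..., ak] = a1 + 1/(a2 + 1/(... + 1/ak))."""
    value = 0.0
    for a in reversed(terms):
        value = a + (1.0 / value if value else 0.0)
    return value

# the all-ones sequence corresponds to the golden ratio (1 + sqrt(5)) / 2
phi = (1 + 5 ** 0.5) / 2
assert abs(continued_fraction([1] * 40) - phi) < 1e-12

# the sequence (1, 2, 2, 2, ...) corresponds to sqrt(2)
assert abs(continued_fraction([1] + [2] * 25) - 2 ** 0.5) < 1e-12
```

Distinct sequences of positive integers yield distinct irrational limits, which is what makes the continued-fraction map a bijection onto the irrationals in the first place.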
A product of countably infinite copies of the discrete space { 0 , 1 } {\displaystyle \{0,1\}} is homeomorphic to the Cantor set; and in fact uniformly homeomorphic to the Cantor set if we use the product uniformity on the product. Such a homeomorphism is given by using ternary notation of numbers. (See Cantor space.) Every fiber of a locally injective function is necessarily a discrete subspace of its domain. In the foundations of mathematics, the study of compactness properties of products of { 0 , 1 } {\displaystyle \{0,1\}} is central to the topological approach to the ultrafilter lemma (equivalently, the Boolean prime ideal theorem), which is a weak form of the axiom of choice. == Indiscrete spaces == In some ways, the opposite of the discrete topology is the trivial topology (also called the indiscrete topology), which has the fewest possible open sets (just the empty set and the space itself). Where the discrete topology is initial or free, the indiscrete topology is final or cofree: every function from a topological space to an indiscrete space is continuous, etc. == See also == Cylinder set List of topologies Taxicab geometry == References == Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1978). Counterexamples in Topology (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 3-540-90312-7. MR 0507446. Zbl 0386.54001. Wilansky, Albert (17 October 2008) [1970]. Topology for Analysis. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-46903-4. OCLC 227923899.
Wikipedia/Discrete_topology
In mathematics, the lower limit topology or right half-open interval topology is a topology defined on R {\displaystyle \mathbb {R} } , the set of real numbers; it is different from the standard topology on R {\displaystyle \mathbb {R} } (generated by the open intervals) and has a number of interesting properties. It is the topology generated by the basis of all half-open intervals [a,b), where a and b are real numbers. The resulting topological space is called the Sorgenfrey line (after Robert Sorgenfrey) or the arrow, and is sometimes written R l {\displaystyle \mathbb {R} _{l}} . Like the Cantor set and the long line, the Sorgenfrey line often serves as a useful counterexample to many otherwise plausible-sounding conjectures in general topology. The product of R l {\displaystyle \mathbb {R} _{l}} with itself is also a useful counterexample, known as the Sorgenfrey plane. In complete analogy, one can also define the upper limit topology, or left half-open interval topology. == Properties == The lower limit topology is finer (has more open sets) than the standard topology on the real numbers (which is generated by the open intervals). The reason is that every open interval can be written as a (countably infinite) union of half-open intervals. For any real a {\displaystyle a} and b {\displaystyle b} , the interval [ a , b ) {\displaystyle [a,b)} is clopen in R l {\displaystyle \mathbb {R} _{l}} (i.e., both open and closed). Furthermore, for all real a {\displaystyle a} , the sets { x ∈ R : x < a } {\displaystyle \{x\in \mathbb {R} :x<a\}} and { x ∈ R : x ≥ a } {\displaystyle \{x\in \mathbb {R} :x\geq a\}} are also clopen. This shows that the Sorgenfrey line is totally disconnected. Any compact subset of R l {\displaystyle \mathbb {R} _{l}} must be an at most countable set. To see this, consider a non-empty compact subset C ⊆ R l {\displaystyle C\subseteq \mathbb {R} _{l}} .
Fix an x ∈ C {\displaystyle x\in C} , consider the following open cover of C {\displaystyle C} : { [ x , + ∞ ) } ∪ { ( − ∞ , x − 1 n ) | n ∈ N } . {\displaystyle {\bigl \{}[x,+\infty ){\bigr \}}\cup {\Bigl \{}{\bigl (}-\infty ,x-{\tfrac {1}{n}}{\bigr )}\,{\Big |}\,n\in \mathbb {N} {\Bigr \}}.} Since C {\displaystyle C} is compact, this cover has a finite subcover, and hence there exists a real number a ( x ) {\displaystyle a(x)} such that the interval ( a ( x ) , x ] {\displaystyle (a(x),x]} contains no point of C {\displaystyle C} apart from x {\displaystyle x} . This is true for all x ∈ C {\displaystyle x\in C} . Now choose a rational number q ( x ) ∈ ( a ( x ) , x ] ∩ Q {\displaystyle q(x)\in (a(x),x]\cap \mathbb {Q} } . Since the intervals ( a ( x ) , x ] {\displaystyle (a(x),x]} , parametrized by x ∈ C {\displaystyle x\in C} , are pairwise disjoint, the function q : C → Q {\displaystyle q:C\to \mathbb {Q} } is injective, and so C {\displaystyle C} is at most countable. The name "lower limit topology" comes from the following fact: a sequence (or net) ( x α ) {\displaystyle (x_{\alpha })} in R l {\displaystyle \mathbb {R} _{l}} converges to the limit L {\displaystyle L} if and only if it "approaches L {\displaystyle L} from the right", meaning for every ϵ > 0 {\displaystyle \epsilon >0} there exists an index α 0 {\displaystyle \alpha _{0}} such that ∀ α ≥ α 0 : L ≤ x α < L + ϵ {\displaystyle \forall \alpha \geq \alpha _{0}:L\leq x_{\alpha }<L+\epsilon } . The Sorgenfrey line can thus be used to study right-sided limits: if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a function, then the ordinary right-sided limit of f {\displaystyle f} at x {\displaystyle x} (when the codomain carries the standard topology) is the same as the usual limit of f {\displaystyle f} at x {\displaystyle x} when the domain is equipped with the lower limit topology and the codomain carries the standard topology. 
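The "approach from the right" condition can be probed numerically on sequence tails. In the sketch below (the helper name, tolerance, and tail length are my own choices; a finite check is a heuristic, not a proof), the sequence − 1 / n {\displaystyle -1/n} converges to 0 in the standard topology but fails the lower-limit condition:

```python
def converges_in_Rl(seq, L, eps=1e-4, tail=100):
    """Check L <= x < L + eps on the tail: the lower-limit-topology condition."""
    return all(L <= x < L + eps for x in seq[-tail:])

from_right = [1 / n for n in range(1, 100001)]    # 1/n -> 0 from above
from_left = [-1 / n for n in range(1, 100001)]    # -1/n -> 0 from below

print(converges_in_Rl(from_right, 0.0))   # True: eventually 0 <= x_n < eps
print(converges_in_Rl(from_left, 0.0))    # False: the terms lie below the limit
```

Both sequences converge to 0 in the standard topology; only the first converges in R l {\displaystyle \mathbb {R} _{l}} , which is the point of the name "lower limit topology".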
In terms of separation axioms, R l {\displaystyle \mathbb {R} _{l}} is a perfectly normal Hausdorff space. In terms of countability axioms, R l {\displaystyle \mathbb {R} _{l}} is first-countable and separable, but not second-countable. In terms of compactness properties, R l {\displaystyle \mathbb {R} _{l}} is Lindelöf and paracompact, but neither σ-compact nor locally compact. R l {\displaystyle \mathbb {R} _{l}} is not metrizable, since separable metric spaces are second-countable. However, the topology of a Sorgenfrey line is generated by a quasimetric. R l {\displaystyle \mathbb {R} _{l}} is a Baire space. R l {\displaystyle \mathbb {R} _{l}} does not have any connected compactifications. == See also == List of topologies == References == Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
Wikipedia/Lower_limit_topology
In mathematics, topological dynamics is a branch of the theory of dynamical systems in which qualitative, asymptotic properties of dynamical systems are studied from the viewpoint of general topology. == Scope == The central object of study in topological dynamics is a topological dynamical system, i.e. a topological space, together with a continuous transformation, a continuous flow, or more generally, a semigroup of continuous transformations of that space. The origins of topological dynamics lie in the study of asymptotic properties of trajectories of systems of autonomous ordinary differential equations, in particular, the behavior of limit sets and various manifestations of "repetitiveness" of the motion, such as periodic trajectories, recurrence and minimality, stability, non-wandering points. George Birkhoff is considered to be the founder of the field. A structure theorem for minimal distal flows proved by Hillel Furstenberg in the early 1960s inspired much work on classification of minimal flows. A lot of research in the 1970s and 1980s was devoted to topological dynamics of one-dimensional maps, in particular, piecewise linear self-maps of the interval and the circle. Unlike the theory of smooth dynamical systems, where the main object of study is a smooth manifold with a diffeomorphism or a smooth flow, phase spaces considered in topological dynamics are general metric spaces (usually, compact). This necessitates development of entirely different techniques but allows an extra degree of flexibility even in the smooth setting, because invariant subsets of a manifold are frequently very complicated topologically (cf limit cycle, strange attractor); additionally, shift spaces arising via symbolic representations can be considered on an equal footing with more geometric actions. 
Topological dynamics has intimate connections with ergodic theory of dynamical systems, and many fundamental concepts of the latter have topological analogues (cf Kolmogorov–Sinai entropy and topological entropy). == See also == Poincaré–Bendixson theorem Symbolic dynamics Topological conjugacy == References == D. V. Anosov (2001) [1994], "Topological dynamics", Encyclopedia of Mathematics, EMS Press Joseph Auslander (ed.). "Topological dynamics". Scholarpedia. Robert Ellis, Lectures on topological dynamics. W. A. Benjamin, Inc., New York 1969 Walter Gottschalk, Gustav Hedlund, Topological dynamics. American Mathematical Society Colloquium Publications, Vol. 36. American Mathematical Society, Providence, R. I., 1955 J. de Vries, Elements of topological dynamics. Mathematics and its Applications, 257. Kluwer Academic Publishers Group, Dordrecht, 1993 ISBN 0-7923-2287-8 Ethan Akin, The General Topology of Dynamical Systems, AMS Bookstore, 2010, ISBN 978-0-8218-4932-3 J. de Vries, Topological Dynamical Systems: An Introduction to the Dynamics of Continuous Mappings, De Gruyter Studies in Mathematics, 59, De Gruyter, Berlin, 2014, ISBN 978-3-1103-4073-0 Jian Li and Xiang Dong Ye, Recent development of chaos theory in topological dynamics, Acta Mathematica Sinica, English Series, 2016, Volume 32, Issue 1, pp. 83–114.
Wikipedia/Topological_dynamics
In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it – for example, both a latitude and longitude are required to locate a point on the surface of a sphere. The Euclidean plane is an example of a two-dimensional space. The inside of a cube, a cylinder or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces. In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space. The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space.
== In mathematics == In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc. The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. Similarly, a surface is of dimension two, even if embedded in three-dimensional space. The dimension of Euclidean n-space En is n. When trying to generalize to other types of spaces, one is faced with the question "what makes En n-dimensional?" One answer is that to cover a fixed ball in En by small balls of radius ε, one needs on the order of ε−n such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For example, the boundary of a ball in En looks locally like En-1 and this leads to the notion of the inductive dimension. While these notions agree on En, they turn out to be different when one looks at more general spaces. A tesseract is an example of a four-dimensional object. 
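The ε-covering observation above — covering a fixed region of En takes on the order of ε−n boxes of side ε — underlies the box-counting (Minkowski) estimate of dimension: count occupied boxes N(ε) and compare log N(ε) with log(1/ε). A minimal sketch (the function name is mine):

```python
import math

def box_counting_dimension(points, eps):
    """Estimate dimension as log(number of occupied eps-boxes) / log(1/eps)."""
    boxes = {tuple(math.floor(c / eps) for c in p) for p in points}
    return math.log(len(boxes)) / math.log(1 / eps)

# a filled unit square sampled on a fine grid occupies about (1/eps)^2 boxes,
# so the estimate comes out close to 2
grid = [(i / 200, j / 200) for i in range(200) for j in range(200)]
assert abs(box_counting_dimension(grid, 0.05) - 2.0) < 0.01
```

The same estimator applied to a curve gives a value near 1, and applied to fractal sets it gives the non-integer values that distinguish the Minkowski and Hausdorff dimensions from the topological notions discussed below.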
Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract has four dimensions", mathematicians usually express this as: "The tesseract has dimension 4", or: "The dimension of the tesseract is 4". Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry. The rest of this section examines some of the more important mathematical definitions of dimension. === Vector spaces === The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension. For the non-free case, this generalizes to the notion of the length of a module. === Manifolds === The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean n-space, in which the number n is the manifold's dimension. For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point. In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases n > 4 are simplified by having extra space in which to "work"; and the cases n = 3 and 4 are in some senses the most difficult. 
This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied. ==== Complex dimension ==== The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number (x + iy) has a real part x and an imaginary part y, in which x and y are both real numbers; hence, the complex dimension is half the real dimension. Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension. === Varieties === The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety. An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains V 0 ⊊ V 1 ⊊ ⋯ ⊊ V d {\displaystyle V_{0}\subsetneq V_{1}\subsetneq \cdots \subsetneq V_{d}} of sub-varieties of the given algebraic set (the length of such a chain is the number of " ⊊ {\displaystyle \subsetneq } ").
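The chain characterization above can be imitated on a toy model in which sub-varieties are replaced by finite sets of points ordered by strict inclusion; the dimension analogue is then the number of strict inclusions " ⊊ " in a longest chain. A small sketch (names illustrative, exhaustive search, so only for tiny examples):

```python
def longest_chain_length(sets):
    """Length of a longest chain S_0 ⊊ S_1 ⊊ ... ⊊ S_d among the given
    finite sets, counting the number d of strict inclusions."""
    sets = [frozenset(s) for s in sets]

    def chain_from(s):
        longer = [t for t in sets if s < t]  # strict supersets of s
        return 1 + max(map(chain_from, longer)) if longer else 0

    return max(map(chain_from, sets)) if sets else 0
```

The same "longest chain" computation models the Krull dimension of the next subsection, with prime ideals in place of sub-varieties.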
Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient stack [V/G] has dimension m − n. === Krull dimension === The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n being a sequence P 0 ⊊ P 1 ⊊ ⋯ ⊊ P n {\displaystyle {\mathcal {P}}_{0}\subsetneq {\mathcal {P}}_{1}\subsetneq \cdots \subsetneq {\mathcal {P}}_{n}} of prime ideals related by inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety. For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0. === Topological spaces === For any normal topological space X, the Lebesgue covering dimension of X is defined to be the smallest integer n for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than n + 1 elements. In this case dim X = n. For X a manifold, this coincides with the dimension mentioned above. If no such integer n exists, then the dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1 if and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open". An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. 
By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general, one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, (n + 1)-dimensional balls have n-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension −1. Similarly, for the class of CW complexes, the dimension of an object is the largest n for which the n-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles. === Hausdorff dimension === The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values. === Hilbert spaces === Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide. 
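The non-integer values attained by the box and Hausdorff dimensions can be illustrated with the middle-thirds Cantor set: at stage k it is covered by 2^k intervals of length 3^−k, so the box-counting slope log N / log(1/ε) equals log 2 / log 3 ≈ 0.63. A minimal sketch:

```python
import math

def cantor_intervals(depth):
    """Intervals of the depth-th stage of the middle-thirds Cantor construction."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        next_level = []
        for a, b in intervals:
            third = (b - a) / 3.0
            # keep the left and right thirds, discard the middle
            next_level += [(a, a + third), (b - third, b)]
        intervals = next_level
    return intervals

def box_dimension_estimate(depth):
    """At stage `depth` the set is covered by 2**depth boxes of side
    3**-depth, so the box-counting slope is log N / log(1/eps)."""
    n_boxes = 2 ** depth          # one box per surviving interval
    eps = 3.0 ** -depth
    return math.log(n_boxes) / math.log(1.0 / eps)
```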
== In physics == === Spatial dimensions === Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies i.e., moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.) === Time === A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction. The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy). The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime, and in the special, flat case as Minkowski space. Time is different from other spatial dimensions as time operates in all spatial dimensions. 
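The spacetime treatment mentioned above can be illustrated with the Minkowski interval, which a Lorentz boost leaves unchanged even though the time and space coordinates separately change. A toy sketch in 1+1 dimensions, assuming units with c = 1 by default and the (−, +) sign convention:

```python
import math

def interval_squared(t, x, c=1.0):
    """Spacetime interval s^2 = -(c t)^2 + x^2 in 1+1 dimensions
    ((-, +) signature convention assumed here)."""
    return -(c * t) ** 2 + x ** 2

def boost(t, x, v, c=1.0):
    """Lorentz boost with velocity v along the x-axis."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)
```

Boosting any event and recomputing the interval returns the same number, which is the sense in which space and time are components of a single four-dimensional geometry rather than independent quantities.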
Time operates in the first, second and third as well as theoretical spatial dimensions such as a fourth spatial dimension. Time is not however present in a single point of absolute infinite singularity as defined as a geometric point, as an infinitely small point can have no change and therefore no time. Just as when an object moves through positions in space, it also moves through positions in time. In this sense the force moving any object to change is time. === Additional dimensions === In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" (compactified) at such tiny scales as to be effectively invisible to current experiments. In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However, at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. 
Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza-Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building. In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a (3 + 1)-dimensional subspace. Thus, the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume. Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea it would be since three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. 
But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration. Extra dimensions are said to be universal if all fields are equally free to propagate within them. == In computer graphics and spatial data == Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, Computer-aided design, and Geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions: Point (0-dimensional), a single coordinate in a Cartesian coordinate system. Line or Polyline (1-dimensional) usually represented as an ordered list of points sampled from a continuous line, whereupon the software is expected to interpolate the intervening shape of the line as straight- or curved-line segments. Polygon (2-dimensional) usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior. Surface (3-dimensional) represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior. Frequently in these systems, especially GIS and Cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. 
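The geometric primitives listed above can be given a minimal sketch: a polyline whose intervening shape is interpolated as straight segments, and an even-odd ray-casting test that uses a polygon boundary to partition the plane into interior and exterior. Function names are illustrative, not from any GIS package:

```python
def polyline_length(points):
    """Total length of a polyline given as an ordered list of (x, y)
    samples, interpolating straight segments between them."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def point_in_polygon(pt, polygon):
    """Even-odd ray-casting test: is pt inside the region bounded by the
    closed polygon (a list of vertices, implicitly closed)?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                           # crossing is to the right of pt
                inside = not inside
    return inside
```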
For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up," "down," or "along" a road implies a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood, but can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines). == Further reading == Murty, Katta G. (2014). "1. Systems of Simultaneous Linear Equations" (PDF). Computational and Algorithmic Linear Algebra and n-Dimensional Geometry. World Scientific Publishing. doi:10.1142/8261. ISBN 978-981-4366-62-5. Abbott, Edwin A. (1884). Flatland: A Romance of Many Dimensions. London: Seely & Co. —. Flatland: ... Project Gutenberg. —; Stewart, Ian (2008). The Annotated Flatland: A Romance of Many Dimensions. Basic Books. ISBN 978-0-7867-2183-2. Banchoff, Thomas F. (1996). Beyond the Third Dimension: Geometry, Computer Graphics, and Higher Dimensions. Scientific American Library. ISBN 978-0-7167-6015-3. Pickover, Clifford A. (2001). Surfing through Hyperspace: Understanding Higher Universes in Six Easy Lessons. Oxford University Press. ISBN 978-0-19-992381-6. Rucker, Rudy (2014) [1984]. The Fourth Dimension: Toward a Geometry of Higher Reality. Courier Corporation. ISBN 978-0-486-77978-2. Kaku, Michio (1994). Hyperspace, a Scientific Odyssey Through the 10th Dimension. Oxford University Press. ISBN 978-0-19-286189-4. Krauss, Lawrence M. (2005). Hiding in the Mirror. Viking Press. ISBN 978-0-670-03395-9. == External links == Copeland, Ed (2009). "Extra Dimensions". Sixty Symbols. Brady Haran for the University of Nottingham.
Wikipedia/Dimension_theory
In mathematics, pointless topology, also called point-free topology (or pointfree topology) or topology without points and locale theory, is an approach to topology that avoids mentioning points, and in which the lattices of open sets are the primitive notions. In this approach it becomes possible to construct topologically interesting spaces from purely algebraic data. == History == The first approaches to topology were geometrical, where one started from Euclidean space and patched things together. But Marshall Stone's work on Stone duality in the 1930s showed that topology can be viewed from an algebraic point of view (lattice-theoretic). Karl Menger was an early pioneer in the field, and his work on topology without points was inspired by Whitehead's point-free geometry and used shrinking regions of the plane to simulate points. Apart from Stone, Henry Wallman also exploited this idea. Others continued this path until Charles Ehresmann and his student Jean Bénabou (and simultaneously others) took a major step in the late fifties. Their insights arose from the study of "topological" and "differentiable" categories. Ehresmann's approach involved using a category whose objects were complete lattices which satisfied a distributive law and whose morphisms were maps which preserved finite meets and arbitrary joins. He called such lattices "local lattices"; today they are called "frames" to avoid ambiguity with other notions in lattice theory. The theory of frames and locales in the contemporary sense was developed through the following decades (John Isbell, Peter Johnstone, Harold Simmons, Bernhard Banaschewski, Aleš Pultr, Till Plewe, Japie Vermeulen, Steve Vickers) into a lively branch of topology, with applications in various fields, in particular also in theoretical computer science. For more on the history of locale theory see Johnstone's overview. 
== Intuition == Traditionally, a topological space consists of a set of points together with a topology, a system of subsets called open sets that with the operations of union (as join) and intersection (as meet) forms a lattice with certain properties. Specifically, the union of any family of open sets is again an open set, and the intersection of finitely many open sets is again open. In pointless topology we take these properties of the lattice as fundamental, without requiring that the lattice elements be sets of points of some underlying space and that the lattice operation be intersection and union. Rather, point-free topology is based on the concept of a "realistic spot" instead of a point without extent. These "spots" can be joined (symbol ∨ {\displaystyle \vee } ), akin to a union, and we also have a meet operation for spots (symbol ∧ {\displaystyle \land } ), akin to an intersection. Using these two operations, the spots form a complete lattice. If a spot meets a join of others it has to meet some of the constituents, which, roughly speaking, leads to the distributive law b ∧ ( ⋁ i ∈ I a i ) = ⋁ i ∈ I ( b ∧ a i ) {\displaystyle b\wedge \left(\bigvee _{i\in I}a_{i}\right)=\bigvee _{i\in I}\left(b\wedge a_{i}\right)} where the a i {\displaystyle a_{i}} and b {\displaystyle b} are spots and the index family I {\displaystyle I} can be arbitrarily large. This distributive law is also satisfied by the lattice of open sets of a topological space. If X {\displaystyle X} and Y {\displaystyle Y} are topological spaces with lattices of open sets denoted by Ω ( X ) {\displaystyle \Omega (X)} and Ω ( Y ) {\displaystyle \Omega (Y)} , respectively, and f : X → Y {\displaystyle f\colon X\to Y} is a continuous map, then, since the pre-image of an open set under a continuous map is open, we obtain a map of lattices in the opposite direction: f ∗ : Ω ( Y ) → Ω ( X ) {\displaystyle f^{*}\colon \Omega (Y)\to \Omega (X)} . 
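The distributive law above can be verified exhaustively on a small lattice of open sets, with join as union and meet as intersection. A toy sketch (names illustrative; only feasible for tiny lattices since it enumerates every family of opens):

```python
from itertools import chain, combinations

def check_frame_distributivity(opens):
    """Verify b ∧ (∨ a_i) = ∨ (b ∧ a_i) for every open b and every family
    {a_i} of opens, where join is union and meet is intersection."""
    opens = [frozenset(o) for o in opens]
    families = list(chain.from_iterable(
        combinations(opens, r) for r in range(len(opens) + 1)))
    for b in opens:
        for fam in families:
            lhs = b & frozenset().union(*fam)                  # b ∧ (∨ a_i)
            rhs = frozenset().union(*(b & a for a in fam))     # ∨ (b ∧ a_i)
            if lhs != rhs:
                return False
    return True
```

Any lattice whose elements are genuine sets with genuine unions and intersections passes this check; the point of the frame axioms is to demand the same law abstractly, without an underlying set of points.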
Such "opposite-direction" lattice maps thus serve as the proper generalization of continuous maps in the point-free setting. == Formal definitions == The basic concept is that of a frame, a complete lattice satisfying the general distributive law above. Frame homomorphisms are maps between frames that respect all joins (in particular, the least element of the lattice) and finite meets (in particular, the greatest element of the lattice). Frames, together with frame homomorphisms, form a category. The opposite category of the category of frames is known as the category of locales. A locale X {\displaystyle X} is thus nothing but a frame; if we consider it as a frame, we will write it as O ( X ) {\displaystyle O(X)} . A locale morphism X → Y {\displaystyle X\to Y} from the locale X {\displaystyle X} to the locale Y {\displaystyle Y} is given by a frame homomorphism O ( Y ) → O ( X ) {\displaystyle O(Y)\to O(X)} . Every topological space T {\displaystyle T} gives rise to a frame Ω ( T ) {\displaystyle \Omega (T)} of open sets and thus to a locale. A locale is called spatial if it is isomorphic (in the category of locales) to a locale arising from a topological space in this manner. == Examples of locales == As mentioned above, every topological space T {\displaystyle T} gives rise to a frame Ω ( T ) {\displaystyle \Omega (T)} of open sets and thus to a locale, by definition a spatial one. Given a topological space T {\displaystyle T} , we can also consider the collection of its regular open sets. This is a frame using as join the interior of the closure of the union, and as meet the intersection. We thus obtain another locale associated to T {\displaystyle T} . This locale will usually not be spatial. 
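The direction reversal described above can be made concrete on finite spaces: a continuous map f induces f* = preimage from the opens of the codomain to the opens of the domain, and f* preserves joins (unions) and meets (intersections). A toy sketch using the Sierpiński space as codomain (names illustrative):

```python
def preimage_map(f, opens_Y):
    """The induced map f* : Ω(Y) → Ω(X), V ↦ f^{-1}(V), for a map f
    given as a dict from domain points to codomain points."""
    return {V: frozenset(x for x, fx in f.items() if fx in V) for V in opens_Y}

def preserves_joins_and_meets(star, opens_Y):
    """Check that f* preserves binary unions and intersections; on a finite
    lattice of opens this is the heart of being a frame homomorphism."""
    return all(star[U | V] == star[U] | star[V] and
               star[U & V] == star[U] & star[V]
               for U in opens_Y for V in opens_Y)

# Sierpiński space on {'a', 'b'} with opens ∅, {a}, {a, b}
sierpinski = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]
f = {1: 'a', 2: 'b'}   # a map from the discrete two-point space
star = preimage_map(f, sierpinski)
```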
For each n ∈ N {\displaystyle n\in \mathbb {N} } and each a ∈ R {\displaystyle a\in \mathbb {R} } , use a symbol U n , a {\displaystyle U_{n,a}} and construct the free frame on these symbols, modulo the relations ⋁ a ∈ R U n , a = ⊤ for every n ∈ N {\displaystyle \bigvee _{a\in \mathbb {R} }U_{n,a}=\top \ {\text{ for every }}n\in \mathbb {N} } U n , a ∧ U n , b = ⊥ for every n ∈ N and all a , b ∈ R with a ≠ b {\displaystyle U_{n,a}\land U_{n,b}=\bot \ {\text{ for every }}n\in \mathbb {N} {\text{ and all }}a,b\in \mathbb {R} {\text{ with }}a\neq b} ⋁ n ∈ N U n , a = ⊤ for every a ∈ R {\displaystyle \bigvee _{n\in \mathbb {N} }U_{n,a}=\top \ {\text{ for every }}a\in \mathbb {R} } (where ⊤ {\displaystyle \top } denotes the greatest element and ⊥ {\displaystyle \bot } the smallest element of the frame.) The resulting locale is known as the "locale of surjective functions N → R {\displaystyle \mathbb {N} \to \mathbb {R} } ". The relations are designed to suggest the interpretation of U n , a {\displaystyle U_{n,a}} as the set of all those surjective functions f : N → R {\displaystyle f:\mathbb {N} \to \mathbb {R} } with f ( n ) = a {\displaystyle f(n)=a} . Of course, there are no such surjective functions N → R {\displaystyle \mathbb {N} \to \mathbb {R} } , and this is not a spatial locale. == The theory of locales == We have seen that we have a functor Ω {\displaystyle \Omega } from the category of topological spaces and continuous maps to the category of locales. If we restrict this functor to the full subcategory of sober spaces, we obtain a full embedding of the category of sober spaces and continuous maps into the category of locales. In this sense, locales are generalizations of sober spaces. It is possible to translate most concepts of point-set topology into the context of locales, and prove analogous theorems. 
Some important facts of classical topology depending on choice principles become choice-free (that is, constructive, which is, in particular, appealing for computer science). Thus for instance, arbitrary products of compact locales are compact constructively (this is Tychonoff's theorem in point-set topology), or completions of uniform locales are constructive. This can be useful if one works in a topos that does not have the axiom of choice. Other advantages include the much better behaviour of paracompactness, with arbitrary products of paracompact locales being paracompact, which is not true for paracompact spaces, or the fact that subgroups of localic groups are always closed. Another point where topology and locale theory diverge strongly is the concepts of subspaces versus sublocales, and density: given any collection of dense sublocales of a locale X {\displaystyle X} , their intersection is also dense in X {\displaystyle X} . This leads to Isbell's density theorem: every locale has a smallest dense sublocale. These results have no equivalent in the realm of topological spaces. == See also == Heyting algebra. Frames turn out to be the same as complete Heyting algebras (even though frame homomorphisms need not be Heyting algebra homomorphisms.) Complete Boolean algebra. Any complete Boolean algebra is a frame (it is a spatial frame if and only if it is atomic). Details on the relationship between the category of topological spaces and the category of locales, including the explicit construction of the equivalence between sober spaces and spatial locales, can be found in the article on Stone duality. Whitehead's point-free geometry. Mereotopology. == References == === Bibliography === A general introduction to pointless topology is Johnstone, Peter T. (1983). "The point of pointless topology". Bulletin of the American Mathematical Society. New Series. 8 (1): 41–53. doi:10.1090/S0273-0979-1983-15080-2. ISSN 0273-0979. Retrieved 2016-05-09. 
This is, in its own words, to be read as a trailer for Johnstone's monograph, which can be used for basic reference: 1982: Johnstone, Peter T. (1982) Stone Spaces, Cambridge University Press, ISBN 978-0-521-33779-3. There is a recent monograph 2012: Picado, Jorge, Pultr, Aleš Frames and locales: Topology without points, Frontiers in Mathematics, vol. 28, Springer, Basel (extensive bibliography) For relations with logic: 1996: Vickers, Steven, Topology via Logic, Cambridge Tracts in Theoretical Computer Science, Cambridge University Press. For a more concise account see the respective chapters in: 2003: Pedicchio, Maria Cristina, Tholen, Walter (editors) Categorical Foundations - Special Topics in Order, Topology, Algebra and Sheaf Theory, Encyclopedia of Mathematics and its Applications, Vol. 97, Cambridge University Press, pp. 49–101. 2003: Hazewinkel, Michiel (editor) Handbook of Algebra Vol. 3, North-Holland, Amsterdam, pp. 791–857. 2014: Grätzer, George, Wehrung, Friedrich (editors) Lattice Theory: Special Topics and Applications Vol. 1, Springer, Basel, pp. 55–88.
Wikipedia/Pointless_topology
In general topology and related areas of mathematics, the initial topology (or induced topology or weak topology or limit topology or projective topology) on a set X , {\displaystyle X,} with respect to a family of functions on X , {\displaystyle X,} is the coarsest topology on X {\displaystyle X} that makes those functions continuous. The subspace topology and product topology constructions are both special cases of initial topologies. Indeed, the initial topology construction can be viewed as a generalization of these. The dual notion is the final topology, which for a given family of functions mapping to a set Y {\displaystyle Y} is the finest topology on Y {\displaystyle Y} that makes those functions continuous. == Definition == Given a set X {\displaystyle X} and an indexed family ( Y i ) i ∈ I {\displaystyle \left(Y_{i}\right)_{i\in I}} of topological spaces with functions f i : X → Y i , {\displaystyle f_{i}:X\to Y_{i},} the initial topology τ {\displaystyle \tau } on X {\displaystyle X} is the coarsest topology on X {\displaystyle X} such that each f i : ( X , τ ) → Y i {\displaystyle f_{i}:(X,\tau )\to Y_{i}} is continuous. Definition in terms of open sets If ( τ i ) i ∈ I {\displaystyle \left(\tau _{i}\right)_{i\in I}} is a family of topologies on X {\displaystyle X} indexed by I ≠ ∅ , {\displaystyle I\neq \varnothing ,} then the least upper bound topology of these topologies is the coarsest topology on X {\displaystyle X} that is finer than each τ i . {\displaystyle \tau _{i}.} This topology always exists and it is equal to the topology generated by ⋃ i ∈ I τ i . 
{\textstyle \bigcup _{i\in I}\tau _{i}.} If for every i ∈ I , {\displaystyle i\in I,} σ i {\displaystyle \sigma _{i}} denotes the topology on Y i , {\displaystyle Y_{i},} then f i − 1 ( σ i ) = { f i − 1 ( V ) : V ∈ σ i } {\displaystyle f_{i}^{-1}\left(\sigma _{i}\right)=\left\{f_{i}^{-1}(V):V\in \sigma _{i}\right\}} is a topology on X {\displaystyle X} , and the initial topology of the Y i {\displaystyle Y_{i}} by the mappings f i {\displaystyle f_{i}} is the least upper bound topology of the I {\displaystyle I} -indexed family of topologies f i − 1 ( σ i ) {\displaystyle f_{i}^{-1}\left(\sigma _{i}\right)} (for i ∈ I {\displaystyle i\in I} ). Explicitly, the initial topology is the collection of open sets generated by all sets of the form f i − 1 ( U ) , {\displaystyle f_{i}^{-1}(U),} where U {\displaystyle U} is an open set in Y i {\displaystyle Y_{i}} for some i ∈ I , {\displaystyle i\in I,} under finite intersections and arbitrary unions. Sets of the form f i − 1 ( V ) {\displaystyle f_{i}^{-1}(V)} are often called cylinder sets. If I {\displaystyle I} contains exactly one element, then all the open sets of the initial topology ( X , τ ) {\displaystyle (X,\tau )} are cylinder sets. == Examples == Several topological constructions can be regarded as special cases of the initial topology. The subspace topology is the initial topology on the subspace with respect to the inclusion map. The product topology is the initial topology with respect to the family of projection maps. The inverse limit of any inverse system of spaces and continuous maps is the set-theoretic inverse limit together with the initial topology determined by the canonical morphisms. The weak topology on a locally convex space is the initial topology with respect to the continuous linear forms of its dual space. 
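The open-set description above can be carried out literally on finite sets: start from the cylinder sets f_i^{-1}(U), close under finite intersections, then under unions. A toy sketch (names illustrative; it enumerates subsets, so it is only feasible for small examples):

```python
from itertools import combinations

def initial_topology(X, maps_and_topologies):
    """Coarsest topology on X making each given f : X → Y continuous,
    generated from the subbasis of cylinder sets f^{-1}(U)."""
    X = frozenset(X)
    subbasis = {frozenset(x for x in X if f[x] in U)
                for f, codomain_opens in maps_and_topologies
                for U in codomain_opens}
    # close under finite intersections (the empty intersection is X itself)
    basis = {X}
    for r in range(1, len(subbasis) + 1):
        for combo in combinations(subbasis, r):
            basis.add(frozenset.intersection(*combo))
    # close under unions (the empty union is the empty set)
    topology = {frozenset()}
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            topology.add(frozenset().union(*combo))
    return topology
```

Pulling back the Sierpiński opens along a single map already shows the pattern: every open of the result is a union of finite intersections of cylinder sets, matching the one-map remark about cylinder sets above.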
Given a family of topologies { τ i } {\displaystyle \left\{\tau _{i}\right\}} on a fixed set X {\displaystyle X} the initial topology on X {\displaystyle X} with respect to the functions id i : X → ( X , τ i ) {\displaystyle \operatorname {id} _{i}:X\to \left(X,\tau _{i}\right)} is the supremum (or join) of the topologies { τ i } {\displaystyle \left\{\tau _{i}\right\}} in the lattice of topologies on X . {\displaystyle X.} That is, the initial topology τ {\displaystyle \tau } is the topology generated by the union of the topologies { τ i } . {\displaystyle \left\{\tau _{i}\right\}.} A topological space is completely regular if and only if it has the initial topology with respect to its family of (bounded) real-valued continuous functions. Every topological space X {\displaystyle X} has the initial topology with respect to the family of continuous functions from X {\displaystyle X} to the Sierpiński space. == Properties == === Characteristic property === The initial topology on X {\displaystyle X} can be characterized by the following characteristic property: A function g {\displaystyle g} from some space Z {\displaystyle Z} to X {\displaystyle X} is continuous if and only if f i ∘ g {\displaystyle f_{i}\circ g} is continuous for each i ∈ I . {\displaystyle i\in I.} Note that, despite looking quite similar, this is not a universal property. A categorical description is given below. A filter B {\displaystyle {\mathcal {B}}} on X {\displaystyle X} converges to a point x ∈ X {\displaystyle x\in X} if and only if the prefilter f i ( B ) {\displaystyle f_{i}({\mathcal {B}})} converges to f i ( x ) {\displaystyle f_{i}(x)} for every i ∈ I . 
{\displaystyle i\in I.} === Evaluation === By the universal property of the product topology, we know that any family of continuous maps f i : X → Y i {\displaystyle f_{i}:X\to Y_{i}} determines a unique continuous map f : X → ∏ i Y i x ↦ ( f i ( x ) ) i ∈ I {\displaystyle {\begin{alignedat}{4}f:\;&&X&&\;\to \;&\prod _{i}Y_{i}\\[0.3ex]&&x&&\;\mapsto \;&\left(f_{i}(x)\right)_{i\in I}\\\end{alignedat}}} This map is known as the evaluation map. A family of maps { f i : X → Y i } {\displaystyle \{f_{i}:X\to Y_{i}\}} is said to separate points in X {\displaystyle X} if for all x ≠ y {\displaystyle x\neq y} in X {\displaystyle X} there exists some i {\displaystyle i} such that f i ( x ) ≠ f i ( y ) . {\displaystyle f_{i}(x)\neq f_{i}(y).} The family { f i } {\displaystyle \{f_{i}\}} separates points if and only if the associated evaluation map f {\displaystyle f} is injective. The evaluation map f {\displaystyle f} will be a topological embedding if and only if X {\displaystyle X} has the initial topology determined by the maps { f i } {\displaystyle \{f_{i}\}} and this family of maps separates points in X . {\displaystyle X.} === Hausdorffness === If X {\displaystyle X} has the initial topology induced by { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} and if every Y i {\displaystyle Y_{i}} is Hausdorff, then X {\displaystyle X} is a Hausdorff space if and only if these maps separate points on X . 
{\displaystyle X.} === Transitivity of the initial topology === If X {\displaystyle X} has the initial topology induced by the I {\displaystyle I} -indexed family of mappings { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} and if for every i ∈ I , {\displaystyle i\in I,} the topology on Y i {\displaystyle Y_{i}} is the initial topology induced by some J i {\displaystyle J_{i}} -indexed family of mappings { g j : Y i → Z j } {\displaystyle \left\{g_{j}:Y_{i}\to Z_{j}\right\}} (as j {\displaystyle j} ranges over J i {\displaystyle J_{i}} ), then the initial topology on X {\displaystyle X} induced by { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} is equal to the initial topology induced by the ⋃ i ∈ I J i {\displaystyle {\textstyle \bigcup \limits _{i\in I}J_{i}}} -indexed family of mappings { g j ∘ f i : X → Z j } {\displaystyle \left\{g_{j}\circ f_{i}:X\to Z_{j}\right\}} as i {\displaystyle i} ranges over I {\displaystyle I} and j {\displaystyle j} ranges over J i . {\displaystyle J_{i}.} Several important corollaries of this fact are now given. In particular, if S ⊆ X {\displaystyle S\subseteq X} then the subspace topology that S {\displaystyle S} inherits from X {\displaystyle X} is equal to the initial topology induced by the inclusion map S → X {\displaystyle S\to X} (defined by s ↦ s {\displaystyle s\mapsto s} ). Consequently, if X {\displaystyle X} has the initial topology induced by { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} then the subspace topology that S {\displaystyle S} inherits from X {\displaystyle X} is equal to the initial topology induced on S {\displaystyle S} by the restrictions { f i | S : S → Y i } {\displaystyle \left\{\left.f_{i}\right|_{S}:S\to Y_{i}\right\}} of the f i {\displaystyle f_{i}} to S . 
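For a single map the preimages of open sets already commute with unions and intersections, so the initial topology is just the set of those preimages and no generation step is needed; this makes the transitivity statement and its subspace corollary easy to verify on finite examples. A hedged sketch (the spaces and maps are our own illustrative choices):

```python
def initial_via(X, f, tau_cod):
    """Initial topology on X induced by one map f (a dict X -> Y): since
    preimages preserve unions and intersections, the preimages of the open
    sets of the codomain already form the topology."""
    return {frozenset(x for x in X if f[x] in V) for V in tau_cod}

# Z carries a chosen topology; Y gets the initial topology via g : Y -> Z;
# X gets the initial topology via f : X -> Y
Z = {0, 1}
tau_Z = {frozenset(), frozenset({0}), frozenset(Z)}
Y = {'p', 'q', 'r'}
g = {'p': 0, 'q': 0, 'r': 1}
tau_Y = initial_via(Y, g, tau_Z)
X = {1, 2, 3, 4}
f = {1: 'p', 2: 'q', 3: 'r', 4: 'r'}
tau_X = initial_via(X, f, tau_Y)
```

Transitivity then says tau_X coincides with the initial topology induced directly by the composite g ∘ f : X → Z, and the subspace corollary says that restricting f to a subset S reproduces the subspace topology on S.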
{\displaystyle S.} The product topology on ∏ i Y i {\displaystyle \prod _{i}Y_{i}} is equal to the initial topology induced by the canonical projections pr i : ( x k ) k ∈ I ↦ x i {\displaystyle \operatorname {pr} _{i}:\left(x_{k}\right)_{k\in I}\mapsto x_{i}} as i {\displaystyle i} ranges over I . {\displaystyle I.} Consequently, the initial topology on X {\displaystyle X} induced by { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} is equal to the inverse image of the product topology on ∏ i Y i {\displaystyle \prod _{i}Y_{i}} by the evaluation map f : X → ∏ i Y i . {\textstyle f:X\to \prod _{i}Y_{i}\,.} Furthermore, if the maps { f i } i ∈ I {\displaystyle \left\{f_{i}\right\}_{i\in I}} separate points on X {\displaystyle X} then the evaluation map is a homeomorphism onto the subspace f ( X ) {\displaystyle f(X)} of the product space ∏ i Y i . {\displaystyle \prod _{i}Y_{i}.} === Separating points from closed sets === If a space X {\displaystyle X} comes equipped with a topology, it is often useful to know whether or not the topology on X {\displaystyle X} is the initial topology induced by some family of maps on X . {\displaystyle X.} This section gives a sufficient (but not necessary) condition. A family of maps { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} separates points from closed sets in X {\displaystyle X} if for all closed sets A {\displaystyle A} in X {\displaystyle X} and all x ∉ A , {\displaystyle x\not \in A,} there exists some i {\displaystyle i} such that f i ( x ) ∉ cl ⁡ ( f i ( A ) ) {\displaystyle f_{i}(x)\notin \operatorname {cl} (f_{i}(A))} where cl {\displaystyle \operatorname {cl} } denotes the closure operator. Theorem. 
A family of continuous maps { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} separates points from closed sets if and only if the cylinder sets f i − 1 ( V ) , {\displaystyle f_{i}^{-1}(V),} for V {\displaystyle V} open in Y i , {\displaystyle Y_{i},} form a base for the topology on X . {\displaystyle X.} It follows that whenever { f i } {\displaystyle \left\{f_{i}\right\}} separates points from closed sets, the space X {\displaystyle X} has the initial topology induced by the maps { f i } . {\displaystyle \left\{f_{i}\right\}.} The converse fails, since generally the cylinder sets will only form a subbase (and not a base) for the initial topology. If the space X {\displaystyle X} is a T0 space, then any collection of maps { f i } {\displaystyle \left\{f_{i}\right\}} that separates points from closed sets in X {\displaystyle X} must also separate points. In this case, the evaluation map will be an embedding. === Initial uniform structure === If ( U i ) i ∈ I {\displaystyle \left({\mathcal {U}}_{i}\right)_{i\in I}} is a family of uniform structures on X {\displaystyle X} indexed by I ≠ ∅ , {\displaystyle I\neq \varnothing ,} then the least upper bound uniform structure of ( U i ) i ∈ I {\displaystyle \left({\mathcal {U}}_{i}\right)_{i\in I}} is the coarsest uniform structure on X {\displaystyle X} that is finer than each U i . {\displaystyle {\mathcal {U}}_{i}.} This uniform structure always exists and is equal to the filter on X × X {\displaystyle X\times X} generated by the filter subbase ⋃ i ∈ I U i . {\displaystyle {\textstyle \bigcup \limits _{i\in I}{\mathcal {U}}_{i}}.} If τ i {\displaystyle \tau _{i}} is the topology on X {\displaystyle X} induced by the uniform structure U i {\displaystyle {\mathcal {U}}_{i}} then the topology on X {\displaystyle X} associated with the least upper bound uniform structure is equal to the least upper bound topology of ( τ i ) i ∈ I . 
{\displaystyle \left(\tau _{i}\right)_{i\in I}.} Now suppose that { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} is a family of maps and for every i ∈ I , {\displaystyle i\in I,} let U i {\displaystyle {\mathcal {U}}_{i}} be a uniform structure on Y i . {\displaystyle Y_{i}.} Then the initial uniform structure on X {\displaystyle X} induced by the mappings f i {\displaystyle f_{i}} is the unique coarsest uniform structure U {\displaystyle {\mathcal {U}}} on X {\displaystyle X} making all f i : ( X , U ) → ( Y i , U i ) {\displaystyle f_{i}:\left(X,{\mathcal {U}}\right)\to \left(Y_{i},{\mathcal {U}}_{i}\right)} uniformly continuous. It is equal to the least upper bound uniform structure of the I {\displaystyle I} -indexed family of uniform structures f i − 1 ( U i ) {\displaystyle f_{i}^{-1}\left({\mathcal {U}}_{i}\right)} (for i ∈ I {\displaystyle i\in I} ). The topology on X {\displaystyle X} induced by U {\displaystyle {\mathcal {U}}} is the coarsest topology on X {\displaystyle X} such that every f i : X → Y i {\displaystyle f_{i}:X\to Y_{i}} is continuous. The initial uniform structure U {\displaystyle {\mathcal {U}}} is also equal to the coarsest uniform structure such that the identity mappings id : ( X , U ) → ( X , f i − 1 ( U i ) ) {\displaystyle \operatorname {id} :\left(X,{\mathcal {U}}\right)\to \left(X,f_{i}^{-1}\left({\mathcal {U}}_{i}\right)\right)} are uniformly continuous. Hausdorffness: The topology on X {\displaystyle X} induced by the initial uniform structure U {\displaystyle {\mathcal {U}}} is Hausdorff if and only if whenever x , y ∈ X {\displaystyle x,y\in X} are distinct ( x ≠ y {\displaystyle x\neq y} ) there exists some i ∈ I {\displaystyle i\in I} and some entourage V i ∈ U i {\displaystyle V_{i}\in {\mathcal {U}}_{i}} of Y i {\displaystyle Y_{i}} such that ( f i ( x ) , f i ( y ) ) ∉ V i . 
{\displaystyle \left(f_{i}(x),f_{i}(y)\right)\not \in V_{i}.} Furthermore, if for every index i ∈ I , {\displaystyle i\in I,} the topology on Y i {\displaystyle Y_{i}} induced by U i {\displaystyle {\mathcal {U}}_{i}} is Hausdorff then the topology on X {\displaystyle X} induced by the initial uniform structure U {\displaystyle {\mathcal {U}}} is Hausdorff if and only if the maps { f i : X → Y i } {\displaystyle \left\{f_{i}:X\to Y_{i}\right\}} separate points on X {\displaystyle X} (or equivalently, if and only if the evaluation map f : X → ∏ i Y i {\textstyle f:X\to \prod _{i}Y_{i}} is injective). Uniform continuity: If U {\displaystyle {\mathcal {U}}} is the initial uniform structure induced by the mappings { f i : X → Y i } , {\displaystyle \left\{f_{i}:X\to Y_{i}\right\},} then a function g {\displaystyle g} from some uniform space Z {\displaystyle Z} into ( X , U ) {\displaystyle (X,{\mathcal {U}})} is uniformly continuous if and only if f i ∘ g : Z → Y i {\displaystyle f_{i}\circ g:Z\to Y_{i}} is uniformly continuous for each i ∈ I . {\displaystyle i\in I.} Cauchy filter: A filter B {\displaystyle {\mathcal {B}}} on X {\displaystyle X} is a Cauchy filter on ( X , U ) {\displaystyle (X,{\mathcal {U}})} if and only if f i ( B ) {\displaystyle f_{i}\left({\mathcal {B}}\right)} is a Cauchy prefilter on Y i {\displaystyle Y_{i}} for every i ∈ I . {\displaystyle i\in I.} Transitivity of the initial uniform structure: If the word "topology" is replaced with "uniform structure" in the statement of "transitivity of the initial topology" given above, then the resulting statement will also be true. == Categorical description == In the language of category theory, the initial topology construction can be described as follows. Let Y {\displaystyle Y} be the functor from a discrete category J {\displaystyle J} to the category of topological spaces T o p {\displaystyle \mathrm {Top} } which maps j ↦ Y j {\displaystyle j\mapsto Y_{j}} . 
Let U {\displaystyle U} be the usual forgetful functor from T o p {\displaystyle \mathrm {Top} } to S e t {\displaystyle \mathrm {Set} } . The maps f j : X → Y j {\displaystyle f_{j}:X\to Y_{j}} can then be thought of as a cone from X {\displaystyle X} to U Y . {\displaystyle UY.} That is, ( X , f ) {\displaystyle (X,f)} is an object of C o n e ( U Y ) := ( Δ ↓ U Y ) {\displaystyle \mathrm {Cone} (UY):=(\Delta \downarrow {UY})} —the category of cones to U Y . {\displaystyle UY.} More precisely, this cone ( X , f ) {\displaystyle (X,f)} defines a U {\displaystyle U} -structured cosink in S e t . {\displaystyle \mathrm {Set} .} The forgetful functor U : T o p → S e t {\displaystyle U:\mathrm {Top} \to \mathrm {Set} } induces a functor U ¯ : C o n e ( Y ) → C o n e ( U Y ) {\displaystyle {\bar {U}}:\mathrm {Cone} (Y)\to \mathrm {Cone} (UY)} . The characteristic property of the initial topology is equivalent to the statement that there exists a universal morphism from U ¯ {\displaystyle {\bar {U}}} to ( X , f ) ; {\displaystyle (X,f);} that is, a terminal object in the category ( U ¯ ↓ ( X , f ) ) . 
{\displaystyle \left({\bar {U}}\downarrow (X,f)\right).} Explicitly, this consists of an object I ( X , f ) {\displaystyle I(X,f)} in C o n e ( Y ) {\displaystyle \mathrm {Cone} (Y)} together with a morphism ε : U ¯ I ( X , f ) → ( X , f ) {\displaystyle \varepsilon :{\bar {U}}I(X,f)\to (X,f)} such that for any object ( Z , g ) {\displaystyle (Z,g)} in C o n e ( Y ) {\displaystyle \mathrm {Cone} (Y)} and morphism φ : U ¯ ( Z , g ) → ( X , f ) {\displaystyle \varphi :{\bar {U}}(Z,g)\to (X,f)} there exists a unique morphism ζ : ( Z , g ) → I ( X , f ) {\displaystyle \zeta :(Z,g)\to I(X,f)} such that the following diagram commutes: The assignment ( X , f ) ↦ I ( X , f ) {\displaystyle (X,f)\mapsto I(X,f)} placing the initial topology on X {\displaystyle X} extends to a functor I : C o n e ( U Y ) → C o n e ( Y ) {\displaystyle I:\mathrm {Cone} (UY)\to \mathrm {Cone} (Y)} which is right adjoint to the forgetful functor U ¯ . {\displaystyle {\bar {U}}.} In fact, I {\displaystyle I} is a right inverse to U ¯ {\displaystyle {\bar {U}}} , since U ¯ I {\displaystyle {\bar {U}}I} is the identity functor on C o n e ( U Y ) . {\displaystyle \mathrm {Cone} (UY).} == See also == Final topology – Finest topology making some functions continuous Product topology – Topology on Cartesian products of topological spaces Quotient space (topology) – Topological space construction Subspace topology – Inherited topology == References == == Bibliography == Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129. Bourbaki, Nicolas (1989) [1967]. General Topology 2: Chapters 5–10 [Topologie Générale]. Éléments de mathématique. Vol. 4. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64563-4. OCLC 246032063. Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485. 
Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240. Willard, Stephen (1970). General Topology. Reading, Massachusetts: Addison-Wesley. ISBN 0-486-43479-6. == External links == Initial topology at PlanetMath. Product topology and subspace topology at PlanetMath.
Wikipedia/Initial_topology
In topology and related areas of mathematics, the quotient space of a topological space under a given equivalence relation is a new topological space constructed by endowing the quotient set of the original topological space with the quotient topology, that is, with the finest topology that makes continuous the canonical projection map (the function that maps points to their equivalence classes). In other words, a subset of a quotient space is open if and only if its preimage under the canonical projection map is open in the original topological space. Intuitively speaking, the points of each equivalence class are identified or "glued together" to form a new topological space. For example, identifying the points of a sphere that belong to the same diameter produces the projective plane as a quotient space. == Definition == Let X {\displaystyle X} be a topological space, and let ∼ {\displaystyle \sim } be an equivalence relation on X . {\displaystyle X.} The quotient set Y = X / ∼ {\displaystyle Y=X/{\sim }} is the set of equivalence classes of elements of X . {\displaystyle X.} The equivalence class of x ∈ X {\displaystyle x\in X} is denoted [ x ] . {\displaystyle [x].} The construction of Y {\displaystyle Y} defines a canonical surjection q : X ∋ x ↦ [ x ] ∈ Y . {\textstyle q:X\ni x\mapsto [x]\in Y.} As discussed below, q {\displaystyle q} is a quotient mapping, commonly called the canonical quotient map, or canonical projection map, associated to X / ∼ . {\displaystyle X/{\sim }.} The quotient space under ∼ {\displaystyle \sim } is the set Y {\displaystyle Y} equipped with the quotient topology, whose open sets are those subsets U ⊆ Y {\textstyle U\subseteq Y} whose preimage q − 1 ( U ) {\displaystyle q^{-1}(U)} is open. In other words, U {\displaystyle U} is open in the quotient topology on X / ∼ {\displaystyle X/{\sim }} if and only if { x ∈ X : [ x ] ∈ U } {\textstyle \{x\in X:[x]\in U\}} is open in X . 
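On a finite space the quotient topology can be computed straight from this definition: a family of equivalence classes is open exactly when the union of those classes is open upstairs. A small illustrative sketch (the space, topology, and partition are invented for the example):

```python
from itertools import combinations

def quotient_topology(tau_X, partition):
    """Quotient topology on the set of classes of `partition`: a family U
    of classes is open iff its preimage under the canonical projection,
    i.e. the union of the classes in U, is open in X."""
    classes = [frozenset(c) for c in partition]
    tau_Q = set()
    for r in range(len(classes) + 1):
        for U in combinations(classes, r):
            preimage = frozenset().union(*U)
            if preimage in tau_X:
                tau_Q.add(frozenset(U))
    return tau_Q

# glue 0 and 1 together in the chain topology on X = {0, 1, 2, 3}
tau_X = {frozenset(s) for s in [(), (0,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]}
A, B, C = frozenset({0, 1}), frozenset({2}), frozenset({3})
tau_Q = quotient_topology(tau_X, [A, B, C])
```

Note that the open set {0} upstairs is not a union of classes, so it contributes no open set to the quotient; the quotient topology is again a chain, on the three classes.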
{\displaystyle X.} Similarly, a subset S ⊆ Y {\displaystyle S\subseteq Y} is closed if and only if { x ∈ X : [ x ] ∈ S } {\displaystyle \{x\in X:[x]\in S\}} is closed in X . {\displaystyle X.} The quotient topology is the final topology on the quotient set, with respect to the map x ↦ [ x ] . {\displaystyle x\mapsto [x].} == Quotient map == A map f : X → Y {\displaystyle f:X\to Y} is a quotient map (sometimes called an identification map) if it is surjective and Y {\displaystyle Y} is equipped with the final topology induced by f . {\displaystyle f.} The latter condition admits two more-elementary formulations: a subset V ⊆ Y {\displaystyle V\subseteq Y} is open (closed) if and only if f − 1 ( V ) {\displaystyle f^{-1}(V)} is open (resp. closed). Every quotient map is continuous but not every continuous map is a quotient map. Saturated sets A subset S {\displaystyle S} of X {\displaystyle X} is called saturated (with respect to f {\displaystyle f} ) if it is of the form S = f − 1 ( T ) {\displaystyle S=f^{-1}(T)} for some set T , {\displaystyle T,} which is true if and only if f − 1 ( f ( S ) ) = S . {\displaystyle f^{-1}(f(S))=S.} The assignment T ↦ f − 1 ( T ) {\displaystyle T\mapsto f^{-1}(T)} establishes a one-to-one correspondence (whose inverse is S ↦ f ( S ) {\displaystyle S\mapsto f(S)} ) between subsets T {\displaystyle T} of Y = f ( X ) {\displaystyle Y=f(X)} and saturated subsets of X . {\displaystyle X.} With this terminology, a surjection f : X → Y {\displaystyle f:X\to Y} is a quotient map if and only if for every saturated subset S {\displaystyle S} of X , {\displaystyle X,} S {\displaystyle S} is open in X {\displaystyle X} if and only if f ( S ) {\displaystyle f(S)} is open in Y . 
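The saturated-set criterion can be compared with the direct definition by brute force over all subsets of a finite example. The following sketch (function names and the concrete spaces are our own) checks both characterizations against each other, including a codomain topology that is too coarse to make the map a quotient map:

```python
from itertools import combinations

def powerset(s):
    s = sorted(s, key=str)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_quotient(f, tau_X, tau_Y):
    """Direct definition: V is open in Y iff f^{-1}(V) is open in X,
    checked over every subset V of the codomain."""
    return all((V in tau_Y) ==
               (frozenset(x for x in f if f[x] in V) in tau_X)
               for V in powerset(set(f.values())))

def quotient_via_saturated(f, tau_X, tau_Y):
    """Equivalent criterion: for every saturated S (i.e. f^{-1}(f(S)) = S),
    S is open in X iff f(S) is open in Y."""
    X = set(f)
    for S in powerset(X):
        image = frozenset(f[x] for x in S)
        if frozenset(x for x in X if f[x] in image) == S:      # S is saturated
            if (S in tau_X) != (image in tau_Y):
                return False
    return True

# collapse {0, 1} of a chain topology to the single point 'a'
f = {0: 'a', 1: 'a', 2: 'b', 3: 'c'}
tau_X = {frozenset(s) for s in [(), (0,), (0, 1), (0, 1, 2), (0, 1, 2, 3)]}
tau_Y = {frozenset(s) for s in [(), ('a',), ('a', 'b'), ('a', 'b', 'c')]}
```

With tau_Y the quotient topology both tests succeed, while the indiscrete topology on the codomain fails both, since the saturated open set {0, 1} maps to {'a'}, which is not open there.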
{\displaystyle Y.} In particular, open subsets of X {\displaystyle X} that are not saturated have no impact on whether the function f {\displaystyle f} is a quotient map (or, indeed, continuous: a function f : X → Y {\displaystyle f:X\to Y} is continuous if and only if, for every saturated S ⊆ X {\textstyle S\subseteq X} such that f ( S ) {\displaystyle f(S)} is open in f ( X ) {\textstyle f(X)} , the set S {\displaystyle S} is open in X {\textstyle X} ). Indeed, if τ {\displaystyle \tau } is a topology on X {\displaystyle X} and f : X → Y {\displaystyle f:X\to Y} is any map, then the set τ f {\displaystyle \tau _{f}} of all U ∈ τ {\displaystyle U\in \tau } that are saturated subsets of X {\displaystyle X} forms a topology on X . {\displaystyle X.} If Y {\displaystyle Y} is also a topological space then f : ( X , τ ) → Y {\displaystyle f:(X,\tau )\to Y} is a quotient map (respectively, continuous) if and only if the same is true of f : ( X , τ f ) → Y . {\displaystyle f:\left(X,\tau _{f}\right)\to Y.} Quotient space of fibers characterization Given an equivalence relation ∼ {\displaystyle \,\sim \,} on X , {\displaystyle X,} denote the equivalence class of a point x ∈ X {\displaystyle x\in X} by [ x ] := { z ∈ X : z ∼ x } {\displaystyle [x]:=\{z\in X:z\sim x\}} and let X / ∼ := { [ x ] : x ∈ X } {\displaystyle X/{\sim }:=\{[x]:x\in X\}} denote the set of equivalence classes. The map q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} that sends points to their equivalence classes (that is, it is defined by q ( x ) := [ x ] {\displaystyle q(x):=[x]} for every x ∈ X {\displaystyle x\in X} ) is called the canonical map. It is a surjective map and for all a , b ∈ X , {\displaystyle a,b\in X,} a ∼ b {\displaystyle a\,\sim \,b} if and only if q ( a ) = q ( b ) ; {\displaystyle q(a)=q(b);} consequently, q ( x ) = q − 1 ( q ( x ) ) {\displaystyle q(x)=q^{-1}(q(x))} for all x ∈ X . 
{\displaystyle x\in X.} In particular, this shows that the set of equivalence classes X / ∼ {\displaystyle X/{\sim }} is exactly the set of fibers of the canonical map q . {\displaystyle q.} If X {\displaystyle X} is a topological space then giving X / ∼ {\displaystyle X/{\sim }} the quotient topology induced by q {\displaystyle q} will make it into a quotient space and make q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} into a quotient map. Up to a homeomorphism, this construction is representative of all quotient spaces; the precise meaning of this is now explained. Let f : X → Y {\displaystyle f:X\to Y} be a surjection between topological spaces (not yet assumed to be continuous or a quotient map) and declare for all a , b ∈ X {\displaystyle a,b\in X} that a ∼ b {\displaystyle a\,\sim \,b} if and only if f ( a ) = f ( b ) . {\displaystyle f(a)=f(b).} Then ∼ {\displaystyle \,\sim \,} is an equivalence relation on X {\displaystyle X} such that for every x ∈ X , {\displaystyle x\in X,} [ x ] = f − 1 ( f ( x ) ) , {\displaystyle [x]=f^{-1}(f(x)),} which implies that f ( [ x ] ) {\displaystyle f([x])} (defined by f ( [ x ] ) = { f ( z ) : z ∈ [ x ] } {\displaystyle f([x])=\{\,f(z)\,:z\in [x]\}} ) is a singleton set; denote the unique element in f ( [ x ] ) {\displaystyle f([x])} by f ^ ( [ x ] ) {\displaystyle {\hat {f}}([x])} (so by definition, f ( [ x ] ) = { f ^ ( [ x ] ) } {\displaystyle f([x])=\{\,{\hat {f}}([x])\,\}} ). The assignment [ x ] ↦ f ^ ( [ x ] ) {\displaystyle [x]\mapsto {\hat {f}}([x])} defines a bijection f ^ : X / ∼ → Y {\displaystyle {\hat {f}}:X/{\sim }\;\to \;Y} between the fibers of f {\displaystyle f} and points in Y . {\displaystyle Y.} Define the map q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} as above (by q ( x ) := [ x ] {\displaystyle q(x):=[x]} ) and give X / ∼ {\displaystyle X/{\sim }} the quotient topology induced by q {\displaystyle q} (which makes q {\displaystyle q} a quotient map). 
These maps are related by: f = f ^ ∘ q and q = f ^ − 1 ∘ f . {\displaystyle f={\hat {f}}\circ q\quad {\text{ and }}\quad q={\hat {f}}^{-1}\circ f.} From this and the fact that q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} is a quotient map, it follows that f : X → Y {\displaystyle f:X\to Y} is continuous if and only if this is true of f ^ : X / ∼ → Y . {\displaystyle {\hat {f}}:X/{\sim }\;\to \;Y.} Furthermore, f : X → Y {\displaystyle f:X\to Y} is a quotient map if and only if f ^ : X / ∼ → Y {\displaystyle {\hat {f}}:X/{\sim }\;\to \;Y} is a homeomorphism (or equivalently, if and only if both f ^ {\displaystyle {\hat {f}}} and its inverse are continuous). === Related definitions === A hereditarily quotient map is a surjective map f : X → Y {\displaystyle f:X\to Y} with the property that for every subset T ⊆ Y , {\displaystyle T\subseteq Y,} the restriction f | f − 1 ( T ) : f − 1 ( T ) → T {\displaystyle f{\big \vert }_{f^{-1}(T)}~:~f^{-1}(T)\to T} is also a quotient map. There exist quotient maps that are not hereditarily quotient. == Examples == Gluing. Topologists talk of gluing points together. If X {\displaystyle X} is a topological space, gluing the points x {\displaystyle x} and y {\displaystyle y} in X {\displaystyle X} means considering the quotient space obtained from the equivalence relation a ∼ b {\displaystyle a\sim b} if and only if a = b {\displaystyle a=b} or a = x , b = y {\displaystyle a=x,b=y} (or a = y , b = x {\displaystyle a=y,b=x} ). Consider the unit square I 2 = [ 0 , 1 ] × [ 0 , 1 ] {\displaystyle I^{2}=[0,1]\times [0,1]} and the equivalence relation ~ generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then I 2 / ∼ {\displaystyle I^{2}/\sim } is homeomorphic to the sphere S 2 . {\displaystyle S^{2}.} Adjunction space. More generally, suppose X {\displaystyle X} is a space and A {\displaystyle A} is a subspace of X . 
{\displaystyle X.} One can identify all points in A {\displaystyle A} to a single equivalence class and leave points outside of A {\displaystyle A} equivalent only to themselves. The resulting quotient space is denoted X / A . {\displaystyle X/A.} The 2-sphere is then homeomorphic to a closed disc with its boundary identified to a single point: D 2 / ∂ D 2 . {\displaystyle D^{2}/\partial {D^{2}}.} Consider the set R {\displaystyle \mathbb {R} } of real numbers with the ordinary topology, and write x ∼ y {\displaystyle x\sim y} if and only if x − y {\displaystyle x-y} is an integer. Then the quotient space R / ∼ {\displaystyle \mathbb {R} /{\sim }} is homeomorphic to the unit circle S 1 {\displaystyle S^{1}} via the homeomorphism which sends the equivalence class of x {\displaystyle x} to exp ⁡ ( 2 π i x ) . {\displaystyle \exp(2\pi ix).} A generalization of the previous example is the following: Suppose a topological group G {\displaystyle G} acts continuously on a space X . {\displaystyle X.} One can form an equivalence relation on X {\displaystyle X} by saying points are equivalent if and only if they lie in the same orbit. The quotient space under this relation is called the orbit space, denoted X / G . {\displaystyle X/G.} In the previous example G = Z {\displaystyle G=\mathbb {Z} } acts on R {\displaystyle \mathbb {R} } by translation. The orbit space R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } is homeomorphic to S 1 . {\displaystyle S^{1}.} Note: The notation R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } is somewhat ambiguous. If Z {\displaystyle \mathbb {Z} } is understood to be a group acting on R {\displaystyle \mathbb {R} } via addition, then the quotient is the circle. 
However, if Z {\displaystyle \mathbb {Z} } is thought of as a topological subspace of R {\displaystyle \mathbb {R} } (that is identified as a single point) then the quotient { Z } ∪ { { r } : r ∈ R ∖ Z } {\displaystyle \{\mathbb {Z} \}\cup \{\,\{r\}:r\in \mathbb {R} \setminus \mathbb {Z} \}} (which is identifiable with the set { Z } ∪ ( R ∖ Z ) {\displaystyle \{\mathbb {Z} \}\cup (\mathbb {R} \setminus \mathbb {Z} )} ) is a countably infinite bouquet of circles joined at a single point Z . {\displaystyle \mathbb {Z} .} This next example shows that it is in general not true that if q : X → Y {\displaystyle q:X\to Y} is a quotient map then every convergent sequence (respectively, every convergent net) in Y {\displaystyle Y} has a lift (by q {\displaystyle q} ) to a convergent sequence (or convergent net) in X . {\displaystyle X.} Let X = [ 0 , 1 ] {\displaystyle X=[0,1]} and ∼ = { { 0 , 1 } } ∪ { { x } : x ∈ ( 0 , 1 ) } . {\displaystyle \,\sim ~=~\{\,\{0,1\}\,\}~\cup ~\left\{\{x\}:x\in (0,1)\,\right\}.} Let Y := X / ∼ {\displaystyle Y:=X/{\sim }} and let q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} be the quotient map q ( x ) := [ x ] , {\displaystyle q(x):=[x],} so that q ( 0 ) = q ( 1 ) = { 0 , 1 } {\displaystyle q(0)=q(1)=\{0,1\}} and q ( x ) = { x } {\displaystyle q(x)=\{x\}} for every x ∈ ( 0 , 1 ) . {\displaystyle x\in (0,1).} The map h : X / ∼ → S 1 ⊆ C {\displaystyle h:X/{\sim }\to S^{1}\subseteq \mathbb {C} } defined by h ( [ x ] ) := e 2 π i x {\displaystyle h([x]):=e^{2\pi ix}} is well-defined (because e 2 π i ( 0 ) = 1 = e 2 π i ( 1 ) {\displaystyle e^{2\pi i(0)}=1=e^{2\pi i(1)}} ) and a homeomorphism. 
Let I = N {\displaystyle I=\mathbb {N} } and let a ∙ := ( a i ) i ∈ I and b ∙ := ( b i ) i ∈ I {\displaystyle a_{\bullet }:=\left(a_{i}\right)_{i\in I}{\text{ and }}b_{\bullet }:=\left(b_{i}\right)_{i\in I}} be any sequences (or more generally, any nets) valued in ( 0 , 1 ) {\displaystyle (0,1)} such that a ∙ → 0 and b ∙ → 1 {\displaystyle a_{\bullet }\to 0{\text{ and }}b_{\bullet }\to 1} in X = [ 0 , 1 ] . {\displaystyle X=[0,1].} Then the sequence y 1 := q ( a 1 ) , y 2 := q ( b 1 ) , y 3 := q ( a 2 ) , y 4 := q ( b 2 ) , … {\displaystyle y_{1}:=q\left(a_{1}\right),y_{2}:=q\left(b_{1}\right),y_{3}:=q\left(a_{2}\right),y_{4}:=q\left(b_{2}\right),\ldots } converges to [ 0 ] = [ 1 ] {\displaystyle [0]=[1]} in X / ∼ {\displaystyle X/{\sim }} but there does not exist any convergent lift of this sequence by the quotient map q {\displaystyle q} (that is, there is no sequence s ∙ = ( s i ) i ∈ I {\displaystyle s_{\bullet }=\left(s_{i}\right)_{i\in I}} in X {\displaystyle X} that both converges to some x ∈ X {\displaystyle x\in X} and satisfies y i = q ( s i ) {\displaystyle y_{i}=q\left(s_{i}\right)} for every i ∈ I {\displaystyle i\in I} ). This counterexample can be generalized to nets by taking a ∙ {\displaystyle a_{\bullet }} and b ∙ {\displaystyle b_{\bullet }} to be nets indexed by any directed set ( A , ≤ ) , {\displaystyle (A,\leq ),} and making I := A × { 1 , 2 } {\displaystyle I:=A\times \{1,2\}} into a directed set by declaring that for any ( a , m ) , ( b , n ) ∈ I , {\displaystyle (a,m),(b,n)\in I,} ( a , m ) ≤ ( b , n ) {\displaystyle (a,m)\;\leq \;(b,n)} holds if and only if both (1) a ≤ b , {\displaystyle a\leq b,} and (2) if a = b then m ≤ n ; {\displaystyle a=b{\text{ then }}m\leq n;} then the I {\displaystyle I} -indexed net defined by letting y ( a , m ) {\displaystyle y_{(a,m)}} equal a a if m = 1 {\displaystyle a_{a}{\text{ if }}m=1} and equal to b a if m = 2 {\displaystyle b_{a}{\text{ if }}m=2} has no lift (by q {\displaystyle q} ) to a convergent I {\displaystyle I} -indexed net in X = [ 0 , 1 ] . 
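The failure of lifting can be illustrated numerically with a concrete choice of witnesses (the particular sequences a_i = 1/(i+2) and b_i = 1 - 1/(i+2) are our own choice, not part of the statement):

```python
import cmath

# concrete sequences in (0, 1): a_i -> 0 and b_i -> 1
a = [1 / (i + 2) for i in range(1, 201)]
b = [1 - 1 / (i + 2) for i in range(1, 201)]

# interleave: y_1 = q(a_1), y_2 = q(b_1), ...  Under the homeomorphism
# h([x]) = exp(2*pi*i*x) identifying X/~ with the circle, the whole
# interleaved image sequence approaches h([0]) = h([1]) = 1
images = [cmath.exp(2j * cmath.pi * t) for pair in zip(a, b) for t in pair]

# q is injective on (0, 1), so the only candidate lift of the sequence is
# the interleaving itself, which oscillates between the two endpoints of
# [0, 1] and therefore cannot converge
lift = [t for pair in zip(a, b) for t in pair]
```

The tail of `images` clusters near 1 on the circle, while consecutive terms of the forced lift remain almost a full unit apart, so the lift is not even Cauchy.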
{\displaystyle X=[0,1].} == Properties == Quotient maps q : X → Y {\displaystyle q:X\to Y} are characterized among surjective maps by the following property: if Z {\displaystyle Z} is any topological space and f : Y → Z {\displaystyle f:Y\to Z} is any function, then f {\displaystyle f} is continuous if and only if f ∘ q {\displaystyle f\circ q} is continuous. The quotient space X / ∼ {\displaystyle X/{\sim }} together with the quotient map q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} is characterized by the following universal property: if g : X → Z {\displaystyle g:X\to Z} is a continuous map such that a ∼ b {\displaystyle a\sim b} implies g ( a ) = g ( b ) {\displaystyle g(a)=g(b)} for all a , b ∈ X , {\displaystyle a,b\in X,} then there exists a unique continuous map f : X / ∼ → Z {\displaystyle f:X/{\sim }\to Z} such that g = f ∘ q . {\displaystyle g=f\circ q.} In other words, the following diagram commutes: To express this, one says that g {\displaystyle g} descends to the quotient, that is, that it factors through the quotient space. The continuous maps defined on X / ∼ {\displaystyle X/{\sim }} are, therefore, precisely those maps which arise from continuous maps defined on X {\displaystyle X} that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is copiously used when studying quotient spaces. Given a continuous surjection q : X → Y {\displaystyle q:X\to Y} it is useful to have criteria by which one can determine if q {\displaystyle q} is a quotient map. Two sufficient criteria are that q {\displaystyle q} be open or closed. Note that these conditions are only sufficient, not necessary. It is easy to construct examples of quotient maps that are neither open nor closed. For topological groups, the quotient map is open. == Compatibility with other topological notions == Separation In general, quotient spaces are ill-behaved with respect to separation axioms. 
The separation properties of X {\displaystyle X} need not be inherited by X / ∼ {\displaystyle X/{\sim }} and X / ∼ {\displaystyle X/{\sim }} may have separation properties not shared by X . {\displaystyle X.} X / ∼ {\displaystyle X/{\sim }} is a T1 space if and only if every equivalence class of ∼ {\displaystyle \,\sim \,} is closed in X . {\displaystyle X.} If the quotient map is open, then X / ∼ {\displaystyle X/{\sim }} is a Hausdorff space if and only if ~ is a closed subset of the product space X × X . {\displaystyle X\times X.} Connectedness If a space is connected or path connected, then so are all its quotient spaces. A quotient space of a simply connected or contractible space need not share those properties. Compactness If a space is compact, then so are all its quotient spaces. A quotient space of a locally compact space need not be locally compact. Dimension The topological dimension of a quotient space can be more (as well as less) than the dimension of the original space; space-filling curves provide such examples. == See also == Topology Covering space – Type of continuous map in topology Disjoint union (topology) – Mathematical term Final topology – Finest topology making some functions continuous Mapping cone (topology) – Topological construction on a map between spaces Product space – Topology on Cartesian products of topological spacesPages displaying short descriptions of redirect targets Subspace (topology) – Inherited topologyPages displaying short descriptions of redirect targets Topological space – Mathematical space with a notion of closeness Algebra Quotient category Quotient group – Group obtained by aggregating similar elements of a larger group Quotient space (linear algebra) – Vector space consisting of affine subsets Mapping cone (homological algebra) – Tool in homological algebra == Notes == == References == Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. 
Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129. Bourbaki, Nicolas (1989) [1967]. General Topology 2: Chapters 5–10 [Topologie Générale]. Éléments de mathématique. Vol. 4. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64563-4. OCLC 246032063. Brown, Ronald (2006), Topology and Groupoids, Booksurge, ISBN 1-4196-2722-8 Dixmier, Jacques (1984). General Topology. Undergraduate Texts in Mathematics. Translated by Berberian, S. K. New York: Springer-Verlag. ISBN 978-0-387-90972-1. OCLC 10277303. Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485. Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. (accessible to patrons with print disabilities) Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240. Willard, Stephen (1970). General Topology. Reading, MA: Addison-Wesley. ISBN 0-486-43479-6.
Wikipedia/Quotient_space_(topology)
In mathematics, a cofinite subset of a set X {\displaystyle X} is a subset A {\displaystyle A} whose complement in X {\displaystyle X} is a finite set. In other words, A {\displaystyle A} contains all but finitely many elements of X . {\displaystyle X.} If the complement is not finite, but is countable, then one says the set is cocountable. These arise naturally when generalizing structures on finite sets to infinite sets, particularly on infinite products, as in the product topology or direct sum. This use of the prefix "co" to describe a property possessed by a set's complement is consistent with its use in other terms such as "comeagre set". == Boolean algebras == The set of all subsets of X {\displaystyle X} that are either finite or cofinite forms a Boolean algebra, which means that it is closed under the operations of union, intersection, and complementation. This Boolean algebra is the finite–cofinite algebra on X . {\displaystyle X.} In the other direction, a Boolean algebra A {\displaystyle A} has a unique non-principal ultrafilter (that is, a maximal filter not generated by a single element of the algebra) if and only if there exists an infinite set X {\displaystyle X} such that A {\displaystyle A} is isomorphic to the finite–cofinite algebra on X . {\displaystyle X.} In this case, the non-principal ultrafilter is the set of all cofinite subsets of X {\displaystyle X} . == Cofinite topology == The cofinite topology or the finite complement topology is a topology that can be defined on every set X . {\displaystyle X.} It has precisely the empty set and all cofinite subsets of X {\displaystyle X} as open sets. As a consequence, in the cofinite topology, the only closed subsets are finite sets, or the whole of X . {\displaystyle X.} For this reason, the cofinite topology is also known as the finite-closed topology. Symbolically, one writes the topology as T = { A ⊆ X : A = ∅ or X ∖ A is finite } . 
{\displaystyle {\mathcal {T}}=\{A\subseteq X:A=\varnothing {\mbox{ or }}X\setminus A{\mbox{ is finite}}\}.} This topology occurs naturally in the context of the Zariski topology. Since polynomials in one variable over a field K {\displaystyle K} are zero on finite sets, or the whole of K , {\displaystyle K,} the Zariski topology on K {\displaystyle K} (considered as affine line) is the cofinite topology. The same is true for any irreducible algebraic curve; it is not true, for example, for X Y = 0 {\displaystyle XY=0} in the plane. === Properties === Subspaces: Every subspace topology of the cofinite topology is also a cofinite topology. Compactness: Since every open set contains all but finitely many points of X , {\displaystyle X,} the space X {\displaystyle X} is compact and sequentially compact. Separation: The cofinite topology is the coarsest topology satisfying the T1 axiom; that is, it is the smallest topology for which every singleton set is closed. In fact, an arbitrary topology on X {\displaystyle X} satisfies the T1 axiom if and only if it contains the cofinite topology. If X {\displaystyle X} is finite then the cofinite topology is simply the discrete topology. If X {\displaystyle X} is not finite then this topology is not Hausdorff (T2), regular or normal because no two nonempty open sets are disjoint (that is, it is hyperconnected). === Double-pointed cofinite topology === The double-pointed cofinite topology is the cofinite topology with every point doubled; that is, it is the topological product of the cofinite topology with the indiscrete topology on a two-element set. It is not T0 or T1, since the points of each doublet are topologically indistinguishable. It is, however, R0 since topologically distinguishable points are separated. The space is compact as the product of two compact spaces; alternatively, it is compact because each nonempty open set contains all but finitely many points. 
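As a concrete sketch of the finite–cofinite algebra described above (the encoding and function names are my own, chosen for illustration), each subset of an infinite set X can be stored as whichever of the set or its complement is finite. Union, intersection, and complementation then never leave this representation, which is exactly the closure property making the family a Boolean algebra:

```python
# Sketch: the finite-cofinite algebra on an infinite set X.  A subset is
# encoded as ("finite", S) for a finite set S, or ("cofinite", C) meaning
# X \ C for a finite complement C.  Encoding is illustrative, not standard.

def complement(a):
    kind, s = a
    return ("cofinite" if kind == "finite" else "finite", s)

def union(a, b):
    (ka, sa), (kb, sb) = a, b
    if ka == "finite" and kb == "finite":
        return ("finite", sa | sb)          # finite union finite is finite
    if ka == "cofinite" and kb == "cofinite":
        return ("cofinite", sa & sb)        # complements intersect
    fin, cof = (sa, sb) if ka == "finite" else (sb, sa)
    return ("cofinite", cof - fin)          # cofinite set absorbs the finite one

def intersection(a, b):
    # De Morgan: A intersect B = complement(complement(A) union complement(B))
    return complement(union(complement(a), complement(b)))

a = ("finite", {1, 2, 3})
b = ("cofinite", {3, 4})           # all of X except 3 and 4
print(union(a, b))                 # ('cofinite', {4}): everything except 4
print(intersection(a, b))          # ('finite', {1, 2}): 3 is excluded by b
```

Defining intersection through De Morgan's law keeps the case analysis in one place; every result is again of one of the two kinds, so the algebra is closed under all three operations.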
For an example of the countable double-pointed cofinite topology, the set Z {\displaystyle \mathbb {Z} } of integers can be given a topology such that every even number 2 n {\displaystyle 2n} is topologically indistinguishable from the following odd number 2 n + 1 {\displaystyle 2n+1} . The closed sets are the unions of finitely many pairs 2 n , 2 n + 1 , {\displaystyle 2n,2n+1,} or the whole set. The open sets are the complements of the closed sets; namely, each open set consists of all but a finite number of pairs 2 n , 2 n + 1 , {\displaystyle 2n,2n+1,} or is the empty set. == Other examples == === Product topology === The product topology on a product of topological spaces ∏ X i {\displaystyle \prod X_{i}} has basis ∏ U i {\displaystyle \prod U_{i}} where U i ⊆ X i {\displaystyle U_{i}\subseteq X_{i}} is open, and cofinitely many U i = X i . {\displaystyle U_{i}=X_{i}.} The analog without requiring that cofinitely many factors are the whole space is the box topology. === Direct sum === The elements of the direct sum of modules ⨁ M i {\displaystyle \bigoplus M_{i}} are sequences α i ∈ M i {\displaystyle \alpha _{i}\in M_{i}} where cofinitely many α i = 0. {\displaystyle \alpha _{i}=0.} The analog without requiring that cofinitely many summands are zero is the direct product. == See also == Fréchet filter – frechet filterPages displaying wikidata descriptions as a fallback List of topologies – List of concrete topologies and topological spaces == References == Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446 (See example 18)
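The cofiniteness condition in the direct sum described above can likewise be sketched in code (the dict encoding is my own choice): an element of the direct sum is stored as a finitely-supported mapping from indices to nonzero components, so the requirement that cofinitely many components vanish is enforced by simply never storing zeros:

```python
# Sketch: elements of a direct sum as finitely-supported mappings
# i -> alpha_i; components equal to 0 are not stored, so the encoding
# itself guarantees alpha_i = 0 for cofinitely many i.

def add(alpha, beta):
    """Componentwise sum of two finitely-supported sequences."""
    out = {}
    for i in alpha.keys() | beta.keys():
        v = alpha.get(i, 0) + beta.get(i, 0)
        if v != 0:                 # drop cancellations to keep the support finite
            out[i] = v
    return out

a = {0: 1, 5: 2}                   # nonzero only at indices 0 and 5
b = {5: -2, 7: 3}
print(add(a, b))                   # {0: 1, 7: 3}: the index-5 entries cancel
```

The direct product, by contrast, would need a representation allowing infinitely many nonzero entries, e.g. an arbitrary function of the index.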
Wikipedia/Cofinite_topology
In topology, a topological space with the trivial topology is one where the only open sets are the empty set and the entire space. Such spaces are commonly called indiscrete, anti-discrete, concrete or codiscrete. Intuitively, this has the consequence that all points of the space are "lumped together" and cannot be distinguished by topological means. Every indiscrete space can be viewed as a pseudometric space in which the distance between any two points is zero. == Details == The trivial topology is the topology with the least possible number of open sets, namely the empty set and the entire space, since the definition of a topology requires these two sets to be open. Despite its simplicity, a space X with more than one element and the trivial topology lacks a key desirable property: it is not a T0 space. Other properties of an indiscrete space X—many of which are quite unusual—include: The only closed sets are the empty set and X. The only possible basis of X is {X}. If X has more than one point, then since it is not T0, it does not satisfy any of the higher T axioms either. In particular, it is not a Hausdorff space. Not being Hausdorff, X is not an order topology, nor is it metrizable. X is, however, regular, completely regular, normal, and completely normal; all in a rather vacuous way though, since the only closed sets are ∅ and X. X is compact and therefore paracompact, Lindelöf, and locally compact. Every function whose domain is a topological space and codomain X is continuous. X is path-connected and so connected. X is second-countable, and therefore is first-countable, separable and Lindelöf. All subspaces of X have the trivial topology. All quotient spaces of X have the trivial topology. Arbitrary products of trivial topological spaces, with either the product topology or box topology, have the trivial topology. All sequences in X converge to every point of X.

In particular, every sequence has a convergent subsequence (the whole sequence or any other subsequence), thus X is sequentially compact. The interior of every set except X is empty. The closure of every non-empty subset of X is X. Put another way: every non-empty subset of X is dense, a property that characterizes trivial topological spaces. As a result of this, the closure of every open subset U of X is either ∅ (if U = ∅) or X (otherwise). In particular, the closure of every open subset of X is again an open set, and therefore X is extremally disconnected. If S is any subset of X with more than one element, then all elements of X are limit points of S. If S is a singleton, then every point of X \ S is still a limit point of S. X is a Baire space. Two topological spaces carrying the trivial topology are homeomorphic iff they have the same cardinality. In some sense the opposite of the trivial topology is the discrete topology, in which every subset is open. The trivial topology belongs to a uniform space in which the whole cartesian product X × X is the only entourage. Let Top be the category of topological spaces with continuous maps and Set be the category of sets with functions. If G : Top → Set is the functor that assigns to each topological space its underlying set (the so-called forgetful functor), and H : Set → Top is the functor that puts the trivial topology on a given set, then H (the so-called cofree functor) is right adjoint to G. (The so-called free functor F : Set → Top that puts the discrete topology on a given set is left adjoint to G.) == See also == List of topologies Triviality (mathematics) == Notes == == References == Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
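The convergence property noted above can be made concrete with a small sketch (the encoding of a finite topology as a list of open sets is my own choice). In the trivial topology the only neighbourhood of any point is the whole space, so every neighbourhood of every point contains every term of any sequence, a condition even stronger than the "eventually in every neighbourhood" required for convergence:

```python
# Sketch: in the trivial topology every sequence converges to every point,
# because the only open set containing a point is the whole space.

def contained_in_all_neighbourhoods(seq, x, opens):
    """True if every open set containing x contains every term of seq;
    this is stronger than convergence (eventually-in is weaker)."""
    return all(all(s in U for s in seq) for U in opens if x in U)

X = {0, 1, 2}
trivial = [set(), X]
seq = [0, 1, 2, 0, 1]              # a sequence that never settles down

# under the trivial topology the test passes for every limit candidate
print(all(contained_in_all_neighbourhoods(seq, x, trivial) for x in X))  # True

# with a finer topology it can fail: the open set {0} misses the term 1
print(contained_in_all_neighbourhoods(seq, 0, [set(), {0}, X]))          # False
```

The contrast with the finer topology shows why uniqueness of limits needs separation axioms that the trivial topology lacks.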
Wikipedia/Trivial_topology
In topology, the closure of a subset S of points in a topological space consists of all points in S together with all limit points of S. The closure of S may equivalently be defined as the union of S and its boundary, and also as the intersection of all closed sets containing S. Intuitively, the closure can be thought of as all the points that are either in S or "very near" S. A point which is in the closure of S is a point of closure of S. The notion of closure is in many ways dual to the notion of interior. == Definitions == === Point of closure === For S {\displaystyle S} as a subset of a Euclidean space, x {\displaystyle x} is a point of closure of S {\displaystyle S} if every open ball centered at x {\displaystyle x} contains a point of S {\displaystyle S} (this point can be x {\displaystyle x} itself). This definition generalizes to any subset S {\displaystyle S} of a metric space X . {\displaystyle X.} Fully expressed, for X {\displaystyle X} as a metric space with metric d , {\displaystyle d,} x {\displaystyle x} is a point of closure of S {\displaystyle S} if for every r > 0 {\displaystyle r>0} there exists some s ∈ S {\displaystyle s\in S} such that the distance d ( x , s ) < r {\displaystyle d(x,s)<r} ( x = s {\displaystyle x=s} is allowed). Another way to express this is to say that x {\displaystyle x} is a point of closure of S {\displaystyle S} if the distance d ( x , S ) := inf s ∈ S d ( x , s ) = 0 {\displaystyle d(x,S):=\inf _{s\in S}d(x,s)=0} where inf {\displaystyle \inf } is the infimum. This definition generalizes to topological spaces by replacing "open ball" or "ball" with "neighbourhood". Let S {\displaystyle S} be a subset of a topological space X . {\displaystyle X.} Then x {\displaystyle x} is a point of closure or adherent point of S {\displaystyle S} if every neighbourhood of x {\displaystyle x} contains a point of S {\displaystyle S} (again, x = s {\displaystyle x=s} for s ∈ S {\displaystyle s\in S} is allowed). 
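The metric criterion d(x, S) = 0 can be illustrated numerically (the sample sets below are my own choices; a genuinely infinite set like the interval (0, 1) is approximated here by finite samples):

```python
# Sketch: the point-of-closure criterion d(x, S) = inf over s in S of |x - s|.
# For a finite sample S the infimum is just a minimum.

def dist_to_set(x, S):
    """Distance from x to a finite sample S of real numbers."""
    return min(abs(x - s) for s in S)

print(dist_to_set(2.5, {1.0, 2.0, 3.0}))     # 0.5: 2.5 is not a point of closure

# finite samples of the open interval (0, 1): the distance from 0 shrinks
# toward 0, so 0 is a point of closure of (0, 1) even though 0 is not in (0, 1)
for k in (2, 5, 11):
    sample = [10**-j for j in range(1, k + 1)]
    print(dist_to_set(0.0, sample))          # distances shrink toward 0
```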
Note that this definition does not depend upon whether neighbourhoods are required to be open. === Limit point === The definition of a point of closure of a set is closely related to the definition of a limit point of a set. The difference between the two definitions is subtle but important – namely, in the definition of a limit point x {\displaystyle x} of a set S {\displaystyle S} , every neighbourhood of x {\displaystyle x} must contain a point of S {\displaystyle S} other than x {\displaystyle x} itself; that is, each neighbourhood of x {\displaystyle x} certainly contains x {\displaystyle x} , but it must also contain a point of S {\displaystyle S} different from x {\displaystyle x} in order for x {\displaystyle x} to be a limit point of S {\displaystyle S} . The condition defining a limit point of S {\displaystyle S} is therefore stricter than the condition defining a point of closure of S {\displaystyle S} . The set of all limit points of a set S {\displaystyle S} is called the derived set of S {\displaystyle S} . A limit point of a set is also called a cluster point or accumulation point of the set. Thus, every limit point is a point of closure, but not every point of closure is a limit point. A point of closure which is not a limit point is an isolated point. In other words, a point x {\displaystyle x} is an isolated point of S {\displaystyle S} if it is an element of S {\displaystyle S} and there is a neighbourhood of x {\displaystyle x} which contains no other points of S {\displaystyle S} than x {\displaystyle x} itself. For a given set S {\displaystyle S} and point x , {\displaystyle x,} x {\displaystyle x} is a point of closure of S {\displaystyle S} if and only if x {\displaystyle x} is an element of S {\displaystyle S} or x {\displaystyle x} is a limit point of S {\displaystyle S} (or both).
=== Closure of a set === The closure of a subset S {\displaystyle S} of a topological space ( X , τ ) , {\displaystyle (X,\tau ),} denoted by cl ( X , τ ) ⁡ S {\displaystyle \operatorname {cl} _{(X,\tau )}S} or possibly by cl X ⁡ S {\displaystyle \operatorname {cl} _{X}S} (if τ {\displaystyle \tau } is understood), where if both X {\displaystyle X} and τ {\displaystyle \tau } are clear from context then it may also be denoted by cl ⁡ S , {\displaystyle \operatorname {cl} S,} S ¯ , {\displaystyle {\overline {S}},} or S − {\displaystyle S{}^{-}} (Moreover, cl {\displaystyle \operatorname {cl} } is sometimes capitalized to Cl {\displaystyle \operatorname {Cl} } .) can be defined using any of the following equivalent definitions: cl ⁡ S {\displaystyle \operatorname {cl} S} is the set of all points of closure of S . {\displaystyle S.} cl ⁡ S {\displaystyle \operatorname {cl} S} is the set S {\displaystyle S} together with all of its limit points. (Each point of S {\displaystyle S} is a point of closure of S {\displaystyle S} , and each limit point of S {\displaystyle S} is also a point of closure of S {\displaystyle S} .) cl ⁡ S {\displaystyle \operatorname {cl} S} is the intersection of all closed sets containing S . {\displaystyle S.} cl ⁡ S {\displaystyle \operatorname {cl} S} is the smallest closed set containing S . {\displaystyle S.} cl ⁡ S {\displaystyle \operatorname {cl} S} is the union of S {\displaystyle S} and its boundary ∂ ( S ) . {\displaystyle \partial (S).} cl ⁡ S {\displaystyle \operatorname {cl} S} is the set of all x ∈ X {\displaystyle x\in X} for which there exists a net (valued) in S {\displaystyle S} that converges to x {\displaystyle x} in ( X , τ ) . {\displaystyle (X,\tau ).} The closure of a set has the following properties. cl ⁡ S {\displaystyle \operatorname {cl} S} is a closed superset of S {\displaystyle S} . The set S {\displaystyle S} is closed if and only if S = cl ⁡ S {\displaystyle S=\operatorname {cl} S} . 
If S ⊆ T {\displaystyle S\subseteq T} then cl ⁡ S {\displaystyle \operatorname {cl} S} is a subset of cl ⁡ T . {\displaystyle \operatorname {cl} T.} If A {\displaystyle A} is a closed set, then A {\displaystyle A} contains S {\displaystyle S} if and only if A {\displaystyle A} contains cl ⁡ S . {\displaystyle \operatorname {cl} S.} Sometimes the second or third property above is taken as the definition of the topological closure, which still makes sense when applied to other types of closures (see below). In a first-countable space (such as a metric space), cl ⁡ S {\displaystyle \operatorname {cl} S} is the set of all limits of all convergent sequences of points in S . {\displaystyle S.} For a general topological space, this statement remains true if one replaces "sequence" by "net" or "filter" (as described in the article on filters in topology). Note that these properties are also satisfied if "closure", "superset", "intersection", "contains/containing", "smallest" and "closed" are replaced by "interior", "subset", "union", "contained in", "largest", and "open". For more on this matter, see closure operator below. == Examples == Consider a sphere in 3-dimensional space. Implicitly there are two regions of interest created by this sphere: the sphere itself and its interior (which is called an open 3-ball). It is useful to distinguish between the interior and the surface of the sphere, so we distinguish between the open 3-ball (the interior of the sphere) and the closed 3-ball – the closure of the open 3-ball, that is, the open 3-ball together with its bounding surface (the sphere itself).
{\displaystyle X=\operatorname {cl} X.} Giving R {\displaystyle \mathbb {R} } and C {\displaystyle \mathbb {C} } the standard (metric) topology: If X {\displaystyle X} is the Euclidean space R {\displaystyle \mathbb {R} } of real numbers, then cl X ⁡ ( ( 0 , 1 ) ) = [ 0 , 1 ] {\displaystyle \operatorname {cl} _{X}((0,1))=[0,1]} . In other words, the closure of the set ( 0 , 1 ) {\displaystyle (0,1)} as a subset of X {\displaystyle X} is [ 0 , 1 ] {\displaystyle [0,1]} . If X {\displaystyle X} is the Euclidean space R {\displaystyle \mathbb {R} } , then the closure of the set Q {\displaystyle \mathbb {Q} } of rational numbers is the whole space R . {\displaystyle \mathbb {R} .} We say that Q {\displaystyle \mathbb {Q} } is dense in R . {\displaystyle \mathbb {R} .} If X {\displaystyle X} is the complex plane C = R 2 , {\displaystyle \mathbb {C} =\mathbb {R} ^{2},} then cl X ⁡ ( { z ∈ C : | z | > 1 } ) = { z ∈ C : | z | ≥ 1 } . {\displaystyle \operatorname {cl} _{X}\left(\{z\in \mathbb {C} :|z|>1\}\right)=\{z\in \mathbb {C} :|z|\geq 1\}.} If S {\displaystyle S} is a finite subset of a Euclidean space X , {\displaystyle X,} then cl X ⁡ S = S . {\displaystyle \operatorname {cl} _{X}S=S.} (For a general topological space, this property is equivalent to the T1 axiom.) On the set of real numbers one can put topologies other than the standard one. If X = R {\displaystyle X=\mathbb {R} } is endowed with the lower limit topology, then cl X ⁡ ( ( 0 , 1 ) ) = [ 0 , 1 ) . {\displaystyle \operatorname {cl} _{X}((0,1))=[0,1).} If one considers on X = R {\displaystyle X=\mathbb {R} } the discrete topology in which every set is closed (open), then cl X ⁡ ( ( 0 , 1 ) ) = ( 0 , 1 ) . {\displaystyle \operatorname {cl} _{X}((0,1))=(0,1).} If one considers on X = R {\displaystyle X=\mathbb {R} } the trivial topology in which the only closed (open) sets are the empty set and R {\displaystyle \mathbb {R} } itself, then cl X ⁡ ( ( 0 , 1 ) ) = R . 
{\displaystyle \operatorname {cl} _{X}((0,1))=\mathbb {R} .} These examples show that the closure of a set depends upon the topology of the underlying space. The last two examples are special cases of the following. In any discrete space, since every set is closed (and also open), every set is equal to its closure. In any indiscrete space X , {\displaystyle X,} since the only closed sets are the empty set and X {\displaystyle X} itself, we have that the closure of the empty set is the empty set, and for every non-empty subset A {\displaystyle A} of X , {\displaystyle X,} cl X ⁡ A = X . {\displaystyle \operatorname {cl} _{X}A=X.} In other words, every non-empty subset of an indiscrete space is dense. The closure of a set also depends upon in which space we are taking the closure. For example, if X {\displaystyle X} is the set of rational numbers, with the usual relative topology induced by the Euclidean space R , {\displaystyle \mathbb {R} ,} and if S = { q ∈ Q : q 2 > 2 , q > 0 } , {\displaystyle S=\{q\in \mathbb {Q} :q^{2}>2,q>0\},} then S {\displaystyle S} is both closed and open in Q , {\displaystyle \mathbb {Q} ,} because neither S {\displaystyle S} nor its complement contains 2 {\displaystyle {\sqrt {2}}} : this number is the infimum of S {\displaystyle S} , but it cannot belong to S {\displaystyle S} (or to Q {\displaystyle \mathbb {Q} } at all) because it is irrational. Consequently, the closure of S {\displaystyle S} in Q {\displaystyle \mathbb {Q} } is S {\displaystyle S} itself. However, if we instead take X {\displaystyle X} to be the set of real numbers and define S {\displaystyle S} by the same inequalities, then the closure of S {\displaystyle S} in R {\displaystyle \mathbb {R} } is the set of all real numbers greater than or equal to 2 {\displaystyle {\sqrt {2}}} . == Closure operator == A closure operator on a set X {\displaystyle X} is a mapping of the power set of X , {\displaystyle X,} P ( X ) {\displaystyle {\mathcal {P}}(X)} , into itself which satisfies the Kuratowski closure axioms.
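The definition of the closure as the intersection of all closed supersets can be computed directly on a small finite topological space. This sketch (the choice of space is mine, for illustration) uses the two-point space with open sets ∅, {1}, and {0, 1}:

```python
# Sketch: closure in a finite topological space, computed as the
# intersection of all closed supersets of S.

def closure(S, X, opens):
    """Intersect all closed sets (complements of open sets) containing S.
    X itself is always closed, so the intersection is never over an
    empty family."""
    out = X
    for U in opens:
        C = X - U                  # a closed set
        if S <= C:
            out &= C
    return out

X = frozenset({0, 1})
opens = [frozenset(), frozenset({1}), X]

print(closure(frozenset({1}), X, opens))   # frozenset({0, 1}): {1} is dense
print(closure(frozenset({0}), X, opens))   # frozenset({0}): {0} is already closed
```

In this space the singleton {1} is open but not closed, so its closure is the whole space, matching the fact that every closed set containing it is all of X.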
Given a topological space ( X , τ ) {\displaystyle (X,\tau )} , the topological closure induces a function cl X : ℘ ( X ) → ℘ ( X ) {\displaystyle \operatorname {cl} _{X}:\wp (X)\to \wp (X)} that is defined by sending a subset S ⊆ X {\displaystyle S\subseteq X} to cl X ⁡ S , {\displaystyle \operatorname {cl} _{X}S,} where the notation S ¯ {\displaystyle {\overline {S}}} or S − {\displaystyle S^{-}} may be used instead. Conversely, if c {\displaystyle \mathbb {c} } is a closure operator on a set X , {\displaystyle X,} then a topological space is obtained by defining the closed sets as being exactly those subsets S ⊆ X {\displaystyle S\subseteq X} that satisfy c ( S ) = S {\displaystyle \mathbb {c} (S)=S} (so complements in X {\displaystyle X} of these subsets form the open sets of the topology). The closure operator cl X {\displaystyle \operatorname {cl} _{X}} is dual to the interior operator, which is denoted by int X , {\displaystyle \operatorname {int} _{X},} in the sense that cl X ⁡ S = X ∖ int X ⁡ ( X ∖ S ) , {\displaystyle \operatorname {cl} _{X}S=X\setminus \operatorname {int} _{X}(X\setminus S),} and also int X ⁡ S = X ∖ cl X ⁡ ( X ∖ S ) . {\displaystyle \operatorname {int} _{X}S=X\setminus \operatorname {cl} _{X}(X\setminus S).} Therefore, the abstract theory of closure operators and the Kuratowski closure axioms can be readily translated into the language of interior operators by replacing sets with their complements in X . {\displaystyle X.} In general, the closure operator does not commute with intersections. However, in a complete metric space the following result does hold: == Facts about closures == A subset S {\displaystyle S} is closed in X {\displaystyle X} if and only if cl X ⁡ S = S . {\displaystyle \operatorname {cl} _{X}S=S.} In particular: The closure of the empty set is the empty set; The closure of X {\displaystyle X} itself is X . 
{\displaystyle X.} The closure of an intersection of sets is always a subset of (but need not be equal to) the intersection of the closures of the sets. In a union of finitely many sets, the closure of the union and the union of the closures are equal; the union of zero sets is the empty set, and so this statement contains the earlier statement about the closure of the empty set as a special case. The closure of the union of infinitely many sets need not equal the union of the closures, but it is always a superset of the union of the closures. Thus, just as the union of two closed sets is closed, so too does closure distribute over binary unions: that is, cl X ⁡ ( S ∪ T ) = ( cl X ⁡ S ) ∪ ( cl X ⁡ T ) . {\displaystyle \operatorname {cl} _{X}(S\cup T)=(\operatorname {cl} _{X}S)\cup (\operatorname {cl} _{X}T).} But just as a union of infinitely many closed sets is not necessarily closed, so too does closure not necessarily distribute over infinite unions: that is, cl X ⁡ ( ⋃ i ∈ I S i ) ≠ ⋃ i ∈ I cl X ⁡ S i {\displaystyle \operatorname {cl} _{X}\left(\bigcup _{i\in I}S_{i}\right)\neq \bigcup _{i\in I}\operatorname {cl} _{X}S_{i}} is possible when I {\displaystyle I} is infinite. If S ⊆ T ⊆ X {\displaystyle S\subseteq T\subseteq X} and if T {\displaystyle T} is a subspace of X {\displaystyle X} (meaning that T {\displaystyle T} is endowed with the subspace topology that X {\displaystyle X} induces on it), then cl T ⁡ S ⊆ cl X ⁡ S {\displaystyle \operatorname {cl} _{T}S\subseteq \operatorname {cl} _{X}S} and the closure of S {\displaystyle S} computed in T {\displaystyle T} is equal to the intersection of T {\displaystyle T} and the closure of S {\displaystyle S} computed in X {\displaystyle X} : cl T ⁡ S = T ∩ cl X ⁡ S . {\displaystyle \operatorname {cl} _{T}S~=~T\cap \operatorname {cl} _{X}S.} It follows that S ⊆ T {\displaystyle S\subseteq T} is a dense subset of T {\displaystyle T} if and only if T {\displaystyle T} is a subset of cl X ⁡ S . 
{\displaystyle \operatorname {cl} _{X}S.} It is possible for cl T ⁡ S = T ∩ cl X ⁡ S {\displaystyle \operatorname {cl} _{T}S=T\cap \operatorname {cl} _{X}S} to be a proper subset of cl X ⁡ S ; {\displaystyle \operatorname {cl} _{X}S;} for example, take X = R , {\displaystyle X=\mathbb {R} ,} S = ( 0 , 1 ) , {\displaystyle S=(0,1),} and T = ( 0 , ∞ ) . {\displaystyle T=(0,\infty ).} If S , T ⊆ X {\displaystyle S,T\subseteq X} but S {\displaystyle S} is not necessarily a subset of T {\displaystyle T} then only cl T ⁡ ( S ∩ T ) ⊆ T ∩ cl X ⁡ S {\displaystyle \operatorname {cl} _{T}(S\cap T)~\subseteq ~T\cap \operatorname {cl} _{X}S} is always guaranteed, where this containment could be strict (consider for instance X = R {\displaystyle X=\mathbb {R} } with the usual topology, T = ( − ∞ , 0 ] , {\displaystyle T=(-\infty ,0],} and S = ( 0 , ∞ ) {\displaystyle S=(0,\infty )} ), although if T {\displaystyle T} happens to be an open subset of X {\displaystyle X} then the equality cl T ⁡ ( S ∩ T ) = T ∩ cl X ⁡ S {\displaystyle \operatorname {cl} _{T}(S\cap T)=T\cap \operatorname {cl} _{X}S} will hold (no matter the relationship between S {\displaystyle S} and T {\displaystyle T} ). Consequently, if U {\displaystyle {\mathcal {U}}} is any open cover of X {\displaystyle X} and if S ⊆ X {\displaystyle S\subseteq X} is any subset then: cl X ⁡ S = ⋃ U ∈ U cl U ⁡ ( U ∩ S ) {\displaystyle \operatorname {cl} _{X}S=\bigcup _{U\in {\mathcal {U}}}\operatorname {cl} _{U}(U\cap S)} because cl U ⁡ ( S ∩ U ) = U ∩ cl X ⁡ S {\displaystyle \operatorname {cl} _{U}(S\cap U)=U\cap \operatorname {cl} _{X}S} for every U ∈ U {\displaystyle U\in {\mathcal {U}}} (where every U ∈ U {\displaystyle U\in {\mathcal {U}}} is endowed with the subspace topology induced on it by X {\displaystyle X} ). This equality is particularly useful when X {\displaystyle X} is a manifold and the sets in the open cover U {\displaystyle {\mathcal {U}}} are domains of coordinate charts.
In words, this result shows that the closure in X {\displaystyle X} of any subset S ⊆ X {\displaystyle S\subseteq X} can be computed "locally" in the sets of any open cover of X {\displaystyle X} and then unioned together. In this way, this result can be viewed as the analogue of the well-known fact that a subset S ⊆ X {\displaystyle S\subseteq X} is closed in X {\displaystyle X} if and only if it is "locally closed in X {\displaystyle X} ", meaning that if U {\displaystyle {\mathcal {U}}} is any open cover of X {\displaystyle X} then S {\displaystyle S} is closed in X {\displaystyle X} if and only if S ∩ U {\displaystyle S\cap U} is closed in U {\displaystyle U} for every U ∈ U . {\displaystyle U\in {\mathcal {U}}.} == Functions and closure == === Continuity === A function f : X → Y {\displaystyle f:X\to Y} between topological spaces is continuous if and only if the preimage of every closed subset of the codomain is closed in the domain; explicitly, this means: f − 1 ( C ) {\displaystyle f^{-1}(C)} is closed in X {\displaystyle X} whenever C {\displaystyle C} is a closed subset of Y . {\displaystyle Y.} In terms of the closure operator, f : X → Y {\displaystyle f:X\to Y} is continuous if and only if for every subset A ⊆ X , {\displaystyle A\subseteq X,} f ( cl X ⁡ A ) ⊆ cl Y ⁡ ( f ( A ) ) . {\displaystyle f\left(\operatorname {cl} _{X}A\right)~\subseteq ~\operatorname {cl} _{Y}(f(A)).} That is to say, given any element x ∈ X {\displaystyle x\in X} that belongs to the closure of a subset A ⊆ X , {\displaystyle A\subseteq X,} f ( x ) {\displaystyle f(x)} necessarily belongs to the closure of f ( A ) {\displaystyle f(A)} in Y . 
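Two of the facts above, the binary-union identity and the subspace formula for closures, can be verified exhaustively on a small finite space. In this sketch the particular four-open-set topology is my own choice; the checks hold for any finite topological space:

```python
# Sketch: exhaustive checks of cl(S u T) = cl(S) u cl(T) and of the
# subspace formula cl_T(S) = T n cl_X(S) on a small finite space.

from itertools import combinations

def closure(S, X, opens):
    """Closure as the intersection of all closed supersets of S."""
    out = X
    for U in opens:
        C = X - U
        if S <= C:
            out &= C
    return out

X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]   # a valid topology
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

# closure distributes over binary unions
print(all(closure(S | T, X, opens) == closure(S, X, opens) | closure(T, X, opens)
          for S in subsets for T in subsets))                 # True

# subspace formula: cl_T(S) = T n cl_X(S) for every subspace T and S <= T
def subspace_ok(T):
    opens_T = [U & T for U in opens]                          # subspace topology
    return all(closure(S, T, opens_T) == T & closure(S, X, opens)
               for S in subsets if S <= T)

print(all(subspace_ok(T) for T in subsets))                   # True
```

The infinite-union counterexamples mentioned above cannot occur here; they require infinitely many sets, as with the points 1/n in the real line.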
{\displaystyle Y.} If we declare that a point x {\displaystyle x} is close to a subset A ⊆ X {\displaystyle A\subseteq X} if x ∈ cl X ⁡ A , {\displaystyle x\in \operatorname {cl} _{X}A,} then this terminology allows for a plain English description of continuity: f {\displaystyle f} is continuous if and only if for every subset A ⊆ X , {\displaystyle A\subseteq X,} f {\displaystyle f} maps points that are close to A {\displaystyle A} to points that are close to f ( A ) . {\displaystyle f(A).} Thus continuous functions are exactly those functions that preserve (in the forward direction) the "closeness" relationship between points and sets: a function is continuous if and only if whenever a point is close to a set then the image of that point is close to the image of that set. Similarly, f {\displaystyle f} is continuous at a fixed given point x ∈ X {\displaystyle x\in X} if and only if whenever x {\displaystyle x} is close to a subset A ⊆ X , {\displaystyle A\subseteq X,} then f ( x ) {\displaystyle f(x)} is close to f ( A ) . {\displaystyle f(A).} === Closed maps === A function f : X → Y {\displaystyle f:X\to Y} is a (strongly) closed map if and only if whenever C {\displaystyle C} is a closed subset of X {\displaystyle X} then f ( C ) {\displaystyle f(C)} is a closed subset of Y . {\displaystyle Y.} In terms of the closure operator, f : X → Y {\displaystyle f:X\to Y} is a (strongly) closed map if and only if cl Y ⁡ f ( A ) ⊆ f ( cl X ⁡ A ) {\displaystyle \operatorname {cl} _{Y}f(A)\subseteq f\left(\operatorname {cl} _{X}A\right)} for every subset A ⊆ X . {\displaystyle A\subseteq X.} Equivalently, f : X → Y {\displaystyle f:X\to Y} is a (strongly) closed map if and only if cl Y ⁡ f ( C ) ⊆ f ( C ) {\displaystyle \operatorname {cl} _{Y}f(C)\subseteq f(C)} for every closed subset C ⊆ X . {\displaystyle C\subseteq X.} == Categorical interpretation == One may define the closure operator in terms of universal arrows, as follows. 
The powerset of a set X {\displaystyle X} may be realized as a partial order category P {\displaystyle P} in which the objects are subsets and the morphisms are inclusion maps A → B {\displaystyle A\to B} whenever A {\displaystyle A} is a subset of B . {\displaystyle B.} Furthermore, a topology T {\displaystyle T} on X {\displaystyle X} is a subcategory of P {\displaystyle P} with inclusion functor I : T → P . {\displaystyle I:T\to P.} The set of closed subsets containing a fixed subset A ⊆ X {\displaystyle A\subseteq X} can be identified with the comma category ( A ↓ I ) . {\displaystyle (A\downarrow I).} This category — also a partial order — then has initial object cl ⁡ A . {\displaystyle \operatorname {cl} A.} Thus there is a universal arrow from A {\displaystyle A} to I , {\displaystyle I,} given by the inclusion A → cl ⁡ A . {\displaystyle A\to \operatorname {cl} A.} Similarly, since every closed set containing X ∖ A {\displaystyle X\setminus A} corresponds with an open set contained in A {\displaystyle A} we can interpret the category ( I ↓ X ∖ A ) {\displaystyle (I\downarrow X\setminus A)} as the set of open subsets contained in A , {\displaystyle A,} with terminal object int ⁡ ( A ) , {\displaystyle \operatorname {int} (A),} the interior of A . {\displaystyle A.} All properties of the closure can be derived from this definition and a few properties of the above categories. Moreover, this definition makes precise the analogy between the topological closure and other types of closures (for example algebraic closure), since all are examples of universal arrows. 
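The equivalence between the preimage test for continuity and the closure criterion from the preceding section can be checked on small finite spaces. In this sketch (the spaces and maps are my own choices) the identity map from the trivial topology to the discrete topology fails both tests, while the reverse direction passes both:

```python
# Sketch: two equivalent continuity tests for maps between finite
# topological spaces, compared on the two-point set.

from itertools import combinations

def closure(S, X, opens):
    out = X
    for U in opens:
        C = X - U
        if S <= C:
            out &= C
    return out

def continuous_by_preimage(f, X, opens_X, Y, opens_Y):
    """Preimage of every closed subset of Y is closed in X."""
    closed_X = {X - U for U in opens_X}
    return all(frozenset(x for x in X if f[x] in Y - V) in closed_X
               for V in opens_Y)

def continuous_by_closure(f, X, opens_X, Y, opens_Y):
    """f(cl_X(A)) is a subset of cl_Y(f(A)) for every A in X."""
    subsets = [frozenset(c) for r in range(len(X) + 1)
               for c in combinations(sorted(X), r)]
    return all(frozenset(f[a] for a in closure(A, X, opens_X))
               <= closure(frozenset(f[a] for a in A), Y, opens_Y)
               for A in subsets)

X = frozenset({0, 1})
trivial = [frozenset(), X]
discrete = [frozenset(), frozenset({0}), frozenset({1}), X]
ident = {0: 0, 1: 1}

# identity from the trivial topology to the discrete topology: not continuous
print(continuous_by_preimage(ident, X, trivial, X, discrete))   # False
print(continuous_by_closure(ident, X, trivial, X, discrete))    # False

# identity from the discrete topology to the trivial topology: continuous
print(continuous_by_preimage(ident, X, discrete, X, trivial))   # True
print(continuous_by_closure(ident, X, discrete, X, trivial))    # True
```

The two tests agree on every example, as the equivalence stated above guarantees.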
== See also == Adherent point – Point that belongs to the closure of some given subset of a topological space Closure algebra – Algebraic structure Closed regular set, a set equal to the closure of its interior Derived set (mathematics) – Set of all limit points of a set Interior (topology) – Largest open subset of some given set Limit point of a set – Cluster point in a topological space == Notes == == References == == Bibliography == Baker, Crump W. (1991), Introduction to Topology, Wm. C. Brown Publisher, ISBN 0-697-05972-3 Croom, Fred H. (1989), Principles of Topology, Saunders College Publishing, ISBN 0-03-012813-7 Gemignani, Michael C. (1990) [1967], Elementary Topology (2nd ed.), Dover, ISBN 0-486-66522-4 Hocking, John G.; Young, Gail S. (1988) [1961], Topology, Dover, ISBN 0-486-65676-4 Kuratowski, K. (1966), Topology, vol. I, Academic Press Pervin, William J. (1965), Foundations of General Topology, Academic Press Schubert, Horst (1968), Topology, Allyn and Bacon Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive. == External links == "Closure of a set", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Closure_(topology)
In the mathematical field of point-set topology, a continuum (plural: "continua") is a nonempty compact connected metric space, or, less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. == Definitions == A continuum that contains more than one point is called nondegenerate. A subset A of a continuum X such that A itself is a continuum is called a subcontinuum of X. A space homeomorphic to a subcontinuum of the Euclidean plane R2 is called a planar continuum. A continuum X is homogeneous if for every two points x and y in X, there exists a homeomorphism h: X → X such that h(x) = y. A Peano continuum is a continuum that is locally connected at each point. An indecomposable continuum is a continuum that cannot be represented as the union of two proper subcontinua. A continuum X is hereditarily indecomposable if every subcontinuum of X is indecomposable. The dimension of a continuum usually means its topological dimension. A one-dimensional continuum is often called a curve. == Examples == An arc is a space homeomorphic to the closed interval [0,1]. If h: [0,1] → X is a homeomorphism and h(0) = p and h(1) = q then p and q are called the endpoints of X; one also says that X is an arc from p to q. An arc is the simplest and most familiar type of a continuum. It is one-dimensional, arcwise connected, and locally connected. The topologist's sine curve is a subset of the plane that is the union of the graph of the function f(x) = sin(1/x), 0 < x ≤ 1 with the segment −1 ≤ y ≤ 1 of the y-axis. It is a one-dimensional continuum that is not arcwise connected, and it is locally disconnected at the points along the y-axis. The Warsaw circle is obtained by "closing up" the topologist's sine curve by an arc connecting (0,−1) and (1,sin(1)). It is a one-dimensional continuum whose homotopy groups are all trivial, but it is not a contractible space. 
An n-cell is a space homeomorphic to the closed ball in the Euclidean space Rn. It is contractible and is the simplest example of an n-dimensional continuum. An n-sphere is a space homeomorphic to the standard n-sphere in the (n + 1)-dimensional Euclidean space. It is an n-dimensional homogeneous continuum that is not contractible, and therefore different from an n-cell. The Hilbert cube is an infinite-dimensional continuum. Solenoids are among the simplest examples of indecomposable homogeneous continua. They are neither arcwise connected nor locally connected. The Sierpinski carpet, also known as the Sierpinski universal curve, is a one-dimensional planar Peano continuum that contains a homeomorphic image of any one-dimensional planar continuum. The pseudo-arc is a homogeneous hereditarily indecomposable planar continuum. == Properties == There are two fundamental techniques for constructing continua, by means of nested intersections and inverse limits. If {Xn} is a nested family of continua, i.e. Xn ⊇ Xn+1, then their intersection is a continuum. If {(Xn, fn)} is an inverse sequence of continua Xn, called the coordinate spaces, together with continuous maps fn: Xn+1 → Xn, called the bonding maps, then its inverse limit is a continuum. A finite or countable product of continua is a continuum. == See also == Linear continuum Menger sponge Shape theory (mathematics) == References == == Sources == Sam B. Nadler, Jr, Continuum theory. An introduction. Pure and Applied Mathematics, Marcel Dekker. ISBN 0-8247-8659-9. == External links == Open problems in continuum theory Examples in continuum theory Continuum Theory and Topological Dynamics, M. Barge and J. Kennedy, in Open Problems in Topology, J. van Mill and G.M. Reed (Editors) Elsevier Science Publishers B.V. (North-Holland), 1990. Hyperspacewiki
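The nested-intersection technique mentioned under Properties can be illustrated with closed intervals, the simplest continua. In this Python sketch the family X_n = [0, 1 + 1/n] is a hypothetical illustrative choice, with endpoints kept as exact fractions; it checks the nesting and computes the finite intersection, which is again a nonempty closed interval, hence a continuum:

```python
from fractions import Fraction

# Nested closed intervals X_n = [0, 1 + 1/n]: each is a continuum, X_n ⊇ X_{n+1}.
N = 1000
intervals = [(Fraction(0), 1 + Fraction(1, n)) for n in range(1, N + 1)]

nested = all(a1 <= a2 and b2 <= b1
             for (a1, b1), (a2, b2) in zip(intervals, intervals[1:]))

# A finite intersection of nested closed intervals is [max of lows, min of highs]:
# again a nonempty closed interval, hence a continuum.
lo = max(a for a, _ in intervals)
hi = min(b for _, b in intervals)
nonempty = lo <= hi
print(nested, nonempty, float(hi))  # True True 1.001 (shrinking toward [0, 1])
```

The full (infinite) intersection is the interval [0, 1], matching the general theorem that a nested intersection of continua is a continuum.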
Wikipedia/Continuum_theory
In topology and related branches of mathematics, separated sets are pairs of subsets of a given topological space that are related to each other in a certain way: roughly speaking, neither overlapping nor touching. The notion of when two sets are separated or not is important both to the notion of connected spaces (and their connected components) and to the separation axioms for topological spaces. Separated sets should not be confused with separated spaces (defined below), which are somewhat related but different. Separable spaces are again a completely different topological concept. == Definitions == There are various ways in which two subsets A {\displaystyle A} and B {\displaystyle B} of a topological space X {\displaystyle X} can be considered to be separated. The most basic way in which two sets can be separated is for them to be disjoint, that is, for their intersection to be the empty set. This property has nothing to do with topology as such, but only set theory. Each of the following properties is stricter than disjointness, incorporating some topological information. The properties below are presented in increasing order of specificity, each being a stronger notion than the preceding one. The sets A {\displaystyle A} and B {\displaystyle B} are separated in X {\displaystyle X} if each is disjoint from the other's closure: A ∩ B ¯ = ∅ = A ¯ ∩ B . {\displaystyle A\cap {\bar {B}}=\varnothing ={\bar {A}}\cap B.} This property is known as the Hausdorff−Lennes Separation Condition. Since every set is contained in its closure, two separated sets automatically must be disjoint. The closures themselves do not have to be disjoint from each other; for example, the intervals [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] {\displaystyle (1,2]} are separated in the real line R , {\displaystyle \mathbb {R} ,} even though the point 1 belongs to both of their closures.
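This first notion can be checked concretely for the interval example: the closures of A = [0, 1) and B = (1, 2] are [0, 1] and [1, 2]. The Python sketch below uses a finite grid as a stand-in for the real line (the grid resolution is an arbitrary choice) and confirms the Hausdorff−Lennes condition while showing that the closures themselves meet at 1:

```python
# A = [0, 1) and B = (1, 2] in the real line; their closures are [0, 1] and [1, 2].
def in_A(x):   return 0 <= x < 1
def in_B(x):   return 1 < x <= 2
def in_clA(x): return 0 <= x <= 1
def in_clB(x): return 1 <= x <= 2

# Finite grid standing in for the reals (step 1/1000, covering [-0.5, 2.5]).
grid = [k / 1000 for k in range(-500, 2501)]

# Hausdorff-Lennes condition: A ∩ cl(B) = ∅ = cl(A) ∩ B ...
separated = (not any(in_A(x) and in_clB(x) for x in grid) and
             not any(in_clA(x) and in_B(x) for x in grid))
# ... even though cl(A) ∩ cl(B) contains the point 1.
closures_meet = any(in_clA(x) and in_clB(x) for x in grid)
print(separated, closures_meet)  # True True
```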
A more general example is that in any metric space, two open balls B r ( p ) = { x ∈ X : d ( p , x ) < r } {\displaystyle B_{r}(p)=\{x\in X:d(p,x)<r\}} and B s ( q ) = { x ∈ X : d ( q , x ) < s } {\displaystyle B_{s}(q)=\{x\in X:d(q,x)<s\}} are separated whenever d ( p , q ) ≥ r + s . {\displaystyle d(p,q)\geq r+s.} The property of being separated can also be expressed in terms of derived set (indicated by the prime symbol): A {\displaystyle A} and B {\displaystyle B} are separated when they are disjoint and each is disjoint from the other's derived set, that is, A ′ ∩ B = ∅ = B ′ ∩ A . {\textstyle A'\cap B=\varnothing =B'\cap A.} (As in the case of the first version of the definition, the derived sets A ′ {\displaystyle A'} and B ′ {\displaystyle B'} are not required to be disjoint from each other.) The sets A {\displaystyle A} and B {\displaystyle B} are separated by neighbourhoods if there are neighbourhoods U {\displaystyle U} of A {\displaystyle A} and V {\displaystyle V} of B {\displaystyle B} such that U {\displaystyle U} and V {\displaystyle V} are disjoint. (Sometimes you will see the requirement that U {\displaystyle U} and V {\displaystyle V} be open neighbourhoods, but this makes no difference in the end.) For the example of A = [ 0 , 1 ) {\displaystyle A=[0,1)} and B = ( 1 , 2 ] , {\displaystyle B=(1,2],} you could take U = ( − 1 , 1 ) {\displaystyle U=(-1,1)} and V = ( 1 , 3 ) . {\displaystyle V=(1,3).} Note that if any two sets are separated by neighbourhoods, then certainly they are separated. If A {\displaystyle A} and B {\displaystyle B} are open and disjoint, then they must be separated by neighbourhoods; just take U = A {\displaystyle U=A} and V = B . {\displaystyle V=B.} For this reason, separatedness is often used with closed sets (as in the normal separation axiom). 
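The open-ball claim can be sketched numerically. Assuming the Euclidean plane, with p, q, r, s chosen here so that d(p, q) = r + s exactly, rejection-sampled points of B_r(p) all keep distance greater than s from q, just as the triangle inequality predicts:

```python
import math
import random

random.seed(0)
p, q = (0.0, 0.0), (5.0, 0.0)   # centers with d(p, q) = 5
r, s = 2.0, 3.0                  # radii with r + s = d(p, q)

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

def sample_in_ball(center, radius):
    # Rejection sampling: draw from the bounding square until inside the open ball.
    while True:
        x = tuple(c + random.uniform(-radius, radius) for c in center)
        if dist(x, center) < radius:
            return x

# Every point of B_r(p) stays at distance > s from q, so it avoids the
# closure of B_s(q): d(x, q) >= d(p, q) - d(p, x) > (r + s) - r = s.
points = [sample_in_ball(p, r) for _ in range(10_000)]
balls_separated = all(dist(x, q) > s for x in points)
print(balls_separated)  # True
```

The strict inequality is the point: points of one open ball avoid even the closure of the other, which is exactly the separation condition.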
The sets A {\displaystyle A} and B {\displaystyle B} are separated by closed neighbourhoods if there is a closed neighbourhood U {\displaystyle U} of A {\displaystyle A} and a closed neighbourhood V {\displaystyle V} of B {\displaystyle B} such that U {\displaystyle U} and V {\displaystyle V} are disjoint. Our examples, [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] , {\displaystyle (1,2],} are not separated by closed neighbourhoods. You could make either U {\displaystyle U} or V {\displaystyle V} closed by including the point 1 in it, but you cannot make them both closed while keeping them disjoint. Note that if any two sets are separated by closed neighbourhoods, then certainly they are separated by neighbourhoods. The sets A {\displaystyle A} and B {\displaystyle B} are separated by a continuous function if there exists a continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } from the space X {\displaystyle X} to the real line R {\displaystyle \mathbb {R} } such that A ⊆ f − 1 ( 0 ) {\displaystyle A\subseteq f^{-1}(0)} and B ⊆ f − 1 ( 1 ) {\displaystyle B\subseteq f^{-1}(1)} , that is, members of A {\displaystyle A} map to 0 and members of B {\displaystyle B} map to 1. (Sometimes the unit interval [ 0 , 1 ] {\displaystyle [0,1]} is used in place of R {\displaystyle \mathbb {R} } in this definition, but this makes no difference.) In our example, [ 0 , 1 ) {\displaystyle [0,1)} and ( 1 , 2 ] {\displaystyle (1,2]} are not separated by a function, because there is no way to continuously define f {\displaystyle f} at the point 1. If two sets are separated by a continuous function, then they are also separated by closed neighbourhoods; the neighbourhoods can be given in terms of the preimage of f {\displaystyle f} as U = f − 1 [ − c , c ] {\displaystyle U=f^{-1}[-c,c]} and V = f − 1 [ 1 − c , 1 + c ] , {\displaystyle V=f^{-1}[1-c,1+c],} where c {\displaystyle c} is any positive real number less than 1 / 2. 
{\displaystyle 1/2.} The sets A {\displaystyle A} and B {\displaystyle B} are precisely separated by a continuous function if there exists a continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } such that A = f − 1 ( 0 ) {\displaystyle A=f^{-1}(0)} and B = f − 1 ( 1 ) . {\displaystyle B=f^{-1}(1).} (Again, you may also see the unit interval in place of R , {\displaystyle \mathbb {R} ,} and again it makes no difference.) Note that if any two sets are precisely separated by a function, then they are separated by a function. Since { 0 } {\displaystyle \{0\}} and { 1 } {\displaystyle \{1\}} are closed in R , {\displaystyle \mathbb {R} ,} only closed sets are capable of being precisely separated by a function, but just because two sets are closed and separated by a function does not mean that they are automatically precisely separated by a function (even a different function). == Relation to separation axioms and separated spaces == The separation axioms are various conditions that are sometimes imposed upon topological spaces, many of which can be described in terms of the various types of separated sets. As an example we will define the T2 axiom, which is the condition imposed on separated spaces. Specifically, a topological space is separated if, given any two distinct points x and y, the singleton sets {x} and {y} are separated by neighbourhoods. Separated spaces are usually called Hausdorff spaces or T2 spaces. == Relation to connected spaces == Given a topological space X, it is sometimes useful to consider whether it is possible for a subset A to be separated from its complement. This is certainly true if A is either the empty set or the entire space X, but there may be other possibilities. A topological space X is connected if these are the only two possibilities. Conversely, if a nonempty subset A is separated from its own complement, and if the only subset of A to share this property is the empty set, then A is an open-connected component of X. 
(In the degenerate case where X is itself the empty set ∅ {\displaystyle \emptyset } , authorities differ on whether ∅ {\displaystyle \emptyset } is connected and whether ∅ {\displaystyle \emptyset } is an open-connected component of itself.) == Relation to topologically distinguishable points == Given a topological space X, two points x and y are topologically distinguishable if there exists an open set that one point belongs to but the other point does not. If x and y are topologically distinguishable, then the singleton sets {x} and {y} must be disjoint. On the other hand, if the singletons {x} and {y} are separated, then the points x and y must be topologically distinguishable. Thus for singletons, topological distinguishability is a condition in between disjointness and separatedness. == See also == Hausdorff space – Type of topological space Locally Hausdorff space – Space such that every point has a Hausdorff neighborhood Separation axiom – Axioms in topology defining notions of "separation" == Citations == == Sources ==
Wikipedia/Separated_sets
In topology, a topological space with the trivial topology is one where the only open sets are the empty set and the entire space. Such spaces are commonly called indiscrete, anti-discrete, concrete or codiscrete. Intuitively, this has the consequence that all points of the space are "lumped together" and cannot be distinguished by topological means. Every indiscrete space can be viewed as a pseudometric space in which the distance between any two points is zero. == Details == The trivial topology is the topology with the least possible number of open sets, namely the empty set and the entire space, since the definition of a topology requires these two sets to be open. Despite its simplicity, a space X with more than one element and the trivial topology lacks a key desirable property: it is not a T0 space. Other properties of an indiscrete space X—many of which are quite unusual—include: The only closed sets are the empty set and X. The only possible basis of X is {X}. If X has more than one point, then since it is not T0, it does not satisfy any of the higher T axioms either. In particular, it is not a Hausdorff space. Not being Hausdorff, X is not an order topology, nor is it metrizable. X is, however, regular, completely regular, normal, and completely normal; all in a rather vacuous way though, since the only closed sets are ∅ and X. X is compact and therefore paracompact, Lindelöf, and locally compact. Every function whose domain is a topological space and whose codomain is X is continuous. X is path-connected and so connected. X is second-countable, and therefore is first-countable, separable and Lindelöf. All subspaces of X have the trivial topology. All quotient spaces of X have the trivial topology. Arbitrary products of trivial topological spaces, with either the product topology or box topology, have the trivial topology. All sequences in X converge to every point of X.
In particular, every sequence has a convergent subsequence (the whole sequence or any other subsequence), thus X is sequentially compact. The interior of every set except X is empty. The closure of every non-empty subset of X is X. Put another way: every non-empty subset of X is dense, a property that characterizes trivial topological spaces. As a result of this, the closure of every open subset U of X is either ∅ (if U = ∅) or X (otherwise). In particular, the closure of every open subset of X is again an open set, and therefore X is extremally disconnected. If S is any subset of X with more than one element, then all elements of X are limit points of S. If S is a singleton, then every point of X \ S is still a limit point of S. X is a Baire space. Two topological spaces carrying the trivial topology are homeomorphic iff they have the same cardinality. In some sense the opposite of the trivial topology is the discrete topology, in which every subset is open. The trivial topology belongs to a uniform space in which the whole cartesian product X × X is the only entourage. Let Top be the category of topological spaces with continuous maps and Set be the category of sets with functions. If G : Top → Set is the functor that assigns to each topological space its underlying set (the so-called forgetful functor), and H : Set → Top is the functor that puts the trivial topology on a given set, then H (the so-called cofree functor) is right adjoint to G. (The so-called free functor F : Set → Top that puts the discrete topology on a given set is left adjoint to G.) == See also == List of topologies Triviality (mathematics) == Notes == == References == Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
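Several of the listed properties can be verified mechanically on a small indiscrete space. In this Python sketch the three-point set, the two-point domain, and the map f are illustrative choices only; it confirms that every nonempty subset is dense, every proper subset has empty interior, and any map into X is continuous:

```python
from itertools import chain, combinations

X = frozenset({'a', 'b', 'c'})
trivial = {frozenset(), X}   # the indiscrete topology: only ∅ and X are open

def subsets(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def closure(A):
    closed = [X - U for U in trivial]                 # closed sets: X and ∅
    return frozenset.intersection(*[C for C in closed if A <= C])

def interior(A):
    return frozenset().union(*[U for U in trivial if U <= A])

# Every nonempty subset is dense; every proper subset has empty interior.
dense = all(closure(A) == X for A in subsets(X) if A)
hollow = all(interior(A) == frozenset() for A in subsets(X) if A != X)

# Any map into X is continuous: preimages of ∅ and X are ∅ and the whole domain,
# which are open in every topology on the domain.
D = frozenset({1, 2})
f = {1: 'a', 2: 'c'}
preimages_open = {frozenset(d for d in D if f[d] in U) for U in trivial} <= {frozenset(), D}

print(dense, hollow, preimages_open)  # True True True
```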
Wikipedia/Indiscrete_topology
In topology and related areas of mathematics, the quotient space of a topological space under a given equivalence relation is a new topological space constructed by endowing the quotient set of the original topological space with the quotient topology, that is, with the finest topology that makes continuous the canonical projection map (the function that maps points to their equivalence classes). In other words, a subset of a quotient space is open if and only if its preimage under the canonical projection map is open in the original topological space. Intuitively speaking, the points of each equivalence class are identified or "glued together" for forming a new topological space. For example, identifying the points of a sphere that belong to the same diameter produces the projective plane as a quotient space. == Definition == Let X {\displaystyle X} be a topological space, and let ∼ {\displaystyle \sim } be an equivalence relation on X . {\displaystyle X.} The quotient set Y = X / ∼ {\displaystyle Y=X/{\sim }} is the set of equivalence classes of elements of X . {\displaystyle X.} The equivalence class of x ∈ X {\displaystyle x\in X} is denoted [ x ] . {\displaystyle [x].} The construction of Y {\displaystyle Y} defines a canonical surjection q : X ∋ x ↦ [ x ] ∈ Y . {\textstyle q:X\ni x\mapsto [x]\in Y.} As discussed below, q {\displaystyle q} is a quotient mapping, commonly called the canonical quotient map, or canonical projection map, associated to X / ∼ . {\displaystyle X/{\sim }.} The quotient space under ∼ {\displaystyle \sim } is the set Y {\displaystyle Y} equipped with the quotient topology, whose open sets are those subsets U ⊆ Y {\textstyle U\subseteq Y} whose preimage q − 1 ( U ) {\displaystyle q^{-1}(U)} is open. In other words, U {\displaystyle U} is open in the quotient topology on X / ∼ {\displaystyle X/{\sim }} if and only if { x ∈ X : [ x ] ∈ U } {\textstyle \{x\in X:[x]\in U\}} is open in X . 
{\displaystyle X.} Similarly, a subset S ⊆ Y {\displaystyle S\subseteq Y} is closed if and only if { x ∈ X : [ x ] ∈ S } {\displaystyle \{x\in X:[x]\in S\}} is closed in X . {\displaystyle X.} The quotient topology is the final topology on the quotient set, with respect to the map x ↦ [ x ] . {\displaystyle x\mapsto [x].} == Quotient map == A map f : X → Y {\displaystyle f:X\to Y} is a quotient map (sometimes called an identification map) if it is surjective and Y {\displaystyle Y} is equipped with the final topology induced by f . {\displaystyle f.} The latter condition admits two more-elementary formulations: a subset V ⊆ Y {\displaystyle V\subseteq Y} is open (closed) if and only if f − 1 ( V ) {\displaystyle f^{-1}(V)} is open (resp. closed). Every quotient map is continuous but not every continuous map is a quotient map. Saturated sets A subset S {\displaystyle S} of X {\displaystyle X} is called saturated (with respect to f {\displaystyle f} ) if it is of the form S = f − 1 ( T ) {\displaystyle S=f^{-1}(T)} for some set T , {\displaystyle T,} which is true if and only if f − 1 ( f ( S ) ) = S . {\displaystyle f^{-1}(f(S))=S.} The assignment T ↦ f − 1 ( T ) {\displaystyle T\mapsto f^{-1}(T)} establishes a one-to-one correspondence (whose inverse is S ↦ f ( S ) {\displaystyle S\mapsto f(S)} ) between subsets T {\displaystyle T} of Y = f ( X ) {\displaystyle Y=f(X)} and saturated subsets of X . {\displaystyle X.} With this terminology, a surjection f : X → Y {\displaystyle f:X\to Y} is a quotient map if and only if for every saturated subset S {\displaystyle S} of X , {\displaystyle X,} S {\displaystyle S} is open in X {\displaystyle X} if and only if f ( S ) {\displaystyle f(S)} is open in Y . 
{\displaystyle Y.} In particular, open subsets of X {\displaystyle X} that are not saturated have no impact on whether the function f {\displaystyle f} is a quotient map (or, indeed, continuous: a function f : X → Y {\displaystyle f:X\to Y} is continuous if and only if, for every saturated S ⊆ X {\textstyle S\subseteq X} such that f ( S ) {\displaystyle f(S)} is open in f ( X ) {\textstyle f(X)} , the set S {\displaystyle S} is open in X {\textstyle X} ). Indeed, if τ {\displaystyle \tau } is a topology on X {\displaystyle X} and f : X → Y {\displaystyle f:X\to Y} is any map, then the set τ f {\displaystyle \tau _{f}} of all U ∈ τ {\displaystyle U\in \tau } that are saturated subsets of X {\displaystyle X} forms a topology on X . {\displaystyle X.} If Y {\displaystyle Y} is also a topological space then f : ( X , τ ) → Y {\displaystyle f:(X,\tau )\to Y} is a quotient map (respectively, continuous) if and only if the same is true of f : ( X , τ f ) → Y . {\displaystyle f:\left(X,\tau _{f}\right)\to Y.} Quotient space of fibers characterization Given an equivalence relation ∼ {\displaystyle \,\sim \,} on X , {\displaystyle X,} denote the equivalence class of a point x ∈ X {\displaystyle x\in X} by [ x ] := { z ∈ X : z ∼ x } {\displaystyle [x]:=\{z\in X:z\sim x\}} and let X / ∼ := { [ x ] : x ∈ X } {\displaystyle X/{\sim }:=\{[x]:x\in X\}} denote the set of equivalence classes. The map q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} that sends points to their equivalence classes (that is, it is defined by q ( x ) := [ x ] {\displaystyle q(x):=[x]} for every x ∈ X {\displaystyle x\in X} ) is called the canonical map. It is a surjective map and for all a , b ∈ X , {\displaystyle a,b\in X,} a ∼ b {\displaystyle a\,\sim \,b} if and only if q ( a ) = q ( b ) ; {\displaystyle q(a)=q(b);} consequently, q ( x ) = q − 1 ( q ( x ) ) {\displaystyle q(x)=q^{-1}(q(x))} for all x ∈ X . 
{\displaystyle x\in X.} In particular, this shows that the set of equivalence class X / ∼ {\displaystyle X/{\sim }} is exactly the set of fibers of the canonical map q . {\displaystyle q.} If X {\displaystyle X} is a topological space then giving X / ∼ {\displaystyle X/{\sim }} the quotient topology induced by q {\displaystyle q} will make it into a quotient space and make q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} into a quotient map. Up to a homeomorphism, this construction is representative of all quotient spaces; the precise meaning of this is now explained. Let f : X → Y {\displaystyle f:X\to Y} be a surjection between topological spaces (not yet assumed to be continuous or a quotient map) and declare for all a , b ∈ X {\displaystyle a,b\in X} that a ∼ b {\displaystyle a\,\sim \,b} if and only if f ( a ) = f ( b ) . {\displaystyle f(a)=f(b).} Then ∼ {\displaystyle \,\sim \,} is an equivalence relation on X {\displaystyle X} such that for every x ∈ X , {\displaystyle x\in X,} [ x ] = f − 1 ( f ( x ) ) , {\displaystyle [x]=f^{-1}(f(x)),} which implies that f ( [ x ] ) {\displaystyle f([x])} (defined by f ( [ x ] ) = { f ( z ) : z ∈ [ x ] } {\displaystyle f([x])=\{\,f(z)\,:z\in [x]\}} ) is a singleton set; denote the unique element in f ( [ x ] ) {\displaystyle f([x])} by f ^ ( [ x ] ) {\displaystyle {\hat {f}}([x])} (so by definition, f ( [ x ] ) = { f ^ ( [ x ] ) } {\displaystyle f([x])=\{\,{\hat {f}}([x])\,\}} ). The assignment [ x ] ↦ f ^ ( [ x ] ) {\displaystyle [x]\mapsto {\hat {f}}([x])} defines a bijection f ^ : X / ∼ → Y {\displaystyle {\hat {f}}:X/{\sim }\;\to \;Y} between the fibers of f {\displaystyle f} and points in Y . {\displaystyle Y.} Define the map q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} as above (by q ( x ) := [ x ] {\displaystyle q(x):=[x]} ) and give X / ∼ {\displaystyle X/{\sim }} the quotient topology induced by q {\displaystyle q} (which makes q {\displaystyle q} a quotient map). 
These maps are related by: f = f ^ ∘ q and q = f ^ − 1 ∘ f . {\displaystyle f={\hat {f}}\circ q\quad {\text{ and }}\quad q={\hat {f}}^{-1}\circ f.} From this and the fact that q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} is a quotient map, it follows that f : X → Y {\displaystyle f:X\to Y} is continuous if and only if this is true of f ^ : X / ∼ → Y . {\displaystyle {\hat {f}}:X/{\sim }\;\to \;Y.} Furthermore, f : X → Y {\displaystyle f:X\to Y} is a quotient map if and only if f ^ : X / ∼ → Y {\displaystyle {\hat {f}}:X/{\sim }\;\to \;Y} is a homeomorphism (or equivalently, if and only if both f ^ {\displaystyle {\hat {f}}} and its inverse are continuous). === Related definitions === A hereditarily quotient map is a surjective map f : X → Y {\displaystyle f:X\to Y} with the property that for every subset T ⊆ Y , {\displaystyle T\subseteq Y,} the restriction f | f − 1 ( T ) : f − 1 ( T ) → T {\displaystyle f{\big \vert }_{f^{-1}(T)}~:~f^{-1}(T)\to T} is also a quotient map. There exist quotient maps that are not hereditarily quotient. == Examples == Gluing. Topologists talk of gluing points together. If X {\displaystyle X} is a topological space, gluing the points x {\displaystyle x} and y {\displaystyle y} in X {\displaystyle X} means considering the quotient space obtained from the equivalence relation a ∼ b {\displaystyle a\sim b} if and only if a = b {\displaystyle a=b} or a = x , b = y {\displaystyle a=x,b=y} (or a = y , b = x {\displaystyle a=y,b=x} ). Consider the unit square I 2 = [ 0 , 1 ] × [ 0 , 1 ] {\displaystyle I^{2}=[0,1]\times [0,1]} and the equivalence relation ~ generated by the requirement that all boundary points be equivalent, thus identifying all boundary points to a single equivalence class. Then I 2 / ∼ {\displaystyle I^{2}/\sim } is homeomorphic to the sphere S 2 . {\displaystyle S^{2}.} Adjunction space. More generally, suppose X {\displaystyle X} is a space and A {\displaystyle A} is a subspace of X . 
{\displaystyle X.} One can identify all points in A {\displaystyle A} to a single equivalence class and leave points outside of A {\displaystyle A} equivalent only to themselves. The resulting quotient space is denoted X / A . {\displaystyle X/A.} The 2-sphere is then homeomorphic to a closed disc with its boundary identified to a single point: D 2 / ∂ D 2 . {\displaystyle D^{2}/\partial {D^{2}}.} Consider the set R {\displaystyle \mathbb {R} } of real numbers with the ordinary topology, and write x ∼ y {\displaystyle x\sim y} if and only if x − y {\displaystyle x-y} is an integer. Then the quotient space X / ∼ {\displaystyle X/{\sim }} is homeomorphic to the unit circle S 1 {\displaystyle S^{1}} via the homeomorphism which sends the equivalence class of x {\displaystyle x} to exp ⁡ ( 2 π i x ) . {\displaystyle \exp(2\pi ix).} A generalization of the previous example is the following: Suppose a topological group G {\displaystyle G} acts continuously on a space X . {\displaystyle X.} One can form an equivalence relation on X {\displaystyle X} by saying points are equivalent if and only if they lie in the same orbit. The quotient space under this relation is called the orbit space, denoted X / G . {\displaystyle X/G.} In the previous example G = Z {\displaystyle G=\mathbb {Z} } acts on R {\displaystyle \mathbb {R} } by translation. The orbit space R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } is homeomorphic to S 1 . {\displaystyle S^{1}.} Note: The notation R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } is somewhat ambiguous. If Z {\displaystyle \mathbb {Z} } is understood to be a group acting on R {\displaystyle \mathbb {R} } via addition, then the quotient is the circle. 
However, if Z {\displaystyle \mathbb {Z} } is thought of as a topological subspace of R {\displaystyle \mathbb {R} } (that is identified as a single point) then the quotient { Z } ∪ { { r } : r ∈ R ∖ Z } {\displaystyle \{\mathbb {Z} \}\cup \{\,\{r\}:r\in \mathbb {R} \setminus \mathbb {Z} \}} (which is identifiable with the set { Z } ∪ ( R ∖ Z ) {\displaystyle \{\mathbb {Z} \}\cup (\mathbb {R} \setminus \mathbb {Z} )} ) is a countably infinite bouquet of circles joined at a single point Z . {\displaystyle \mathbb {Z} .} This next example shows that it is in general not true that if q : X → Y {\displaystyle q:X\to Y} is a quotient map then every convergent sequence (respectively, every convergent net) in Y {\displaystyle Y} has a lift (by q {\displaystyle q} ) to a convergent sequence (or convergent net) in X . {\displaystyle X.} Let X = [ 0 , 1 ] {\displaystyle X=[0,1]} and ∼ = { { 0 , 1 } } ∪ { { x } : x ∈ ( 0 , 1 ) } . {\displaystyle \,\sim ~=~\{\,\{0,1\}\,\}~\cup ~\left\{\{x\}:x\in (0,1)\,\right\}.} Let Y := X / ∼ {\displaystyle Y:=X/{\sim }} and let q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} be the quotient map q ( x ) := [ x ] , {\displaystyle q(x):=[x],} so that q ( 0 ) = q ( 1 ) = { 0 , 1 } {\displaystyle q(0)=q(1)=\{0,1\}} and q ( x ) = { x } {\displaystyle q(x)=\{x\}} for every x ∈ ( 0 , 1 ) . {\displaystyle x\in (0,1).} The map h : X / ∼ → S 1 ⊆ C {\displaystyle h:X/{\sim }\to S^{1}\subseteq \mathbb {C} } defined by h ( [ x ] ) := e 2 π i x {\displaystyle h([x]):=e^{2\pi ix}} is well-defined (because e 2 π i ( 0 ) = 1 = e 2 π i ( 1 ) {\displaystyle e^{2\pi i(0)}=1=e^{2\pi i(1)}} ) and a homeomorphism. 
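The well-definedness of h, namely that the identified endpoints 0 and 1 receive the same value, can be checked numerically. In this Python sketch the sample size 997 is an arbitrary choice; it also checks that distinct interior points get distinct values and that every value lies on the unit circle:

```python
import cmath

def h(x):
    # Representative-wise formula h([x]) = exp(2*pi*i*x) on X = [0, 1].
    return cmath.exp(2j * cmath.pi * x)

# Well-defined on the quotient: the glued endpoints 0 and 1 get the same value.
endpoints_agree = abs(h(0.0) - h(1.0)) < 1e-12

# Distinct interior points of (0, 1) map to distinct points of the circle.
samples = [k / 997 for k in range(1, 997)]
values = [h(x) for x in samples]
injective = len({(round(v.real, 9), round(v.imag, 9)) for v in values}) == len(values)

# All values land on the unit circle S^1.
on_circle = all(abs(abs(v) - 1.0) < 1e-12 for v in values)
print(endpoints_agree, injective, on_circle)  # True True True
```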
Let I = N {\displaystyle I=\mathbb {N} } and let a ∙ := ( a i ) i ∈ I and b ∙ := ( b i ) i ∈ I {\displaystyle a_{\bullet }:=\left(a_{i}\right)_{i\in I}{\text{ and }}b_{\bullet }:=\left(b_{i}\right)_{i\in I}} be any sequences (or more generally, any nets) valued in ( 0 , 1 ) {\displaystyle (0,1)} such that a ∙ → 0 and b ∙ → 1 {\displaystyle a_{\bullet }\to 0{\text{ and }}b_{\bullet }\to 1} in X = [ 0 , 1 ] . {\displaystyle X=[0,1].} Then the sequence y 1 := q ( a 1 ) , y 2 := q ( b 1 ) , y 3 := q ( a 2 ) , y 4 := q ( b 2 ) , … {\displaystyle y_{1}:=q\left(a_{1}\right),y_{2}:=q\left(b_{1}\right),y_{3}:=q\left(a_{2}\right),y_{4}:=q\left(b_{2}\right),\ldots } converges to [ 0 ] = [ 1 ] {\displaystyle [0]=[1]} in X / ∼ {\displaystyle X/{\sim }} but there does not exist any convergent lift of this sequence by the quotient map q {\displaystyle q} (that is, there is no sequence s ∙ = ( s i ) i ∈ I {\displaystyle s_{\bullet }=\left(s_{i}\right)_{i\in I}} in X {\displaystyle X} that both converges to some x ∈ X {\displaystyle x\in X} and satisfies y i = q ( s i ) {\displaystyle y_{i}=q\left(s_{i}\right)} for every i ∈ I {\displaystyle i\in I} ). This counterexample can be generalized to nets by letting ( A , ≤ ) {\displaystyle (A,\leq )} be any directed set, and making I := A × { 1 , 2 } {\displaystyle I:=A\times \{1,2\}} into a net by declaring that for any ( a , m ) , ( b , n ) ∈ I , {\displaystyle (a,m),(b,n)\in I,} ( m , a ) ≤ ( n , b ) {\displaystyle (m,a)\;\leq \;(n,b)} holds if and only if both (1) a ≤ b , {\displaystyle a\leq b,} and (2) if a = b then m ≤ n ; {\displaystyle a=b{\text{ then }}m\leq n;} then the A {\displaystyle A} -indexed net defined by letting y ( a , m ) {\displaystyle y_{(a,m)}} equal a i if m = 1 {\displaystyle a_{i}{\text{ if }}m=1} and equal to b i if m = 2 {\displaystyle b_{i}{\text{ if }}m=2} has no lift (by q {\displaystyle q} ) to a convergent A {\displaystyle A} -indexed net in X = [ 0 , 1 ] . 
{\displaystyle X=[0,1].} == Properties == Quotient maps q : X → Y {\displaystyle q:X\to Y} are characterized among surjective maps by the following property: if Z {\displaystyle Z} is any topological space and f : Y → Z {\displaystyle f:Y\to Z} is any function, then f {\displaystyle f} is continuous if and only if f ∘ q {\displaystyle f\circ q} is continuous. The quotient space X / ∼ {\displaystyle X/{\sim }} together with the quotient map q : X → X / ∼ {\displaystyle q:X\to X/{\sim }} is characterized by the following universal property: if g : X → Z {\displaystyle g:X\to Z} is a continuous map such that a ∼ b {\displaystyle a\sim b} implies g ( a ) = g ( b ) {\displaystyle g(a)=g(b)} for all a , b ∈ X , {\displaystyle a,b\in X,} then there exists a unique continuous map f : X / ∼ → Z {\displaystyle f:X/{\sim }\to Z} such that g = f ∘ q . {\displaystyle g=f\circ q.} In other words, the following diagram commutes: To express this, one says that g {\displaystyle g} descends to the quotient, that is, that it factors through the quotient space. The continuous maps defined on X / ∼ {\displaystyle X/{\sim }} are, therefore, precisely those maps which arise from continuous maps defined on X {\displaystyle X} that respect the equivalence relation (in the sense that they send equivalent elements to the same image). This criterion is used extensively when studying quotient spaces. Given a continuous surjection q : X → Y {\displaystyle q:X\to Y} it is useful to have criteria by which one can determine if q {\displaystyle q} is a quotient map. Two sufficient criteria are that q {\displaystyle q} be open or closed. Note that these conditions are only sufficient, not necessary. It is easy to construct examples of quotient maps that are neither open nor closed. For topological groups, the quotient map is open. == Compatibility with other topological notions == Separation In general, quotient spaces are ill-behaved with respect to separation axioms. 
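Returning to the universal property above: on a finite set it can be verified mechanically. The sketch below (illustrative names, not from the article) builds the quotient map q and checks that a map g that respects the equivalence relation descends to a well-defined f with g = f ∘ q:

```python
# Quotient of X = {0, 1, 2, 3} by the relation identifying 0 ~ 1.
X = [0, 1, 2, 3]
classes = {0: frozenset({0, 1}), 1: frozenset({0, 1}),
           2: frozenset({2}), 3: frozenset({3})}

def q(x):
    # Quotient map: send each point to its equivalence class.
    return classes[x]

# A map g : X -> Z that is constant on equivalence classes ...
g = {0: "a", 1: "a", 2: "b", 3: "c"}

# ... descends to a well-defined f : X/~ -> Z with g = f o q.
# (Building f this way cannot clash precisely because g respects ~.)
f = {q(x): g[x] for x in X}
assert all(f[q(x)] == g[x] for x in X)  # the diagram commutes
```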
The separation properties of X {\displaystyle X} need not be inherited by X / ∼ {\displaystyle X/{\sim }} and X / ∼ {\displaystyle X/{\sim }} may have separation properties not shared by X . {\displaystyle X.} X / ∼ {\displaystyle X/{\sim }} is a T1 space if and only if every equivalence class of ∼ {\displaystyle \,\sim \,} is closed in X . {\displaystyle X.} If the quotient map is open, then X / ∼ {\displaystyle X/{\sim }} is a Hausdorff space if and only if ~ is a closed subset of the product space X × X . {\displaystyle X\times X.} Connectedness If a space is connected or path connected, then so are all its quotient spaces. A quotient space of a simply connected or contractible space need not share those properties. Compactness If a space is compact, then so are all its quotient spaces. A quotient space of a locally compact space need not be locally compact. Dimension The topological dimension of a quotient space can be more (as well as less) than the dimension of the original space; space-filling curves provide such examples. == See also == Topology Covering space – Type of continuous map in topology Disjoint union (topology) – Mathematical term Final topology – Finest topology making some functions continuous Mapping cone (topology) – Topological construction on a map between spaces Product space – Topology on Cartesian products of topological spacesPages displaying short descriptions of redirect targets Subspace (topology) – Inherited topologyPages displaying short descriptions of redirect targets Topological space – Mathematical space with a notion of closeness Algebra Quotient category Quotient group – Group obtained by aggregating similar elements of a larger group Quotient space (linear algebra) – Vector space consisting of affine subsets Mapping cone (homological algebra) – Tool in homological algebra == Notes == == References == Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. 
Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129. Bourbaki, Nicolas (1989) [1967]. General Topology 2: Chapters 5–10 [Topologie Générale]. Éléments de mathématique. Vol. 4. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64563-4. OCLC 246032063. Brown, Ronald (2006), Topology and Groupoids, Booksurge, ISBN 1-4196-2722-8 Dixmier, Jacques (1984). General Topology. Undergraduate Texts in Mathematics. Translated by Berberian, S. K. New York: Springer-Verlag. ISBN 978-0-387-90972-1. OCLC 10277303. Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485. Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. (accessible to patrons with print disabilities) Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240. Willard, Stephen (1970). General Topology. Reading, MA: Addison-Wesley. ISBN 0-486-43479-6.
Wikipedia/Quotient_topology
In mathematics, a base (or basis; pl.: bases) for the topology τ of a topological space (X, τ) is a family B {\displaystyle {\mathcal {B}}} of open subsets of X such that every open set of the topology is equal to the union of some sub-family of B {\displaystyle {\mathcal {B}}} . For example, the set of all open intervals in the real number line R {\displaystyle \mathbb {R} } is a basis for the Euclidean topology on R {\displaystyle \mathbb {R} } because every open interval is an open set, and also every open subset of R {\displaystyle \mathbb {R} } can be written as a union of some family of open intervals. Bases are ubiquitous throughout topology. The sets in a base for a topology, which are called basic open sets, are often easier to describe and use than arbitrary open sets. Many important topological definitions such as continuity and convergence can be checked using only basic open sets instead of arbitrary open sets. Some topologies have a base of open sets with specific useful properties that may make checking such topological definitions easier. Not all families of subsets of a set X {\displaystyle X} form a base for a topology on X {\displaystyle X} . Under some conditions detailed below, a family of subsets will form a base for a (unique) topology on X {\displaystyle X} , obtained by taking all possible unions of subfamilies. Such families of sets are very frequently used to define topologies. A weaker notion related to bases is that of a subbase for a topology. Bases for topologies are also closely related to neighborhood bases. 
== Definition and basic properties == Given a topological space ( X , τ ) {\displaystyle (X,\tau )} , a base (or basis) for the topology τ {\displaystyle \tau } (also called a base for X {\displaystyle X} if the topology is understood) is a family B ⊆ τ {\displaystyle {\mathcal {B}}\subseteq \tau } of open sets such that every open set of the topology can be represented as the union of some subfamily of B {\displaystyle {\mathcal {B}}} . The elements of B {\displaystyle {\mathcal {B}}} are called basic open sets. Equivalently, a family B {\displaystyle {\mathcal {B}}} of subsets of X {\displaystyle X} is a base for the topology τ {\displaystyle \tau } if and only if B ⊆ τ {\displaystyle {\mathcal {B}}\subseteq \tau } and for every open set U {\displaystyle U} in X {\displaystyle X} and point x ∈ U {\displaystyle x\in U} there is some basic open set B ∈ B {\displaystyle B\in {\mathcal {B}}} such that x ∈ B ⊆ U {\displaystyle x\in B\subseteq U} . For example, the collection of all open intervals in the real line forms a base for the standard topology on the real numbers. More generally, in a metric space M {\displaystyle M} the collection of all open balls about points of M {\displaystyle M} forms a base for the topology. In general, a topological space ( X , τ ) {\displaystyle (X,\tau )} can have many bases. The whole topology τ {\displaystyle \tau } is always a base for itself (that is, τ {\displaystyle \tau } is a base for τ {\displaystyle \tau } ). For the real line, the collection of all open intervals is a base for the topology. So is the collection of all open intervals with rational endpoints, or the collection of all open intervals with irrational endpoints, for example. Note that two different bases need not have any basic open set in common. One of the topological properties of a space X {\displaystyle X} is the minimum cardinality of a base for its topology, called the weight of X {\displaystyle X} and denoted w ( X ) {\displaystyle w(X)} . 
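The pointwise characterization just given (for every open U and every x ∈ U there is a basic B with x ∈ B ⊆ U) can be checked by brute force on a finite topology. The helper below is an illustrative sketch, not part of the article:

```python
def is_base(base, topology):
    # A family is a base for `topology` iff it consists of open sets and
    # every open U is, pointwise, filled by basic sets B with x in B <= U.
    base = [set(B) for B in base]
    topology = [set(U) for U in topology]
    if any(B not in topology for B in base):
        return False
    return all(any(x in B and B <= U for B in base)
               for U in topology for x in U)

# A small topology on {0, 1, 2}:
tau = [set(), {0}, {0, 1}, {0, 2}, {0, 1, 2}]
assert is_base([{0}, {0, 1}, {0, 2}], tau)   # omits only the unions
assert is_base(tau, tau)                     # a topology is a base for itself
assert not is_base([{0}, {0, 1}], tau)       # cannot recover {0, 2}
```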
From the examples above, the real line has countable weight. If B {\displaystyle {\mathcal {B}}} is a base for the topology τ {\displaystyle \tau } of a space X {\displaystyle X} , it satisfies the following properties: (B1) The elements of B {\displaystyle {\mathcal {B}}} cover X {\displaystyle X} , i.e., every point x ∈ X {\displaystyle x\in X} belongs to some element of B {\displaystyle {\mathcal {B}}} . (B2) For every B 1 , B 2 ∈ B {\displaystyle B_{1},B_{2}\in {\mathcal {B}}} and every point x ∈ B 1 ∩ B 2 {\displaystyle x\in B_{1}\cap B_{2}} , there exists some B 3 ∈ B {\displaystyle B_{3}\in {\mathcal {B}}} such that x ∈ B 3 ⊆ B 1 ∩ B 2 {\displaystyle x\in B_{3}\subseteq B_{1}\cap B_{2}} . Property (B1) corresponds to the fact that X {\displaystyle X} is an open set; property (B2) corresponds to the fact that B 1 ∩ B 2 {\displaystyle B_{1}\cap B_{2}} is an open set. Conversely, suppose X {\displaystyle X} is just a set without any topology and B {\displaystyle {\mathcal {B}}} is a family of subsets of X {\displaystyle X} satisfying properties (B1) and (B2). Then B {\displaystyle {\mathcal {B}}} is a base for the topology that it generates. More precisely, let τ {\displaystyle \tau } be the family of all subsets of X {\displaystyle X} that are unions of subfamilies of B . {\displaystyle {\mathcal {B}}.} Then τ {\displaystyle \tau } is a topology on X {\displaystyle X} and B {\displaystyle {\mathcal {B}}} is a base for τ {\displaystyle \tau } . (Sketch: τ {\displaystyle \tau } defines a topology because it is stable under arbitrary unions by construction, it is stable under finite intersections by (B2), it contains X {\displaystyle X} by (B1), and it contains the empty set as the union of the empty subfamily of B {\displaystyle {\mathcal {B}}} . The family B {\displaystyle {\mathcal {B}}} is then a base for τ {\displaystyle \tau } by construction.) Such families of sets are a very common way of defining a topology. 
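Generating the topology from a family satisfying (B1) and (B2) is literally "take all unions", and on a finite set this is directly computable. A small sketch (illustrative, exponential in the size of the base, so only for tiny examples):

```python
from itertools import chain, combinations

def generate(base):
    # All unions of subfamilies of `base`; the empty subfamily yields the
    # empty set, so under (B1) and (B2) the result is a topology.
    base = [frozenset(B) for B in base]
    fams = chain.from_iterable(combinations(base, r)
                               for r in range(len(base) + 1))
    return {frozenset().union(*fam) for fam in fams}

B = [{0}, {1}, {0, 1, 2}]   # satisfies (B1) and (B2) on X = {0, 1, 2}
tau = generate(B)
assert frozenset() in tau and frozenset({0, 1, 2}) in tau
# Stability under unions and finite intersections:
assert all(u | v in tau and u & v in tau for u in tau for v in tau)
```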
In general, if X {\displaystyle X} is a set and B {\displaystyle {\mathcal {B}}} is an arbitrary collection of subsets of X {\displaystyle X} , there is a (unique) smallest topology τ {\displaystyle \tau } on X {\displaystyle X} containing B {\displaystyle {\mathcal {B}}} . (This topology is the intersection of all topologies on X {\displaystyle X} containing B {\displaystyle {\mathcal {B}}} .) The topology τ {\displaystyle \tau } is called the topology generated by B {\displaystyle {\mathcal {B}}} , and B {\displaystyle {\mathcal {B}}} is called a subbase for τ {\displaystyle \tau } . The topology τ {\displaystyle \tau } consists of X {\displaystyle X} together with all arbitrary unions of finite intersections of elements of B {\displaystyle {\mathcal {B}}} (see the article about subbase.) Now, if B {\displaystyle {\mathcal {B}}} also satisfies properties (B1) and (B2), the topology generated by B {\displaystyle {\mathcal {B}}} can be described in a simpler way without having to take intersections: τ {\displaystyle \tau } is the set of all unions of elements of B {\displaystyle {\mathcal {B}}} (and B {\displaystyle {\mathcal {B}}} is a base for τ {\displaystyle \tau } in that case). There is often an easy way to check condition (B2). If the intersection of any two elements of B {\displaystyle {\mathcal {B}}} is itself an element of B {\displaystyle {\mathcal {B}}} or is empty, then condition (B2) is automatically satisfied (by taking B 3 = B 1 ∩ B 2 {\displaystyle B_{3}=B_{1}\cap B_{2}} ). For example, the Euclidean topology on the plane admits as a base the set of all open rectangles with horizontal and vertical sides, and a nonempty intersection of two such basic open sets is also a basic open set. But another base for the same topology is the collection of all open disks; and here the full (B2) condition is necessary. 
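For an arbitrary family one must also close under finite intersections before taking unions. The sketch below (illustrative names, finite example) generates a topology from a subbase and shows a set that appears only after the intersection step, so the subbase is not itself a base:

```python
from itertools import chain, combinations

def subfamilies(fam):
    return chain.from_iterable(combinations(fam, r)
                               for r in range(len(fam) + 1))

def from_subbase(X, subbase):
    # Close under finite intersections first (the empty intersection is X),
    # then take all unions, as in the construction described above.
    X = frozenset(X)
    sub = [frozenset(S) for S in subbase]
    inters = {X} | {frozenset.intersection(*fam)
                    for fam in subfamilies(sub) if fam}
    return {frozenset().union(*fam) for fam in subfamilies(list(inters))}

X = {0, 1, 2, 3}
S = [{0, 1}, {2, 3}, {1, 2, 3}]        # covers X but is not a base
tau = from_subbase(X, S)
unions_only = {frozenset().union(*fam)
               for fam in subfamilies([frozenset(s) for s in S])}
assert frozenset({1}) in tau           # produced by an intersection
assert frozenset({1}) not in unions_only
assert all(u & v in tau and u | v in tau for u in tau for v in tau)
```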
An example of a collection of open sets that is not a base is the set S {\displaystyle S} of all semi-infinite intervals of the forms ( − ∞ , a ) {\displaystyle (-\infty ,a)} and ( a , ∞ ) {\displaystyle (a,\infty )} with a ∈ R {\displaystyle a\in \mathbb {R} } . The topology generated by S {\displaystyle S} contains all open intervals ( a , b ) = ( − ∞ , b ) ∩ ( a , ∞ ) {\displaystyle (a,b)=(-\infty ,b)\cap (a,\infty )} , hence S {\displaystyle S} generates the standard topology on the real line. But S {\displaystyle S} is only a subbase for the topology, not a base: a finite open interval ( a , b ) {\displaystyle (a,b)} does not contain any element of S {\displaystyle S} (equivalently, property (B2) does not hold). == Examples == The set Γ of all open intervals in R {\displaystyle \mathbb {R} } forms a basis for the Euclidean topology on R {\displaystyle \mathbb {R} } . A non-empty family of subsets of a set X that is closed under finite intersections of two or more sets, which is called a π-system on X, is necessarily a base for a topology on X if and only if it covers X. By definition, every σ-algebra, every filter (and so in particular, every neighborhood filter), and every topology is a covering π-system and so also a base for a topology. In fact, if Γ is a filter on X then { ∅ } ∪ Γ is a topology on X and Γ is a basis for it. A base for a topology does not have to be closed under finite intersections and many are not. But nevertheless, many topologies are defined by bases that are also closed under finite intersections. For example, each of the following families of subset of R {\displaystyle \mathbb {R} } is closed under finite intersections and so each forms a basis for some topology on R {\displaystyle \mathbb {R} } : The set Γ of all bounded open intervals in R {\displaystyle \mathbb {R} } generates the usual Euclidean topology on R {\displaystyle \mathbb {R} } . 
The set Σ of all bounded closed intervals in R {\displaystyle \mathbb {R} } generates the discrete topology on R {\displaystyle \mathbb {R} } and so the Euclidean topology is a subset of this topology. This is despite the fact that Γ is not a subset of Σ. Consequently, the topology generated by Γ, which is the Euclidean topology on R {\displaystyle \mathbb {R} } , is coarser than the topology generated by Σ. In fact, it is strictly coarser because Σ contains non-empty compact sets which are never open in the Euclidean topology. The set Γ Q {\displaystyle \Gamma _{\mathbb {Q} }} of all intervals in Γ such that both endpoints of the interval are rational numbers generates the same topology as Γ. This remains true if each instance of the symbol Γ is replaced by Σ. Σ∞ = { [r, ∞) : r ∈ R {\displaystyle \mathbb {R} } } generates a topology that is strictly coarser than the topology generated by Σ. No element of Σ∞ is open in the Euclidean topology on R {\displaystyle \mathbb {R} } . Γ∞ = { (r, ∞) : r ∈ R {\displaystyle \mathbb {R} } } generates a topology that is strictly coarser than both the Euclidean topology and the topology generated by Σ∞. The sets Σ∞ and Γ∞ are disjoint, but nevertheless Γ∞ is a subset of the topology generated by Σ∞. === Objects defined in terms of bases === The order topology on a totally ordered set admits a collection of open-interval-like sets as a base. In a metric space the collection of all open balls forms a base for the topology. The discrete topology has the collection of all singletons as a base. A second-countable space is one that has a countable base. The Zariski topology on the spectrum of a ring has a base consisting of open sets that have specific useful properties. For the usual base for this topology, every finite intersection of basic open sets is a basic open set. The Zariski topology of C n {\displaystyle \mathbb {C} ^{n}} is the topology that has the algebraic sets as closed sets. 
It has a base formed by the set complements of algebraic hypersurfaces. The Zariski topology of the spectrum of a ring (the set of the prime ideals) has a base such that each element consists of all prime ideals that do not contain a given element of the ring. == Theorems == A topology τ 2 {\displaystyle \tau _{2}} is finer than a topology τ 1 {\displaystyle \tau _{1}} if and only if for each x ∈ X {\displaystyle x\in X} and each basic open set B {\displaystyle B} of τ 1 {\displaystyle \tau _{1}} containing x {\displaystyle x} , there is a basic open set of τ 2 {\displaystyle \tau _{2}} containing x {\displaystyle x} and contained in B {\displaystyle B} . If B 1 , … , B n {\displaystyle {\mathcal {B}}_{1},\ldots ,{\mathcal {B}}_{n}} are bases for the topologies τ 1 , … , τ n {\displaystyle \tau _{1},\ldots ,\tau _{n}} then the collection of all set products B 1 × ⋯ × B n {\displaystyle B_{1}\times \cdots \times B_{n}} with each B i ∈ B i {\displaystyle B_{i}\in {\mathcal {B}}_{i}} is a base for the product topology τ 1 × ⋯ × τ n . {\displaystyle \tau _{1}\times \cdots \times \tau _{n}.} In the case of an infinite product, this still applies, except that all but finitely many of the base elements must be the entire space. Let B {\displaystyle {\mathcal {B}}} be a base for X {\displaystyle X} and let Y {\displaystyle Y} be a subspace of X {\displaystyle X} . Then if we intersect each element of B {\displaystyle {\mathcal {B}}} with Y {\displaystyle Y} , the resulting collection of sets is a base for the subspace Y {\displaystyle Y} . If a function f : X → Y {\displaystyle f:X\to Y} maps every basic open set of X {\displaystyle X} into an open set of Y {\displaystyle Y} , it is an open map. Similarly, if every preimage of a basic open set of Y {\displaystyle Y} is open in X {\displaystyle X} , then f {\displaystyle f} is continuous. 
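The subspace statement above can likewise be tested on a finite example. The sketch below (illustrative names) generates the topology on X from a base, forms the subspace topology on Y by taking traces, and checks that the traces of the basic sets already form a base for it:

```python
from itertools import chain, combinations

def topology_from_base(base):
    # Generate a topology as all unions of subfamilies of the base.
    base = [frozenset(B) for B in base]
    fams = chain.from_iterable(combinations(base, r)
                               for r in range(len(base) + 1))
    return {frozenset().union(*fam) for fam in fams}

B = [{0}, {1, 2}, {0, 1, 2, 3}]        # a base on X = {0, 1, 2, 3}
Y = frozenset({1, 3})                  # a subspace of X
tau_X = topology_from_base(B)
tau_Y = {U & Y for U in tau_X}         # subspace topology: traces on Y
B_Y = [set(b) & Y for b in B]          # traces of the *basic* sets only
assert topology_from_base(B_Y) == tau_Y
```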
B {\displaystyle {\mathcal {B}}} is a base for a topological space X {\displaystyle X} if and only if the subcollection of elements of B {\displaystyle {\mathcal {B}}} which contain x {\displaystyle x} form a local base at x {\displaystyle x} , for any point x ∈ X {\displaystyle x\in X} . == Base for the closed sets == Closed sets are equally adept at describing the topology of a space. There is, therefore, a dual notion of a base for the closed sets of a topological space. Given a topological space X , {\displaystyle X,} a family C {\displaystyle {\mathcal {C}}} of closed sets forms a base for the closed sets if and only if for each closed set A {\displaystyle A} and each point x {\displaystyle x} not in A {\displaystyle A} there exists an element of C {\displaystyle {\mathcal {C}}} containing A {\displaystyle A} but not containing x . {\displaystyle x.} A family C {\displaystyle {\mathcal {C}}} is a base for the closed sets of X {\displaystyle X} if and only if its dual in X , {\displaystyle X,} that is the family { X ∖ C : C ∈ C } {\displaystyle \{X\setminus C:C\in {\mathcal {C}}\}} of complements of members of C {\displaystyle {\mathcal {C}}} , is a base for the open sets of X . {\displaystyle X.} Let C {\displaystyle {\mathcal {C}}} be a base for the closed sets of X . {\displaystyle X.} Then ⋂ C = ∅ {\displaystyle \bigcap {\mathcal {C}}=\varnothing } For each C 1 , C 2 ∈ C {\displaystyle C_{1},C_{2}\in {\mathcal {C}}} the union C 1 ∪ C 2 {\displaystyle C_{1}\cup C_{2}} is the intersection of some subfamily of C {\displaystyle {\mathcal {C}}} (that is, for any x ∈ X {\displaystyle x\in X} not in C 1 or C 2 {\displaystyle C_{1}{\text{ or }}C_{2}} there is some C 3 ∈ C {\displaystyle C_{3}\in {\mathcal {C}}} containing C 1 ∪ C 2 {\displaystyle C_{1}\cup C_{2}} and not containing x {\displaystyle x} ). Any collection of subsets of a set X {\displaystyle X} satisfying these properties forms a base for the closed sets of a topology on X . 
{\displaystyle X.} The closed sets of this topology are precisely the intersections of members of C . {\displaystyle {\mathcal {C}}.} In some cases it is more convenient to use a base for the closed sets rather than the open ones. For example, a space is completely regular if and only if the zero sets form a base for the closed sets. Given any topological space X , {\displaystyle X,} the zero sets form the base for the closed sets of some topology on X . {\displaystyle X.} This topology will be the finest completely regular topology on X {\displaystyle X} coarser than the original one. In a similar vein, the Zariski topology on affine n-space A n {\displaystyle \mathbb {A} ^{n}} is defined by taking the zero sets of polynomial functions as a base for the closed sets. == Weight and character == We shall work with notions established in (Engelking 1989, p. 12, pp. 127–128). Fix X {\displaystyle X} a topological space. Here, a network is a family N {\displaystyle {\mathcal {N}}} of sets, for which, for all points x {\displaystyle x} and open neighbourhoods U containing x {\displaystyle x} , there exists B {\displaystyle B} in N {\displaystyle {\mathcal {N}}} for which x ∈ B ⊆ U . {\displaystyle x\in B\subseteq U.} Note that, unlike a basis, the sets in a network need not be open. We define the weight, w ( X ) {\displaystyle w(X)} , as the minimum cardinality of a basis; we define the network weight, n w ( X ) {\displaystyle nw(X)} , as the minimum cardinality of a network; the character of a point, χ ( x , X ) , {\displaystyle \chi (x,X),} as the minimum cardinality of a neighbourhood basis for x {\displaystyle x} in X {\displaystyle X} ; and the character of X {\displaystyle X} to be χ ( X ) ≜ sup { χ ( x , X ) : x ∈ X } . {\displaystyle \chi (X)\triangleq \sup\{\chi (x,X):x\in X\}.} The point of computing the character and weight is to be able to tell what sort of bases and local bases can exist. We have the following facts: n w ( X ) ≤ w ( X ) {\displaystyle nw(X)\leq w(X)} . 
if X {\displaystyle X} is discrete, then w ( X ) = n w ( X ) = | X | {\displaystyle w(X)=nw(X)=|X|} . if X {\displaystyle X} is Hausdorff, then n w ( X ) {\displaystyle nw(X)} is finite if and only if X {\displaystyle X} is finite discrete. if B {\displaystyle B} is a basis of X {\displaystyle X} then there is a basis B ′ ⊆ B {\displaystyle B'\subseteq B} of size | B ′ | ≤ w ( X ) . {\displaystyle |B'|\leq w(X).} if N {\displaystyle N} is a neighbourhood basis for x {\displaystyle x} in X {\displaystyle X} then there is a neighbourhood basis N ′ ⊆ N {\displaystyle N'\subseteq N} of size | N ′ | ≤ χ ( x , X ) . {\displaystyle |N'|\leq \chi (x,X).} if f : X → Y {\displaystyle f:X\to Y} is a continuous surjection, then n w ( Y ) ≤ w ( X ) {\displaystyle nw(Y)\leq w(X)} . (Simply consider the Y {\displaystyle Y} -network f B ≜ { f ( U ) : U ∈ B } {\displaystyle fB\triangleq \{f(U):U\in B\}} for each basis B {\displaystyle B} of X {\displaystyle X} .) if ( X , τ ) {\displaystyle (X,\tau )} is Hausdorff, then there exists a weaker Hausdorff topology ( X , τ ′ ) {\displaystyle (X,\tau ')} so that w ( X , τ ′ ) ≤ n w ( X , τ ) . {\displaystyle w(X,\tau ')\leq nw(X,\tau ).} So a fortiori, if X {\displaystyle X} is also compact, then such topologies coincide and hence we have, combined with the first fact, n w ( X ) = w ( X ) {\displaystyle nw(X)=w(X)} . if f : X → Y {\displaystyle f:X\to Y} is a continuous surjective map from a compact metrizable space to a Hausdorff space, then Y {\displaystyle Y} is compact metrizable. The last fact follows from f ( X ) {\displaystyle f(X)} being compact Hausdorff, and hence n w ( f ( X ) ) = w ( f ( X ) ) ≤ w ( X ) ≤ ℵ 0 {\displaystyle nw(f(X))=w(f(X))\leq w(X)\leq \aleph _{0}} (since compact metrizable spaces are necessarily second countable); as well as the fact that compact Hausdorff spaces are metrizable exactly when they are second countable. 
(An application of this, for instance, is that every path in a Hausdorff space is compact metrizable.) === Increasing chains of open sets === Using the above notation, suppose that w ( X ) ≤ κ {\displaystyle w(X)\leq \kappa } for some infinite cardinal κ {\displaystyle \kappa } . Then there does not exist a strictly increasing sequence of open sets (equivalently, a strictly decreasing sequence of closed sets) of length κ + {\displaystyle \kappa ^{+}\!} or longer. To see this (without the axiom of choice), fix { U ξ } ξ ∈ κ {\displaystyle \left\{U_{\xi }\right\}_{\xi \in \kappa }} as a basis of open sets. Suppose per contra, that { V ξ } ξ ∈ κ + {\displaystyle \left\{V_{\xi }\right\}_{\xi \in \kappa ^{+}}} were a strictly increasing sequence of open sets. This means ∀ α < κ + : V α ∖ ⋃ ξ < α V ξ ≠ ∅ . {\displaystyle \forall \alpha <\kappa ^{+}\!:\qquad V_{\alpha }\setminus \bigcup _{\xi <\alpha }V_{\xi }\neq \varnothing .} For x ∈ V α ∖ ⋃ ξ < α V ξ , {\displaystyle x\in V_{\alpha }\setminus \bigcup _{\xi <\alpha }V_{\xi },} we may use the basis to find some U γ {\displaystyle U_{\gamma }} with x {\displaystyle x} in U γ ⊆ V α {\displaystyle U_{\gamma }\subseteq V_{\alpha }} . In this way we may well-define a map, f : κ + → κ {\displaystyle f:\kappa ^{+}\!\to \kappa } mapping each α {\displaystyle \alpha } to the least γ {\displaystyle \gamma } for which U γ ⊆ V α {\displaystyle U_{\gamma }\subseteq V_{\alpha }} and U γ {\displaystyle U_{\gamma }} meets V α ∖ ⋃ ξ < α V ξ . {\displaystyle V_{\alpha }\setminus \bigcup _{\xi <\alpha }V_{\xi }.} This map is injective, for otherwise there would be α < β {\displaystyle \alpha <\beta } with f ( α ) = f ( β ) = γ {\displaystyle f(\alpha )=f(\beta )=\gamma } , which would imply U γ ⊆ V α {\displaystyle U_{\gamma }\subseteq V_{\alpha }} while U γ {\displaystyle U_{\gamma }} also meets V β ∖ ⋃ ξ < β V ξ ⊆ V β ∖ V α , {\displaystyle V_{\beta }\setminus \bigcup _{\xi <\beta }V_{\xi }\subseteq V_{\beta }\setminus V_{\alpha },} which is a contradiction. 
But then f {\displaystyle f} would be an injection of κ + {\displaystyle \kappa ^{+}\!} into κ , {\displaystyle \kappa ,} showing that κ + ≤ κ {\displaystyle \kappa ^{+}\!\leq \kappa } , a contradiction. == See also == Esenin-Volpin's theorem Gluing axiom Neighbourhood system == Notes == == References == == Bibliography ==
Wikipedia/Base_(topology)
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space, understanding large scale geophysical flows involving oceans/atmosphere and modelling fission weapon detonation. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time. Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. == Equations == The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. 
Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored. For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—which is a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state: p = ρ R u T M {\displaystyle p={\frac {\rho R_{u}T}{M}}} where p is pressure, ρ is density, and T is the absolute temperature, while Ru is the gas constant and M is molar mass for a particular gas. A constitutive relation may also be useful. === Conservation laws === Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. 
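Returning to the equation of state above, a quick numerical check recovers sea-level pressure for dry air. The standard-atmosphere values below are illustrative inputs, not taken from the article:

```python
# Perfect gas: p = rho * R_u * T / M, evaluated for dry air at sea level.
# (Illustrative standard-atmosphere values.)
rho = 1.225        # kg/m^3, air density
T = 288.15         # K, temperature
R_u = 8.314462     # J/(mol K), universal gas constant
M = 0.0289647      # kg/mol, molar mass of dry air

p = rho * R_u * T / M
assert abs(p - 101325) / 101325 < 0.01   # within 1% of standard pressure
```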
Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow. Mass continuity (conservation of mass) The rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Physically, this statement requires that mass is neither created nor destroyed in the control volume, and can be translated into the integral form of the continuity equation: ∂ ∂ t ∭ V ρ d V = − ∬ S ρ u ⋅ d S {\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \,dV=-\oiint _{S}\rho \mathbf {u} \cdot d\mathbf {S} } Above, ρ is the fluid density, u is the flow velocity vector, and t is time. The left-hand side of the above expression is the rate of increase of mass within the volume and contains a triple integral over the control volume, whereas the right-hand side contains an integration over the surface of the control volume of mass convected into the system. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite to the sense of flow into the system the term is negated. The differential form of the continuity equation is, by the divergence theorem: ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle \ {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0} Conservation of momentum Newton's second law of motion applied to a control volume is a statement that any change in momentum of the fluid within that control volume will be due to the net flow of momentum into the volume and the action of external forces acting on the fluid within the volume. 
∂/∂t ∭V ρu dV = −∯S (ρu ⋅ dS) u − ∯S p dS + ∭V ρfbody dV + Fsurf {\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \mathbf {u} \,dV=-\oiint _{S}(\rho \mathbf {u} \cdot d\mathbf {S} )\mathbf {u} -\oiint _{S}p\,d\mathbf {S} +\iiint _{V}\rho \mathbf {f} _{\text{body}}\,dV+\mathbf {F} _{\text{surf}}} In the above integral formulation of this equation, the term on the left is the net change of momentum within the volume. The first term on the right is the net rate at which momentum is convected into the volume. The second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive, and the normal is opposite the direction of the velocity u and pressure forces. The third term on the right is the net acceleration of the mass within the volume due to any body forces (here represented by fbody). Surface forces, such as viscous forces, are represented by Fsurf, the net force due to shear forces acting on the volume surface. The momentum balance can also be written for a moving control volume. The following is the differential form of the momentum conservation equation. Here, the volume is reduced to an infinitesimally small point, and both surface and body forces are accounted for in one total force, F. For example, F may be expanded into an expression for the frictional and gravitational forces acting at a point in a flow. Du/Dt = F − ∇p/ρ {\displaystyle {\frac {D\mathbf {u} }{Dt}}=\mathbf {F} -{\frac {\nabla p}{\rho }}} In aerodynamics, air is assumed to be a Newtonian fluid, which posits a linear relationship between the shear stress (due to internal friction forces) and the rate of strain of the fluid.
The equation above is a vector equation in a three-dimensional flow, but it can be expressed as three scalar equations in three coordinate directions. The conservation of momentum equations for the compressible, viscous flow case are called the Navier–Stokes equations. Conservation of energy Although energy can be converted from one form to another, the total energy in a closed system remains constant. ρ D h D t = D p D t + ∇ ⋅ ( k ∇ T ) + Φ {\displaystyle \rho {\frac {Dh}{Dt}}={\frac {Dp}{Dt}}+\nabla \cdot \left(k\nabla T\right)+\Phi } Above, h is the specific enthalpy, k is the thermal conductivity of the fluid, T is temperature, and Φ is the viscous dissipation function. The viscous dissipation function governs the rate at which the mechanical energy of the flow is converted to heat. The second law of thermodynamics requires that the dissipation term is always positive: viscosity cannot create energy within the control volume. The expression on the left side is a material derivative. == Classifications == === Compressible versus incompressible flow === All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used. Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, that is, D ρ D t = 0 , {\displaystyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0\,,} where ⁠D/Dt⁠ is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.
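Combined with mass continuity, the condition Dρ/Dt = 0 implies that an incompressible velocity field is divergence-free, ∇ ⋅ u = 0. The sketch below verifies this numerically for an illustrative two-dimensional cellular field (the field and grid are assumptions chosen for the example, not from the article).

```python
import numpy as np

# Numerical check that a sample 2D velocity field is divergence-free
# (incompressible): u = sin(x)cos(y), v = -cos(x)sin(y).
# The field and the periodic grid are illustrative assumptions.

n = 128
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
dx = xs[1] - xs[0]

u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)

# Central differences on the periodic grid:
du_dx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2.0 * dx)
dv_dy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2.0 * dx)
div = du_dx + dv_dy  # should vanish everywhere
```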
For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate. === Newtonian versus non-Newtonian fluids === All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions T−1. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate. Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants. === Inviscid versus viscous versus Stokes flow === The dynamics of fluid parcels are described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects.
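For a Newtonian fluid in simple shear, the linear stress-strain-rate relation described above reads τ = μ (du/dy). A minimal sketch (the water viscosity and the strain rate are illustrative assumptions):

```python
# Newton's law of viscosity for a Newtonian fluid: tau = mu * du/dy.
# The numeric values below are illustrative assumptions.

def newtonian_shear_stress(mu, du_dy):
    """Shear stress (Pa) from dynamic viscosity mu (Pa*s) and
    strain rate du/dy (1/s)."""
    return mu * du_dy

# Water (mu ~ 1.0e-3 Pa*s) sheared at a strain rate of 100 1/s:
tau = newtonian_shear_stress(mu=1.0e-3, du_dy=100.0)  # 0.1 Pa
```

For a Newtonian fluid, doubling the strain rate doubles the stress; a non-Newtonian fluid would not obey this proportionality.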
The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number (Re ≪ 1) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow. In contrast, high Reynolds numbers (Re ≫ 1) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression. This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox. A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body.
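The Reynolds number described above is Re = ρUL/μ. The sketch below evaluates it for two illustrative cases and applies the rough Re ≪ 1 versus Re ≫ 1 guide from the text; the physical values and the numeric thresholds are assumptions chosen for demonstration.

```python
# Reynolds number Re = rho * U * L / mu, comparing inertial to viscous
# effects. Regime labels follow the rough guide in the text; the exact
# thresholds and the physical values are illustrative assumptions.

def reynolds_number(rho, U, L, mu):
    return rho * U * L / mu

def regime(re):
    if re < 1.0:
        return "creeping (Stokes) flow: viscous forces dominate"
    if re > 1.0e4:
        return "high-Re flow: often modelled as inviscid away from boundaries"
    return "intermediate: viscosity generally cannot be neglected"

# A micron-scale particle in water vs. air over a 2 m wing chord:
re_microbe = reynolds_number(rho=1000.0, U=1e-5, L=1e-6, mu=1e-3)  # ~1e-5
re_wing = reynolds_number(rho=1.225, U=70.0, L=2.0, mu=1.8e-5)     # ~1e7
```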
The two solutions can then be matched with each other, using the method of matched asymptotic expansions. === Steady versus unsteady flow === A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time-dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady. Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field U(x, t) is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow. Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field. === Laminar versus turbulent flow === Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component. It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations.
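The Reynolds decomposition mentioned above splits the flow into an average component and a perturbation component, u = U + u′, where the perturbation averages to zero by construction. A minimal sketch on a synthetic signal (the signal itself is an illustrative assumption):

```python
import numpy as np

# Reynolds decomposition of a velocity signal: u(t) = U + u'(t),
# where U is the mean and mean(u') = 0 by construction.
# The synthetic "turbulent" signal below is an illustrative assumption.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
u = 5.0 + 0.5 * rng.standard_normal(t.size)  # mean flow plus fluctuations

U_mean = u.mean()      # average component
u_prime = u - U_mean   # perturbation component
```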
Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows. Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 20 m/s (72 km/h; 45 mph) is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES), a combination of LES and RANS turbulence modelling. === Other approximations === There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below. The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small. Lubrication theory and Hele–Shaw flow exploit the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected.
Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid. The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small. Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths. In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. They are useful in the study of atmospheric dynamics. == Multidisciplinary types == === Flows according to Mach regimes === While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of M = 1 (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately. === Reactive versus non-reactive flows === Reactive flows are flows that are chemically reactive, which find applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation equations for individual species (for example, the mass fraction of methane in methane combustion) need to be derived, where the production/depletion rate of any species is obtained by simultaneously solving the equations of chemical kinetics. === Magnetohydrodynamics === Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields.
Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism. === Relativistic fluid dynamics === Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime. === Fluctuating hydrodynamics === This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux. == Terminology == The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods. Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics. === Characteristic numbers === === Terminology in incompressible fluid dynamics === The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) 
To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field. A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field. === Terminology in compressible fluid dynamics === In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion. To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference. Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy". 
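In incompressible flow, the total (stagnation) pressure discussed above follows from Bernoulli's equation as the static pressure plus the dynamic pressure, p0 = p + ½ρV². A minimal sketch (the airspeed and air density are illustrative assumptions):

```python
# Incompressible-flow pressures from Bernoulli's equation:
#   dynamic pressure  q  = 0.5 * rho * V**2
#   total pressure    p0 = p_static + q
# The numeric values below are illustrative assumptions.

def dynamic_pressure(rho, V):
    return 0.5 * rho * V**2

def total_pressure(p_static, rho, V):
    return p_static + dynamic_pressure(rho, V)

# Air (rho ~ 1.225 kg/m^3) at 50 m/s past a static pressure of 101325 Pa:
q = dynamic_pressure(rho=1.225, V=50.0)              # 1531.25 Pa
p0 = total_pressure(p_static=101325.0, rho=1.225, V=50.0)
```

At a stagnation point the velocity is zero, so the static pressure there equals this total pressure, which is the principle behind Pitot-static airspeed measurement.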
Wikipedia/Fluid_dynamics
Complex dynamics, or holomorphic dynamics, is the study of dynamical systems obtained by iterating a complex analytic mapping. This article focuses on the case of algebraic dynamics, where a polynomial or rational function is iterated. In geometric terms, that amounts to iterating a mapping from some algebraic variety to itself. The related theory of arithmetic dynamics studies iteration over the rational numbers or the p-adic numbers instead of the complex numbers. == Dynamics in complex dimension 1 == A simple example that shows some of the main issues in complex dynamics is the mapping f ( z ) = z 2 {\displaystyle f(z)=z^{2}} from the complex numbers C to itself. It is helpful to view this as a map from the complex projective line C P 1 {\displaystyle \mathbf {CP} ^{1}} to itself, by adding a point ∞ {\displaystyle \infty } to the complex numbers. ( C P 1 {\displaystyle \mathbf {CP} ^{1}} has the advantage of being compact.) The basic question is: given a point z {\displaystyle z} in C P 1 {\displaystyle \mathbf {CP} ^{1}} , how does its orbit (or forward orbit) z , f ( z ) = z 2 , f ( f ( z ) ) = z 4 , f ( f ( f ( z ) ) ) = z 8 , … {\displaystyle z,\;f(z)=z^{2},\;f(f(z))=z^{4},f(f(f(z)))=z^{8},\;\ldots } behave, qualitatively? The answer is: if the absolute value |z| is less than 1, then the orbit converges to 0, in fact more than exponentially fast. If |z| is greater than 1, then the orbit converges to the point ∞ {\displaystyle \infty } in C P 1 {\displaystyle \mathbf {CP} ^{1}} , again more than exponentially fast. (Here 0 and ∞ {\displaystyle \infty } are superattracting fixed points of f, meaning that the derivative of f is zero at those points. An attracting fixed point means one where the derivative of f has absolute value less than 1.) On the other hand, suppose that | z | = 1 {\displaystyle |z|=1} , meaning that z is on the unit circle in C. At these points, the dynamics of f is chaotic, in various ways. 
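The orbit dichotomy described above for f(z) = z² is easy to observe numerically: orbits starting inside the unit circle collapse toward 0, while orbits starting outside blow up toward infinity. A minimal sketch (the starting points are illustrative):

```python
# Forward orbits of f(z) = z**2: repeated squaring drives |z| < 1
# toward 0 and |z| > 1 toward infinity, faster than exponentially.

def orbit(z, steps):
    """Return [z, f(z), f(f(z)), ...] for f(z) = z**2."""
    out = [z]
    for _ in range(steps):
        z = z * z
        out.append(z)
    return out

inside = orbit(0.9 + 0.1j, 6)   # |z| < 1: modulus shrinks rapidly
outside = orbit(1.1 + 0.0j, 6)  # |z| > 1: modulus grows rapidly
```

After only six squarings the modulus is raised to the 64th power, which is why the convergence is more than exponentially fast.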
For example, for almost all points z on the circle in terms of measure theory, the forward orbit of z is dense in the circle, and in fact uniformly distributed on the circle. There are also infinitely many periodic points on the circle, meaning points with f r ( z ) = z {\displaystyle f^{r}(z)=z} for some positive integer r. (Here f r ( z ) {\displaystyle f^{r}(z)} means the result of applying f to z r times, f ( f ( ⋯ ( f ( z ) ) ⋯ ) ) {\displaystyle f(f(\cdots (f(z))\cdots ))} .) Even at periodic points z on the circle, the dynamics of f can be considered chaotic, since points near z diverge exponentially fast from z upon iterating f. (The periodic points of f on the unit circle are repelling: if f r ( z ) = z {\displaystyle f^{r}(z)=z} , the derivative of f r {\displaystyle f^{r}} at z has absolute value greater than 1.) Pierre Fatou and Gaston Julia showed in the late 1910s that much of this story extends to any complex algebraic map from C P 1 {\displaystyle \mathbf {CP} ^{1}} to itself of degree greater than 1. (Such a mapping may be given by a polynomial f ( z ) {\displaystyle f(z)} with complex coefficients, or more generally by a rational function.) Namely, there is always a compact subset of C P 1 {\displaystyle \mathbf {CP} ^{1}} , the Julia set, on which the dynamics of f is chaotic. For the mapping f ( z ) = z 2 {\displaystyle f(z)=z^{2}} , the Julia set is the unit circle. For other polynomial mappings, the Julia set is often highly irregular, for example a fractal in the sense that its Hausdorff dimension is not an integer. This occurs even for mappings as simple as f ( z ) = z 2 + c {\displaystyle f(z)=z^{2}+c} for a constant c ∈ C {\displaystyle c\in \mathbf {C} } . The Mandelbrot set is the set of complex numbers c such that the Julia set of f ( z ) = z 2 + c {\displaystyle f(z)=z^{2}+c} is connected. 
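The Mandelbrot set just defined can be probed with the standard escape-time criterion: the orbit of 0 under z → z² + c is bounded exactly when it never leaves the disk |z| ≤ 2. A minimal sketch (the finite iteration cap is an assumption, so it can only approximate membership for borderline points):

```python
# Escape-time membership test for the Mandelbrot set: c belongs to the
# set when the orbit of 0 under z -> z**2 + c stays bounded. Radius 2
# is the standard escape bound; the iteration cap is an assumption.

def in_mandelbrot(c, max_iter=200):
    z = 0.0 + 0.0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False  # orbit escaped: Julia set of z**2 + c is disconnected
    return True

# c = 0: orbit stays at 0 (Julia set is the unit circle, connected).
# c = -1: orbit cycles 0, -1, 0, -1, ... (bounded).
# c = 1: orbit 0, 1, 2, 5, ... escapes.
```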
There is a rather complete classification of the possible dynamics of a rational function f : C P 1 → C P 1 {\displaystyle f\colon \mathbf {CP} ^{1}\to \mathbf {CP} ^{1}} in the Fatou set, the complement of the Julia set, where the dynamics is "tame". Namely, Dennis Sullivan showed that each connected component U of the Fatou set is pre-periodic, meaning that there are natural numbers a < b {\displaystyle a<b} such that f a ( U ) = f b ( U ) {\displaystyle f^{a}(U)=f^{b}(U)} . Therefore, to analyze the dynamics on a component U, one can assume after replacing f by an iterate that f ( U ) = U {\displaystyle f(U)=U} . Then either (1) U contains an attracting fixed point for f; (2) U is parabolic in the sense that all points in U approach a fixed point in the boundary of U; (3) U is a Siegel disk, meaning that the action of f on U is conjugate to an irrational rotation of the open unit disk; or (4) U is a Herman ring, meaning that the action of f on U is conjugate to an irrational rotation of an open annulus. (Note that the "backward orbit" of a point z in U, the set of points in C P 1 {\displaystyle \mathbf {CP} ^{1}} that map to z under some iterate of f, need not be contained in U.) == The equilibrium measure of an endomorphism == Complex dynamics has been effectively developed in any dimension. This section focuses on the mappings from complex projective space C P n {\displaystyle \mathbf {CP} ^{n}} to itself, the richest source of examples. The main results for C P n {\displaystyle \mathbf {CP} ^{n}} have been extended to a class of rational maps from any projective variety to itself. Note, however, that many varieties have no interesting self-maps. Let f be an endomorphism of C P n {\displaystyle \mathbf {CP} ^{n}} , meaning a morphism of algebraic varieties from C P n {\displaystyle \mathbf {CP} ^{n}} to itself, for a positive integer n. 
Such a mapping is given in homogeneous coordinates by f ( [ z 0 , … , z n ] ) = [ f 0 ( z 0 , … , z n ) , … , f n ( z 0 , … , z n ) ] {\displaystyle f([z_{0},\ldots ,z_{n}])=[f_{0}(z_{0},\ldots ,z_{n}),\ldots ,f_{n}(z_{0},\ldots ,z_{n})]} for some homogeneous polynomials f 0 , … , f n {\displaystyle f_{0},\ldots ,f_{n}} of the same degree d that have no common zeros in C P n {\displaystyle \mathbf {CP} ^{n}} . (By Chow's theorem, this is the same thing as a holomorphic mapping from C P n {\displaystyle \mathbf {CP} ^{n}} to itself.) Assume that d is greater than 1; then the degree of the mapping f is d n {\displaystyle d^{n}} , which is also greater than 1. Then there is a unique probability measure μ f {\displaystyle \mu _{f}} on C P n {\displaystyle \mathbf {CP} ^{n}} , the equilibrium measure of f, that describes the most chaotic part of the dynamics of f. (It has also been called the Green measure or measure of maximal entropy.) This measure was defined by Hans Brolin (1965) for polynomials in one variable, by Alexandre Freire, Artur Lopes, Ricardo Mañé, and Mikhail Lyubich for n = 1 {\displaystyle n=1} (around 1983), and by John Hubbard, Peter Papadopol, John Fornaess, and Nessim Sibony in any dimension (around 1994). The small Julia set J ∗ ( f ) {\displaystyle J^{*}(f)} is the support of the equilibrium measure in C P n {\displaystyle \mathbf {CP} ^{n}} ; this is simply the Julia set when n = 1 {\displaystyle n=1} . === Examples === For the mapping f ( z ) = z 2 {\displaystyle f(z)=z^{2}} on C P 1 {\displaystyle \mathbf {CP} ^{1}} , the equilibrium measure μ f {\displaystyle \mu _{f}} is the Haar measure (the standard measure, scaled to have total measure 1) on the unit circle | z | = 1 {\displaystyle |z|=1} . More generally, for an integer d > 1 {\displaystyle d>1} , let f : C P n → C P n {\displaystyle f\colon \mathbf {CP} ^{n}\to \mathbf {CP} ^{n}} be the mapping f ( [ z 0 , … , z n ] ) = [ z 0 d , … , z n d ] . 
{\displaystyle f([z_{0},\ldots ,z_{n}])=[z_{0}^{d},\ldots ,z_{n}^{d}].} Then the equilibrium measure μ f {\displaystyle \mu _{f}} is the Haar measure on the n-dimensional torus { [ 1 , z 1 , … , z n ] : | z 1 | = ⋯ = | z n | = 1 } . {\displaystyle \{[1,z_{1},\ldots ,z_{n}]:|z_{1}|=\cdots =|z_{n}|=1\}.} For more general holomorphic mappings from C P n {\displaystyle \mathbf {CP} ^{n}} to itself, the equilibrium measure can be much more complicated, as one sees already in complex dimension 1 from pictures of Julia sets. === Characterizations of the equilibrium measure === A basic property of the equilibrium measure is that it is invariant under f, in the sense that the pushforward measure f ∗ μ f {\displaystyle f_{*}\mu _{f}} is equal to μ f {\displaystyle \mu _{f}} . Because f is a finite morphism, the pullback measure f ∗ μ f {\displaystyle f^{*}\mu _{f}} is also defined, and μ f {\displaystyle \mu _{f}} is totally invariant in the sense that f ∗ μ f = deg ⁡ ( f ) μ f {\displaystyle f^{*}\mu _{f}=\deg(f)\mu _{f}} . One striking characterization of the equilibrium measure is that it describes the asymptotics of almost every point in C P n {\displaystyle \mathbf {CP} ^{n}} when followed backward in time, by Jean-Yves Briend, Julien Duval, Tien-Cuong Dinh, and Sibony. Namely, for a point z in C P n {\displaystyle \mathbf {CP} ^{n}} and a positive integer r, consider the probability measure ( 1 / d r n ) ( f r ) ∗ ( δ z ) {\displaystyle (1/d^{rn})(f^{r})^{*}(\delta _{z})} which is evenly distributed on the d r n {\displaystyle d^{rn}} points w with f r ( w ) = z {\displaystyle f^{r}(w)=z} . Then there is a Zariski closed subset E ⊊ C P n {\displaystyle E\subsetneq \mathbf {CP} ^{n}} such that for all points z not in E, the measures just defined converge weakly to the equilibrium measure μ f {\displaystyle \mu _{f}} as r goes to infinity. 
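For the model map f(z) = z² this backward-orbit equidistribution is concrete: the 2^r preimages of a generic point spread out over the unit circle, the support of the Haar equilibrium measure. The sketch below (base point and depth are illustrative choices) computes the r-th backward image of z = 1, which consists exactly of the 2^r-th roots of unity.

```python
import cmath

# Backward orbit of z = 1 under f(w) = w**2: after r steps the
# preimages are the 2**r-th roots of unity, all on the unit circle
# (the support of the equilibrium measure for this map).

def preimages(z, r):
    """All w with f^r(w) = z for f(w) = w**2."""
    points = [complex(z)]
    for _ in range(r):
        points = [s for p in points
                  for s in (cmath.sqrt(p), -cmath.sqrt(p))]
    return points

pts = preimages(1.0, 5)  # 2**5 = 32 points of modulus 1
```

The points have modulus 1 and sum to zero, consistent with their being uniformly distributed on the circle.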
In more detail: only finitely many closed complex subspaces of C P n {\displaystyle \mathbf {CP} ^{n}} are totally invariant under f (meaning that f − 1 ( S ) = S {\displaystyle f^{-1}(S)=S} ), and one can take the exceptional set E to be the unique largest totally invariant closed complex subspace not equal to C P n {\displaystyle \mathbf {CP} ^{n}} . Another characterization of the equilibrium measure (due to Briend and Duval) is as follows. For each positive integer r, the number of periodic points of period r (meaning that f r ( z ) = z {\displaystyle f^{r}(z)=z} ), counted with multiplicity, is ( d r ( n + 1 ) − 1 ) / ( d r − 1 ) {\displaystyle (d^{r(n+1)}-1)/(d^{r}-1)} , which is roughly d r n {\displaystyle d^{rn}} . Consider the probability measure which is evenly distributed on the points of period r. Then these measures also converge to the equilibrium measure μ f {\displaystyle \mu _{f}} as r goes to infinity. Moreover, most periodic points are repelling and lie in J ∗ ( f ) {\displaystyle J^{*}(f)} , and so one gets the same limit measure by averaging only over the repelling periodic points in J ∗ ( f ) {\displaystyle J^{*}(f)} . There may also be repelling periodic points outside J ∗ ( f ) {\displaystyle J^{*}(f)} . The equilibrium measure gives zero mass to any closed complex subspace of C P n {\displaystyle \mathbf {CP} ^{n}} that is not the whole space. Since the periodic points in J ∗ ( f ) {\displaystyle J^{*}(f)} are dense in J ∗ ( f ) {\displaystyle J^{*}(f)} , it follows that the periodic points of f are Zariski dense in C P n {\displaystyle \mathbf {CP} ^{n}} . A more algebraic proof of this Zariski density was given by Najmuddin Fakhruddin. Another consequence of μ f {\displaystyle \mu _{f}} giving zero mass to closed complex subspaces not equal to C P n {\displaystyle \mathbf {CP} ^{n}} is that each point has zero mass. 
As a result, the support J ∗ ( f ) {\displaystyle J^{*}(f)} of μ f {\displaystyle \mu _{f}} has no isolated points, and so it is a perfect set. The support J ∗ ( f ) {\displaystyle J^{*}(f)} of the equilibrium measure is not too small, in the sense that its Hausdorff dimension is always greater than zero. In that sense, an endomorphism of complex projective space with degree greater than 1 always behaves chaotically at least on part of the space. (There are examples where J ∗ ( f ) {\displaystyle J^{*}(f)} is all of C P n {\displaystyle \mathbf {CP} ^{n}} .) Another way to make precise that f has some chaotic behavior is that the topological entropy of f is always greater than zero, in fact equal to n log ⁡ d {\displaystyle n\log d} , by Mikhail Gromov, Michał Misiurewicz, and Feliks Przytycki. For any continuous endomorphism f of a compact metric space X, the topological entropy of f is equal to the maximum of the measure-theoretic entropy (or "metric entropy") of all f-invariant measures on X. For a holomorphic endomorphism f of C P n {\displaystyle \mathbf {CP} ^{n}} , the equilibrium measure μ f {\displaystyle \mu _{f}} is the unique invariant measure of maximal entropy, by Briend and Duval. This is another way to say that the most chaotic behavior of f is concentrated on the support of the equilibrium measure. Finally, one can say more about the dynamics of f on the support of the equilibrium measure: f is ergodic and, more strongly, mixing with respect to that measure, by Fornaess and Sibony. It follows, for example, that for almost every point with respect to μ f {\displaystyle \mu _{f}} , its forward orbit is uniformly distributed with respect to μ f {\displaystyle \mu _{f}} . === Lattès maps === A Lattès map is an endomorphism f of C P n {\displaystyle \mathbf {CP} ^{n}} obtained from an endomorphism of an abelian variety by dividing by a finite group. 
In this case, the equilibrium measure of f is absolutely continuous with respect to Lebesgue measure on C P n {\displaystyle \mathbf {CP} ^{n}} . Conversely, by Anna Zdunik, François Berteloot, and Christophe Dupont, the only endomorphisms of C P n {\displaystyle \mathbf {CP} ^{n}} whose equilibrium measure is absolutely continuous with respect to Lebesgue measure are the Lattès examples. That is, for all non-Lattès endomorphisms, μ f {\displaystyle \mu _{f}} assigns its full mass 1 to some Borel set of Lebesgue measure 0. In dimension 1, more is known about the "irregularity" of the equilibrium measure. Namely, define the Hausdorff dimension of a probability measure μ {\displaystyle \mu } on C P 1 {\displaystyle \mathbf {CP} ^{1}} (or more generally on a smooth manifold) by dim ⁡ ( μ ) = inf { dim H ⁡ ( Y ) : μ ( Y ) = 1 } , {\displaystyle \dim(\mu )=\inf\{\dim _{H}(Y):\mu (Y)=1\},} where dim H ⁡ ( Y ) {\displaystyle \dim _{H}(Y)} denotes the Hausdorff dimension of a Borel set Y. For an endomorphism f of C P 1 {\displaystyle \mathbf {CP} ^{1}} of degree greater than 1, Zdunik showed that the dimension of μ f {\displaystyle \mu _{f}} is equal to the Hausdorff dimension of its support (the Julia set) if and only if f is conjugate to a Lattès map, a Chebyshev polynomial (up to sign), or a power map f ( z ) = z ± d {\displaystyle f(z)=z^{\pm d}} with d ≥ 2 {\displaystyle d\geq 2} . (In the latter cases, the Julia set is all of C P 1 {\displaystyle \mathbf {CP} ^{1}} , a closed interval, or a circle, respectively.) Thus, outside those special cases, the equilibrium measure is highly irregular, assigning positive mass to some closed subsets of the Julia set with smaller Hausdorff dimension than the whole Julia set. == Automorphisms of projective varieties == More generally, complex dynamics seeks to describe the behavior of rational maps under iteration. 
One case that has been studied with some success is that of automorphisms of a smooth complex projective variety X, meaning isomorphisms f from X to itself. The case of main interest is where f acts nontrivially on the singular cohomology H ∗ ( X , Z ) {\displaystyle H^{*}(X,\mathbf {Z} )} . Gromov and Yosef Yomdin showed that the topological entropy of an endomorphism (for example, an automorphism) of a smooth complex projective variety is determined by its action on cohomology. Explicitly, for X of complex dimension n and 0 ≤ p ≤ n {\displaystyle 0\leq p\leq n} , let d p {\displaystyle d_{p}} be the spectral radius of f acting by pullback on the Hodge cohomology group H p , p ( X ) ⊂ H 2 p ( X , C ) {\displaystyle H^{p,p}(X)\subset H^{2p}(X,\mathbf {C} )} . Then the topological entropy of f is h ( f ) = max p log ⁡ d p . {\displaystyle h(f)=\max _{p}\log d_{p}.} (The topological entropy of f is also the logarithm of the spectral radius of f on the whole cohomology H ∗ ( X , C ) {\displaystyle H^{*}(X,\mathbf {C} )} .) Thus f has some chaotic behavior, in the sense that its topological entropy is greater than zero, if and only if it acts on some cohomology group with an eigenvalue of absolute value greater than 1. Many projective varieties do not have such automorphisms, but (for example) many rational surfaces and K3 surfaces do have such automorphisms. Let X be a compact Kähler manifold, which includes the case of a smooth complex projective variety. Say that an automorphism f of X has simple action on cohomology if: there is only one number p such that d p {\displaystyle d_{p}} takes its maximum value, the action of f on H p , p ( X ) {\displaystyle H^{p,p}(X)} has only one eigenvalue with absolute value d p {\displaystyle d_{p}} , and this is a simple eigenvalue. For example, Serge Cantat showed that every automorphism of a compact Kähler surface with positive topological entropy has simple action on cohomology. 
(Here an "automorphism" is complex analytic but is not assumed to preserve a Kähler metric on X. In fact, every automorphism that preserves a metric has topological entropy zero.) For an automorphism f with simple action on cohomology, some of the goals of complex dynamics have been achieved. Dinh, Sibony, and Henry de Thélin showed that there is a unique invariant probability measure μ f {\displaystyle \mu _{f}} of maximal entropy for f, called the equilibrium measure (or Green measure, or measure of maximal entropy). (In particular, μ f {\displaystyle \mu _{f}} has entropy log d p {\displaystyle \log d_{p}} with respect to f.) The support of μ f {\displaystyle \mu _{f}} is called the small Julia set J ∗ ( f ) {\displaystyle J^{*}(f)} . Informally: f has some chaotic behavior, and the most chaotic behavior is concentrated on the small Julia set. At least when X is projective, J ∗ ( f ) {\displaystyle J^{*}(f)} has positive Hausdorff dimension. (More precisely, μ f {\displaystyle \mu _{f}} assigns zero mass to all sets of sufficiently small Hausdorff dimension.) === Kummer automorphisms === Some abelian varieties have an automorphism of positive entropy. For example, let E be a complex elliptic curve and let X be the abelian surface E × E {\displaystyle E\times E} . Then the group G L ( 2 , Z ) {\displaystyle GL(2,\mathbf {Z} )} of invertible 2 × 2 {\displaystyle 2\times 2} integer matrices acts on X. Any group element f whose trace has absolute value greater than 2, for example ( 2 1 1 1 ) {\displaystyle {\begin{pmatrix}2&1\\1&1\end{pmatrix}}} , has spectral radius greater than 1, and so it gives a positive-entropy automorphism of X. The equilibrium measure of f is the Haar measure (the standard Lebesgue measure) on X. The Kummer automorphisms are defined by taking the quotient of an abelian surface with automorphism by a finite group, and then blowing up to make the resulting surface smooth.
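The spectral-radius criterion in the abelian-surface example above can be verified directly; a minimal sketch, computing the eigenvalues of the 2 × 2 matrix by hand via its characteristic polynomial:

```python
import math

# Criterion quoted above: a matrix in GL(2, Z) whose trace has absolute value
# greater than 2 has spectral radius greater than 1, so it induces a
# positive-entropy automorphism of X = E x E.  Check for A = [[2, 1], [1, 1]].

def spectral_radius_2x2(a, b, c, d):
    """Largest modulus of an eigenvalue of [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:  # real eigenvalues
        r = math.sqrt(disc)
        return max(abs(tr + r), abs(tr - r)) / 2
    return math.sqrt(det)  # complex-conjugate pair: |lambda|^2 = det

rho = spectral_radius_2x2(2, 1, 1, 1)
print(f"spectral radius = {rho:.6f}")  # (3 + sqrt(5)) / 2
assert rho > 1  # positive topological entropy, per the criterion above
```

Here the trace is 3 and the determinant is 1, so the leading eigenvalue is (3 + √5)/2 ≈ 2.618.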
The resulting surfaces include some special K3 surfaces and rational surfaces. For the Kummer automorphisms, the equilibrium measure has support equal to X and is smooth outside finitely many curves. Conversely, Cantat and Dupont showed that for all surface automorphisms of positive entropy except the Kummer examples, the equilibrium measure is not absolutely continuous with respect to Lebesgue measure. In this sense, it is usual for the equilibrium measure of an automorphism to be somewhat irregular. === Saddle periodic points === A periodic point z of f is called a saddle periodic point if, for a positive integer r such that f r ( z ) = z {\displaystyle f^{r}(z)=z} , at least one eigenvalue of the derivative of f r {\displaystyle f^{r}} on the tangent space at z has absolute value less than 1, at least one has absolute value greater than 1, and none has absolute value equal to 1. (Thus f is expanding in some directions and contracting in others, near z.) For an automorphism f with simple action on cohomology, the saddle periodic points are dense in the support J ∗ ( f ) {\displaystyle J^{*}(f)} of the equilibrium measure μ f {\displaystyle \mu _{f}} . On the other hand, the measure μ f {\displaystyle \mu _{f}} vanishes on closed complex subspaces not equal to X. It follows that the periodic points of f (or even just the saddle periodic points contained in the support of μ f {\displaystyle \mu _{f}} ) are Zariski dense in X. For an automorphism f with simple action on cohomology, f and its inverse map are ergodic and, more strongly, mixing with respect to the equilibrium measure μ f {\displaystyle \mu _{f}} . It follows that for almost every point z with respect to μ f {\displaystyle \mu _{f}} , the forward and backward orbits of z are both uniformly distributed with respect to μ f {\displaystyle \mu _{f}} .
A notable difference with the case of endomorphisms of C P n {\displaystyle \mathbf {CP} ^{n}} is that for an automorphism f with simple action on cohomology, there can be a nonempty open subset of X on which neither forward nor backward orbits approach the support J ∗ ( f ) {\displaystyle J^{*}(f)} of the equilibrium measure. For example, Eric Bedford, Kyounghee Kim, and Curtis McMullen constructed automorphisms f of a smooth projective rational surface with positive topological entropy (hence simple action on cohomology) such that f has a Siegel disk, on which the action of f is conjugate to an irrational rotation. Points in that open set never approach J ∗ ( f ) {\displaystyle J^{*}(f)} under the action of f or its inverse. At least in complex dimension 2, the equilibrium measure of f describes the distribution of the isolated periodic points of f. (There may also be complex curves fixed by f or an iterate, which are ignored here.) Namely, let f be an automorphism of a compact Kähler surface X with positive topological entropy h ( f ) = log ⁡ d 1 {\displaystyle h(f)=\log d_{1}} . Consider the probability measure which is evenly distributed on the isolated periodic points of period r (meaning that f r ( z ) = z {\displaystyle f^{r}(z)=z} ). Then this measure converges weakly to μ f {\displaystyle \mu _{f}} as r goes to infinity, by Eric Bedford, Lyubich, and John Smillie. The same holds for the subset of saddle periodic points, because both sets of periodic points grow at a rate of ( d 1 ) r {\displaystyle (d_{1})^{r}} . 
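The growth rate (d_1)^r of the periodic-point counts can be made concrete in the real-torus analogue of the abelian-surface example (the classical "cat map" on R^2/Z^2, used here purely as an illustration, not the complex case itself): the number of fixed points of the r-th iterate of the automorphism induced by an integer matrix A is |det(A^r − I)|, and successive counts grow geometrically with ratio the leading eigenvalue of A.

```python
# Count r-periodic points of the toral automorphism induced by an integer
# matrix A, using the formula |det(A^r - I)| valid for hyperbolic A.

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, r):
    R = [[1, 0], [0, 1]]
    for _ in range(r):
        R = mat_mul(R, M)
    return R

def periodic_points(M, r):
    """|det(M^r - I)|: fixed points of the r-th iterate on the 2-torus."""
    P = mat_pow(M, r)
    a, b, c, d = P[0][0] - 1, P[0][1], P[1][0], P[1][1] - 1
    return abs(a * d - b * c)

A = [[2, 1], [1, 1]]
counts = [periodic_points(A, r) for r in range(1, 9)]
print(counts)  # [1, 5, 16, 45, 121, 320, 841, 2205]
# Successive ratios approach the leading eigenvalue (3 + sqrt(5))/2 ~ 2.618:
print([round(counts[i + 1] / counts[i], 3) for i in range(7)])
```

The ratios converge to ≈ 2.618, the exponential rate at which the counts grow, mirroring the (d_1)^r growth in the statement above.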
== See also == Dynamics in complex dimension 1 Complex analysis Complex quadratic polynomial Infinite compositions of analytic functions Montel's theorem Poincaré metric Schwarz lemma Riemann mapping theorem Carathéodory's theorem (conformal mapping) Böttcher's equation Orbit portraits Yoccoz puzzles Related areas of dynamics Arithmetic dynamics Chaos theory Symbolic dynamics == Notes == == References == Alexander, Daniel (1994), A history of complex dynamics: from Schröder to Fatou and Julia, Aspects of Mathematics, vol. 24, Vieweg Verlag, doi:10.1007/978-3-663-09197-4, ISBN 3-528-06520-6, MR 1260930 Beardon, Alan (1991), Iteration of rational functions: complex analytic dynamical systems, Springer-Verlag, ISBN 0-387-97589-6, MR 1128089 Berteloot, François; Dupont, Christophe (2005), "Une caractérisation des endomorphismes de Lattès par leur mesure de Green", Commentarii Mathematici Helvetici, 80 (2): 433–454, arXiv:math/0501034, doi:10.4171/CMH/21, MR 2142250 Bonifant, Araceli; Lyubich, Mikhail; Sutherland, Scott, eds. (2014), Frontiers in complex dynamics: in celebration of John Milnor's 80th birthday, Princeton University Press, doi:10.1515/9781400851317, ISBN 978-0-691-15929-4, MR 3289442 Cantat, Serge (2010), "Quelques aspects des systèmes dynamiques polynomiaux: existence, exemples, rigidité", Quelques aspects des systèmes dynamiques polynomiaux, Société Mathématique de France, pp. 13–95, ISBN 978-2-85629-338-6, MR 2932433 Cantat, Serge (2014), "Dynamics of automorphisms of compact complex surfaces", Frontiers in complex dynamics (Banff, 2011), Princeton University Press, pp. 
463–514, ISBN 978-0-691-15929-4, MR 3289919 Cantat, Serge; Dupont, Christophe (2020), "Automorphisms of surfaces: Kummer rigidity and measure of maximal entropy", Journal of the European Mathematical Society, 22 (4): 1289–1351, arXiv:1410.1202, doi:10.4171/JEMS/946, MR 4071328 Carleson, Lennart; Gamelin, Theodore (1993), Complex dynamics, Springer-Verlag, doi:10.1007/978-1-4612-4364-9, ISBN 0-387-97942-5, MR 1230383 de Thélin, Henry; Dinh, Tien-Cuong (2012), "Dynamics of automorphisms on compact Kähler manifolds", Advances in Mathematics, 229 (5): 2640–2655, arXiv:1009.5796, doi:10.1016/j.aim.2012.01.014, MR 2889139 Dinh, Tien-Cuong; Sibony, Nessim (2010), "Dynamics in several complex variables: endomorphisms of projective spaces and polynomial-like mappings", Holomorphic dynamical systems, Lecture Notes in Mathematics, vol. 1998, Springer-Verlag, pp. 165–294, arXiv:0810.0811, doi:10.1007/978-3-642-13171-4_4, ISBN 978-3-642-13170-7, MR 2648690 Dinh, Tien-Cuong; Sibony, Nessim (2010), "Super-potentials for currents on compact Kähler manifolds and dynamics of automorphisms", Journal of Algebraic Geometry, 19 (3): 473–529, arXiv:0804.0860, doi:10.1090/S1056-3911-10-00549-7, MR 2629598 Fakhruddin, Najmuddin (2003), "Questions on self maps of algebraic varieties", Journal of the Ramanujan Mathematical Society, 18 (2): 109–122, arXiv:math/0212208, MR 1995861 Fornaess, John Erik (1996), Dynamics in several complex variables, American Mathematical Society, ISBN 978-0-8218-0317-2, MR 1363948 Fornaess, John Erik; Sibony, Nessim (2001), "Dynamics of P 2 {\displaystyle \mathbf {P} ^{2}} (examples)", Laminations and foliations in dynamics, geometry and topology (Stony Brook, 1998), American Mathematical Society, pp. 47–85, doi:10.1090/conm/269/04329, ISBN 978-0-8218-1985-2, MR 1810536 Guedj, Vincent (2010), "Propriétés ergodiques des applications rationnelles", Quelques aspects des systèmes dynamiques polynomiaux, Société Mathématique de France, pp. 
97–202, arXiv:math/0611302, ISBN 978-2-85629-338-6, MR 2932434 Milnor, John (2006), Dynamics in one complex variable (3rd ed.), Princeton University Press, arXiv:math/9201272, doi:10.1515/9781400835539, ISBN 0-691-12488-4, MR 2193309 Morosawa, Shunsuke; Nishimura, Yasuichiro; Taniguchi, Masahiko; Ueda, Tetsuo (2000), Holomorphic dynamics, Cambridge University Press, ISBN 0-521-66258-3, MR 1747010 Tan, Lei, ed. (2000), The Mandelbrot set, theme and variations, London Mathematical Society Lecture Note Series, vol. 274, Cambridge University Press, ISBN 0-521-77476-4, MR 1765080 Zdunik, Anna (1990), "Parabolic orbifolds and the dimension of the maximal measure for rational maps", Inventiones Mathematicae, 99 (3): 627–649, Bibcode:1990InMat..99..627Z, doi:10.1007/BF01234434, MR 1032883 == External links == Gallery of dynamics (Curtis McMullen) Surveys in Dynamical Systems
Wikipedia/Complex_dynamics
In the mathematical field of graph theory, a path graph (or linear graph) is a graph whose vertices can be listed in the order v1, v2, ..., vn such that the edges are {vi, vi+1} where i = 1, 2, ..., n − 1. Equivalently, a path with at least two vertices is connected and has two terminal vertices (vertices of degree 1), while all others (if any) have degree 2. Paths are often important in their role as subgraphs of other graphs, in which case they are called paths in that graph. A path is a particularly simple example of a tree, and in fact the paths are exactly the trees in which no vertex has degree 3 or more. A disjoint union of paths is called a linear forest. Paths are fundamental concepts of graph theory, described in the introductory sections of most graph theory texts. See, for example, Bondy and Murty (1976), Gibbons (1985), or Diestel (2005). == As Dynkin diagrams == In algebra, path graphs appear as the Dynkin diagrams of type A. As such, they classify the root system of type A and the Weyl group of type A, which is the symmetric group. == See also == Path (graph theory) Ladder graph Caterpillar tree Complete graph Null graph Path decomposition Cycle (graph theory) == References == Bondy, J. A.; Murty, U. S. R. (1976). Graph Theory with Applications. North Holland. pp. 12–21. ISBN 0-444-19451-7. Diestel, Reinhard (2005). Graph Theory (3rd ed.). Graduate Texts in Mathematics, vol. 173, Springer-Verlag. pp. 6–9. ISBN 3-540-26182-6. == External links == Weisstein, Eric W. "Path Graph". MathWorld.
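The degree characterization in the article above (exactly two vertices of degree 1, all others of degree 2) can be read off directly from the edge list {vi, vi+1}; a minimal sketch with no graph library assumed:

```python
# Build the path graph P_n on vertices 0, 1, ..., n-1 and check its degree
# sequence: two terminal vertices of degree 1, the remaining n-2 of degree 2.

def path_graph_edges(n):
    """Edges {v_i, v_{i+1}} of the path graph P_n."""
    return [(i, i + 1) for i in range(n - 1)]

def degree_sequence(n, edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

n = 6
deg = degree_sequence(n, path_graph_edges(n))
print(deg)  # [1, 2, 2, 2, 2, 1]
assert deg.count(1) == 2 and deg.count(2) == n - 2
```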
Wikipedia/Linear_graph
In topology and related areas of mathematics, a subspace of a topological space (X, 𝜏) is a subset S of X which is equipped with a topology induced from that of 𝜏 called the subspace topology (or the relative topology, or the induced topology, or the trace topology). == Definition == Given a topological space ( X , τ ) {\displaystyle (X,\tau )} and a subset S {\displaystyle S} of X {\displaystyle X} , the subspace topology on S {\displaystyle S} is defined by τ S = { S ∩ U ∣ U ∈ τ } . {\displaystyle \tau _{S}=\lbrace S\cap U\mid U\in \tau \rbrace .} That is, a subset of S {\displaystyle S} is open in the subspace topology if and only if it is the intersection of S {\displaystyle S} with an open set in ( X , τ ) {\displaystyle (X,\tau )} . If S {\displaystyle S} is equipped with the subspace topology then it is a topological space in its own right, and is called a subspace of ( X , τ ) {\displaystyle (X,\tau )} . Subsets of topological spaces are usually assumed to be equipped with the subspace topology unless otherwise stated. Alternatively we can define the subspace topology for a subset S {\displaystyle S} of X {\displaystyle X} as the coarsest topology for which the inclusion map ι : S ↪ X {\displaystyle \iota :S\hookrightarrow X} is continuous. More generally, suppose ι {\displaystyle \iota } is an injection from a set S {\displaystyle S} to a topological space X {\displaystyle X} . Then the subspace topology on S {\displaystyle S} is defined as the coarsest topology for which ι {\displaystyle \iota } is continuous. The open sets in this topology are precisely the ones of the form ι − 1 ( U ) {\displaystyle \iota ^{-1}(U)} for U {\displaystyle U} open in X {\displaystyle X} . S {\displaystyle S} is then homeomorphic to its image in X {\displaystyle X} (also with the subspace topology) and ι {\displaystyle \iota } is called a topological embedding. 
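The defining formula τ_S = { S ∩ U : U ∈ τ } can be computed verbatim for a finite topological space; in this sketch the particular space (X, τ) is an arbitrary illustrative choice:

```python
# Compute the subspace topology on S as { S ∩ U : U in tau } and sanity-check
# that the result is itself a topology on S.

X = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}  # a topology on X

S = frozenset({2, 3})
tau_S = {S & U for U in tau}  # tau_S = { S ∩ U : U ∈ tau }
print(sorted(sorted(A) for A in tau_S))  # [[], [2], [2, 3]]

# Sanity check: tau_S contains the empty set and S, and is closed under
# (finite) unions and intersections.
assert frozenset() in tau_S and S in tau_S
for A in tau_S:
    for B in tau_S:
        assert (A | B) in tau_S and (A & B) in tau_S
```

Note that the same open set of τ_S can arise from different open sets of τ (here both ∅ and {1} intersect S in ∅).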
A subspace S {\displaystyle S} is called an open subspace if the injection ι {\displaystyle \iota } is an open map, i.e., if the forward image of an open set of S {\displaystyle S} is open in X {\displaystyle X} . Likewise it is called a closed subspace if the injection ι {\displaystyle \iota } is a closed map. == Terminology == The distinction between a set and a topological space is often blurred notationally, for convenience, which can be a source of confusion when one first encounters these definitions. Thus, whenever S {\displaystyle S} is a subset of X {\displaystyle X} , and ( X , τ ) {\displaystyle (X,\tau )} is a topological space, then the unadorned symbols " S {\displaystyle S} " and " X {\displaystyle X} " can often be used to refer both to S {\displaystyle S} and X {\displaystyle X} considered as two subsets of X {\displaystyle X} , and also to ( S , τ S ) {\displaystyle (S,\tau _{S})} and ( X , τ ) {\displaystyle (X,\tau )} as the topological spaces, related as discussed above. So phrases such as " S {\displaystyle S} an open subspace of X {\displaystyle X} " are used to mean that ( S , τ S ) {\displaystyle (S,\tau _{S})} is an open subspace of ( X , τ ) {\displaystyle (X,\tau )} , in the sense used above; that is: (i) S ∈ τ {\displaystyle S\in \tau } ; and (ii) S {\displaystyle S} is considered to be endowed with the subspace topology. == Examples == In the following, R {\displaystyle \mathbb {R} } represents the real numbers with their usual topology. The subspace topology of the natural numbers, as a subspace of R {\displaystyle \mathbb {R} } , is the discrete topology. The rational numbers Q {\displaystyle \mathbb {Q} } considered as a subspace of R {\displaystyle \mathbb {R} } do not have the discrete topology ({0} for example is not an open set in Q {\displaystyle \mathbb {Q} } because there is no open subset of R {\displaystyle \mathbb {R} } whose intersection with Q {\displaystyle \mathbb {Q} } can result in only the singleton {0}). 
If a and b are rational, then the intervals (a, b) and [a, b] are respectively open and closed, but if a and b are irrational, then the set of all rational x with a < x < b is both open and closed. The set [0,1] as a subspace of R {\displaystyle \mathbb {R} } is both open and closed, whereas as a subset of R {\displaystyle \mathbb {R} } it is only closed. As a subspace of R {\displaystyle \mathbb {R} } , [0, 1] ∪ [2, 3] is composed of two disjoint open subsets (which happen also to be closed), and is therefore a disconnected space. Let S = [0, 1) be a subspace of the real line R {\displaystyle \mathbb {R} } . Then [0, 1⁄2) is open in S but not in R {\displaystyle \mathbb {R} } (as for example the intersection between (-1⁄2, 1⁄2) and S results in [0, 1⁄2)). Likewise [1⁄2, 1) is closed in S but not in R {\displaystyle \mathbb {R} } (as there is no open subset of R {\displaystyle \mathbb {R} } that can intersect with [0, 1) to result in [1⁄2, 1)). S is both open and closed as a subset of itself but not as a subset of R {\displaystyle \mathbb {R} } . == Properties == The subspace topology has the following characteristic property. Let Y {\displaystyle Y} be a subspace of X {\displaystyle X} and let i : Y → X {\displaystyle i:Y\to X} be the inclusion map. Then for any topological space Z {\displaystyle Z} a map f : Z → Y {\displaystyle f:Z\to Y} is continuous if and only if the composite map i ∘ f {\displaystyle i\circ f} is continuous. This property is characteristic in the sense that it can be used to define the subspace topology on Y {\displaystyle Y} . We list some further properties of the subspace topology. In the following let S {\displaystyle S} be a subspace of X {\displaystyle X} . If f : X → Y {\displaystyle f:X\to Y} is continuous then the restriction to S {\displaystyle S} is continuous. If f : X → Y {\displaystyle f:X\to Y} is continuous then f : X → f ( X ) {\displaystyle f:X\to f(X)} is continuous. 
The closed sets in S {\displaystyle S} are precisely the intersections of S {\displaystyle S} with closed sets in X {\displaystyle X} . If A {\displaystyle A} is a subspace of S {\displaystyle S} then A {\displaystyle A} is also a subspace of X {\displaystyle X} with the same topology. In other words, the subspace topology that A {\displaystyle A} inherits from S {\displaystyle S} is the same as the one it inherits from X {\displaystyle X} . Suppose S {\displaystyle S} is an open subspace of X {\displaystyle X} (so S ∈ τ {\displaystyle S\in \tau } ). Then a subset of S {\displaystyle S} is open in S {\displaystyle S} if and only if it is open in X {\displaystyle X} . Suppose S {\displaystyle S} is a closed subspace of X {\displaystyle X} (so X ∖ S ∈ τ {\displaystyle X\setminus S\in \tau } ). Then a subset of S {\displaystyle S} is closed in S {\displaystyle S} if and only if it is closed in X {\displaystyle X} . If B {\displaystyle B} is a basis for X {\displaystyle X} then B S = { U ∩ S : U ∈ B } {\displaystyle B_{S}=\{U\cap S:U\in B\}} is a basis for S {\displaystyle S} . The topology induced on a subset of a metric space by restricting the metric to this subset coincides with subspace topology for this subset. == Preservation of topological properties == If a topological space having some topological property implies its subspaces have that property, then we say the property is hereditary. If only closed subspaces must share the property we call it weakly hereditary. Every open and every closed subspace of a completely metrizable space is completely metrizable. Every open subspace of a Baire space is a Baire space. Every closed subspace of a compact space is compact. Being a Hausdorff space is hereditary. Being a normal space is weakly hereditary. Total boundedness is hereditary. Being totally disconnected is hereditary. First countability and second countability are hereditary. 
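One property in the list above states that if B is a basis for X, then B_S = { U ∩ S : U ∈ B } is a basis for the subspace S. A finite sanity check of this (the space and basis below are arbitrary illustrative choices):

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
B = [frozenset({1}), frozenset({1, 2}), X]  # a basis on X

# The open sets are exactly the unions of basis elements (plus the empty set).
tau = {frozenset()} | {frozenset().union(*c) for c in chain.from_iterable(
    combinations(B, k) for k in range(1, len(B) + 1))}

S = frozenset({2, 3})
tau_S = {S & U for U in tau}      # subspace topology
B_S = [S & U for U in B]          # restricted basis

# Every open set of the subspace is the union of the B_S members it contains.
for A in tau_S:
    assert frozenset().union(*[b for b in B_S if b <= A]) == A
print(sorted(sorted(A) for A in tau_S))  # [[], [2], [2, 3]]
```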
== See also == the dual notion quotient space product topology direct sum topology == Notes == == References == Bourbaki, Nicolas, Elements of Mathematics: General Topology, Addison-Wesley (1966) Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446 Willard, Stephen. General Topology, Dover Publications (2004) ISBN 0-486-43479-6
Wikipedia/Subspace_topology
Counterexamples in Topology (1970, 2nd ed. 1978) is a book on mathematics by topologists Lynn Steen and J. Arthur Seebach, Jr. In the process of working on problems like the metrization problem, topologists (including Steen and Seebach) have defined a wide variety of topological properties. It is often useful in the study and understanding of abstractions such as topological spaces to determine that one property does not follow from another. One of the easiest ways of doing this is to find a counterexample which exhibits one property but not the other. In Counterexamples in Topology, Steen and Seebach, together with five students in an undergraduate research project at St. Olaf College, Minnesota in the summer of 1967, canvassed the field of topology for such counterexamples and compiled them in an attempt to simplify the literature. For instance, an example of a first-countable space which is not second-countable is counterexample #3, the discrete topology on an uncountable set. This particular counterexample shows that second-countability does not follow from first-countability. Several other "Counterexamples in ..." books and papers have followed, with similar motivations. == Reviews == In her review of the first edition, Mary Ellen Rudin wrote: In other mathematical fields one restricts one's problem by requiring that the space be Hausdorff or paracompact or metric, and usually one doesn't really care which, so long as the restriction is strong enough to avoid this dense forest of counterexamples. A usable map of the forest is a fine thing... In his submission to Mathematical Reviews C. Wayne Patty wrote: ...the book is extremely useful, and the general topology student will no doubt find it very valuable. In addition it is very well written. When the second edition appeared in 1978 its review in Advances in Mathematics treated topology as territory to be explored: Lebesgue once said that every mathematician should be something of a naturalist.
This book, the updated journal of a continuing expedition to the never-never land of general topology, should appeal to the latent naturalist in every mathematician. == Notation == Several of the naming conventions in this book differ from more accepted modern conventions, particularly with respect to the separation axioms. The authors use the terms T3, T4, and T5 to refer to regular, normal, and completely normal. They also refer to completely Hausdorff as Urysohn. This was a result of the different historical development of metrization theory and general topology; see History of the separation axioms for more. The long line in example 45 is what most topologists nowadays would call the 'closed long ray'. == List of mentioned counterexamples == == See also == List of examples in general topology == References == == Bibliography == Steen, Lynn Arthur; Seebach, J. Arthur (1978). Counterexamples in topology. New York, NY: Springer New York. doi:10.1007/978-1-4612-6290-9. ISBN 978-0-387-90312-5. Steen, Lynn Arthur; Seebach, J. Arthur (1995) [First published 1978 by Springer-Verlag, New York]. Counterexamples in topology. New York: Dover Publications. ISBN 0-486-68735-X. OCLC 32311847. Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. ISBN 0-486-68735-X (Dover edition). == External links == π-Base: An Interactive Encyclopedia of Topological Spaces
Wikipedia/Counterexamples_in_Topology
In topology, the long line (or Alexandroff line) is a topological space somewhat similar to the real line, but in a certain sense "longer". It behaves locally just like the real line, but has different large-scale properties (e.g., it is neither Lindelöf nor separable). Therefore, it serves as an important counterexample in topology. Intuitively, the usual real-number line consists of a countable number of line segments [ 0 , 1 ) {\displaystyle [0,1)} laid end-to-end, whereas the long line is constructed from an uncountable number of such segments. == Definition == The closed long ray L {\displaystyle L} is defined as the Cartesian product of the first uncountable ordinal ω 1 {\displaystyle \omega _{1}} with the half-open interval [ 0 , 1 ) , {\displaystyle [0,1),} equipped with the order topology that arises from the lexicographical order on ω 1 × [ 0 , 1 ) {\displaystyle \omega _{1}\times [0,1)} . The open long ray is obtained from the closed long ray by removing the smallest element ( 0 , 0 ) . {\displaystyle (0,0).} The long line is obtained by "gluing" together two long rays, one in the positive direction and the other in the negative direction. More rigorously, it can be defined as the order topology on the disjoint union of the reversed open long ray (“reversed” means the order is reversed) (this is the negative half) and the (not reversed) closed long ray (the positive half), totally ordered by letting the points of the latter be greater than the points of the former. 
Alternatively, take two copies of the open long ray and identify the open interval { 0 } × ( 0 , 1 ) {\displaystyle \{0\}\times (0,1)} of the one with the same interval of the other but reversing the interval, that is, identify the point ( 0 , t ) {\displaystyle (0,t)} (where t {\displaystyle t} is a real number such that 0 < t < 1 {\displaystyle 0<t<1} ) of the one with the point ( 0 , 1 − t ) {\displaystyle (0,1-t)} of the other, and define the long line to be the topological space obtained by gluing the two open long rays along the open interval identified between the two. (The former construction is better in the sense that it defines the order on the long line and shows that the topology is the order topology; the latter is better in the sense that it uses gluing along an open set, which is clearer from the topological point of view.) Intuitively, the closed long ray is like a real (closed) half-line, except that it is much longer in one direction: we say that it is long at one end and closed at the other. The open long ray is like the real line (or equivalently an open half-line) except that it is much longer in one direction: we say that it is long at one end and short (open) at the other. The long line is longer than the real lines in both directions: we say that it is long in both directions. However, many authors speak of the “long line” where we have spoken of the (closed or open) long ray, and there is much confusion between the various long spaces. In many uses or counterexamples, however, the distinction is unessential, because the important part is the “long” end of the line, and it doesn't matter what happens at the other end (whether long, short, or closed). A related space, the (closed) extended long ray, L ∗ , {\displaystyle L^{*},} is obtained as the one-point compactification of L {\displaystyle L} by adjoining an additional element to the right end of L . 
{\displaystyle L.} One can similarly define the extended long line by adding two elements to the long line, one at each end. == Properties == The closed long ray L = ω 1 × [ 0 , 1 ) {\displaystyle L=\omega _{1}\times [0,1)} consists of an uncountable number of copies of [ 0 , 1 ) {\displaystyle [0,1)} 'pasted together' end-to-end. Compare this with the fact that for any countable ordinal α {\displaystyle \alpha } , pasting together α {\displaystyle \alpha } copies of [ 0 , 1 ) {\displaystyle [0,1)} gives a space which is still homeomorphic (and order-isomorphic) to [ 0 , 1 ) . {\displaystyle [0,1).} (And if we tried to glue together more than ω 1 {\displaystyle \omega _{1}} copies of [ 0 , 1 ) , {\displaystyle [0,1),} the resulting space would no longer be locally homeomorphic to R . {\displaystyle \mathbb {R} .} ) Every increasing sequence in L {\displaystyle L} converges to a limit in L {\displaystyle L} ; this is a consequence of the facts that (1) the elements of ω 1 {\displaystyle \omega _{1}} are the countable ordinals, (2) the supremum of every countable family of countable ordinals is a countable ordinal, and (3) every increasing and bounded sequence of real numbers converges. Consequently, there can be no strictly increasing function L → R . {\displaystyle L\to \mathbb {R} .} In fact, every continuous function L → R {\displaystyle L\to \mathbb {R} } is eventually constant. As order topologies, the (possibly extended) long rays and lines are normal Hausdorff spaces. All of them have the same cardinality as the real line, yet they are 'much longer'. All of them are locally compact. None of them is metrizable; this can be seen as the long ray is sequentially compact but not compact, or even Lindelöf. The (non-extended) long line or ray is not paracompact. It is path-connected, locally path-connected and simply connected but not contractible. It is a one-dimensional topological manifold, with boundary in the case of the closed ray. 
It is first-countable but not second countable and not separable, so authors who require the latter properties in their manifolds do not call the long line a manifold. It makes sense to consider all the long spaces at once because every connected (non-empty) one-dimensional (not necessarily separable) topological manifold possibly with boundary, is homeomorphic to either the circle, the closed interval, the open interval (real line), the half-open interval, the closed long ray, the open long ray, or the long line. The long line or ray can be equipped with the structure of a (non-separable) differentiable manifold (with boundary in the case of the closed ray). However, contrary to the topological structure which is unique (topologically, there is only one way to make the real line "longer" at either end), the differentiable structure is not unique: in fact, there are uncountably many ( 2 ℵ 1 {\displaystyle 2^{\aleph _{1}}} to be precise) pairwise non-diffeomorphic smooth structures on it. This is in sharp contrast to the real line, where there are also different smooth structures, but all of them are diffeomorphic to the standard one. The long line or ray can even be equipped with the structure of a (real) analytic manifold (with boundary in the case of the closed ray). However, this is much more difficult than for the differentiable case (it depends on the classification of (separable) one-dimensional analytic manifolds, which is more difficult than for differentiable manifolds). Again, any given C ∞ {\displaystyle C^{\infty }} structure can be extended in infinitely many ways to different C ω {\displaystyle C^{\omega }} (=analytic) structures (which are pairwise non-diffeomorphic as analytic manifolds). The long line or ray cannot be equipped with a Riemannian metric that induces its topology. The reason is that Riemannian manifolds, even without the assumption of paracompactness, can be shown to be metrizable. 
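The pasting fact stated earlier, in its simplest case α = ω, can be made explicit: the lexicographic order on ω × [0, 1) is order-isomorphic to [0, ∞) via (n, t) ↦ n + t, and s ↦ s/(1 + s) carries [0, ∞) order-isomorphically onto [0, 1). Since all the topologies involved are order topologies, an order isomorphism is automatically a homeomorphism. A randomized sketch checking that the composite map preserves the lexicographic order (ordinals below ω are just natural numbers, so tuples model the points):

```python
import random

def phi(p):
    n, t = p
    s = n + t            # omega x [0, 1)  ->  [0, infinity), order-preserving
    return s / (1 + s)   # [0, infinity)   ->  [0, 1), order-preserving

random.seed(1)
points = [(random.randrange(1000), random.random()) for _ in range(5000)]
for _ in range(20000):
    p, q = random.sample(points, 2)
    # Python's tuple comparison is exactly the lexicographic order.
    assert (p < q) == (phi(p) < phi(q))
print("order preserved on all sampled pairs")
```

No such order-preserving injection into the reals can exist for ω₁ copies, which is precisely why the long ray is genuinely "longer" than the real line.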
The extended long ray L ∗ {\displaystyle L^{*}} is compact. It is the one-point compactification of the closed long ray L , {\displaystyle L,} but it is also its Stone-Čech compactification, because any continuous function from the (closed or open) long ray to the real line is eventually constant. L ∗ {\displaystyle L^{*}} is also connected, but not path-connected because the long line is 'too long' to be covered by a path, which is a continuous image of an interval. L ∗ {\displaystyle L^{*}} is not a manifold and is not first countable. == p-adic analog == There exists a p-adic analog of the long line, which is due to George Bergman. This space is constructed as the increasing union of an uncountable directed set of copies X γ {\displaystyle X_{\gamma }} of the ring of p-adic integers, indexed by a countable ordinal γ . {\displaystyle \gamma .} Define a map from X δ {\displaystyle X_{\delta }} to X γ {\displaystyle X_{\gamma }} whenever δ < γ {\displaystyle \delta <\gamma } as follows: If γ {\displaystyle \gamma } is a successor ε + 1 {\displaystyle \varepsilon +1} then the map from X ε {\displaystyle X_{\varepsilon }} to X γ {\displaystyle X_{\gamma }} is just multiplication by p . {\displaystyle p.} For other δ {\displaystyle \delta } the map from X δ {\displaystyle X_{\delta }} to X γ {\displaystyle X_{\gamma }} is the composition of the map from X δ {\displaystyle X_{\delta }} to X ε {\displaystyle X_{\varepsilon }} and the map from X ε {\displaystyle X_{\varepsilon }} to X γ . {\displaystyle X_{\gamma }.} If γ {\displaystyle \gamma } is a limit ordinal then the direct limit of the sets X δ {\displaystyle X_{\delta }} for δ < γ {\displaystyle \delta <\gamma } is a countable union of p-adic balls, so can be embedded in X γ , {\displaystyle X_{\gamma },} as X γ {\displaystyle X_{\gamma }} with a point removed is also a countable union of p-adic balls. 
This defines compatible embeddings of X δ {\displaystyle X_{\delta }} into X γ {\displaystyle X_{\gamma }} for all δ < γ . {\displaystyle \delta <\gamma .} This space is not compact, but the union of any countable set of compact subspaces has compact closure. == Higher dimensions == Some examples of non-paracompact manifolds in higher dimensions include the Prüfer manifold, products of any non-paracompact manifold with any non-empty manifold, the ball of long radius, and so on. The bagpipe theorem shows that there are 2 ℵ 1 {\displaystyle 2^{\aleph _{1}}} isomorphism classes of non-paracompact surfaces, even when a generalization of paracompactness, ω-boundedness, is assumed. There are no complex analogues of the long line as every Riemann surface is paracompact, but Calabi and Rosenlicht gave an example of a non-paracompact complex manifold of complex dimension 2. == See also == Lexicographic order topology on the unit square List of topologies == References ==
Wikipedia/Long_line_(topology)
In set theory, a projection is one of two closely related types of functions or operations, namely: A set-theoretic operation typified by the j {\displaystyle j} th projection map, written p r o j j , {\displaystyle \mathrm {proj} _{j},} that takes an element x → = ( x 1 , … , x j , … , x k ) {\displaystyle {\vec {x}}=(x_{1},\ \dots ,\ x_{j},\ \dots ,\ x_{k})} of the Cartesian product ( X 1 × ⋯ × X j × ⋯ × X k ) {\displaystyle (X_{1}\times \cdots \times X_{j}\times \cdots \times X_{k})} to the value p r o j j ( x → ) = x j . {\displaystyle \mathrm {proj} _{j}({\vec {x}})=x_{j}.} A function that sends an element x {\displaystyle x} to its equivalence class under a specified equivalence relation E , {\displaystyle E,} or, equivalently, a surjection from a set to another set. The function from elements to equivalence classes is a surjection, and every surjection corresponds to an equivalence relation under which two elements are equivalent when they have the same image. The result of the mapping is written as [ x ] {\displaystyle [x]} when E {\displaystyle E} is understood, or written as [ x ] E {\displaystyle [x]_{E}} when it is necessary to make E {\displaystyle E} explicit. == See also == Cartesian product – Mathematical set formed from two given sets Projection (mathematics) – Mapping equal to its square under mapping composition Projection (measure theory) Projection (linear algebra) – Idempotent linear transformation from a vector space to itself Projection (relational algebra) – Operation that restricts a relation to a specified set of attributes Relation (mathematics) – Relationship between two sets, defined by a set of ordered pairs == References ==
Wikipedia/Projection_(set_theory)
In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M. Intuitively, a space is complete if there are no "points missing" from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. 2 {\displaystyle {\sqrt {2}}} is "missing" from it, even though one can construct a Cauchy sequence of rational numbers that converges to it (see further examples below). It is always possible to "fill all the holes", leading to the completion of a given space, as explained below. == Definition == Cauchy sequence A sequence x 1 , x 2 , x 3 , … {\displaystyle x_{1},x_{2},x_{3},\ldots } of elements from X {\displaystyle X} of a metric space ( X , d ) {\displaystyle (X,d)} is called Cauchy if for every positive real number r > 0 {\displaystyle r>0} there is a positive integer N {\displaystyle N} such that for all positive integers m , n > N , {\displaystyle m,n>N,} d ( x m , x n ) < r . {\displaystyle d(x_{m},x_{n})<r.} Complete space A metric space ( X , d ) {\displaystyle (X,d)} is complete if any of the following equivalent conditions are satisfied: Every Cauchy sequence in X {\displaystyle X} converges in X {\displaystyle X} (that is, has a limit that is also in X {\displaystyle X} ). Every decreasing sequence of non-empty closed subsets of X , {\displaystyle X,} with diameters tending to 0, has a non-empty intersection: if F n {\displaystyle F_{n}} is closed and non-empty, F n + 1 ⊆ F n {\displaystyle F_{n+1}\subseteq F_{n}} for every n , {\displaystyle n,} and diam ⁡ ( F n ) → 0 , {\displaystyle \operatorname {diam} \left(F_{n}\right)\to 0,} then there is a unique point x ∈ X {\displaystyle x\in X} common to all sets F n . {\displaystyle F_{n}.} == Examples == The space Q {\displaystyle \mathbb {Q} } of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. 
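This failure of completeness can be watched numerically. As an illustrative sketch (assuming nothing beyond the Python standard library; it is not part of the article's formal development), the Babylonian recurrence x_{n+1} = x_n/2 + 1/x_n produces exact rationals whose consecutive terms cluster together, so the sequence is Cauchy in Q, while their squares approach 2, so the only candidate limit is the irrational √2:

```python
from fractions import Fraction

# Babylonian iteration x_{n+1} = x_n/2 + 1/x_n, started at x_1 = 1.
# Every term is an exact rational number.
x = Fraction(1)
terms = [x]
for _ in range(6):
    x = x / 2 + 1 / x
    terms.append(x)

# Consecutive terms get closer and closer together (the sequence is Cauchy in Q)...
gaps = [abs(terms[i + 1] - terms[i]) for i in range(len(terms) - 1)]
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))

# ...while the squares approach 2, so the only candidate limit is sqrt(2),
# which is irrational: the sequence has no limit in Q.
assert abs(terms[-1] ** 2 - 2) < Fraction(1, 10**20)
```

The terms 1, 3/2, 17/12, 577/408, … are exactly the sequence used in the example that follows; the error of the squares roughly squares at each step, which is why six iterations already land within 10⁻²⁰ of 2.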
Consider for instance the sequence defined by x 1 = 1 {\displaystyle x_{1}=1\;} and x n + 1 = x n 2 + 1 x n . {\displaystyle \;x_{n+1}={\frac {x_{n}}{2}}+{\frac {1}{x_{n}}}.} This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: If the sequence did have a limit x , {\displaystyle x,} then by solving x = x 2 + 1 x {\displaystyle x={\frac {x}{2}}+{\frac {1}{x}}} necessarily x 2 = 2 , {\displaystyle x^{2}=2,} yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number 2 {\displaystyle {\sqrt {2}}} . The open interval (0,1), again with the absolute difference metric, is not complete either. The sequence defined by x n = 1 n {\displaystyle x_{n}={\tfrac {1}{n}}} is Cauchy, but does not have a limit in the given space. However the closed interval [0,1] is complete; for example the given sequence does have a limit in this interval, namely zero. The space R {\displaystyle \mathbb {R} } of real numbers and the space C {\displaystyle \mathbb {C} } of complex numbers (with the metric given by the absolute difference) are complete, and so is Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a, b] of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a, b) of continuous functions on (a, b), for it may contain unbounded functions. Instead, with the topology of compact convergence, C(a, b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric. The space Qp of p-adic numbers is complete for any prime number p . 
{\displaystyle p.} This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric. If S {\displaystyle S} is an arbitrary set, then the set SN of all sequences in S {\displaystyle S} becomes a complete metric space if we define the distance between the sequences ( x n ) {\displaystyle \left(x_{n}\right)} and ( y n ) {\displaystyle \left(y_{n}\right)} to be 1 N {\displaystyle {\tfrac {1}{N}}} where N {\displaystyle N} is the smallest index for which x N {\displaystyle x_{N}} is distinct from y N {\displaystyle y_{N}} or 0 {\displaystyle 0} if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S . {\displaystyle S.} Riemannian manifolds which are complete are called geodesic manifolds; completeness follows from the Hopf–Rinow theorem. == Some theorems == Every compact metric space is complete, though complete spaces need not be compact. In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace S {\displaystyle S} of Rn is compact and therefore complete. Let ( X , d ) {\displaystyle (X,d)} be a complete metric space. If A ⊆ X {\displaystyle A\subseteq X} is a closed set, then A {\displaystyle A} is also complete. Let ( X , d ) {\displaystyle (X,d)} be a metric space. If A ⊆ X {\displaystyle A\subseteq X} is a complete subspace, then A {\displaystyle A} is also closed. If X {\displaystyle X} is a set and M {\displaystyle M} is a complete metric space, then the set B ( X , M ) {\displaystyle B(X,M)} of all bounded functions f from X to M {\displaystyle M} is a complete metric space. 
Here we define the distance in B ( X , M ) {\displaystyle B(X,M)} in terms of the distance in M {\displaystyle M} with the supremum norm d ( f , g ) ≡ sup { d [ f ( x ) , g ( x ) ] : x ∈ X } {\displaystyle d(f,g)\equiv \sup\{d[f(x),g(x)]:x\in X\}} If X {\displaystyle X} is a topological space and M {\displaystyle M} is a complete metric space, then the set C b ( X , M ) {\displaystyle C_{b}(X,M)} consisting of all continuous bounded functions f : X → M {\displaystyle f:X\to M} is a closed subspace of B ( X , M ) {\displaystyle B(X,M)} and hence also complete. The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior. The Banach fixed-point theorem states that a contraction mapping on a complete metric space admits a fixed point. The fixed-point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces. == Completion == For any metric space M, it is possible to construct a complete metric space M′ (which is also denoted as M ¯ {\displaystyle {\overline {M}}} ), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N that extends f. The space M' is determined up to isometry by this property (among all complete metric spaces isometrically containing M), and is called the completion of M. The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. 
For any two Cauchy sequences x ∙ = ( x n ) {\displaystyle x_{\bullet }=\left(x_{n}\right)} and y ∙ = ( y n ) {\displaystyle y_{\bullet }=\left(y_{n}\right)} in M, we may define their distance as d ( x ∙ , y ∙ ) = lim n d ( x n , y n ) {\displaystyle d\left(x_{\bullet },y_{\bullet }\right)=\lim _{n}d\left(x_{n},y_{n}\right)} (This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But "having distance 0" is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M' with the equivalence class of sequences in M converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment. Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). 
One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that "ought" to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class. For a prime p , {\displaystyle p,} the p-adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace. == Topologically complete spaces == Completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval (0,1), which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces. 
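The p-adic metric mentioned above can be sketched in a few lines of Python; the helper names `vp` and `abs_p` are illustrative, not standard-library functions. The snippet shows that powers of p, which blow up in the usual metric, tend to 0 p-adically, and that the partial sums of Σ pᵏ form a Cauchy sequence for |·|_p (in Q_p they converge to 1/(1 − p)):

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational x."""
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def abs_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value: |x|_p = p**(-vp(x)), with |0|_p = 0."""
    return Fraction(0) if x == 0 else Fraction(1, p) ** vp(x, p)

p = 5
# Powers of p blow up in the usual metric but tend to 0 p-adically:
assert abs_p(Fraction(p**10), p) == Fraction(1, p**10)

# Partial sums of 1 + p + p^2 + ... form a Cauchy sequence for |.|_p;
# in Q_p they converge to 1/(1 - p).
s = [sum(Fraction(p) ** k for k in range(n + 1)) for n in range(8)]
assert abs_p(s[7] - s[6], p) == Fraction(1, p**7)
assert abs_p(s[7] - Fraction(1, 1 - p), p) == Fraction(1, p**8)
```

The same sequence of partial sums diverges in R, illustrating that which Cauchy sequences exist, and hence which completion results, depends entirely on the chosen metric.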
A topological space homeomorphic to a separable complete metric space is called a Polish space. == Alternatives and generalizations == Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous "subtraction" operation. In this setting, the distance between two points x {\displaystyle x} and y {\displaystyle y} is gauged not by a real number ε {\displaystyle \varepsilon } via the metric d {\displaystyle d} in the comparison d ( x , y ) < ε , {\displaystyle d(x,y)<\varepsilon ,} but by an open neighbourhood N {\displaystyle N} of 0 {\displaystyle 0} via subtraction in the comparison x − y ∈ N . {\displaystyle x-y\in N.} A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular "distance" from each other. It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in X , {\displaystyle X,} then X {\displaystyle X} is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces. == See also == Cauchy space – Concept in general topology and analysis Completion (algebra) – In algebra, completion w.r.t. 
powers of an ideal Complete uniform space – Topological space with a notion of uniform properties Complete topological vector space – A TVS where points that get progressively closer to each other will always converge to a point Ekeland's variational principle – Theorem that asserts that there exist nearly optimal solutions to some optimization problems Knaster–Tarski theorem – Theorem in order and lattice theory == Notes == == References == Kelley, John L. (1975). General Topology. Springer. ISBN 0-387-90125-6. Kreyszig, Erwin (1978). Introductory Functional Analysis with Applications. New York: Wiley. ISBN 0-471-03729-X. Lang, Serge. Real and Functional Analysis. ISBN 0-387-94001-4. Meise, Reinhold; Vogt, Dietmar (1997). Introduction to Functional Analysis. Ramanujan, M.S. (trans.). Oxford: Clarendon Press; New York: Oxford University Press. ISBN 0-19-851485-9.
Wikipedia/Completeness_(topology)
In topology and related areas of mathematics, a subset A of a topological space X is said to be dense in X if every point of X either belongs to A or else is arbitrarily "close" to a member of A — for instance, the rational numbers are a dense subset of the real numbers because every real number either is a rational number or has a rational number arbitrarily close to it (see Diophantine approximation). Formally, A {\displaystyle A} is dense in X {\displaystyle X} if the smallest closed subset of X {\displaystyle X} containing A {\displaystyle A} is X {\displaystyle X} itself. The density of a topological space X {\displaystyle X} is the least cardinality of a dense subset of X . {\displaystyle X.} == Definition == A subset A {\displaystyle A} of a topological space X {\displaystyle X} is said to be a dense subset of X {\displaystyle X} if any of the following equivalent conditions are satisfied: The smallest closed subset of X {\displaystyle X} containing A {\displaystyle A} is X {\displaystyle X} itself. The closure of A {\displaystyle A} in X {\displaystyle X} is equal to X . {\displaystyle X.} That is, cl X ⁡ A = X . {\displaystyle \operatorname {cl} _{X}A=X.} The interior of the complement of A {\displaystyle A} is empty. That is, int X ⁡ ( X ∖ A ) = ∅ . {\displaystyle \operatorname {int} _{X}(X\setminus A)=\varnothing .} Every point in X {\displaystyle X} either belongs to A {\displaystyle A} or is a limit point of A . {\displaystyle A.} For every x ∈ X , {\displaystyle x\in X,} every neighborhood U {\displaystyle U} of x {\displaystyle x} intersects A ; {\displaystyle A;} that is, U ∩ A ≠ ∅ . {\displaystyle U\cap A\neq \varnothing .} A {\displaystyle A} intersects every non-empty open subset of X . 
{\displaystyle X.} and if B {\displaystyle {\mathcal {B}}} is a basis of open sets for the topology on X {\displaystyle X} then this list can be extended to include: For every x ∈ X , {\displaystyle x\in X,} every basic neighborhood B ∈ B {\displaystyle B\in {\mathcal {B}}} of x {\displaystyle x} intersects A . {\displaystyle A.} A {\displaystyle A} intersects every non-empty B ∈ B . {\displaystyle B\in {\mathcal {B}}.} === Density in metric spaces === An alternative definition of dense set in the case of metric spaces is the following. When the topology of X {\displaystyle X} is given by a metric, the closure A ¯ {\displaystyle {\overline {A}}} of A {\displaystyle A} in X {\displaystyle X} is the union of A {\displaystyle A} and the set of all limits of sequences of elements in A {\displaystyle A} (its limit points), A ¯ = A ∪ { lim n → ∞ a n : a n ∈ A for all n ∈ N } {\displaystyle {\overline {A}}=A\cup \left\{\lim _{n\to \infty }a_{n}:a_{n}\in A{\text{ for all }}n\in \mathbb {N} \right\}} Then A {\displaystyle A} is dense in X {\displaystyle X} if A ¯ = X . {\displaystyle {\overline {A}}=X.} If { U n } {\displaystyle \left\{U_{n}\right\}} is a sequence of dense open sets in a complete metric space, X , {\displaystyle X,} then ⋂ n = 1 ∞ U n {\textstyle \bigcap _{n=1}^{\infty }U_{n}} is also dense in X . {\displaystyle X.} This fact is one of the equivalent forms of the Baire category theorem. == Examples == The real numbers with the usual topology have the rational numbers as a countable dense subset which shows that the cardinality of a dense subset of a topological space may be strictly smaller than the cardinality of the space itself. The irrational numbers are another dense subset which shows that a topological space may have several disjoint dense subsets (in particular, two dense subsets may be each other's complements), and they need not even be of the same cardinality. 
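That the rationals form a dense subset of the reals can be witnessed computationally: for any target and any tolerance there is a rational within that tolerance. A minimal sketch using the standard library's `Fraction.limit_denominator` (the helper name `rational_within` is illustrative):

```python
from fractions import Fraction
import math

def rational_within(x: float, eps: float) -> Fraction:
    """Return a rational number within eps of x (witnessing density of Q in R)."""
    # A best rational approximation with denominator > 1/eps is closer than eps.
    return Fraction(x).limit_denominator(int(1 / eps) + 1)

for x in (math.pi, math.sqrt(2), -0.125):
    for eps in (1e-3, 1e-6, 1e-9):
        q = rational_within(x, eps)
        assert abs(float(q) - x) < eps  # a rational this close always exists
```

The same check works for any positive tolerance, which is exactly the neighborhood-intersection criterion for density specialized to the metric on R.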
Perhaps even more surprisingly, both the rationals and the irrationals have empty interiors, showing that dense sets need not contain any non-empty open set. The intersection of two dense open subsets of a topological space is again dense and open. The empty set is a dense subset of itself. But every dense subset of a non-empty space must also be non-empty. By the Weierstrass approximation theorem, any given complex-valued continuous function defined on a closed interval [ a , b ] {\displaystyle [a,b]} can be uniformly approximated as closely as desired by a polynomial function. In other words, the polynomial functions are dense in the space C [ a , b ] {\displaystyle C[a,b]} of continuous complex-valued functions on the interval [ a , b ] , {\displaystyle [a,b],} equipped with the supremum norm. Every metric space is dense in its completion. == Properties == Every topological space is a dense subset of itself. For a set X {\displaystyle X} equipped with the discrete topology, the whole space is the only dense subset. Every non-empty subset of a set X {\displaystyle X} equipped with the trivial topology is dense, and every topology for which every non-empty subset is dense must be trivial. Denseness is transitive: Given three subsets A , B {\displaystyle A,B} and C {\displaystyle C} of a topological space X {\displaystyle X} with A ⊆ B ⊆ C ⊆ X {\displaystyle A\subseteq B\subseteq C\subseteq X} such that A {\displaystyle A} is dense in B {\displaystyle B} and B {\displaystyle B} is dense in C {\displaystyle C} (in the respective subspace topology) then A {\displaystyle A} is also dense in C . {\displaystyle C.} The image of a dense subset under a surjective continuous function is again dense. The density of a topological space (the least of the cardinalities of its dense subsets) is a topological invariant. A topological space with a connected dense subset is necessarily connected itself. 
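The Weierstrass approximation theorem cited in the examples above has a classical constructive proof via Bernstein polynomials, B_n(f)(x) = Σ_{k=0}^{n} f(k/n) C(n,k) xᵏ (1−x)^{n−k}. The article does not develop that construction, so the following Python sketch is only an illustration of uniform polynomial approximation on [0, 1]:

```python
from math import comb

def bernstein(f, n: int):
    """Return the n-th Bernstein polynomial of f on [0, 1]."""
    def Bn(x: float) -> float:
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x) ** (n - k)
                   for k in range(n + 1))
    return Bn

f = lambda x: abs(x - 0.5)              # continuous, not differentiable at 1/2
xs = [i / 100 for i in range(101)]      # sample grid for the sup norm
errs = []
for n in (10, 100, 400):
    Bn = bernstein(f, n)
    errs.append(max(abs(Bn(x) - f(x)) for x in xs))

# The sup-norm error decreases as n grows: uniform convergence on [0, 1].
assert errs[0] > errs[1] > errs[2]
assert errs[-1] < 0.03
```

For this non-smooth f the error shrinks only like 1/√n, but it does shrink uniformly, which is all the density statement requires.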
Continuous functions into Hausdorff spaces are determined by their values on dense subsets: if two continuous functions f , g : X → Y {\displaystyle f,g:X\to Y} into a Hausdorff space Y {\displaystyle Y} agree on a dense subset of X {\displaystyle X} then they agree on all of X . {\displaystyle X.} For metric spaces there are universal spaces, into which all spaces of given density can be embedded: a metric space of density α {\displaystyle \alpha } is isometric to a subspace of C ( [ 0 , 1 ] α , R ) , {\displaystyle C\left([0,1]^{\alpha },\mathbb {R} \right),} the space of real continuous functions on the product of α {\displaystyle \alpha } copies of the unit interval. == Related notions == A point x {\displaystyle x} of a subset A {\displaystyle A} of a topological space X {\displaystyle X} is called a limit point of A {\displaystyle A} (in X {\displaystyle X} ) if every neighbourhood of x {\displaystyle x} also contains a point of A {\displaystyle A} other than x {\displaystyle x} itself, and an isolated point of A {\displaystyle A} otherwise. A subset without isolated points is said to be dense-in-itself. A subset A {\displaystyle A} of a topological space X {\displaystyle X} is called nowhere dense (in X {\displaystyle X} ) if there is no neighborhood in X {\displaystyle X} on which A {\displaystyle A} is dense. Equivalently, a subset of a topological space is nowhere dense if and only if the interior of its closure is empty. The interior of the complement of a nowhere dense set is always dense. The complement of a closed nowhere dense set is a dense open set. Given a topological space X , {\displaystyle X,} a subset A {\displaystyle A} of X {\displaystyle X} that can be expressed as the union of countably many nowhere dense subsets of X {\displaystyle X} is called meagre. The rational numbers, while dense in the real numbers, are meagre as a subset of the reals. A topological space with a countable dense subset is called separable. 
A topological space is a Baire space if and only if the intersection of countably many dense open sets is always dense. A topological space is called resolvable if it is the union of two disjoint dense subsets. More generally, a topological space is called κ-resolvable for a cardinal κ if it contains κ pairwise disjoint dense sets. An embedding of a topological space X {\displaystyle X} as a dense subset of a compact space is called a compactification of X . {\displaystyle X.} A linear operator between topological vector spaces X {\displaystyle X} and Y {\displaystyle Y} is said to be densely defined if its domain is a dense subset of X {\displaystyle X} and if its range is contained within Y . {\displaystyle Y.} See also Continuous linear extension. A topological space X {\displaystyle X} is hyperconnected if and only if every nonempty open set is dense in X . {\displaystyle X.} A topological space is submaximal if and only if every dense subset is open. If ( X , d X ) {\displaystyle \left(X,d_{X}\right)} is a metric space, then a non-empty subset Y {\displaystyle Y} is said to be ε {\displaystyle \varepsilon } -dense if ∀ x ∈ X , ∃ y ∈ Y such that d X ( x , y ) ≤ ε . {\displaystyle \forall x\in X,\;\exists y\in Y{\text{ such that }}d_{X}(x,y)\leq \varepsilon .} One can then show that Y {\displaystyle Y} is dense in ( X , d X ) {\displaystyle \left(X,d_{X}\right)} if and only if it is ε-dense for every ε > 0. {\displaystyle \varepsilon >0.} == See also == Blumberg theorem – Any real function on R admits a continuous restriction on a dense subset of R Dense order – Partial order where any two distinct comparable elements have another element between them Dense (lattice theory) == References == == General references ==
Wikipedia/Dense_(topology)
In topology and related areas of mathematics, a subspace of a topological space (X, 𝜏) is a subset S of X which is equipped with a topology induced from that of 𝜏 called the subspace topology (or the relative topology, or the induced topology, or the trace topology). == Definition == Given a topological space ( X , τ ) {\displaystyle (X,\tau )} and a subset S {\displaystyle S} of X {\displaystyle X} , the subspace topology on S {\displaystyle S} is defined by τ S = { S ∩ U ∣ U ∈ τ } . {\displaystyle \tau _{S}=\lbrace S\cap U\mid U\in \tau \rbrace .} That is, a subset of S {\displaystyle S} is open in the subspace topology if and only if it is the intersection of S {\displaystyle S} with an open set in ( X , τ ) {\displaystyle (X,\tau )} . If S {\displaystyle S} is equipped with the subspace topology then it is a topological space in its own right, and is called a subspace of ( X , τ ) {\displaystyle (X,\tau )} . Subsets of topological spaces are usually assumed to be equipped with the subspace topology unless otherwise stated. Alternatively we can define the subspace topology for a subset S {\displaystyle S} of X {\displaystyle X} as the coarsest topology for which the inclusion map ι : S ↪ X {\displaystyle \iota :S\hookrightarrow X} is continuous. More generally, suppose ι {\displaystyle \iota } is an injection from a set S {\displaystyle S} to a topological space X {\displaystyle X} . Then the subspace topology on S {\displaystyle S} is defined as the coarsest topology for which ι {\displaystyle \iota } is continuous. The open sets in this topology are precisely the ones of the form ι − 1 ( U ) {\displaystyle \iota ^{-1}(U)} for U {\displaystyle U} open in X {\displaystyle X} . S {\displaystyle S} is then homeomorphic to its image in X {\displaystyle X} (also with the subspace topology) and ι {\displaystyle \iota } is called a topological embedding. 
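The defining formula τ_S = {S ∩ U : U ∈ τ} can be checked mechanically on a finite space. A small illustrative Python sketch (sets encoded as frozensets; all names are ad hoc, not from any library):

```python
# Subspace topology tau_S = { S ∩ U : U ∈ tau }, computed literally.
def subspace_topology(tau, S):
    return {frozenset(S & U) for U in tau}

X = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}  # a topology on X

S = frozenset({2, 3})
tau_S = subspace_topology(tau, S)
assert tau_S == {frozenset(), frozenset({2}), S}

# {2} is open in the subspace S even though it is not open in X itself:
assert frozenset({2}) not in tau and frozenset({2}) in tau_S
```

The last assertion illustrates the standard caveat that "open in S" and "open in X" differ in general; they coincide exactly when S is itself open in X, as noted in the Properties section below.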
A subspace S {\displaystyle S} is called an open subspace if the injection ι {\displaystyle \iota } is an open map, i.e., if the forward image of an open set of S {\displaystyle S} is open in X {\displaystyle X} . Likewise it is called a closed subspace if the injection ι {\displaystyle \iota } is a closed map. == Terminology == The distinction between a set and a topological space is often blurred notationally, for convenience, which can be a source of confusion when one first encounters these definitions. Thus, whenever S {\displaystyle S} is a subset of X {\displaystyle X} , and ( X , τ ) {\displaystyle (X,\tau )} is a topological space, then the unadorned symbols " S {\displaystyle S} " and " X {\displaystyle X} " can often be used to refer both to S {\displaystyle S} and X {\displaystyle X} considered as two subsets of X {\displaystyle X} , and also to ( S , τ S ) {\displaystyle (S,\tau _{S})} and ( X , τ ) {\displaystyle (X,\tau )} as the topological spaces, related as discussed above. So phrases such as " S {\displaystyle S} an open subspace of X {\displaystyle X} " are used to mean that ( S , τ S ) {\displaystyle (S,\tau _{S})} is an open subspace of ( X , τ ) {\displaystyle (X,\tau )} , in the sense used above; that is: (i) S ∈ τ {\displaystyle S\in \tau } ; and (ii) S {\displaystyle S} is considered to be endowed with the subspace topology. == Examples == In the following, R {\displaystyle \mathbb {R} } represents the real numbers with their usual topology. The subspace topology of the natural numbers, as a subspace of R {\displaystyle \mathbb {R} } , is the discrete topology. The rational numbers Q {\displaystyle \mathbb {Q} } considered as a subspace of R {\displaystyle \mathbb {R} } do not have the discrete topology ({0} for example is not an open set in Q {\displaystyle \mathbb {Q} } because there is no open subset of R {\displaystyle \mathbb {R} } whose intersection with Q {\displaystyle \mathbb {Q} } can result in only the singleton {0}). 
If a and b are rational, then the intervals (a, b) and [a, b] are respectively open and closed, but if a and b are irrational, then the set of all rational x with a < x < b is both open and closed. The set [0,1] as a subspace of R {\displaystyle \mathbb {R} } is both open and closed, whereas as a subset of R {\displaystyle \mathbb {R} } it is only closed. As a subspace of R {\displaystyle \mathbb {R} } , [0, 1] ∪ [2, 3] is composed of two disjoint open subsets (which happen also to be closed), and is therefore a disconnected space. Let S = [0, 1) be a subspace of the real line R {\displaystyle \mathbb {R} } . Then [0, 1⁄2) is open in S but not in R {\displaystyle \mathbb {R} } (as for example the intersection between (-1⁄2, 1⁄2) and S results in [0, 1⁄2)). Likewise [1⁄2, 1) is closed in S but not in R {\displaystyle \mathbb {R} } (as there is no open subset of R {\displaystyle \mathbb {R} } that can intersect with [0, 1) to result in [1⁄2, 1)). S is both open and closed as a subset of itself but not as a subset of R {\displaystyle \mathbb {R} } . == Properties == The subspace topology has the following characteristic property. Let Y {\displaystyle Y} be a subspace of X {\displaystyle X} and let i : Y → X {\displaystyle i:Y\to X} be the inclusion map. Then for any topological space Z {\displaystyle Z} a map f : Z → Y {\displaystyle f:Z\to Y} is continuous if and only if the composite map i ∘ f {\displaystyle i\circ f} is continuous. This property is characteristic in the sense that it can be used to define the subspace topology on Y {\displaystyle Y} . We list some further properties of the subspace topology. In the following let S {\displaystyle S} be a subspace of X {\displaystyle X} . If f : X → Y {\displaystyle f:X\to Y} is continuous then the restriction to S {\displaystyle S} is continuous. If f : X → Y {\displaystyle f:X\to Y} is continuous then f : X → f ( X ) {\displaystyle f:X\to f(X)} is continuous. 
The closed sets in S {\displaystyle S} are precisely the intersections of S {\displaystyle S} with closed sets in X {\displaystyle X} . If A {\displaystyle A} is a subspace of S {\displaystyle S} then A {\displaystyle A} is also a subspace of X {\displaystyle X} with the same topology. In other words, the subspace topology that A {\displaystyle A} inherits from S {\displaystyle S} is the same as the one it inherits from X {\displaystyle X} . Suppose S {\displaystyle S} is an open subspace of X {\displaystyle X} (so S ∈ τ {\displaystyle S\in \tau } ). Then a subset of S {\displaystyle S} is open in S {\displaystyle S} if and only if it is open in X {\displaystyle X} . Suppose S {\displaystyle S} is a closed subspace of X {\displaystyle X} (so X ∖ S ∈ τ {\displaystyle X\setminus S\in \tau } ). Then a subset of S {\displaystyle S} is closed in S {\displaystyle S} if and only if it is closed in X {\displaystyle X} . If B {\displaystyle B} is a basis for X {\displaystyle X} then B S = { U ∩ S : U ∈ B } {\displaystyle B_{S}=\{U\cap S:U\in B\}} is a basis for S {\displaystyle S} . The topology induced on a subset of a metric space by restricting the metric to this subset coincides with the subspace topology for this subset. == Preservation of topological properties == If a topological space having some topological property implies that its subspaces have that property, then the property is said to be hereditary. If only closed subspaces must share the property, it is called weakly hereditary. Every open and every closed subspace of a completely metrizable space is completely metrizable. Every open subspace of a Baire space is a Baire space. Every closed subspace of a compact space is compact. Being a Hausdorff space is hereditary. Being a normal space is weakly hereditary. Total boundedness is hereditary. Being totally disconnected is hereditary. First countability and second countability are hereditary. 
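The basis-restriction property (B_S = { U ∩ S : U ∈ B } is a basis for the subspace) can also be checked on a finite example. The sketch below is illustrative only; the set X = {1, 2, 3}, the basis B, and the subset S = {2, 3} are made-up, and the topology generated by a basis on a finite set is computed by brute force as the empty set together with all unions of basis elements.

```python
from itertools import combinations

def topology_from_basis(basis):
    """On a finite set: the empty set plus all unions of basis elements."""
    opens = {frozenset()}
    for r in range(1, len(basis) + 1):
        for combo in combinations(basis, r):
            opens.add(frozenset().union(*map(frozenset, combo)))
    return opens

# A made-up basis for a topology on X = {1, 2, 3}
B = [{1}, {1, 2}, {3}]
tau = topology_from_basis(B)

# Restrict the basis to S = {2, 3}: B_S = {U ∩ S : U ∈ B}, dropping empties
S = frozenset({2, 3})
B_S = [set(U) & S for U in B if set(U) & S]

# The topology B_S generates on S equals the subspace topology tau_S
tau_S = {frozenset(U) & S for U in tau}
print(topology_from_basis(B_S) == tau_S)  # -> True
```

Here B_S works out to [{2}, {3}], so S inherits the discrete topology, matching the intersections of S with the opens of X.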
== See also == Quotient space (the dual notion), product topology, direct sum topology == References == Bourbaki, Nicolas (1966). Elements of Mathematics: General Topology. Addison-Wesley. Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978]. Counterexamples in Topology (Dover reprint of 1978 ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-486-68735-3. MR 0507446. Willard, Stephen (2004). General Topology. Dover Publications. ISBN 0-486-43479-6.
Wikipedia/Subspace_(topology)
In any domain of mathematics, a space has a natural topology if there is a topology on the space which is "best adapted" to its study within the domain in question. In many cases this imprecise definition means little more than the assertion that the topology in question arises naturally or canonically (see mathematical jargon) in the given context. Note that in some cases multiple topologies seem "natural". For example, if Y is a subset of a totally ordered set X, then the induced order topology, that is, the order topology of Y under the order inherited from X, is in general coarser than the subspace topology that Y inherits from the order topology of X. "Natural topology" does quite often have a more specific meaning, at least given some prior contextual information: the natural topology is a topology which makes a natural map or collection of maps continuous. This is still imprecise, even once one has specified what the natural maps are, because there may be many topologies with the required property. However, there is often a finest or coarsest topology which makes the given maps continuous, in which case these are obvious candidates for the natural topology. The simplest cases (which nevertheless cover many examples) are the initial topology and the final topology (Willard (1970)). The initial topology is the coarsest topology on a space X which makes a given collection of maps from X to topological spaces Xi continuous. The final topology is the finest topology on a space X which makes a given collection of maps from topological spaces Xi to X continuous. Two of the simplest examples are the natural topologies of subspaces and quotient spaces. The natural topology on a subset of a topological space is the subspace topology. This is the coarsest topology which makes the inclusion map continuous. The natural topology on a quotient of a topological space is the quotient topology. This is the finest topology which makes the quotient map continuous. 
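For a single map into a finite space, the initial topology is simply the set of preimages of open sets (preimages preserve unions and intersections, so these already form a topology); taking the map to be an inclusion recovers the subspace topology, the coarsest topology making the inclusion continuous. A minimal sketch under made-up data (the space X, its topology, and the subset S below are invented for illustration):

```python
def initial_topology(f, tau_cod, domain):
    """Coarsest topology on the domain making f continuous.
    For a single map this is exactly the set of preimages of open sets."""
    return {frozenset(x for x in domain if f[x] in V) for V in tau_cod}

# The subspace topology is the initial topology of the inclusion map
X = {1, 2, 3}
tau = [set(), {1}, {1, 2}, {1, 2, 3}]
S = {2, 3}
inclusion = {s: s for s in S}              # i : S -> X, i(s) = s
tau_S = initial_topology(inclusion, tau, S)
print(sorted(tuple(sorted(U)) for U in tau_S))  # -> [(), (2,), (2, 3)]
```

For a collection of several maps, one would instead take these preimages as a subbasis and close under finite intersections and arbitrary unions; the single-map case shown here needs no such closure step.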
Another example is that any metric space has a natural topology induced by its metric. == See also == Induced topology == References == Willard, Stephen (1970). General Topology. Addison-Wesley, Massachusetts. (Recent edition published by Dover (2004) ISBN 0-486-43479-6.)
Wikipedia/Natural_topology