Wikipedia: Homogeneous coordinates
In mathematics, homogeneous coordinates or projective coordinates, introduced by August Ferdinand Möbius in his 1827 work Der barycentrische Calcul, are a system of coordinates used in projective geometry, just as Cartesian coordinates are used in Euclidean geometry. They have the advantage that the coordinates of points, including points at infinity, can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. Homogeneous coordinates have a range of applications, including computer graphics and 3D computer vision, where they allow affine transformations and, in general, projective transformations to be easily represented by a matrix. They are also used in fundamental elliptic curve cryptography algorithms.

If homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point. Since homogeneous coordinates are also given to points at infinity, the number of coordinates required to allow this extension is one more than the dimension of the projective space being considered. For example, two homogeneous coordinates are required to specify a point on the projective line and three homogeneous coordinates are required to specify a point in the projective plane.

== Introduction ==

The real projective plane can be thought of as the Euclidean plane with additional points added, which are called points at infinity and are considered to lie on a new line, the line at infinity. There is a point at infinity corresponding to each direction (numerically given by the slope of a line), informally defined as the limit of a point that moves in that direction away from the origin. Parallel lines in the Euclidean plane are said to intersect at a point at infinity corresponding to their common direction.
Given a point $(x, y)$ on the Euclidean plane, for any non-zero real number $Z$, the triple $(xZ, yZ, Z)$ is called a set of homogeneous coordinates for the point. By this definition, multiplying the three homogeneous coordinates by a common, non-zero factor gives a new set of homogeneous coordinates for the same point. In particular, $(x, y, 1)$ is such a system of homogeneous coordinates for the point $(x, y)$. For example, the Cartesian point $(1, 2)$ can be represented in homogeneous coordinates as $(1, 2, 1)$ or $(2, 4, 2)$. The original Cartesian coordinates are recovered by dividing the first two positions by the third. Thus unlike Cartesian coordinates, a single point can be represented by infinitely many homogeneous coordinates.

The equation of a line through the origin $(0, 0)$ may be written $nx + my = 0$ where $n$ and $m$ are not both $0$. In parametric form this can be written $x = mt$, $y = -nt$. Let $Z = 1/t$, so the coordinates of a point on the line may be written $(m/Z, -n/Z)$. In homogeneous coordinates this becomes $(m, -n, Z)$. In the limit, as $t$ approaches infinity, in other words, as the point moves away from the origin, $Z$ approaches $0$ and the homogeneous coordinates of the point become $(m, -n, 0)$. Thus we define $(m, -n, 0)$ as the homogeneous coordinates of the point at infinity corresponding to the direction of the line $nx + my = 0$.
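The correspondence above can be sketched in a few lines of code. This is a minimal illustration, not part of the source article; the helper names are arbitrary. Two triples represent the same projective point exactly when they are proportional, which is equivalent to their cross product (as 3-vectors) vanishing.

```python
def to_homogeneous(x, y, z=1.0):
    """Lift a Cartesian point (x, y) to homogeneous coordinates (xZ, yZ, Z)."""
    return (x * z, y * z, z)

def to_cartesian(h):
    """Recover (X/Z, Y/Z) from (X, Y, Z); Z == 0 marks a point at infinity."""
    x, y, z = h
    if z == 0:
        raise ValueError("point at infinity has no Cartesian representation")
    return (x / z, y / z)

def same_point(p, q, tol=1e-12):
    """Two triples represent the same projective point iff they are
    proportional, i.e. their cross product as 3-vectors vanishes."""
    cross = (p[1] * q[2] - p[2] * q[1],
             p[2] * q[0] - p[0] * q[2],
             p[0] * q[1] - p[1] * q[0])
    return all(abs(c) < tol for c in cross)
```

For instance, `same_point((1, 2, 1), (2, 4, 2))` is true, matching the example of $(1, 2, 1)$ and $(2, 4, 2)$ representing the Cartesian point $(1, 2)$.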
As any line of the Euclidean plane is parallel to a line passing through the origin, and since parallel lines have the same point at infinity, the infinite point on every line of the Euclidean plane has been given homogeneous coordinates.

To summarize: Any point in the projective plane is represented by a triple $(X, Y, Z)$, called homogeneous coordinates or projective coordinates of the point, where $X$, $Y$ and $Z$ are not all $0$. The point represented by a given set of homogeneous coordinates is unchanged if the coordinates are multiplied by a common factor. Conversely, two sets of homogeneous coordinates represent the same point if and only if one is obtained from the other by multiplying all the coordinates by the same non-zero constant. When $Z$ is not $0$ the point represented is the point $(X/Z, Y/Z)$ in the Euclidean plane. When $Z$ is $0$ the point represented is a point at infinity. The triple $(0, 0, 0)$ is omitted and does not represent any point. The origin of the Euclidean plane is represented by $(0, 0, 1)$.

=== Notation ===

Some authors use different notations for homogeneous coordinates which help distinguish them from Cartesian coordinates. The use of colons instead of commas, for example $(x : y : z)$ instead of $(x, y, z)$, emphasizes that the coordinates are to be considered ratios. Square brackets, as in $[x, y, z]$, emphasize that multiple sets of coordinates are associated with a single point. Some authors use a combination of colons and square brackets, as in $[x : y : z]$.

== Other dimensions ==

The discussion in the preceding section applies analogously to projective spaces other than the plane.
So the points on the projective line may be represented by pairs of coordinates $(x, y)$, not both zero. In this case, the point at infinity is $(1, 0)$. Similarly the points in projective $n$-space are represented by $(n + 1)$-tuples.

== Other projective spaces ==

The use of real numbers gives homogeneous coordinates of points in the classical case of the real projective spaces; however, any field may be used, in particular, the complex numbers may be used for complex projective space. For example, the complex projective line uses two homogeneous complex coordinates and is known as the Riemann sphere. Other fields, including finite fields, can be used. Homogeneous coordinates for projective spaces can also be created with elements from a division ring (a skew field). However, in this case, care must be taken to account for the fact that multiplication may not be commutative. For a general ring $A$, a projective line over $A$ can be defined with homogeneous factors acting on the left and the projective linear group acting on the right.

== Alternative definition ==

Another definition of the real projective plane can be given in terms of equivalence classes. For non-zero elements of $\mathbb{R}^3$, define $(x_1, y_1, z_1) \sim (x_2, y_2, z_2)$ to mean there is a non-zero $\lambda$ so that $(x_1, y_1, z_1) = (\lambda x_2, \lambda y_2, \lambda z_2)$. Then $\sim$ is an equivalence relation and the projective plane can be defined as the equivalence classes of $\mathbb{R}^3 \setminus \{0\}$.

If $(x, y, z)$ is one of the elements of the equivalence class $p$ then these are taken to be homogeneous coordinates of $p$. Lines in this space are defined to be sets of solutions of equations of the form $ax + by + cz = 0$ where not all of $a$, $b$ and $c$ are zero. Satisfaction of the condition $ax + by + cz = 0$ depends only on the equivalence class of $(x, y, z)$, so the equation defines a set of points in the projective plane. The mapping $(x, y) \to (x, y, 1)$ defines an inclusion from the Euclidean plane to the projective plane and the complement of the image is the set of points with $z = 0$. The equation $z = 0$ is an equation of a line in the projective plane (see the definition of a line in the projective plane), and is called the line at infinity.

The equivalence classes $p$ are the lines through the origin with the origin removed. The origin does not really play an essential part in the previous discussion, so it can be added back in without changing the properties of the projective plane. This produces a variation on the definition, namely the projective plane is defined as the set of lines in $\mathbb{R}^3$ that pass through the origin, and the coordinates of a non-zero element $(x, y, z)$ of a line are taken to be homogeneous coordinates of the line. These lines are now interpreted as points in the projective plane. Again, this discussion applies analogously to other dimensions. So the projective space of dimension $n$ can be defined as the set of lines through the origin in $\mathbb{R}^{n+1}$.
== Homogeneity ==

Homogeneous coordinates are not uniquely determined by a point, so a function defined on the coordinates, say $f(x, y, z)$, does not determine a function defined on points as with Cartesian coordinates. But a condition $f(x, y, z) = 0$ defined on the coordinates, as might be used to describe a curve, determines a condition on points if the function is homogeneous. Specifically, suppose there is a $k$ such that
$$f(\lambda x, \lambda y, \lambda z) = \lambda^k f(x, y, z).$$
If a set of coordinates represents the same point as $(x, y, z)$ then it can be written $(\lambda x, \lambda y, \lambda z)$ for some non-zero value of $\lambda$. Then
$$f(x, y, z) = 0 \iff f(\lambda x, \lambda y, \lambda z) = \lambda^k f(x, y, z) = 0.$$

A polynomial $g(x, y)$ of degree $k$ can be turned into a homogeneous polynomial by replacing $x$ with $x/z$, $y$ with $y/z$ and multiplying by $z^k$, in other words by defining
$$f(x, y, z) = z^k g(x/z, y/z).$$
The resulting function $f$ is a polynomial, so it makes sense to extend its domain to triples where $z = 0$. The process can be reversed by setting $z = 1$, or
$$g(x, y) = f(x, y, 1).$$
The equation $f(x, y, z) = 0$ can then be thought of as the homogeneous form of $g(x, y) = 0$ and it defines the same curve when restricted to the Euclidean plane.
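The homogenization recipe $f(x, y, z) = z^k\, g(x/z, y/z)$ can be checked symbolically. The sketch below (an illustration added here, using SymPy; the parabola $g = y - x^2$ is an arbitrarily chosen example) homogenizes a degree-2 curve and verifies both the homogeneity of the result and the recovery of $g$ by setting $z = 1$.

```python
from sympy import symbols, expand, simplify

x, y, z, lam = symbols("x y z lam")

def homogenize(g, k):
    """Homogenize g(x, y) of degree k: f(x, y, z) = z**k * g(x/z, y/z)."""
    return expand(z**k * g.subs({x: x / z, y: y / z}))

# Example curve: g(x, y) = y - x**2 (a parabola), total degree 2.
g = y - x**2
f = homogenize(g, 2)  # y*z - x**2

# f is homogeneous of degree 2: f(lam*x, lam*y, lam*z) == lam**2 * f(x, y, z)
scaled = f.subs({x: lam * x, y: lam * y, z: lam * z})
assert simplify(scaled - lam**2 * f) == 0

# Setting z = 1 reverses the process and recovers g.
assert expand(f.subs(z, 1) - g) == 0
```

Setting $z = 0$ in $f = yz - x^2$ gives $x^2 = 0$, so the parabola meets the line at infinity at the single point $(0, 1, 0)$, the direction of its axis.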
For example, the homogeneous form of the equation of the line $ax + by + c = 0$ is $ax + by + cz = 0$.

== Line coordinates and duality ==

The equation of a line in the projective plane may be given as $sx + ty + uz = 0$ where $s$, $t$ and $u$ are constants. Each triple $(s, t, u)$ determines a line, the line determined is unchanged if it is multiplied by a non-zero scalar, and at least one of $s$, $t$ and $u$ must be non-zero. So the triple $(s, t, u)$ may be taken to be homogeneous coordinates of a line in the projective plane, that is, line coordinates as opposed to point coordinates. If in $sx + ty + uz = 0$ the letters $s$, $t$ and $u$ are taken as variables and $x$, $y$ and $z$ are taken as constants then the equation becomes an equation of a set of lines in the space of all lines in the plane. Geometrically it represents the set of lines that pass through the point $(x, y, z)$ and may be interpreted as the equation of the point in line coordinates. In the same way, planes in 3-space may be given sets of four homogeneous coordinates, and so on for higher dimensions.

The same relation, $sx + ty + uz = 0$, may be regarded as either the equation of a line or the equation of a point. In general, there is no difference either algebraically or logically between homogeneous coordinates of points and lines. So plane geometry with points as the fundamental elements and plane geometry with lines as the fundamental elements are equivalent except for interpretation.
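A concrete consequence of this symmetry: the cross product of two point triples gives the line through them, and the cross product of two line triples gives their point of intersection. The snippet below is an illustrative sketch (not from the source article) showing two parallel lines meeting at a point at infinity.

```python
import numpy as np

def line_through(p, q):
    """Line coordinates (s, t, u) of the line through two projective points.
    The cross product is orthogonal to both triples, so s*x + t*y + u*z = 0
    holds at each point."""
    return np.cross(p, q)

def meet(l1, l2):
    """By duality, the same cross product applied to two lines in line
    coordinates gives their point of intersection."""
    return np.cross(l1, l2)

# The parallel lines y = 1 and y = 2 in line coordinates:
# 0*x + 1*y - 1*z = 0 and 0*x + 1*y - 2*z = 0.
p = meet(np.array([0, 1, -1]), np.array([0, 1, -2]))
# Their intersection has last coordinate 0: a point at infinity,
# in the common (horizontal) direction of the two lines.
```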
This leads to the concept of duality in projective geometry, the principle that the roles of points and lines can be interchanged in a theorem in projective geometry and the result will also be a theorem. Analogously, the theory of points in projective 3-space is dual to the theory of planes in projective 3-space, and so on for higher dimensions.

== Plücker coordinates ==

Assigning coordinates to lines in projective 3-space is more complicated since it would seem that a total of 8 coordinates, either the coordinates of two points which lie on the line or two planes whose intersection is the line, are required. A useful method, due to Julius Plücker, creates a set of six coordinates as the determinants $x_i y_j - x_j y_i$ ($1 \le i < j \le 4$) from the homogeneous coordinates of two points $(x_1, x_2, x_3, x_4)$ and $(y_1, y_2, y_3, y_4)$ on the line. The Plücker embedding is the generalization of this to create homogeneous coordinates of elements of any dimension $m$ in a projective space of dimension $n$.

== Circular points ==

The homogeneous form of the equation of a circle in the real or complex projective plane is $x^2 + y^2 + 2axz + 2byz + cz^2 = 0$. The intersection of this curve with the line at infinity can be found by setting $z = 0$. This produces the equation $x^2 + y^2 = 0$ which has two solutions over the complex numbers, giving rise to the points with homogeneous coordinates $(1, i, 0)$ and $(1, -i, 0)$ in the complex projective plane. These points are called the circular points at infinity and can be regarded as the common points of intersection of all circles.
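That the circular points lie on every circle can be verified directly: substituting $(1, \pm i, 0)$ into the homogeneous circle equation kills every term involving $z$, and $1^2 + (\pm i)^2 = 0$. A quick check using Python's built-in complex numbers (the parameter values are arbitrary samples):

```python
def circle(x, y, z, a, b, c):
    """Homogeneous form of a circle: x**2 + y**2 + 2axz + 2byz + cz**2."""
    return x**2 + y**2 + 2*a*x*z + 2*b*y*z + c*z**2

# The circular points at infinity (1, +i, 0) and (1, -i, 0) satisfy the
# equation for every choice of the parameters a, b, c.
for a, b, c in [(0, 0, -1), (3, -2, 5), (1.5, 0.25, -4)]:
    assert circle(1, 1j, 0, a, b, c) == 0
    assert circle(1, -1j, 0, a, b, c) == 0
```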
This can be generalized to curves of higher order as circular algebraic curves.

== Change of coordinate systems ==

Just as the selection of axes in the Cartesian coordinate system is somewhat arbitrary, the selection of a single system of homogeneous coordinates out of all possible systems is somewhat arbitrary. Therefore, it is useful to know how the different systems are related to each other. Let $(x, y, z)$ be homogeneous coordinates of a point in the projective plane. A fixed matrix
$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix},$$
with nonzero determinant, defines a new system of coordinates $(X, Y, Z)$ by the equation
$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = A \begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$
Multiplication of $(x, y, z)$ by a scalar results in the multiplication of $(X, Y, Z)$ by the same scalar, and $X$, $Y$ and $Z$ cannot all be $0$ unless $x$, $y$ and $z$ are all zero, since $A$ is nonsingular. So $(X, Y, Z)$ are a new system of homogeneous coordinates for the same point of the projective plane.

== Barycentric coordinates ==

Möbius's original formulation of homogeneous coordinates specified the position of a point as the center of mass (or barycenter) of a system of three point masses placed at the vertices of a fixed triangle. Points within the triangle are represented by positive masses and points outside the triangle are represented by allowing negative masses. Multiplying the masses in the system by a scalar does not affect the center of mass, so this is a special case of a system of homogeneous coordinates.
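The change-of-coordinates rule above rests on two checkable facts: the matrix $A$ commutes with scalar multiplication, and a nonsingular $A$ sends non-zero triples to non-zero triples. A minimal numerical sketch (the particular matrix and point are arbitrary choices for illustration):

```python
import numpy as np

# A fixed invertible matrix defines a new system of homogeneous coordinates.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
assert abs(np.linalg.det(A)) > 1e-9   # nonsingular, so nonzero maps to nonzero

p = np.array([1.0, 2.0, 1.0])          # homogeneous coordinates of a point
lam = 5.0

# Scaling the input triple scales the output triple by the same factor,
# so A is well defined on projective points, not just on triples.
assert np.allclose(A @ (lam * p), lam * (A @ p))
```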
== Trilinear coordinates ==

Let $l$, $m$ and $n$ be three lines in the plane and define a set of coordinates $X$, $Y$ and $Z$ of a point $p$ as the signed distances from $p$ to these three lines. These are called the trilinear coordinates of $p$ with respect to the triangle whose vertices are the pairwise intersections of the lines. Strictly speaking these are not homogeneous, since the values of $X$, $Y$ and $Z$ are determined exactly, not just up to proportionality. There is a linear relationship between them, however, so these coordinates can be made homogeneous by allowing multiples of $(X, Y, Z)$ to represent the same point. More generally, $X$, $Y$ and $Z$ can be defined as constants $p$, $r$ and $q$ times the distances to $l$, $m$ and $n$, resulting in a different system of homogeneous coordinates with the same triangle of reference. This is, in fact, the most general type of system of homogeneous coordinates for points in the plane if none of the lines is the line at infinity.

== Use in computer graphics and computer vision ==

Homogeneous coordinates are ubiquitous in computer graphics because they allow common vector operations such as translation, rotation, scaling and perspective projection to be represented as a matrix by which the vector is multiplied. Since matrix multiplication is associative, any sequence of such operations can be multiplied out into a single matrix, allowing simple and efficient processing. By contrast, using Cartesian coordinates, translations and perspective projection cannot be expressed as matrix multiplications, though other operations can.
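The key payoff described above is that translation, which is not linear in Cartesian coordinates, becomes a matrix on homogeneous coordinates, so it composes with rotations and scalings by plain matrix multiplication. A small illustrative sketch (function names are my own, not from the article):

```python
import numpy as np

def translation(tx, ty):
    """Translation as a 3x3 matrix on homogeneous coordinates
    (impossible as a 2x2 matrix on Cartesian coordinates)."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    """Rotation about the origin, lifted to homogeneous coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

# Compose once, apply to many points: rotate by 90 degrees, then translate.
M = translation(1.0, 0.0) @ rotation(np.pi / 2)
p = np.array([1.0, 0.0, 1.0])   # Cartesian point (1, 0)
q = M @ p                        # rotation sends it to (0, 1); translation to (1, 1)
```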
Modern OpenGL and Direct3D graphics cards take advantage of homogeneous coordinates to implement a vertex shader efficiently using vector processors with 4-element registers.

For example, in perspective projection, a position in space is associated with the line from it to a fixed point called the center of projection. The point is then mapped to a plane by finding the point of intersection of that plane and the line. This produces an accurate representation of how a three-dimensional object appears to the eye. In the simplest situation, the center of projection is the origin and points are mapped to the plane $z = 1$, working for the moment in Cartesian coordinates. For a given point in space, $(x, y, z)$, the point where the line and the plane intersect is $(x/z, y/z, 1)$. Dropping the now superfluous $z$ coordinate, this becomes $(x/z, y/z)$. In homogeneous coordinates, the point $(x, y, z)$ is represented by $(xw, yw, zw, w)$ and the point it maps to on the plane is represented by $(xw, yw, zw)$, so projection can be represented in matrix form as
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
Matrices representing other geometric transformations can be combined with this and each other by matrix multiplication. As a result, any perspective projection of space can be represented as a single matrix.
Wikipedia: Homogeneous function
In mathematics, a homogeneous function is a function of several variables such that the following holds: If each of the function's arguments is multiplied by the same scalar, then the function's value is multiplied by some power of this scalar; the power is called the degree of homogeneity, or simply the degree. That is, if $k$ is an integer, a function $f$ of $n$ variables is homogeneous of degree $k$ if
$$f(sx_1, \ldots, sx_n) = s^k f(x_1, \ldots, x_n)$$
for every $x_1, \ldots, x_n$ and $s \neq 0$. This is also referred to as a $k$th-degree or $k$th-order homogeneous function. For example, a homogeneous polynomial of degree $k$ defines a homogeneous function of degree $k$.

The above definition extends to functions whose domain and codomain are vector spaces over a field $F$: a function $f : V \to W$ between two $F$-vector spaces is homogeneous of degree $k$ if
$$f(s\mathbf{v}) = s^k f(\mathbf{v})$$
for all nonzero $s \in F$ and $\mathbf{v} \in V$. This definition is often further generalized to functions whose domain is not $V$, but a cone in $V$, that is, a subset $C$ of $V$ such that $\mathbf{v} \in C$ implies $s\mathbf{v} \in C$ for every nonzero scalar $s$.

In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity called positive homogeneity is often considered, by requiring only that the above identities hold for $s > 0$, and allowing any real number $k$ as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point.
A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes.

== Definitions ==

The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of the 19th century, the concept was naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article.

There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers. The second one applies over the field of real numbers, or, more generally, over an ordered field. This definition restricts the scaling factor that occurs in the definition to positive values, and is therefore called positive homogeneity, the qualifier positive often being omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous. The restriction of the scaling factor to positive real values also allows considering homogeneous functions whose degree of homogeneity is any real number.

=== General homogeneity ===

Let $V$ and $W$ be two vector spaces over a field $F$. A linear cone in $V$ is a subset $C$ of $V$ such that $sx \in C$ for all $x \in C$ and all nonzero $s \in F$. A homogeneous function $f$ from $V$ to $W$ is a partial function from $V$ to $W$ that has a linear cone $C$ as its domain, and satisfies
$$f(sx) = s^k f(x)$$
for some integer $k$, every $x \in C$, and every nonzero $s \in F$. The integer $k$ is called the degree of homogeneity, or simply the degree of $f$.

A typical example of a homogeneous function of degree $k$ is the function defined by a homogeneous polynomial of degree $k$. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its cone of definition is the linear cone of the points where the value of the denominator is not zero. Homogeneous functions play a fundamental role in projective geometry since any homogeneous function $f$ from $V$ to $W$ defines a well-defined function between the projectivizations of $V$ and $W$. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomials of the same degree) play an essential role in the Proj construction of projective schemes.

=== Positive homogeneity ===

When working over the real numbers, or more generally over an ordered field, it is commonly convenient to consider positive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzero $s$" replaced by "$s > 0$" in the definitions of a linear cone and a homogeneous function. This change allows considering (positively) homogeneous functions with any real number as their degree, since exponentiation with a positive real base is well defined. Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1.
They are not homogeneous since $|-x| = |x| \neq -|x|$ if $x \neq 0$. This remains true in the complex case, since the field of complex numbers $\mathbb{C}$ and every complex vector space can be considered as real vector spaces. Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.

== Examples ==

=== Simple example ===

The function $f(x, y) = x^2 + y^2$ is homogeneous of degree 2:
$$f(tx, ty) = (tx)^2 + (ty)^2 = t^2\left(x^2 + y^2\right) = t^2 f(x, y).$$

=== Absolute value and norms ===

The absolute value of a real number is a positively homogeneous function of degree 1, which is not homogeneous, since $|sx| = s|x|$ if $s > 0$, and $|sx| = -s|x|$ if $s < 0$. The absolute value of a complex number is a positively homogeneous function of degree 1 over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers. More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or seminorm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for applying the definition of a positively homogeneous function.
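The distinction between homogeneity and positive homogeneity can be tested numerically at sample scalars. A minimal sketch (the checker and sample values are my own illustration): $x^2$ satisfies $f(sx) = s^2 f(x)$ for all $s$, while $|x|$ satisfies $f(sx) = s\,f(x)$ only for $s > 0$.

```python
def is_homogeneous_at(f, x, k, s_values, tol=1e-9):
    """Check f(s*x) == s**k * f(x) at the given sample scalars s."""
    return all(abs(f(s * x) - s**k * f(x)) < tol for s in s_values)

square = lambda t: t * t

# x**2 is homogeneous of degree 2 for positive and negative scalars alike.
assert is_homogeneous_at(square, 3.0, 2, [0.5, 2.0, -1.0, -4.0])

# abs is positively homogeneous of degree 1 ...
assert is_homogeneous_at(abs, 3.0, 1, [0.5, 2.0])
# ... but not homogeneous: abs(-1 * 3) = 3, while (-1) * abs(3) = -3.
assert not is_homogeneous_at(abs, 3.0, 1, [-1.0])
```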
=== Linear maps ===

Any linear map $f : V \to W$ between vector spaces over a field $F$ is homogeneous of degree 1, by the definition of linearity:
$$f(\alpha \mathbf{v}) = \alpha f(\mathbf{v})$$
for all $\alpha \in F$ and $\mathbf{v} \in V$. Similarly, any multilinear function $f : V_1 \times V_2 \times \cdots \times V_n \to W$ is homogeneous of degree $n$, by the definition of multilinearity:
$$f(\alpha \mathbf{v}_1, \ldots, \alpha \mathbf{v}_n) = \alpha^n f(\mathbf{v}_1, \ldots, \mathbf{v}_n)$$
for all $\alpha \in F$ and $\mathbf{v}_1 \in V_1, \mathbf{v}_2 \in V_2, \ldots, \mathbf{v}_n \in V_n$.

=== Homogeneous polynomials ===

Monomials in $n$ variables define homogeneous functions $f : \mathbb{F}^n \to \mathbb{F}$. For example,
$$f(x, y, z) = x^5 y^2 z^3$$
is homogeneous of degree 10 since
$$f(\alpha x, \alpha y, \alpha z) = (\alpha x)^5 (\alpha y)^2 (\alpha z)^3 = \alpha^{10} x^5 y^2 z^3 = \alpha^{10} f(x, y, z).$$
The degree is the sum of the exponents on the variables; in this example, $10 = 5 + 2 + 3$. A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example,
$$x^5 + 2x^3 y^2 + 9xy^4$$
is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.
Given a homogeneous polynomial of degree $k$ with real coefficients that takes only positive values, one gets a positively homogeneous function of degree $k/d$ by raising it to the power $1/d$ for a positive integer $d$. So for example, the following function is positively homogeneous of degree 1 but not homogeneous:
$$\left(x^2 + y^2 + z^2\right)^{\frac{1}{2}}.$$

=== Min/max ===

For every set of weights $w_1, \ldots, w_n$, the following functions are positively homogeneous of degree 1, but not homogeneous:
$$\min\left(\frac{x_1}{w_1}, \ldots, \frac{x_n}{w_n}\right)$$ (Leontief utilities)
$$\max\left(\frac{x_1}{w_1}, \ldots, \frac{x_n}{w_n}\right)$$

=== Rational functions ===

Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator. Thus, if $f$ is homogeneous of degree $m$ and $g$ is homogeneous of degree $n$, then $f/g$ is homogeneous of degree $m - n$ away from the zeros of $g$.

=== Non-examples ===

The homogeneous real functions of a single variable have the form $x \mapsto cx^k$ for some constant $c$. So the affine function $x \mapsto x + 5$, the natural logarithm $x \mapsto \ln(x)$, and the exponential function $x \mapsto e^x$ are not homogeneous.

== Euler's theorem ==

Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solutions of a specific partial differential equation.
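The partial differential equation in question is Euler's identity $\sum_i x_i\, \partial f / \partial x_i = k f$. The sketch below is an illustrative numerical check (the example function and finite-difference helper are my own) for a degree-2 function of two variables, where the identity reads $x\, f_x + y\, f_y = 2f$.

```python
def f(x, y):
    """An example function, homogeneous of degree 2."""
    return x**2 + 3*x*y + y**2

def partial(g, i, point, h=1e-6):
    """Central-difference approximation of the i-th partial derivative of g."""
    p_hi = list(point); p_hi[i] += h
    p_lo = list(point); p_lo[i] -= h
    return (g(*p_hi) - g(*p_lo)) / (2 * h)

x0, y0, k = 1.5, -0.7, 2
# Euler's identity: x * df/dx + y * df/dy == k * f(x, y)
lhs = x0 * partial(f, 0, (x0, y0)) + y0 * partial(f, 1, (x0, y0))
assert abs(lhs - k * f(x0, y0)) < 1e-6
```

Here $f_x = 2x + 3y$ and $f_y = 3x + 2y$, so $x f_x + y f_y = 2x^2 + 6xy + 2y^2 = 2f$, as the check confirms.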
More precisely: if f : R n ∖ { 0 } → R {\displaystyle f:\mathbb {R} ^{n}\setminus \{0\}\to \mathbb {R} } is continuously differentiable, then f {\displaystyle f} is positively homogeneous of degree k {\displaystyle k} if and only if it satisfies the partial differential equation k f ( x ) = ∑ i = 1 n x i ∂ f ∂ x i ( x ) . {\displaystyle kf(\mathbf {x} )=\sum _{i=1}^{n}x_{i}{\frac {\partial f}{\partial x_{i}}}(\mathbf {x} ).} As a consequence, if f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is continuously differentiable and homogeneous of degree k , {\displaystyle k,} its first-order partial derivatives ∂ f / ∂ x i {\displaystyle \partial f/\partial x_{i}} are homogeneous of degree k − 1. {\displaystyle k-1.} This results from Euler's theorem by differentiating the partial differential equation with respect to one variable. In the case of a function of a single real variable ( n = 1 {\displaystyle n=1} ), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form f ( x ) = c + x k {\displaystyle f(x)=c_{+}x^{k}} for x > 0 {\displaystyle x>0} and f ( x ) = c − x k {\displaystyle f(x)=c_{-}x^{k}} for x < 0. {\displaystyle x<0.} The constants c + {\displaystyle c_{+}} and c − {\displaystyle c_{-}} are not necessarily the same, as is the case for the absolute value. == Application to differential equations == The substitution v = y / x {\displaystyle v=y/x} converts the ordinary differential equation I ( x , y ) d y d x + J ( x , y ) = 0 , {\displaystyle I(x,y){\frac {\mathrm {d} y}{\mathrm {d} x}}+J(x,y)=0,} where I {\displaystyle I} and J {\displaystyle J} are homogeneous functions of the same degree, into the separable differential equation x d v d x = − J ( 1 , v ) I ( 1 , v ) − v . {\displaystyle x{\frac {\mathrm {d} v}{\mathrm {d} x}}=-{\frac {J(1,v)}{I(1,v)}}-v.} == Generalizations == === Homogeneity under a monoid action === The definitions given above are all specialized cases of the following more general notion of homogeneity in which X {\displaystyle X} can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid.
Let M {\displaystyle M} be a monoid with identity element 1 ∈ M , {\displaystyle 1\in M,} let X {\displaystyle X} and Y {\displaystyle Y} be sets, and suppose that on both X {\displaystyle X} and Y {\displaystyle Y} there are defined monoid actions of M . {\displaystyle M.} Let k {\displaystyle k} be a non-negative integer and let f : X → Y {\displaystyle f:X\to Y} be a map. Then f {\displaystyle f} is said to be homogeneous of degree k {\displaystyle k} over M {\displaystyle M} if for every x ∈ X {\displaystyle x\in X} and m ∈ M , {\displaystyle m\in M,} f ( m x ) = m k f ( x ) . {\displaystyle f(mx)=m^{k}f(x).} If in addition there is a function M → M , {\displaystyle M\to M,} denoted by m ↦ | m | , {\displaystyle m\mapsto |m|,} called an absolute value then f {\displaystyle f} is said to be absolutely homogeneous of degree k {\displaystyle k} over M {\displaystyle M} if for every x ∈ X {\displaystyle x\in X} and m ∈ M , {\displaystyle m\in M,} f ( m x ) = | m | k f ( x ) . {\displaystyle f(mx)=|m|^{k}f(x).} A function is homogeneous over M {\displaystyle M} (resp. absolutely homogeneous over M {\displaystyle M} ) if it is homogeneous of degree 1 {\displaystyle 1} over M {\displaystyle M} (resp. absolutely homogeneous of degree 1 {\displaystyle 1} over M {\displaystyle M} ). More generally, it is possible for the symbols m k {\displaystyle m^{k}} to be defined for m ∈ M {\displaystyle m\in M} with k {\displaystyle k} being something other than an integer (for example, if M {\displaystyle M} is the real numbers and k {\displaystyle k} is a non-zero real number then m k {\displaystyle m^{k}} is defined even though k {\displaystyle k} is not an integer). If this is the case then f {\displaystyle f} will be called homogeneous of degree k {\displaystyle k} over M {\displaystyle M} if the same equality holds: f ( m x ) = m k f ( x ) for every x ∈ X and m ∈ M . 
{\displaystyle f(mx)=m^{k}f(x)\quad {\text{ for every }}x\in X{\text{ and }}m\in M.} The notion of being absolutely homogeneous of degree k {\displaystyle k} over M {\displaystyle M} is generalized similarly. === Distributions (generalized functions) === A continuous function f {\displaystyle f} on R n {\displaystyle \mathbb {R} ^{n}} is homogeneous of degree k {\displaystyle k} if and only if ∫ R n f ( t x ) φ ( x ) d x = t k ∫ R n f ( x ) φ ( x ) d x {\displaystyle \int _{\mathbb {R} ^{n}}f(tx)\varphi (x)\,dx=t^{k}\int _{\mathbb {R} ^{n}}f(x)\varphi (x)\,dx} for all compactly supported test functions φ {\displaystyle \varphi } ; and nonzero real t . {\displaystyle t.} Equivalently, making a change of variable y = t x , {\displaystyle y=tx,} f {\displaystyle f} is homogeneous of degree k {\displaystyle k} if and only if t − n ∫ R n f ( y ) φ ( y t ) d y = t k ∫ R n f ( y ) φ ( y ) d y {\displaystyle t^{-n}\int _{\mathbb {R} ^{n}}f(y)\varphi \left({\frac {y}{t}}\right)\,dy=t^{k}\int _{\mathbb {R} ^{n}}f(y)\varphi (y)\,dy} for all t {\displaystyle t} and all test functions φ . {\displaystyle \varphi .} The last display makes it possible to define homogeneity of distributions. A distribution S {\displaystyle S} is homogeneous of degree k {\displaystyle k} if t − n ⟨ S , φ ∘ μ t ⟩ = t k ⟨ S , φ ⟩ {\displaystyle t^{-n}\langle S,\varphi \circ \mu _{t}\rangle =t^{k}\langle S,\varphi \rangle } for all nonzero real t {\displaystyle t} and all test functions φ . {\displaystyle \varphi .} Here the angle brackets denote the pairing between distributions and test functions, and μ t : R n → R n {\displaystyle \mu _{t}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} is the mapping of scalar division by the real number t . 
{\displaystyle t.} == Glossary of name variants == Let f : X → Y {\displaystyle f:X\to Y} be a map between two vector spaces over a field F {\displaystyle \mathbb {F} } (usually the real numbers R {\displaystyle \mathbb {R} } or complex numbers C {\displaystyle \mathbb {C} } ). If S {\displaystyle S} is a set of scalars, such as Z , {\displaystyle \mathbb {Z} ,} [ 0 , ∞ ) , {\displaystyle [0,\infty ),} or R {\displaystyle \mathbb {R} } for example, then f {\displaystyle f} is said to be homogeneous over S {\displaystyle S} if f ( s x ) = s f ( x ) {\textstyle f(sx)=sf(x)} for every x ∈ X {\displaystyle x\in X} and scalar s ∈ S . {\displaystyle s\in S.} For instance, every additive map between vector spaces is homogeneous over the rational numbers S := Q {\displaystyle S:=\mathbb {Q} } although it might not be homogeneous over the real numbers S := R . {\displaystyle S:=\mathbb {R} .} The following commonly encountered special cases and variations of this definition have their own terminology: (Strict) Positive homogeneity: f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} for all x ∈ X {\displaystyle x\in X} and all positive real r > 0. {\displaystyle r>0.} When the function f {\displaystyle f} is valued in a vector space or field, then this property is logically equivalent to nonnegative homogeneity, which by definition means: f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} for all x ∈ X {\displaystyle x\in X} and all non-negative real r ≥ 0. {\displaystyle r\geq 0.} It is for this reason that positive homogeneity is often also called nonnegative homogeneity. However, for functions valued in the extended real numbers [ − ∞ , ∞ ] = R ∪ { ± ∞ } , {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},} which appear in fields like convex analysis, the multiplication 0 ⋅ f ( x ) {\displaystyle 0\cdot f(x)} will be undefined whenever f ( x ) = ± ∞ {\displaystyle f(x)=\pm \infty } and so these statements are not necessarily always interchangeable. 
This property is used in the definition of a sublinear function. Minkowski functionals are exactly those non-negative extended real-valued functions with this property. Real homogeneity: f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} for all x ∈ X {\displaystyle x\in X} and all real r . {\displaystyle r.} This property is used in the definition of a real linear functional. Homogeneity: f ( s x ) = s f ( x ) {\displaystyle f(sx)=sf(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} It is emphasized that this definition depends on the scalar field F {\displaystyle \mathbb {F} } underlying the domain X . {\displaystyle X.} This property is used in the definition of linear functionals and linear maps. Conjugate homogeneity: f ( s x ) = s ¯ f ( x ) {\displaystyle f(sx)={\overline {s}}f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} If F = C {\displaystyle \mathbb {F} =\mathbb {C} } then s ¯ {\displaystyle {\overline {s}}} typically denotes the complex conjugate of s {\displaystyle s} . But more generally, as with semilinear maps for example, s ¯ {\displaystyle {\overline {s}}} could be the image of s {\displaystyle s} under some distinguished automorphism of F . {\displaystyle \mathbb {F} .} Along with additivity, this property is assumed in the definition of an antilinear map. It is also assumed that one of the two coordinates of a sesquilinear form has this property (such as the inner product of a Hilbert space). All of the above definitions can be generalized by replacing the condition f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} with f ( r x ) = | r | f ( x ) , {\displaystyle f(rx)=|r|f(x),} in which case that definition is prefixed with the word "absolute" or "absolutely." For example, Absolute homogeneity: f ( s x ) = | s | f ( x ) {\displaystyle f(sx)=|s|f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . 
{\displaystyle s\in \mathbb {F} .} This property is used in the definition of a seminorm and a norm. If k {\displaystyle k} is a fixed real number then the above definitions can be further generalized by replacing the condition f ( r x ) = r f ( x ) {\displaystyle f(rx)=rf(x)} with f ( r x ) = r k f ( x ) {\displaystyle f(rx)=r^{k}f(x)} (and similarly, by replacing f ( r x ) = | r | f ( x ) {\displaystyle f(rx)=|r|f(x)} with f ( r x ) = | r | k f ( x ) {\displaystyle f(rx)=|r|^{k}f(x)} for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree k {\displaystyle k} " (where in particular, all of the above definitions are "of degree 1 {\displaystyle 1} "). For instance, Real homogeneity of degree k {\displaystyle k} : f ( r x ) = r k f ( x ) {\displaystyle f(rx)=r^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all real r . {\displaystyle r.} Homogeneity of degree k {\displaystyle k} : f ( s x ) = s k f ( x ) {\displaystyle f(sx)=s^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} Absolute real homogeneity of degree k {\displaystyle k} : f ( r x ) = | r | k f ( x ) {\displaystyle f(rx)=|r|^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all real r . {\displaystyle r.} Absolute homogeneity of degree k {\displaystyle k} : f ( s x ) = | s | k f ( x ) {\displaystyle f(sx)=|s|^{k}f(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s ∈ F . {\displaystyle s\in \mathbb {F} .} A nonzero continuous function that is homogeneous of degree k {\displaystyle k} on R n ∖ { 0 } {\displaystyle \mathbb {R} ^{n}\backslash \lbrace 0\rbrace } extends continuously to R n {\displaystyle \mathbb {R} ^{n}} if and only if k > 0. 
{\displaystyle k>0.} == See also == Homogeneous space Triangle center function – Point in a triangle that can be seen as its middle under some criteria
Wikipedia:Homogeneous linear equation#0
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. == Elementary examples == === Trivial example === The system of one equation in one unknown 2 x = 4 {\displaystyle 2x=4} has the solution x = 2. 
{\displaystyle x=2.} However, most interesting linear systems have at least two equations. === Simple nontrivial example === The simplest kind of nontrivial linear system involves two equations and two variables: 2 x + 3 y = 6 4 x + 9 y = 15 . {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} : x = 3 − 3 2 y . {\displaystyle x=3-{\frac {3}{2}}y.} Now substitute this expression for x into the bottom equation: 4 ( 3 − 3 2 y ) + 9 y = 15. {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) == General form == A general system of m linear equations with n unknowns and coefficients can be written as { a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m , {\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. 
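The substitution steps of the two-equation example above can be carried out in exact rational arithmetic. The following sketch (illustrative, not part of the article) reproduces the solution x = 3/2, y = 1:

```python
from fractions import Fraction

# Illustrative sketch of the substitution method for
#   2x + 3y = 6
#   4x + 9y = 15
# Solve the first equation for x:  x = 3 - (3/2) y.
# Substitute into the second:      4*(3 - (3/2) y) + 9y = 15, i.e. 3y = 3.
y = Fraction(15 - 4 * 3) / (9 - 4 * Fraction(3, 2))   # y = 3/3 = 1
x = 3 - Fraction(3, 2) * y                            # back-substitute

assert (x, y) == (Fraction(3, 2), Fraction(1))
assert 2 * x + 3 * y == 6 and 4 * x + 9 * y == 15
```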
=== Vector equation === One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. x 1 [ a 11 a 21 ⋮ a m 1 ] + x 2 [ a 12 a 22 ⋮ a m 2 ] + ⋯ + x n [ a 1 n a 2 n ⋮ a m n ] = [ b 1 b 2 ⋮ b m ] {\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side (RHS), and otherwise not guaranteed. === Matrix equation === The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] . 
{\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix. == Solution set == A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: The system has infinitely many solutions. The system has a unique solution. The system has no solution. === Geometric interpretation === For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n.
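The vector and matrix views of a system coincide: the product A x is exactly the linear combination x₁a₁ + ⋯ + xₙaₙ of the columns of A. The sketch below (illustrative, not part of the article) checks this for the system and solution from the introduction:

```python
from fractions import Fraction as F

# Coefficient matrix, right-hand side, and known solution of the
# introductory system 3x+2y-z=1, 2x-2y+4z=-2, -x+(1/2)y-z=0.
A = [[F(3), F(2), F(-1)],
     [F(2), F(-2), F(4)],
     [F(-1), F(1, 2), F(-1)]]
b = [F(1), F(-2), F(0)]
x = [F(1), F(-2), F(-2)]

# Matrix-vector product: row i of A dotted with x.
Ax_rows = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

# The same product, read as a linear combination of the columns of A.
Ax_cols = [sum(x[j] * A[i][j] for j in range(3)) for i in range(3)]

assert Ax_rows == Ax_cols == b
```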
=== General behavior === In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system. In general, a system with the same number of equations and unknowns has a single unique solution. In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations. This trichotomy is illustrated in the case of two variables: a system whose equations describe a single line has infinitely many solutions, namely all of the points on that line; a system of two lines crossing at a point has a single unique solution, namely their intersection; a system of three lines sharing no common point has no solutions. It must be kept in mind that these examples show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns.
When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations 3 x + 2 y = 6 and 6 x + 4 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12} are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations x − 2 y = − 1 3 x + 5 y = 8 4 x + 3 y = 7 {\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}} are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. === Consistency === A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, that may always be rewritten as the statement 0 = 1. For example, the equations 3 x + 2 y = 6 and 3 x + 2 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12} are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. 
For example, the equations x + y = 1 2 x + y = 1 3 x + 2 y = 3 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}} are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. === Equivalence === Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. 
It follows that two linear systems are equivalent if and only if they have the same solution set. == Solving a linear system == There are several algorithms for solving a system of linear equations. === Describing the solution === When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}} The solution set to this system can be described by the following equations: x = − 7 z − 1 and y = 3 z + 2 . {\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}} Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane, or higher-dimensional set. 
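The parametric description can be checked directly: for any value of the free variable z, the point (−7z − 1, 3z + 2, z) should satisfy both equations. A small sketch (illustrative, not part of the article):

```python
# Illustrative check of the parametric solution of
#   x + 3y - 2z = 5
#   3x + 5y + 6z = 7
# with free variable z:  x = -7z - 1,  y = 3z + 2.
for z in range(-5, 6):
    x = -7 * z - 1
    y = 3 * z + 2
    assert x + 3 * y - 2 * z == 5
    assert 3 * x + 5 * y + 6 * z == 7
```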
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: y = − 3 7 x + 11 7 and z = − 1 7 x − 1 7 . {\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}} Here x is the free variable, and y and z are dependent. === Elimination of variables === The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: In the first equation, solve for one of the variables in terms of the others. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown. Repeat steps 1 and 2 until the system is reduced to a single linear equation. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: { x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}} Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equation yields { y = 3 z + 2 y = 7 2 z + 1 {\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}} Since the left-hand sides of both of these equations equal y, we can equate their right-hand sides: 3 z + 2 = 7 2 z + 1 ⇒ z = 2 {\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}} Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} .
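The elimination steps above translate almost line by line into code. The following sketch (illustrative, not part of the article) reproduces the solution (−15, 8, 2) in exact arithmetic:

```python
from fractions import Fraction as F

# Eliminate x using the first equation:  x = 5 + 2z - 3y.
# Substituting into the other two gives  y = 3z + 2  and  y = (7/2)z + 1.
# Equating the right-hand sides:  3z + 2 = (7/2)z + 1.
z = F(2 - 1) / (F(7, 2) - 3)  # z = 1 / (1/2) = 2
y = 3 * z + 2                 # y = 8
x = 5 + 2 * z - 3 * y         # x = -15

assert (x, y, z) == (-15, 8, 2)
```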
=== Row reduction === In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] . {\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}} This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above: [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 0 1 2 ] ∼ [ 1 3 − 2 5 0 1 0 8 0 0 1 2 ] ∼ [ 1 3 0 9 0 1 0 8 0 0 1 2 ] ∼ [ 1 0 0 − 15 0 1 0 8 0 0 1 2 ] . {\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}} The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. 
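A minimal Gauss–Jordan routine over exact rationals can reproduce the reduced row echelon form above using only the three elementary row operations. This is an illustrative sketch (partial pivoting and singular systems are not handled), not part of the article:

```python
from fractions import Fraction as F

def gauss_jordan(M):
    """Reduce an augmented matrix to reduced row echelon form in place.
    Sketch only: assumes a nonzero pivot exists in every column."""
    rows = len(M)
    for i in range(rows):
        # Type 1: swap up a row with a nonzero entry in the pivot column.
        pivot = next(r for r in range(i, rows) if M[r][i] != 0)
        M[i], M[pivot] = M[pivot], M[i]
        # Type 2: scale the pivot row so the pivot becomes 1.
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        # Type 3: subtract multiples of the pivot row from every other row.
        for r in range(rows):
            if r != i and M[r][i] != 0:
                factor = M[r][i]
                M[r] = [a - factor * b for a, b in zip(M[r], M[i])]
    return M

aug = [[F(1), F(3), F(-2), F(5)],
       [F(3), F(5), F(6), F(7)],
       [F(2), F(4), F(3), F(8)]]
gauss_jordan(aug)
assert [row[3] for row in aug] == [-15, 8, 2]   # x = -15, y = 8, z = 2
```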
A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. === Cramer's rule === Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}} is given by x = | 5 3 − 2 7 5 6 8 4 3 | | 1 3 − 2 3 5 6 2 4 3 | , y = | 1 5 − 2 3 7 6 2 8 3 | | 1 3 − 2 3 5 6 2 4 3 | , z = | 1 3 5 3 5 7 2 4 8 | | 1 3 − 2 3 5 6 2 4 3 | . {\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.} For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. 
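Cramer's rule for the 3 × 3 example can be spelled out with cofactor expansion of the determinant. An illustrative sketch (not part of the article):

```python
from fractions import Fraction as F

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(m, col, v):
    """Return a copy of m with the given column replaced by the vector v."""
    return [[v[r] if c == col else m[r][c] for c in range(3)] for r in range(3)]

A = [[1, 3, -2], [3, 5, 6], [2, 4, 3]]
b = [5, 7, 8]

d = det3(A)   # d = -4 is nonzero, so the system has a unique solution
solution = [F(det3(replace_col(A, j, b)), d) for j in range(3)]
assert solution == [-15, 8, 2]
```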
=== Matrix solution === If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by x = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} } where A − 1 {\displaystyle A^{-1}} is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A + {\displaystyle A^{+}} , as follows: x = A + b + ( I − A + A ) w {\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} } where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to x = A − 1 b + ( I − A − 1 A ) w = A − 1 b + ( I − I ) w = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} } as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. 
In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation. === Other methods === While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. 
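The payoff of the LU decomposition mentioned above is that the expensive factorization is performed once and then reused cheaply for each new right-hand side. A rough sketch in Python, without pivoting (so it assumes the pivots encountered are nonzero; production code would reorder rows):

```python
def lu_decompose(A):
    """Doolittle factorization A = L U; returns (L, U)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        for row in range(col + 1, n):
            f = U[row][col] / U[col][col]        # elimination multiplier
            L[row][col] = f
            U[row] = [u - f * v for u, v in zip(U[row], U[col])]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # Forward-substitute L y = b, then back-substitute U x = y.
    y = []
    for i in range(n):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[1.0, 3.0, -2.0], [3.0, 5.0, 6.0], [2.0, 4.0, 3.0]]
L, U = lu_decompose(A)                    # factor once...
print(lu_solve(L, U, [5.0, 7.0, 8.0]))    # ...then solve for many b cheaply
```

Each additional right-hand side costs only the two triangular solves, which is the organizational advantage described above.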
One example of an iterative method is the Jacobi method, where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})} When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. There is also a quantum algorithm for linear systems of equations. == Homogeneous systems == A system of linear equations is homogeneous if all of the constant terms are zero: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0. {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}} A homogeneous system is equivalent to a matrix equation of the form A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. === Homogeneous solution set === Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. 
This solution set has the following additional properties: If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. === Relation to nonhomogeneous systems === There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: A x = b and A x = 0 . {\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .} Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as { p + v : v is any solution to A x = 0 } . {\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.} Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. 
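The Jacobi iteration described earlier takes only a few lines of code. A toy sketch in Python, using a made-up matrix chosen to be strictly diagonally dominant so that the iteration is guaranteed to converge (the 3 × 3 example system used elsewhere in this article is not diagonally dominant and would need reordering or a different method):

```python
def jacobi(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n                       # initial guess x^(0)
    for _ in range(max_iter):
        # x_i^(k+1) = (b_i - sum_{j != i} A_ij * x_j^(k)) / A_ii,
        # i.e. x^(k+1) = D^{-1} (b - (L + U) x^(k)).
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new                # successive guesses agree: converged
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Hypothetical strictly diagonally dominant system with exact solution (1, 2, -1).
A = [[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]]
b = [11.0, 20.0, -7.0]
print([round(v, 6) for v in jacobi(A, b)])  # [1.0, 2.0, -1.0]
```

The stopping test on the difference between successive guesses is exactly the convergence criterion stated in the text.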
== See also == Arrangement of hyperplanes Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations Coates graph – A mathematical graph for solution of linear equations LAPACK – Software library for numerical linear algebra Linear equation over a ring Linear least squares – Least squares approximation of linear functions to data Matrix decomposition – Representation of a matrix as a product Matrix splitting – Representation of a matrix as a sum NAG Numerical Library – Software library of numerical-analysis algorithms Rybicki Press algorithm – An algorithm for inverting a matrix Simultaneous equations – Set of equations to be solved together == References == == Bibliography == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993 Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3 Whitelaw, T. A. 
(1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5 == Further reading == Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0. Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7. Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001. Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3. Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International. Leon, Steven J. (2006). Linear Algebra With Applications (7th ed.). Pearson Prentice Hall. Strang, Gilbert (2005). Linear Algebra and Its Applications. Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679. == External links == Media related to System of linear equations at Wikimedia Commons
Wikipedia:Homography (computer vision)#0
In the field of computer vision, any two images of the same planar surface in space are related by a homography (assuming a pinhole camera model). This has many practical applications, such as image rectification, image registration, or camera motion—rotation and translation—between two images. Once camera resectioning has been done from an estimated homography matrix, this information may be used for navigation, or to insert models of 3D objects into an image or video, so that they are rendered with the correct perspective and appear to have been part of the original scene (see Augmented reality). == 3D plane to plane equation == We have two cameras a and b, looking at points P i {\displaystyle P_{i}} in a plane. Passing from the projection b p i = ( b u i ; b v i ; 1 ) {\displaystyle {}^{b}p_{i}=\left({}^{b}u_{i};{}^{b}v_{i};1\right)} of P i {\displaystyle P_{i}} in b to the projection a p i = ( a u i ; a v i ; 1 ) {\displaystyle {}^{a}p_{i}=\left({}^{a}u_{i};{}^{a}v_{i};1\right)} of P i {\displaystyle P_{i}} in a: a p i = b z i a z i K a ⋅ H a b ⋅ K b − 1 ⋅ b p i {\displaystyle {}^{a}p_{i}={\frac {{}^{b}z_{i}}{{}^{a}z_{i}}}K_{a}\cdot H_{ab}\cdot K_{b}^{-1}\cdot {}^{b}p_{i}} where a z i {\displaystyle {}^{a}z_{i}} and b z i {\displaystyle {}^{b}z_{i}} are the z coordinates of P in each camera frame and where the homography matrix H a b {\displaystyle H_{ab}} is given by H a b = R − t n T d {\displaystyle H_{ab}=R-{\frac {tn^{T}}{d}}} . R {\displaystyle R} is the rotation matrix by which b is rotated in relation to a; t is the translation vector from a to b; n and d are the normal vector of the plane and the distance from origin to the plane respectively. Ka and Kb are the cameras' intrinsic parameter matrices. The figure shows camera b looking at the plane at distance d. 
Note: From above figure, assuming n T P i + d = 0 {\displaystyle n^{T}P_{i}+d=0} as plane model, n T P i {\displaystyle n^{T}P_{i}} is the projection of vector P i {\displaystyle P_{i}} along n {\displaystyle n} , and equal to − d {\displaystyle -d} . So t = t ⋅ 1 = t ( − n T P i d ) {\displaystyle t=t\cdot 1=t\left(-{\frac {n^{T}P_{i}}{d}}\right)} . And we have H a b P i = R P i + t {\displaystyle H_{ab}P_{i}=RP_{i}+t} where H a b = R − t n T d {\displaystyle H_{ab}=R-{\frac {tn^{T}}{d}}} . This formula is only valid if camera b has no rotation and no translation. In the general case where R a , R b {\displaystyle R_{a},R_{b}} and t a , t b {\displaystyle t_{a},t_{b}} are the respective rotations and translations of camera a and b, R = R a R b T {\displaystyle R=R_{a}R_{b}^{T}} and the homography matrix H a b {\displaystyle H_{ab}} becomes H a b = R a R b T − ( − R a ∗ R b T ∗ t b + t a ) n T d {\displaystyle H_{ab}=R_{a}R_{b}^{T}-{\frac {(-R_{a}*R_{b}^{T}*t_{b}+t_{a})n^{T}}{d}}} where d is the distance of the camera b to the plane. == Affine homography == When the image region in which the homography is computed is small or the image has been acquired with a large focal length, an affine homography is a more appropriate model of image displacements. An affine homography is a special type of a general homography whose last row is fixed to h 31 = h 32 = 0 , h 33 = 1. {\displaystyle h_{31}=h_{32}=0,\;h_{33}=1.} == See also == Direct linear transformation Epipolar geometry Feature (computer vision) Fundamental matrix (computer vision) Pose (computer vision) Photogrammetry == References == O. Chum and T. Pajdla and P. Sturm (2005). "The Geometric Error for Homographies" (PDF). Computer Vision and Image Understanding. 97 (1): 86–102. doi:10.1016/j.cviu.2004.03.004. == Toolboxes == homest is a GPL C/C++ library for robust, non-linear (based on the Levenberg–Marquardt algorithm) homography estimation from matched point pairs (Manolis Lourakis). 
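The plane-induced homography formula can be checked numerically. The sketch below uses NumPy with entirely made-up camera parameters (rotation, translation, intrinsics, plane); camera b is taken as the reference frame with no rotation or translation, matching the assumption under which the simple formula H_ab = R − t nᵀ/d is valid, and the plane model nᵀP + d = 0 from the note above:

```python
import numpy as np

# Hypothetical setup: camera b at the origin, camera a rotated and translated.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])  # rotation of b relative to a
t = np.array([[0.1], [0.0], [0.05]])                  # translation from a to b
n = np.array([[0.0], [0.0], [-1.0]])                  # plane normal (n^T P + d = 0)
d = 2.0                                               # so the plane is z = 2 in frame b

H_euclid = R - (t @ n.T) / d                          # H_ab = R - t n^T / d

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = K @ H_euclid @ np.linalg.inv(K)                   # pixel-to-pixel homography

# A point on the plane: n^T P + d = 0 holds for P = (0.3, 0.2, 2).
P = np.array([[0.3], [0.2], [2.0]])
pb = K @ P                                            # projection in image b (homogeneous)
pa_direct = K @ (R @ P + t)                           # projection in image a, directly...
pa_homog = H @ pb                                     # ...and via the homography
print(np.allclose(pa_homog / pa_homog[2], pa_direct / pa_direct[2]))  # True
```

After normalizing the homogeneous coordinates, mapping through H agrees with projecting the 3D point into camera a directly, which is exactly the identity H_ab P_i = R P_i + t derived in the note.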
OpenCV is a complete (open and free) computer vision software library that has many routines related to homography estimation (cvFindHomography) and re-projection (cvPerspectiveTransform). == External links == Serge Belongie & David Kriegman (2007) Explanation of Homography Estimation from Department of Computer Science and Engineering, University of California, San Diego. A. Criminisi, I. Reid & A. Zisserman (1997) "A Plane Measuring Device", §3 Computing the Plane to Plane Homography, from Visual Geometry Group, Department of Engineering Science, University of Oxford. Elan Dubrofsky (2009) Homography Estimation, Master's thesis, from Department of Computer Science, University of British Columbia. Richard Hartley & Andrew Zisserman (2004) Multiple View Geometry from Visual Geometry Group, Oxford. Includes Matlab Functions for calculating a homography and the fundamental matrix (computer vision). GIMP Tutorial – using the Perspective Tool by Billy Kerr on YouTube. Shows how to do a perspective transform using GIMP. Allan Jepson (2010) Planar Homographies from Department of Computer Science, University of Toronto. Includes 2D homography from four pairs of corresponding points, mosaics in image processing, removing perspective distortion in computer vision, rendering textures in computer graphics, and computing planar shadows. Plane transfer homography Course notes from CSE576 at University of Washington in Seattle. Etienne Vincent & Robert Laganiere (2000) Detecting Planar Homographies in an Image Pair Archived 2016-03-04 at the Wayback Machine from School of Information Technology and Engineering, University of Ottawa. Describes an algorithm for detecting planes in images, uses random sample consensus (RANSAC) method, describes heuristics and iteration.
Wikipedia:Homomorphic secret sharing#0
In cryptography, homomorphic secret sharing is a type of secret sharing algorithm in which the secret is encrypted via homomorphic encryption. A homomorphism is a transformation from one algebraic structure into another of the same type so that the structure is preserved. Importantly, this means that for every kind of manipulation of the original data, there is a corresponding manipulation of the transformed data. == Technique == Homomorphic secret sharing is used to transmit a secret to several recipients as follows: Transform the "secret" using a homomorphism. This often puts the secret into a form which is easy to manipulate or store. In particular, there may be a natural way to 'split' the new form as required by step (2). Split the transformed secret into several parts, one for each recipient. The secret must be split in such a way that it can only be recovered when all or most of the parts are combined. (See Secret sharing.) Distribute the parts of the secret to each of the recipients. Combine each of the recipients' parts to recover the transformed secret, perhaps at a specified time. Reverse the homomorphism to recover the original secret. == Examples == Suppose a community wants to perform an election, using a decentralized voting protocol, but they want to ensure that the vote-counters won't lie about the results. Using a type of homomorphic secret sharing known as Shamir's secret sharing, each member of the community can add their vote to a form that is split into pieces, each piece is then submitted to a different vote-counter. The pieces are designed so that the vote-counters can't predict how any alterations to each piece will affect the whole, thus, discouraging vote-counters from tampering with their pieces. When all votes have been received, the vote-counters combine them, allowing them to recover the aggregate election results. In detail, suppose we have an election with: Two possible outcomes, either yes or no. 
We'll represent those outcomes numerically by +1 and −1, respectively. A number of authorities, k, who will count the votes. A number of voters, n, who will submit votes. In advance, each authority generates a publicly available numerical key, xk. Each voter encodes his vote in a polynomial pn according to the following rules: The polynomial should have degree k − 1, its constant term should be either +1 or −1 (corresponding to voting "yes" or voting "no"), and its other coefficients should be randomly generated. Each voter computes the value of his polynomial pn at each authority's public key xk. This produces k points, one for each authority. These k points are the "pieces" of the vote: If you know all of the points, you can figure out the polynomial pn (and hence you can figure out how the voter voted). However, if you know only some of the points, you can't figure out the polynomial. (This is because you need k points to determine a degree-(k − 1) polynomial. Two points determine a line, three points determine a parabola, etc.) The voter sends each authority the value that was produced using the authority's key. Each authority collects the values that he receives. Since each authority only gets one value from each voter, he can't discover any given voter's polynomial. Moreover, he can't predict how altering the submissions will affect the vote. Once the voters have submitted their votes, each authority computes and announces the sum Ak of all the values he's received. There are k sums, Ak; when they are combined together, they determine a unique polynomial P(x) – specifically, the sum of all the voter polynomials: P(x) = p1(x) + p2(x) + ... + pn(x). The constant term of P(x) is in fact the sum of all the votes, because the constant term of P(x) is the sum of the constant terms of the individual pn. 
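The scheme just described fits in a short script. Below is a toy sketch in Python with hypothetical parameters (3 authorities with public keys 1, 2, 3, and 5 voters); Lagrange interpolation of the announced sums at x = 0 recovers the constant term of P(x), i.e. the vote total, without any authority ever seeing an individual polynomial:

```python
import random
from fractions import Fraction

k = 3                                    # number of authorities
keys = [1, 2, 3]                         # their public keys x_k
votes = [+1, -1, +1, +1, -1]             # hypothetical ballots (sum = +1)

random.seed(0)

def share(vote):
    # Degree k-1 polynomial: constant term is the vote, other coefficients random.
    coeffs = [vote] + [random.randint(-100, 100) for _ in range(k - 1)]
    return [sum(c * x**i for i, c in enumerate(coeffs)) for x in keys]

# Authority j receives one share per voter and announces only the sum A_j.
shares = [share(v) for v in votes]
A = [sum(s[j] for s in shares) for j in range(k)]

def interpolate_at_zero(xs, ys):
    # Lagrange interpolation evaluated at x = 0: recovers P(0).
    total = Fraction(0)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = Fraction(1)
        for j, xj in enumerate(xs):
            if j != i:
                li *= Fraction(-xj, xi - xj)
        total += yi * li
    return total

print(interpolate_at_zero(keys, A))      # 1, i.e. more "yes" than "no" votes
```

Note that any k − 1 of the announced values reveal nothing about an individual ballot; only the combination of all k sums determines P(x).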
Thus the constant term of P(x) provides the aggregate election result: if it is positive, more people voted for +1 than for −1; if it is negative, more people voted for −1 than for +1. === Features === This protocol works as long as not all of the k authorities are corrupt — if they were, then they could collaborate to reconstruct P(x) for each voter and also subsequently alter the votes. The protocol requires t + 1 authorities to be completed, therefore in case there are N > t + 1 authorities, N − t − 1 authorities can be corrupted, which gives the protocol a certain degree of robustness. The protocol manages the IDs of the voters (the IDs were submitted with the ballots) and therefore can verify that only legitimate voters have voted. Under the assumptions on t: A ballot cannot be backtracked to the ID so the privacy of the voters is preserved. A voter cannot prove how they voted. It is impossible to verify a vote. The protocol implicitly prevents corruption of ballots. This is because the authorities have no incentive to change the ballot since each authority has only a share of the ballot and has no knowledge how changing this share will affect the outcome. === Vulnerabilities === The voter cannot be certain that their vote has been recorded correctly. The authorities cannot be sure the votes were legal and equal, for example the voter can choose a value that is not a valid option (i.e. not in {−1, 1}) such as −20, 50, which will tilt the results in their favor. == See also == End-to-end auditable voting systems Electronic voting Certification of voting machines Techniques of potential election fraud through physical tampering with voting machines Preventing Election fraud: Testing and certification of electronic voting Vote counting system E-democracy Secure multi-party computation Mental poker == References ==
Wikipedia:Homotopy analysis method#0
The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system. The HAM was first devised in 1992 by Liao Shijun of Shanghai Jiaotong University in his PhD dissertation and further modified in 1997 to introduce a non-zero auxiliary parameter, referred to as the convergence-control parameter, c0, to construct a homotopy on a differential system in general form. The convergence-control parameter is a non-physical variable that provides a simple way to verify and enforce convergence of a solution series. The capability of the HAM to naturally show convergence of the series solution is unusual in analytical and semi-analytic approaches to nonlinear partial differential equations. == Characteristics == The HAM distinguishes itself from various other analytical methods in four important aspects. First, it is a series expansion method that is not directly dependent on small or large physical parameters. Thus, it is applicable for not only weakly but also strongly nonlinear problems, going beyond some of the inherent limitations of the standard perturbation methods. Second, the HAM is a unified method for the Lyapunov artificial small parameter method, the delta expansion method, the Adomian decomposition method, and the homotopy perturbation method. The greater generality of the method often allows for strong convergence of the solution over larger spatial and parameter domains. Third, the HAM gives excellent flexibility in the expression of the solution and how the solution is explicitly obtained. It provides great freedom to choose the basis functions of the desired solution and the corresponding auxiliary linear operator of the homotopy. 
Finally, unlike the other analytic approximation techniques, the HAM provides a simple way to ensure the convergence of the solution series. The homotopy analysis method is also able to combine with other techniques employed in nonlinear differential equations such as spectral methods and Padé approximants. It may further be combined with computational methods, such as the boundary element method to allow the linear method to solve nonlinear systems. Different from the numerical technique of homotopy continuation, the homotopy analysis method is an analytic approximation method as opposed to a discrete computational method. Further, the HAM uses the homotopy parameter only on a theoretical level to demonstrate that a nonlinear system may be split into an infinite set of linear systems which are solved analytically, while the continuation methods require solving a discrete linear system as the homotopy parameter is varied to solve the nonlinear system. == Applications == In the last twenty years, the HAM has been applied to solve a growing number of nonlinear ordinary/partial differential equations in science, finance, and engineering. For example, multiple steady-state resonant waves in deep and finite water depth were found with the wave resonance criterion of arbitrary number of traveling gravity waves; this agreed with Phillips' criterion for four waves with small amplitude. Further, a unified wave model applied with the HAM, admits not only the traditional smooth progressive periodic/solitary waves, but also the progressive solitary waves with peaked crest in finite water depth. This model shows peaked solitary waves are consistent solutions along with the known smooth ones. 
Additionally, the HAM has been applied to many other nonlinear problems such as nonlinear heat transfer, the limit cycle of nonlinear dynamic systems, the American put option, the exact Navier–Stokes equation, the option pricing under stochastic volatility, the electrohydrodynamic flows, the Poisson–Boltzmann equation for semiconductor devices, and others. == Brief mathematical description == Consider a general nonlinear differential equation N [ u ( x ) ] = 0 {\displaystyle {\mathcal {N}}[u(x)]=0} , where N {\displaystyle {\mathcal {N}}} is a nonlinear operator. Let L {\displaystyle {\mathcal {L}}} denote an auxiliary linear operator, u0(x) an initial guess of u(x), and c0 a constant (called the convergence-control parameter), respectively. Using the embedding parameter q ∈ [0,1] from homotopy theory, one may construct a family of equations, ( 1 − q ) L [ U ( x ; q ) − u 0 ( x ) ] = c 0 q N [ U ( x ; q ) ] , {\displaystyle (1-q){\mathcal {L}}[U(x;q)-u_{0}(x)]=c_{0}\,q\,{\mathcal {N}}[U(x;q)],} called the zeroth-order deformation equation, whose solution varies continuously with respect to the embedding parameter q ∈ [0,1]. This is the linear equation L [ U ( x ; q ) − u 0 ( x ) ] = 0 , {\displaystyle {\mathcal {L}}[U(x;q)-u_{0}(x)]=0,} with known initial guess U(x; 0) = u0(x) when q = 0, but is equivalent to the original nonlinear equation N [ u ( x ) ] = 0 {\displaystyle {\mathcal {N}}[u(x)]=0} , when q = 1, i.e. U(x; 1) = u(x)). Therefore, as q increases from 0 to 1, the solution U(x; q) of the zeroth-order deformation equation varies (or deforms) from the chosen initial guess u0(x) to the solution u(x) of the considered nonlinear equation. Expanding U(x; q) in a Taylor series about q = 0, we have the homotopy-Maclaurin series U ( x ; q ) = u 0 ( x ) + ∑ m = 1 ∞ u m ( x ) q m . 
{\displaystyle U(x;q)=u_{0}(x)+\sum _{m=1}^{\infty }u_{m}(x)\,q^{m}.} Assuming that the so-called convergence-control parameter c0 of the zeroth-order deformation equation is properly chosen that the above series is convergent at q = 1, we have the homotopy-series solution u ( x ) = u 0 ( x ) + ∑ m = 1 ∞ u m ( x ) . {\displaystyle u(x)=u_{0}(x)+\sum _{m=1}^{\infty }u_{m}(x).} From the zeroth-order deformation equation, one can directly derive the governing equation of um(x) L [ u m ( x ) − χ m u m − 1 ( x ) ] = c 0 R m [ u 0 , u 1 , … , u m − 1 ] , {\displaystyle {\mathcal {L}}[u_{m}(x)-\chi _{m}u_{m-1}(x)]=c_{0}\,R_{m}[u_{0},u_{1},\ldots ,u_{m-1}],} called the mth-order deformation equation, where χ 1 = 0 {\displaystyle \chi _{1}=0} and χ k = 1 {\displaystyle \chi _{k}=1} for k > 1, and the right-hand side Rm is dependent only upon the known results u0, u1, ..., um − 1 and can be obtained easily using computer algebra software. In this way, the original nonlinear equation is transferred into an infinite number of linear ones, but without the assumption of any small/large physical parameters. Since the HAM is based on a homotopy, one has great freedom to choose the initial guess u0(x), the auxiliary linear operator L {\displaystyle {\mathcal {L}}} , and the convergence-control parameter c0 in the zeroth-order deformation equation. Thus, the HAM provides the mathematician freedom to choose the equation-type of the high-order deformation equation and the base functions of its solution. The optimal value of the convergence-control parameter c0 is determined by the minimum of the squared residual error of governing equations and/or boundary conditions after the general form has been solved for the chosen initial guess and linear operator. Thus, the convergence-control parameter c0 is a simple way to guarantee the convergence of the homotopy series solution and differentiates the HAM from other analytic approximation methods. 
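The mth-order deformation equations can be generated mechanically with a computer algebra system. The following SymPy sketch applies the scheme above to a hypothetical test problem, u′ + u² = 1 with u(0) = 0 (exact solution tanh x), choosing the auxiliary linear operator L = d/dx, initial guess u₀ = 0, and convergence-control parameter c₀ = −1; Rm is obtained as the coefficient of q^(m−1) in N[U]:

```python
import sympy as sp

x = sp.symbols('x')

def ham_series(order, c0=-1):
    """Homotopy-series solution of u' + u**2 = 1, u(0) = 0, summed to `order`."""
    u = [sp.Integer(0)]                        # initial guess u0(x) = 0
    for m in range(1, order + 1):
        chi = 0 if m == 1 else 1               # chi_1 = 0, chi_k = 1 for k > 1
        # R_m = coefficient of q^(m-1) in N[U] = U' + U^2 - 1
        Rm = (sp.diff(u[m - 1], x)
              + sum(u[i] * u[m - 1 - i] for i in range(m))
              - (1 - chi))
        # mth-order deformation: L[u_m - chi*u_{m-1}] = c0*R_m, with u_m(0) = 0,
        # solved here by integrating from 0 to x.
        um = chi * u[m - 1] + c0 * sp.integrate(Rm, (x, 0, x))
        u.append(sp.expand(um))
    return sum(u)

approx = ham_series(7)
print(approx)   # matches the degree-7 Taylor polynomial of tanh(x)
```

Each deformation equation is linear and solved in closed form, illustrating how the nonlinear problem is transferred into a sequence of linear ones; with this choice of c₀ the partial sums reproduce the Maclaurin series of tanh x term by term.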
The method overall gives a useful generalization of the concept of homotopy. == The HAM and computer algebra == The HAM is an analytic approximation method designed for the computer era with the goal of "computing with functions instead of numbers." In conjunction with a computer algebra system such as Mathematica or Maple, one can gain analytic approximations of a highly nonlinear problem to arbitrarily high order by means of the HAM in only a few seconds. Inspired by the recent successful applications of the HAM in different fields, a Mathematica package based on the HAM, called BVPh, has been made available online for solving nonlinear boundary-value problems [4]. BVPh is a solver package for highly nonlinear ODEs with singularities, multiple solutions, and multipoint boundary conditions in either a finite or an infinite interval, and includes support for certain types of nonlinear PDEs. Another HAM-based Mathematica code, APOh, has been produced to solve for an explicit analytic approximation of the optimal exercise boundary of American put option, which is also available online [5]. == Frequency response analysis for nonlinear oscillators == The HAM has recently been reported to be useful for obtaining analytical solutions for nonlinear frequency response equations. Such solutions are able to capture various nonlinear behaviors such as hardening-type, softening-type or mixed behaviors of the oscillator. These analytical equations are also useful in prediction of chaos in nonlinear systems. == References == == External links == http://numericaltank.sjtu.edu.cn/BVPh.htm http://numericaltank.sjtu.edu.cn/APO.htm
Wikipedia:Hong Wang (mathematician)#0
Hong Wang (Chinese: 王虹; born 1991) is a Chinese mathematician who works in Fourier analysis and geometric measure theory. She received the Maryam Mirzakhani New Frontiers Prize in 2022. == Early life and education == Wang was born in Guilin, Guangxi, China, in 1991. Her parents are both teachers at a secondary school in Pingle County. She skipped two grades during primary school. In 2004, she attended Guilin High School. In 2007, 16-year-old Wang gained early admission to Peking University's School of Earth and Space Sciences with a score of 653 in the Gaokao. After a year, she transferred to the School of Mathematical Sciences. She received an undergraduate degree in mathematics at Peking University in 2011. In 2014, she graduated with dual degrees: an engineering degree (diplôme d'ingénieur) at École polytechnique and a master's degree from Paris-Sud University. In 2019, she received a PhD in mathematics from the Massachusetts Institute of Technology under the supervision of Larry Guth. == Career == Wang was a member of the Institute for Advanced Study from 2019 to 2021. She then joined University of California, Los Angeles as an assistant professor of mathematics. She is currently an associate professor at the New York University Courant Institute of Mathematical Sciences. On 24 February 2025, Wang and her collaborator Joshua Zahl posted an arXiv preprint "Volume estimates for unions of convex sets, and the Kakeya set conjecture in three dimensions" claiming to solve the Kakeya conjecture in three dimensions. The general Kakeya conjecture has been described by Terence Tao as "one of the most sought-after open problems in geometric measure theory". The claimed proof is considered to be a breakthrough in geometric measure theory. == Awards and honors == Wang was a 2022 recipient of the Maryam Mirzakhani New Frontiers Prize, given "for advances on the restriction conjecture, the local smoothing conjecture, and related problems". == References ==
|
Wikipedia:Hongkai Zhao#0
|
Hongkai Zhao is a Chinese mathematician and Ruth F. DeVarney Distinguished Professor of Mathematics at Duke University. He was formerly the Chancellor's Professor in the Department of Mathematics at the University of California, Irvine. He is known for his work in scientific computing, imaging and numerical analysis, such as the fast sweeping method for Hamilton–Jacobi equations and numerical methods for moving interface problems. Zhao obtained his Bachelor of Science degree in applied mathematics from Peking University in 1990 and received his master's degree in the same field from the University of Southern California two years later. From 1992 to 1996 he attended the University of California, Los Angeles, where he received his Ph.D. in mathematics. From 1996 to 1998 Zhao was a Gábor Szegő Assistant Professor in the Department of Mathematics at Stanford University, and he was then promoted to Research Associate, a position he held until 1999. He then spent many years at the University of California, Irvine, where he was also a member of the Institute for Mathematical Behavioral Sciences and the Department of Computer Science. From 2010 to 2013 and 2016 to 2019, Zhao was the chairman of the Department of Mathematics, and from 2016 he served as Chancellor's Professor of mathematics. Zhao received an Alfred P. Sloan Fellowship in 2002 and the Feng Kang Prize in Scientific Computing in 2007. He was elected as a Fellow of the Society for Industrial and Applied Mathematics in the 2022 Class of SIAM Fellows, "for seminal contributions to scientific computation, numerical analysis, and applications in science and engineering". In his free time, he enjoys watching and playing sports. == References == == External links == Hongkai Zhao publications indexed by Google Scholar
|
Wikipedia:Horatio Scott Carslaw#0
|
Dr Horatio Scott Carslaw FRSE LLD (12 February 1870, Helensburgh, Dumbartonshire, Scotland – 11 November 1954, Burradoo, New South Wales, Australia) was a Scottish-Australian mathematician. The book he wrote with his colleague John Conrad Jaeger, Conduction of Heat in Solids, remains a classic in the field. == Life == He was born in Helensburgh, Scotland, the son of the Rev Dr William Henderson Carslaw (a Free Church minister) and his wife, Elizabeth Lockhead. He was educated at The Glasgow Academy. He went on to study at Cambridge University and then obtained a postgraduate doctorate at Glasgow University. He was elected a Fellow of the Royal Society of Edinburgh in 1901. He was a Fellow of Emmanuel College, Cambridge, and worked as a lecturer in Mathematics at Glasgow University until late 1902, when he moved to Australia. In 1903, upon the retirement of Theodore Thomas Gurney, Carslaw was appointed Professor and Chair of Pure and Applied Mathematics in what is now the School of Mathematics and Statistics at the University of Sydney. He retired in 1935 to his house in Burradoo, where he produced most of his best work. The Carslaw Building at the University, completed in the 1960s and containing the School, is named after him. He died at home in Burradoo and was buried in the Anglican section of Bowral Cemetery. == Family == He married Ethel Maude Clarke (daughter of Sir William Clarke, 1st Baronet) in 1907 but she died later in the same year. == Works == An introduction to infinitesimal calculus, 1905 Introduction to the theory of Fourier's series and integrals and the mathematical theory of the conduction of heat, London 1906, revised 2nd edn. 1921, published under the title Introduction to the mathematical theory of the conduction of heat in solids; revised and enlarged 3rd edn. 
1930, published under the title Introduction to the theory of Fourier's series and integrals The Elements of Non-Euclidean Plane Geometry and Trigonometry, London 1916 with John Conrad Jaeger: Operational methods in applied mathematics, 1941, 1948 with Jaeger: Conduction of Heat in Solids, Oxford 1947, 1959 == See also == Diffusion equation Heat equation Horosphere Thermal diffusivity == References == == External links == Works by or about Horatio Scott Carslaw at the Internet Archive Horatio Scott Carslaw at the Australian Dictionary of Biography Online O'Connor, John J.; Robertson, Edmund F., "Horatio Scott Carslaw", MacTutor History of Mathematics Archive, University of St Andrews
|
Wikipedia:Horner's method#0
|
In mathematics and computer science, Horner's method (or Horner's scheme) is an algorithm for polynomial evaluation. Although named after William George Horner, this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials. The algorithm is based on Horner's rule, in which a polynomial is written in nested form: a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle {\begin{aligned}&a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\={}&a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}.\end{aligned}}} This allows the evaluation of a polynomial of degree n with only n {\displaystyle n} multiplications and n {\displaystyle n} additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations. Alternatively, Horner's method and Horner–Ruffini method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970. == Polynomial evaluation and long division == Given the polynomial p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n , {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},} where a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} are constant coefficients, the problem is to evaluate the polynomial at a specific value x 0 {\displaystyle x_{0}} of x . 
{\displaystyle x.} For this, a new sequence of constants is defined recursively as follows: b n = a n , b n − 1 = a n − 1 + b n x 0 , … , b 0 = a 0 + b 1 x 0 . {\displaystyle b_{n}=a_{n},\quad b_{n-1}=a_{n-1}+b_{n}x_{0},\quad \ldots ,\quad b_{0}=a_{0}+b_{1}x_{0}.} Then b 0 {\displaystyle b_{0}} is the value of p ( x 0 ) {\displaystyle p(x_{0})} . To see why this works, the polynomial can be written in the form p ( x ) = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle p(x)=a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}\ .} Thus, by iteratively substituting the b i {\displaystyle b_{i}} into the expression, p ( x 0 ) = a 0 + x 0 ( a 1 + x 0 ( a 2 + ⋯ + x 0 ( a n − 1 + b n x 0 ) ⋯ ) ) = a 0 + x 0 ( a 1 + x 0 ( a 2 + ⋯ + x 0 b n − 1 ) ) ⋮ = a 0 + x 0 b 1 = b 0 . {\displaystyle {\begin{aligned}p(x_{0})&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}(a_{n-1}+b_{n}x_{0})\cdots {\big )}{\Big )}\\&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}b_{n-1}{\big )}{\Big )}\\&~~\vdots \\&=a_{0}+x_{0}b_{1}\\&=b_{0}.\end{aligned}}} Now, it can be proven that p ( x ) = ( b 1 + b 2 x + ⋯ + b n x n − 1 ) ( x − x 0 ) + b 0 . {\displaystyle p(x)=\left(b_{1}+b_{2}x+\cdots +b_{n}x^{n-1}\right)(x-x_{0})+b_{0}.} This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p ( x ) / ( x − x 0 ) {\displaystyle p(x)/(x-x_{0})} , with b 0 {\displaystyle b_{0}} (which is equal to p ( x 0 ) {\displaystyle p(x_{0})} ) being the division's remainder, as is demonstrated by the examples below. If x 0 {\displaystyle x_{0}} is a root of p ( x ) {\displaystyle p(x)} , then b 0 = 0 {\displaystyle b_{0}=0} (meaning the remainder is 0 {\displaystyle 0} ), which means that x − x 0 {\displaystyle x-x_{0}} is a factor of p ( x ) {\displaystyle p(x)} . To find the consecutive b {\displaystyle b} -values, start by determining b n {\displaystyle b_{n}} , which is simply equal to a n {\displaystyle a_{n}} . Then work recursively using the formula b n − 1 = a n − 1 + b n x 0 {\displaystyle b_{n-1}=a_{n-1}+b_{n}x_{0}} until you arrive at b 0 {\displaystyle b_{0}} .
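The recurrence for the b-values maps directly onto a single loop. The following Python sketch (the function and variable names are illustrative, not from the article) computes the sequence, returning both the quotient coefficients b_n, …, b_1 of p(x)/(x − x0) and the remainder b_0 = p(x0):

```python
def horner(coeffs, x0):
    """Evaluate p at x0 by Horner's rule and return (quotient, remainder).

    coeffs = [a_n, ..., a_1, a_0], highest degree first.  The quotient
    holds the coefficients of p(x) divided by (x - x0); the remainder
    equals p(x0) by the polynomial remainder theorem.
    """
    b = coeffs[0]            # b_n = a_n
    quotient = []
    for a in coeffs[1:]:     # b_{k-1} = a_{k-1} + b_k * x0
        quotient.append(b)
        b = a + b * x0
    return quotient, b       # b is b_0 = p(x0)

# f(x) = 2x^3 - 6x^2 + 2x - 1 at x0 = 3, as in the worked example below:
q, r = horner([2, -6, 2, -1], 3)   # q == [2, 0, 2], r == 5
```

The quotient list is exactly the third row of the synthetic-division tableau in the examples that follow, minus its final entry (the remainder).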
=== Examples === Evaluate f ( x ) = 2 x 3 − 6 x 2 + 2 x − 1 {\displaystyle f(x)=2x^{3}-6x^{2}+2x-1} for x = 3 {\displaystyle x=3} . We use synthetic division as follows: x0│ x3 x2 x1 x0 3 │ 2 −6 2 −1 │ 6 0 6 └──────────────────────── 2 0 2 5 The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x-value (3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f ( x ) {\displaystyle f(x)} on division by x − 3 {\displaystyle x-3} is 5. But by the polynomial remainder theorem, we know that the remainder is f ( 3 ) {\displaystyle f(3)} . Thus, f ( 3 ) = 5 {\displaystyle f(3)=5} . In this example, if a 3 = 2 , a 2 = − 6 , a 1 = 2 , a 0 = − 1 {\displaystyle a_{3}=2,a_{2}=-6,a_{1}=2,a_{0}=-1} we can see that b 3 = 2 , b 2 = 0 , b 1 = 2 , b 0 = 5 {\displaystyle b_{3}=2,b_{2}=0,b_{1}=2,b_{0}=5} , the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method. As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of f ( x ) {\displaystyle f(x)} on division by x − 3 {\displaystyle x-3} . The remainder is 5. This makes Horner's method useful for polynomial long division. Divide x 3 − 6 x 2 + 11 x − 6 {\displaystyle x^{3}-6x^{2}+11x-6} by x − 2 {\displaystyle x-2} : 2 │ 1 −6 11 −6 │ 2 −8 6 └──────────────────────── 1 −4 3 0 The quotient is x 2 − 4 x + 3 {\displaystyle x^{2}-4x+3} . Let f 1 ( x ) = 4 x 4 − 6 x 3 + 3 x − 5 {\displaystyle f_{1}(x)=4x^{4}-6x^{3}+3x-5} and f 2 ( x ) = 2 x − 1 {\displaystyle f_{2}(x)=2x-1} . Divide f 1 ( x ) {\displaystyle f_{1}(x)} by f 2 ( x ) {\displaystyle f_{2}\,(x)} using Horner's method. 
0.5 │ 4 −6 0 3 −5 │ 2 −2 −1 1 └─────────────────────── 2 −2 −1 1 −4 The third row is the sum of the first two rows, divided by 2, except for the final entry: the remainder is the plain sum −5 + 1 = −4 and is not divided. Each entry in the second row is the product of 1 with the third-row entry immediately to the left (the multiplier is 1 because it equals the x-value 1/2 times the divisor's leading coefficient 2). The answer is f 1 ( x ) f 2 ( x ) = 2 x 3 − 2 x 2 − x + 1 − 4 2 x − 1 . {\displaystyle {\frac {f_{1}(x)}{f_{2}(x)}}=2x^{3}-2x^{2}-x+1-{\frac {4}{2x-1}}.} === Efficiency === Evaluation using the monomial form of a degree n {\displaystyle n} polynomial requires at most n {\displaystyle n} additions and ( n 2 + n ) / 2 {\displaystyle (n^{2}+n)/2} multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n {\displaystyle n} additions and 2 n − 1 {\displaystyle 2n-1} multiplications by evaluating the powers of x {\displaystyle x} by iteration. If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2 n {\displaystyle 2n} times the number of bits of x {\displaystyle x} : the evaluated polynomial has approximate magnitude x n {\displaystyle x^{n}} , and one must also store x n {\displaystyle x^{n}} itself. By contrast, Horner's method requires only n {\displaystyle n} additions and n {\displaystyle n} multiplications, and its storage requirements are only n {\displaystyle n} times the number of bits of x {\displaystyle x} . Alternatively, Horner's method can be computed with n {\displaystyle n} fused multiply–adds. Horner's method can also be extended to evaluate the first k {\displaystyle k} derivatives of the polynomial with k n {\displaystyle kn} additions and multiplications. Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. Victor Pan proved in 1966 that the number of multiplications is minimal. 
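The derivative extension mentioned above can be illustrated for k = 1: running a second, coupled Horner recurrence alongside the first yields p′(x) together with p(x), using 2n multiplications and 2n additions in total. A minimal Python sketch (the function name is illustrative):

```python
def horner_with_derivative(coeffs, x):
    """Return (p(x), p'(x)) in one pass of two coupled Horner recurrences.

    coeffs = [a_n, ..., a_0], highest degree first.
    """
    p = coeffs[0]     # running Horner value of p
    dp = 0.0          # running Horner value of p'
    for a in coeffs[1:]:
        dp = dp * x + p   # derivative recurrence feeds on the current p
        p = p * x + a
    return p, dp

# p(x) = x^2 + 2x + 3: p(2) = 11, and p'(x) = 2x + 2 gives p'(2) = 6
value, slope = horner_with_derivative([1, 2, 3], 2)
```

The same pattern extends to higher derivatives by adding one extra running accumulator per derivative order.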
However, when x {\displaystyle x} is a matrix, Horner's method is not optimal. This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible. They involve a transformation of the representation of the polynomial. In general, a degree- n {\displaystyle n} polynomial can be evaluated using only ⌊n/2⌋+2 multiplications and n {\displaystyle n} additions. ==== Parallel evaluation ==== A disadvantage of Horner's rule is that all of the operations are sequentially dependent, so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation. If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows: p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = ( a 0 + a 2 x 2 + a 4 x 4 + ⋯ ) + ( a 1 x + a 3 x 3 + a 5 x 5 + ⋯ ) = ( a 0 + a 2 x 2 + a 4 x 4 + ⋯ ) + x ( a 1 + a 3 x 2 + a 5 x 4 + ⋯ ) = ∑ i = 0 ⌊ n / 2 ⌋ a 2 i x 2 i + x ∑ i = 0 ⌊ n / 2 ⌋ a 2 i + 1 x 2 i = p 0 ( x 2 ) + x p 1 ( x 2 ) . 
{\displaystyle {\begin{aligned}p(x)&=\sum _{i=0}^{n}a_{i}x^{i}\\[1ex]&=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+\left(a_{1}x+a_{3}x^{3}+a_{5}x^{5}+\cdots \right)\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+x\left(a_{1}+a_{3}x^{2}+a_{5}x^{4}+\cdots \right)\\[1ex]&=\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i}x^{2i}+x\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i+1}x^{2i}\\[1ex]&=p_{0}(x^{2})+xp_{1}(x^{2}).\end{aligned}}} More generally, the summation can be broken into k parts: p ( x ) = ∑ i = 0 n a i x i = ∑ j = 0 k − 1 x j ∑ i = 0 ⌊ n / k ⌋ a k i + j x k i = ∑ j = 0 k − 1 x j p j ( x k ) {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=\sum _{j=0}^{k-1}x^{j}\sum _{i=0}^{\lfloor n/k\rfloor }a_{ki+j}x^{ki}=\sum _{j=0}^{k-1}x^{j}p_{j}(x^{k})} where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k-way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math. Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism. === Application to floating-point multiplication and division === Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier. One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) a i = 1 {\displaystyle a_{i}=1} , and x = 2 {\displaystyle x=2} . Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2 {\displaystyle x=2} , so powers of 2 are repeatedly factored out. 
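The repeated factoring-out of powers of 2 can be sketched in Python, with integer operands and Python's shift operators standing in for hardware register shifts (an illustration of the idea only; the article's example uses fractional powers of two, which correspond to right shifts on a fixed-point register):

```python
def shift_add_multiply(d, m):
    """Multiply non-negative integers d and m using only shifts and adds.

    m is scanned from its least significant bit upward; each set bit
    contributes a shifted copy of d, mirroring the factored form
    d0*(m + 2*d1*(m + 2*d2*(...))) with zero-coefficient terms dropped.
    """
    result = 0
    shift = 0
    while m:
        if m & 1:                  # this power of 2 is present in m
            result += d << shift   # add the correspondingly shifted copy of d
        m >>= 1
        shift += 1
    return result

assert shift_add_multiply(13, 11) == 143
```

No multiply instruction is used, only shifts, masks, and additions, which is why the scheme suits microcontrollers without a hardware multiplier.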
==== Example ==== For example, to find the product of two numbers (0.15625) and m: ( 0.15625 ) m = ( 0.00101 b ) m = ( 2 − 3 + 2 − 5 ) m = ( 2 − 3 ) m + ( 2 − 5 ) m = 2 − 3 ( m + ( 2 − 2 ) m ) = 2 − 3 ( m + 2 − 2 ( m ) ) . {\displaystyle {\begin{aligned}(0.15625)m&=(0.00101_{b})m=\left(2^{-3}+2^{-5}\right)m=\left(2^{-3})m+(2^{-5}\right)m\\&=2^{-3}\left(m+\left(2^{-2}\right)m\right)=2^{-3}\left(m+2^{-2}(m)\right).\end{aligned}}} ==== Method ==== To find the product of two binary numbers d and m: A register holding the intermediate result is initialized to d. Begin with the least significant (rightmost) non-zero bit in m. If all the non-zero bits were counted, then the intermediate result register now holds the final result. Otherwise, add d to the intermediate result, and continue in step 2 with the next most significant bit in m. ==== Derivation ==== In general, for a binary number with bit values ( d 3 d 2 d 1 d 0 {\displaystyle d_{3}d_{2}d_{1}d_{0}} ) the product is ( d 3 2 3 + d 2 2 2 + d 1 2 1 + d 0 2 0 ) m = d 3 2 3 m + d 2 2 2 m + d 1 2 1 m + d 0 2 0 m . {\displaystyle (d_{3}2^{3}+d_{2}2^{2}+d_{1}2^{1}+d_{0}2^{0})m=d_{3}2^{3}m+d_{2}2^{2}m+d_{1}2^{1}m+d_{0}2^{0}m.} At this stage in the algorithm, it is required that terms with zero-valued coefficients are dropped, so that only binary coefficients equal to one are counted, thus the problem of multiplication or division by zero is not an issue, despite this implication in the factored equation: = d 0 ( m + 2 d 1 d 0 ( m + 2 d 2 d 1 ( m + 2 d 3 d 2 ( m ) ) ) ) . 
{\displaystyle =d_{0}\left(m+2{\frac {d_{1}}{d_{0}}}\left(m+2{\frac {d_{2}}{d_{1}}}\left(m+2{\frac {d_{3}}{d_{2}}}(m)\right)\right)\right).} The denominators all equal one (or the term is absent), so this reduces to = d 0 ( m + 2 d 1 ( m + 2 d 2 ( m + 2 d 3 ( m ) ) ) ) , {\displaystyle =d_{0}(m+2{d_{1}}(m+2{d_{2}}(m+2{d_{3}}(m)))),} or equivalently (as consistent with the "method" described above) = d 3 ( m + 2 − 1 d 2 ( m + 2 − 1 d 1 ( m + d 0 ( m ) ) ) ) . {\displaystyle =d_{3}(m+2^{-1}{d_{2}}(m+2^{-1}{d_{1}}(m+{d_{0}}(m)))).} In binary (base-2) math, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is calculated in base-2 by an arithmetic shift. The factor (2−1) is a right arithmetic shift, a (0) results in no operation (since 20 = 1 is the multiplicative identity element), and a (21) results in a left arithmetic shift. The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction. The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy, however it is nominally 13 times faster (16 times faster when the "canonical signed digit" (CSD) form is used) and uses only 20% of the code space. === Other applications === Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the ai coefficients are the digits of the base-x representation of a given number – and can also be used if x is a matrix, in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known. == Polynomial root finding == Using the long division algorithm in combination with Newton's method, it is possible to approximate the real roots of a polynomial. The algorithm works as follows. 
Given a polynomial p n ( x ) {\displaystyle p_{n}(x)} of degree n {\displaystyle n} with zeros z n < z n − 1 < ⋯ < z 1 , {\displaystyle z_{n}<z_{n-1}<\cdots <z_{1},} make some initial guess x 0 {\displaystyle x_{0}} such that z 1 < x 0 {\displaystyle z_{1}<x_{0}} . Now iterate the following two steps: Using Newton's method, find the largest zero z 1 {\displaystyle z_{1}} of p n ( x ) {\displaystyle p_{n}(x)} using the guess x 0 {\displaystyle x_{0}} . Using Horner's method, divide out ( x − z 1 ) {\displaystyle (x-z_{1})} to obtain p n − 1 {\displaystyle p_{n-1}} . Return to step 1 but use the polynomial p n − 1 {\displaystyle p_{n-1}} and the initial guess z 1 {\displaystyle z_{1}} . These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials. === Example === Consider the polynomial p 6 ( x ) = ( x + 8 ) ( x + 5 ) ( x + 3 ) ( x − 2 ) ( x − 3 ) ( x − 7 ) {\displaystyle p_{6}(x)=(x+8)(x+5)(x+3)(x-2)(x-3)(x-7)} which can be expanded to p 6 ( x ) = x 6 + 4 x 5 − 72 x 4 − 214 x 3 + 1127 x 2 + 1602 x − 5040. {\displaystyle p_{6}(x)=x^{6}+4x^{5}-72x^{4}-214x^{3}+1127x^{2}+1602x-5040.} From the above we know that the largest root of this polynomial is 7 so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p ( x ) {\displaystyle p(x)} is divided by ( x − 7 ) {\displaystyle (x-7)} to obtain p 5 ( x ) = x 5 + 11 x 4 + 5 x 3 − 179 x 2 − 126 x + 720 {\displaystyle p_{5}(x)=x^{5}+11x^{4}+5x^{3}-179x^{2}-126x+720} which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. 
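The alternation of Newton iteration and Horner deflation described above can be sketched as follows. This is a simplified illustration assuming, as in this example, that all roots are real and simple; the tolerance, iteration cap, and names are arbitrary choices, not part of the article's procedure:

```python
def real_roots(coeffs, guess, tol=1e-12):
    """Approximate all real roots of p, largest first, by Newton + deflation.

    coeffs = [a_n, ..., a_0], highest degree first.
    """
    def horner_div(c, x0):
        # One Horner pass: quotient of c by (x - x0) and remainder p(x0).
        q, b = [], c[0]
        for a in c[1:]:
            q.append(b)
            b = a + b * x0
        return q, b

    roots, c = [], list(coeffs)
    while len(c) > 1:
        x = guess
        for _ in range(100):              # Newton's method on the current poly
            q, px = horner_div(c, x)
            _, dpx = horner_div(q, x)     # p'(x) = q(x), since p = (x-x0)q + px
            step = px / dpx
            x -= step
            if abs(step) < tol:
                break
        roots.append(x)
        c, _ = horner_div(c, x)           # deflate: divide out (x - root)
        guess = x                         # found root seeds the next search
    return roots

p6 = [1, 4, -72, -214, 1127, 1602, -5040]
roots = real_roots(p6, 8.0)               # ≈ [7, 3, 2, -3, -5, -8]
```

Each deflation reuses the same Horner pass that performed the Newton evaluation, which is what makes the combination efficient.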
The largest zero of this polynomial which corresponds to the second largest zero of the original polynomial is found at 3 and is circled in red. The degree 5 polynomial is now divided by ( x − 3 ) {\displaystyle (x-3)} to obtain p 4 ( x ) = x 4 + 14 x 3 + 47 x 2 − 38 x − 240 {\displaystyle p_{4}(x)=x^{4}+14x^{3}+47x^{2}-38x-240} which is shown in yellow. The zero for this polynomial is found at 2 again using Newton's method and is circled in yellow. Horner's method is now used to obtain p 3 ( x ) = x 3 + 16 x 2 + 79 x + 120 {\displaystyle p_{3}(x)=x^{3}+16x^{2}+79x+120} which is shown in green and found to have a zero at −3. This polynomial is further reduced to p 2 ( x ) = x 2 + 13 x + 40 {\displaystyle p_{2}(x)=x^{2}+13x+40} which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p 2 ( x ) {\displaystyle p_{2}(x)} and solving the linear equation. As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found. == Divided difference of a polynomial == Horner's method can be modified to compute the divided difference ( p ( y ) − p ( x ) ) / ( y − x ) . {\displaystyle (p(y)-p(x))/(y-x).} Given the polynomial (as before) p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n , {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},} proceed as follows b n = a n , d n = b n , b n − 1 = a n − 1 + b n x , d n − 1 = b n − 1 + d n y , ⋮ ⋮ b 1 = a 1 + b 2 x , d 1 = b 1 + d 2 y , b 0 = a 0 + b 1 x . {\displaystyle {\begin{aligned}b_{n}&=a_{n},&\quad d_{n}&=b_{n},\\b_{n-1}&=a_{n-1}+b_{n}x,&\quad d_{n-1}&=b_{n-1}+d_{n}y,\\&{}\ \ \vdots &\quad &{}\ \ \vdots \\b_{1}&=a_{1}+b_{2}x,&\quad d_{1}&=b_{1}+d_{2}y,\\b_{0}&=a_{0}+b_{1}x.\end{aligned}}} At completion, we have p ( x ) = b 0 , p ( y ) − p ( x ) y − x = d 1 , p ( y ) = b 0 + ( y − x ) d 1 . 
{\displaystyle {\begin{aligned}p(x)&=b_{0},\\{\frac {p(y)-p(x)}{y-x}}&=d_{1},\\p(y)&=b_{0}+(y-x)d_{1}.\end{aligned}}} This computation of the divided difference is subject to less round-off error than evaluating p ( x ) {\displaystyle p(x)} and p ( y ) {\displaystyle p(y)} separately, particularly when x ≈ y {\displaystyle x\approx y} . Substituting y = x {\displaystyle y=x} in this method gives d 1 = p ′ ( x ) {\displaystyle d_{1}=p'(x)} , the derivative of p ( x ) {\displaystyle p(x)} . == History == Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820). Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast. Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini. Although Horner is credited with making the method accessible and practical, it was known long before Horner. 
In reverse chronological order, Horner's method was already known to: Paolo Ruffini in 1809 (see Ruffini's rule) Isaac Newton in 1669 the Chinese mathematician Zhu Shijie in the 14th century the Chinese mathematician Qin Jiushao in his Mathematical Treatise in Nine Sections in the 13th century the Persian mathematician Sharaf al-Dīn al-Ṭūsī in the 12th century (the first to use that method in a general case of cubic equation) the Chinese mathematician Jia Xian in the 11th century (Song dynasty) The Nine Chapters on the Mathematical Art, a Chinese work of the Han dynasty (202 BC – 220 AD) edited by Liu Hui (fl. 3rd century). Qin Jiushao, in his Shu Shu Jiu Zhang (Mathematical Treatise in Nine Sections; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote:"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way." Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India. He said, Fibonacci probably learned of it from Arabs, who perhaps borrowed from the Chinese. 
The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu, while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing. == See also == Clenshaw algorithm to evaluate polynomials in Chebyshev form De Boor's algorithm to evaluate splines in B-spline form De Casteljau's algorithm to evaluate polynomials in Bézier form Estrin's scheme to facilitate parallelization on modern computer architectures Lill's method to approximate roots graphically Ruffini's rule and synthetic division to divide a polynomial by a binomial of the form x − r == Notes == == References == == External links == "Horner scheme", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Qiu Jin-Shao, Shu Shu Jiu Zhang (Cong Shu Ji Cheng ed.) For more on the root-finding application see [1] Archived 2018-09-28 at the Wayback Machine
|
Wikipedia:Hortensia Galeana Sánchez#0
|
Hortensia Galeana Sánchez (born 6 November 1955) is a Mexican mathematician specializing in graph theory, including graph coloring and the independent dominating sets ("kernels") of directed graphs. She is the director of the Institute of Mathematics at the National Autonomous University of Mexico (UNAM). == Education and career == Galeana is originally from Mexico City. She was educated at UNAM, earning her bachelor's, master's, and doctoral degrees there in 1978, 1981, and 1985 respectively. Her 1985 doctoral dissertation, Algunos resultados en la teoría de núcleos en digráficas, was supervised by Víctor Neumann-Lara. She has taught at UNAM since 1977 and was named director of the UNAM Institute of Mathematics in 2022. == Recognition == Galeana is a member of the Mexican Academy of Sciences. In 1995 and again in 2015 UNAM gave her their National University Prize. == References == == External links == Salcedo Meza, Concepción, "No. 3 Hortensia Galeana Sánchez", ¿Cómo ves? Revista de Divulgación de la Ciencia, UNAM
|
Wikipedia:Hortensia Soto#0
|
Hortensia Soto is a Mexican–American mathematics educator and a professor of mathematics at Colorado State University. In May 2018, she was appointed Associate Secretary of the Mathematical Association of America (MAA). She became the president of the MAA in 2022. == Early life and education == Soto was born in a sod house in Belén del Refugio, part of the municipality of Teocaltiche in Jalisco, Mexico. Her family moved to a farm near Morrill, Nebraska, when she was one year old, and she grew up in Nebraska. Her talent for mathematics was encouraged in elementary school and recognized in high school; already at that age she was called on to act as a substitute mathematics teacher. At Eastern Wyoming College, Soto started a political science degree. Soto has a bachelor's degree in mathematics and a master's degree in mathematics education from Chadron State College in Nebraska, earned in 1988 and 1989 respectively. She earned a second master's degree in mathematics at the University of Arizona in 1994, and completed a Ph.D. in educational mathematics at the University of Northern Colorado in 1996. == Career == Soto worked at the University of Southern Colorado from 1989 to 1992 as director of the Mathematics Learning Center. In 1995, she became an assistant professor of mathematics at the university, earning tenure there as an associate professor in 2001; the university became known as Colorado State University–Pueblo in 2003. In 2005 she moved to the University of Northern Colorado, taking a step down to become an assistant professor again. She was promoted to associate professor in 2008 and to full professor in 2014, before moving to Colorado State University as a professor of mathematics. 
At the University of Northern Colorado, Soto founded and directed a summer program for high school girls, Las Chicas de Matemáticas: UNC Math Camp for Young Women, from 2008 to 2014, and returned to rural Nebraska to participate in a teacher education program there, Math in the Middle. She is a fellow of Project NExT, and has been governor of the Rocky Mountain Section of the Mathematical Association of America. She is also a principal investigator of the Embodied Mathematical Imagination & Cognition project. She has a long association with the MAA and has been increasingly involved with its governance. In May 2018, she took over from Gerard Venema as its Associate Secretary. In October 2021, she was elected as President-Elect of the MAA and is serving a two-year term, starting February 1, 2022. == Recognition == In 2001, Chadron State College gave Soto their Distinguished Young Alumni Award. In 2012, the Mathematical Association of America (MAA) gave Soto their Meritorious Service Award. She was the 2016 winner of the Burton W. Jones Distinguished Teaching Award of the Rocky Mountain Section of the MAA, and one of the 2018 winners of the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics. She is included in a deck of playing cards featuring notable women mathematicians published by the Association of Women in Mathematics. == References == == External links == Meet a Mathematician! Video Interview
|
Wikipedia:Hovhannes Imastaser#0
|
Hovhannes Imastaser (c. 1045–50 – 1129), also known as Hovhannes Sarkavag, was a medieval Armenian multi-disciplinary scholar known for his works on philosophy, theology, mathematics, cosmology, and literature. He was also a gifted hymnologist and pedagogue. == Biography == Hovhannes Imastaser was born in c. 1045–50 into a priest's family. Varying information exists about his birthplace. Vardan Areveltsi writes that he was from the district of Parisos (historically also known as Gardman, in modern-day northwestern Azerbaijan). Kirakos Gandzaketsi reports that Hovhannes was "from the land of Gandzak, like me." Parisos was a part of the Emirate of Ganja (Gandzak) at the time, so these reports may be compatible. An anonymous hagiography of Hovhannes refers to Ani as "his own city" and "the place of [his] upbringing"; thus, Manuk Abeghian thought it possible that Ani was Hovhannes's actual birthplace, or that he was born in Parisos and was raised in Ani. Ashot Abrahamian considers it most likely that Hovhannes was born in the district of Parisos. Hovhannes received his education in theology and science at Haghbat, and possibly also at Sanahin, two important monastic centers of Armenian medieval scholarship. His teacher was his maternal uncle, who was nicknamed Urchetsi vardapet (archimandrite, Doctor of Theology). He was probably ordained as a sarkavag (deacon) early on in Haghbat. Hovhannes eventually rose to become a vardapet of the Armenian Church. It was the title sarkavag, however, that became attached to his name, probably because he had held that rank for a long time. (In some manuscripts of his works, he is called by both titles: "Hovhannes Sarkavag vardapet.") After completing his education, Hovhannes moved to the former Armenian capital city of Ani, where he taught philosophy, mathematics, music, cosmography and grammar. One of his students was the chronicler Samuel of Ani. He also served as the parish priest at the cathedral of Ani. 
For his learnedness, he received the epithets Imastaser ('the Philosopher/Scholar'), Poetikos ('the Poet'), and Sophestos ('the Wise'); the latter appears on the inscription on his gravestone. Hovhannes is said to have been a respected figure at the court of Georgia and among the Georgian clergy. After many years in Ani, Hovhannes returned to Haghbat and headed the school there. Ashot Abrahamian suggests that Hovhannes wrote most of his works in his old age, while at the monastery. He died in Haghbat in 1129. == Works == While Hovhannes Imastaser was recognized as a master of Armenian literature, his works acquired wider publicity only in the 19th century when they were published by Abbot Ghevont Alishan, a member of the Mekhitarist Congregation in Venice. Imastaser's innovative approach to literature, for which he is often referred to as a key representative of the medieval Armenian literary renaissance, is fully demonstrated in his poem Ban Imastutian (Discourse on wisdom). In the poem, written as a dialogue between the author and a blackbird, the bird symbolizes nature, which, per the author, is the main inspiration behind art. In Imastaser's time, artistic inspiration was usually attributed to divine reasons. As a hymnologist, Imastaser wrote several important sharakans (hymns): Tagh Harutean (Ode to the Resurrection), Paytsaratsan Aysor (They brightened on this day), Anskizbn Bann Astvatz (God, the infinite word), Anchareli Bann Astavatz (God, the inexpressible word). The latter two are acrostic compositions, each encompassing within their ten stanzas thirty-six letters of the Armenian alphabet. In them, Imastaser glorifies heroes and martyrs who sacrificed their lives defending the Armenian homeland and their Christian faith. Imastaser also introduced another patriotic theme to Armenian literature and music: emigration. In his hymns Imastaser prays to God so that Armenians who left their country could find strength to return home.
Hovhannes Imastaser also contributed to the standardization of the Armenian prayer book and Psalter. Hovhannes Imastaser's work in mathematics is represented by the volume Haghaks Ankiunavor Tvots (Concerning Polygonal Numbers). This work indicates a profound knowledge of all important ancient and medieval mathematicians, including Pythagoras, Euclid and Aristotle. Hovhannes Imastaser translated into Armenian the works of the following classical scholars: Philo of Alexandria, Dionysius the Areopagite, Gregory of Nyssa, Porphyry, and, as mentioned, Aristotle and Euclid. In 1084, Hovhannes Imastaser became involved in the project of developing the so-called Minor Armenian Calendar, which included all 365 days plus one additional day. Eventually, his work on calendars led to the invention of a perpetual or eternal calendar. Hovhannes Imastaser recognized the importance of the empirical method in science. 150 years before Roger Bacon, he noted: "Without experimentation, no opinion can be considered probable and acceptable; only experiment produces confirmation and certainty." == Notes == == References == == Further reading == Cowe, S. Peter (1994–1995). "Armenological Paradigms and Yovhannēs Sarkawag's 'Discourse on Wisdom'—Philosophical Underpinnings of an Armenian Renaissance?". Revue des Études Arméniennes. 25: 125–56. doi:10.2143/REA.25.0.2003778. Hovhannes Sarkavag Imastaser; Yeghiazaryan, Vano (2017). Tagher ev hogevor erger [Taghs and spiritual songs] (in Armenian). Yerevan: Armav hratarakchʻutʻyun. ISBN 978-9939-863-49-8.
|
Wikipedia:How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension#0
|
The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve-like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot. The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size. The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount—that is, to measure it within a certain degree of uncertainty. The more precise the measurement device, the closer results will be to the true length of the edge. With a coastline, however, measuring in finer and finer detail does not improve the accuracy; it merely adds to the total. Unlike with the metal bar, it is impossible even in theory to obtain an exact value for the length of a coastline. In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces, whereby the area of a surface varies depending on the measurement resolution. 
== Discovery == Shortly before 1951, Lewis Fry Richardson, in researching the possible effect of border lengths on the probability of war, noticed that the Portuguese reported their measured border with Spain to be 987 km (613 mi), but the Spanish reported it as 1,214 km (754 mi). This was the beginning of the coastline problem: a mathematical uncertainty inherent in the measurement of irregular boundaries. The prevailing method of estimating the length of a border (or coastline) was to lay out n equal straight-line segments of length l with dividers on a map or aerial photograph, with each end of a segment on the boundary. Investigating the discrepancies in border estimation, Richardson discovered what is now termed the "Richardson effect": the sum of the segments increases monotonically as the common length of the segments decreases. In effect, the shorter the ruler, the longer the measured border; the Spanish and Portuguese geographers were simply using different-length rulers. The result most astounding to Richardson was that, under certain circumstances, as l approaches zero, the length of the coastline approaches infinity. Richardson had believed, based on Euclidean geometry, that a coastline would approach a fixed length, as do similar estimations of regular geometric figures. For example, the perimeter of a regular polygon inscribed in a circle approaches the circumference as the number of sides increases (and the length of each side decreases). In geometric measure theory, a smooth curve such as the circle that can be approximated by small straight segments with a definite limit is termed a rectifiable curve. Benoit Mandelbrot devised an alternative measure of length for coastlines, the Hausdorff dimension, and showed that it does not depend on the length l in the same way. == Mathematical aspects == The basic concept of length originates from Euclidean distance.
In Euclidean geometry, a straight line represents the shortest distance between two points. This line has only one length. On the surface of a sphere, this is replaced by the geodesic length (also called the great circle length), which is measured along the surface curve that exists in the plane containing both endpoints and the center of the sphere. The length of basic curves is more complicated but can also be calculated. Measuring with rulers, one can approximate the length of a curve by summing the lengths of the straight lines which connect the points. Using a few straight lines to approximate the length of a curve will produce an estimate lower than the true length; when increasingly short (and thus more numerous) lines are used, the sum approaches the curve's true length, and that length is the least upper bound or supremum of all such approximations. A precise value for this length can be found using calculus, the branch of mathematics enabling the calculation of infinitesimally small distances. The following animation illustrates how a smooth curve can be meaningfully assigned a precise length. Not all curves can be measured in this way. A fractal is, by definition, a curve whose perceived complexity does not decrease with measurement scale. Whereas approximations of a smooth curve tend to a single value as measurement precision increases, the measured value for a fractal does not converge. As the length of a fractal curve always diverges to infinity, if one were to measure a coastline with infinite or near-infinite resolution, the length of the infinitely short kinks in the coastline would add up to infinity. However, this figure relies on the assumption that space can be subdivided into infinitesimal sections.
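The convergence of chord sums for a rectifiable curve can be checked numerically. The sketch below (an illustration, not from the article; the function name is ours) inscribes polygons in the unit circle and watches the chord sums approach the circumference 2π:

```python
import math

def inscribed_perimeter(n):
    """Sum of the chord lengths of a regular n-gon inscribed in the unit circle."""
    pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
           for k in range(n + 1)]  # repeat the start point to close the loop
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# The chord sums increase monotonically toward the circumference 2*pi:
for n in (6, 24, 96, 10_000):
    print(n, inscribed_perimeter(n))
```

The supremum of these approximations is the circle's true length, exactly the behaviour that fails for a fractal curve.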
The truth value of this assumption—which underlies Euclidean geometry and serves as a useful model in everyday measurement—is a matter of philosophical speculation, and may or may not reflect the changing realities of "space" and "distance" on the atomic level (approximately the scale of a nanometer). Coastlines are less definite in their construction than idealized fractals such as the Mandelbrot set because they are formed by various natural events that create patterns in statistically random ways, whereas idealized fractals are formed through repeated iterations of simple, formulaic sequences. === Measuring a coastline === More than a decade after Richardson completed his work, Benoit Mandelbrot developed a new branch of mathematics, fractal geometry, to describe just such non-rectifiable complexes in nature as the infinite coastline. His own definition of the new figure serving as the basis for his study is: I coined fractal from the Latin adjective fractus. The corresponding Latin verb frangere means "to break:" to create irregular fragments. It is therefore sensible ... that, in addition to "fragmented" ... fractus should also mean "irregular". In "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", published on 5 May 1967, Mandelbrot discusses self-similar curves that have Hausdorff dimension between 1 and 2. These curves are examples of fractals, although Mandelbrot does not use this term in the paper, as he did not coin it until 1975. The paper is one of Mandelbrot's first publications on the topic of fractals. Empirical evidence suggests that the smaller the increment of measurement, the longer the measured length becomes. If one were to measure a stretch of coastline with a yardstick, one would get a shorter result than if the same stretch were measured with a 1-foot (30 cm) ruler. This is because one would be laying the ruler along a more curvilinear route than that followed by the yardstick. 
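The yardstick-versus-ruler effect can be illustrated with a divider walk along a Koch curve, a standard idealized stand-in for a coastline. The sketch below is illustrative only (the function names and the vertex-snapping divider algorithm are our simplifying assumptions, not Mandelbrot's procedure):

```python
import math

def koch_points(depth):
    """Vertices of the Koch curve built on the unit segment."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        new = []
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
            a = (x1 + dx, y1 + dy)
            b = (x1 + 2 * dx, y1 + 2 * dy)
            # Apex of the bump: the one-third vector rotated by +60 degrees.
            peak = (a[0] + 0.5 * dx - math.sqrt(3) / 2 * dy,
                    a[1] + math.sqrt(3) / 2 * dx + 0.5 * dy)
            new.extend([(x1, y1), a, peak, b])
        new.append(pts[-1])
        pts = new
    return pts

def divider_length(pts, eps):
    """Walk the polyline with a divider of opening eps, snapping to vertices."""
    anchor = pts[0]
    count = 0
    for p in pts[1:]:
        if math.dist(anchor, p) >= eps:
            count += 1
            anchor = p
    return count * eps + math.dist(anchor, pts[-1])  # add the final partial step

coast = koch_points(7)
for eps in (0.3, 0.1, 0.03, 0.01):
    print(eps, round(divider_length(coast, eps), 3))
```

Each smaller divider opening yields a strictly longer measured "coastline", even though the endpoints of the curve are only one unit apart.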
The empirical evidence suggests a rule which, if extrapolated, shows that the measured length increases without limit as the measurement scale decreases towards zero. This discussion implies that it is meaningless to talk about the length of a coastline; some other means of quantifying coastlines are needed. Mandelbrot then describes various mathematical curves, related to the Koch snowflake, which are defined in such a way that they are strictly self-similar. Mandelbrot shows how to calculate the Hausdorff dimension of each of these curves, each of which has a dimension D between 1 and 2 (he also mentions but does not give a construction for the space-filling Peano curve, which has dimension exactly 2). The paper does not claim that any coastline or geographic border actually has fractional dimension. Instead, it notes that Richardson's empirical law is compatible with the idea that geographic curves, such as coastlines, can be modelled by random self-similar figures of fractional dimension. Near the end of the paper Mandelbrot briefly discusses how one might approach the study of fractal-like objects in nature that look random rather than regular. For this he defines statistically self-similar figures and says that these are encountered in nature. The paper is important because it is a "turning point" in Mandelbrot's early thinking on fractals. It is an example of the linking of mathematical objects with natural forms that was a theme of much of his later work. A key property of some fractals is self-similarity; that is, at any scale the same general configuration appears. A coastline is perceived as bays alternating with promontories. If a given coastline has this hypothetical property of self-similarity, then no matter how greatly any one small section of coastline is magnified, a similar pattern of smaller bays and promontories superimposed on larger bays and promontories appears, right down to the grains of sand.
At that scale the coastline appears as a momentarily shifting, potentially infinitely long thread with a stochastic arrangement of bays and promontories formed from the small objects at hand. In such an environment (as opposed to smooth curves) Mandelbrot asserts "coastline length turns out to be an elusive notion that slips between the fingers of those who want to grasp it". There are different kinds of fractals. A coastline with the stated property is in "a first category of fractals, namely curves whose fractal dimension is greater than 1". That last statement represents an extension by Mandelbrot of Richardson's thought. Mandelbrot's statement of the Richardson effect is: L ( ε ) = F ε 1 − D {\displaystyle L(\varepsilon )=F\varepsilon ^{1-D}} where L, coastline length, a function of the measurement unit ε, is approximated by the expression. F is a constant, and D is a parameter that Richardson found depended on the coastline approximated by L. He gave no theoretical explanation, but Mandelbrot identified D with a non-integer form of the Hausdorff dimension, later the fractal dimension. Rearranging the expression yields L ( ε ) = ( F ε − D ) ε {\displaystyle L(\varepsilon )=\left(F\varepsilon ^{-D}\right)\varepsilon } where F ε − D {\displaystyle F\varepsilon ^{-D}} must be the number of units ε required to obtain L. The broken line measuring a coast does not extend in one direction nor does it represent an area, but is intermediate between the two and can be thought of as a band of width 2ε. D is its fractal dimension, ranging between 1 and 2 (and typically less than 1.5). More broken coastlines have greater D, and therefore L is longer for the same ε. D is approximately 1.02 for the coastline of South Africa, and approximately 1.25 for the west coast of Great Britain. For lake shorelines, the typical value of D is 1.28.
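In practice, D is estimated from a Richardson plot: the slope of log L against log ε is 1 − D. The sketch below (illustrative only) applies that regression to the Koch curve, where a divider of opening ε_k = 3^−k measures exactly 4^k steps, so L_k = (4/3)^k and the fit recovers D = log 4 / log 3 ≈ 1.2619:

```python
import math

# Exact Koch-curve divider data: eps_k = 3**-k gives L_k = (4/3)**k.
ks = range(1, 9)
log_eps = [-k * math.log(3) for k in ks]
log_len = [k * math.log(4 / 3) for k in ks]

# Least-squares slope of log L against log eps; Richardson's law says
# log L = log F + (1 - D) * log eps, so D = 1 - slope.
n = len(log_eps)
mean_x = sum(log_eps) / n
mean_y = sum(log_len) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_eps, log_len))
         / sum((x - mean_x) ** 2 for x in log_eps))
D = 1 - slope
print(D)  # log 4 / log 3, approximately 1.2619
```

For an empirical coastline the data points scatter around the regression line instead of lying on it exactly, but the same fit yields the quoted values such as D ≈ 1.25 for the west coast of Great Britain.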
== Solutions == The coastline paradox has real-world consequences, from trivial matters such as which river, beach, border, or coastline is the longest (the first two records being a matter of fierce debate) to demarcating territorial boundaries, property rights, erosion monitoring, and the theoretical implications of geometric modelling. Several solutions have been proposed. These resolve the practical difficulties by fixing a definition of "coastline," establishing the physical limits of the features to be considered, and calculating the length within those limits to a meaningful level of precision. Such conventions resolve the paradox for all practical applications, while it persists as a theoretical/mathematical concept within our models. == Criticisms and misunderstandings == The coastline paradox is often criticized because coastlines are inherently finite, real features in space, and, therefore, there is a quantifiable answer to their length. The comparison to fractals, while useful as a metaphor to explain the problem, is criticized as not fully accurate, as coastlines are not self-repeating and are fundamentally finite. The source of the paradox is based on the way reality is measured and is most relevant when attempting to use those measurements to create cartographic models of coasts. Modern technology, such as LiDAR, Global Positioning Systems and Geographic Information Systems, has made addressing the paradox much easier; however, the limitations of survey measurements and the vector software persist. Critics argue that these problems are more theoretical and not practical considerations for planners.
Alternatively, the concept of a coast "line" is in itself a human construct that depends on the assignment of a tidal datum, which is not flat relative to any vertical datum; thus any line constructed between land and sea somewhere in the intertidal zone is semi-arbitrary and in constant flux. A wide range of "shorelines" may therefore be constructed for varied analytical purposes using different data sources and methodologies, each with a different length. This may complicate the quantification of ecosystem services using methods that depend on shoreline length. == See also == Alaska boundary dispute – Alaskan and Canadian claims to the Alaskan Panhandle differed greatly, based on competing interpretations of the ambiguous phrase setting the border at "a line parallel to the windings of the coast", applied to the fjord-dense region. Fractal dimension Gabriel's horn, a geometric figure with infinite surface area but finite volume List of countries by length of coastline Scale (geography) Paradox of the heap Staircase paradox, similar paradox where a straight segment approximation converges to a different value Zeno's paradoxes List of longest beaches List of river systems by length List of countries and territories by number of land borders == References == === Citations === === Sources === == External links == "Coastlines" at Fractal Geometry (ed. Michael Frame, Benoit Mandelbrot, and Nial Neger; maintained for Math 190a at Yale University) The Atlas of Canada – Coastline and Shoreline NOAA GeoZone Blog on Digital Coast What Is The Coastline Paradox? – YouTube video by Veritasium
|
Wikipedia:Howard Elton Lacey#0
|
Howard Elton Lacey (February 9, 1937 in Leakey, Texas – June 21, 2013) was an American mathematician who studied analysis. After beginning his undergraduate studies at Texas A&M University, Lacey graduated from Abilene Christian University with a bachelor's degree in mathematics in 1959 and a master's degree in 1960. He completed his Ph.D. in 1963 at New Mexico State University; his dissertation, Generalized Compact Operators in Locally Convex Spaces, was supervised by Edward Thorp. He returned to Abilene Christian University as an assistant professor, then in 1964 moved to the University of Texas at Austin. In 1980 he returned to Texas A&M University as a professor and chair of the mathematics department, a position he held for eleven years. He was Associate Dean of Science for one year and retired in 2002. His specialty was Banach spaces. He also collaborated extensively with Polish scientists, spending 1972/73 as a Fulbright Fellow at the Polish Academy of Sciences in Warsaw. He also worked in applied research at the White Sands Missile Range and at the Manned Spacecraft Center in Houston. He married Bonnie Brown in 1958; they met while he was on a summer job with the Army Corps of Engineers in Mississippi in 1957. == References ==
|
Wikipedia:Howard Smith (diplomat)#0
|
Sir Howard Frank Trayton Smith (15 October 1919 – 7 May 1996) was a British diplomat who served as Director General of MI5 from 1978 to 1981. == Career == Smith was born and raised in Wembley. He was educated at Regent Street Polytechnic and Sidney Sussex College, Cambridge, where he won an exhibition to read mathematics and was a contemporary of Asa Briggs, with whom he played chess. At the outset of the Second World War he was drafted to work at Bletchley Park as a codebreaker, recommending his friend Briggs to fellow Cambridge mathematician Gordon Welchman for service in Hut 6. From 1946 to 1950, Smith served in the Foreign Service in Oslo and Washington. In 1953, he was Consul in Caracas; between 1961 and 1963, he was Counsellor of State in Moscow. Returning to London, he was the Head of the Department at the Foreign Office dealing with the Soviet Union and Eastern Europe for the next five years. He then served as Ambassador to Czechoslovakia from 1968 to 1971 and later served as Ambassador to the Soviet Union in Moscow from 1976 to 1978. In 1978 Smith was unexpectedly appointed Director General (DG) of MI5, the United Kingdom's internal security service, by Prime Minister James Callaghan, serving until March 1981. He was the first DG from a background in the diplomatic service. Callaghan later explained that he wanted 'to bring someone into the office from a different culture'. == Honours == Companion of the Order of St Michael and St George, 1966 Knight Commander of the Order of St Michael and St George, 1976 Knight Grand Cross of the Order of St Michael and St George, 1981 == References ==
|
Wikipedia:Hrvoje Kraljević#0
|
Kraljević and Kraljevič (sometimes written Kraljevic or Kraljevich) is a surname of Croatian and Serbian origin. Notable people with the surname include: Blaž Kraljević (1947–1992), Bosnian Croat paramilitary leader Davor Kraljević (born 1978), Croatian footballer Hrvoje Kraljević (born 1944), Croatian mathematician and a former politician Joviša Kraljevič (born 1976), Slovenian footballer Marko Kraljević (footballer), Croatian footballer Miroslav Kraljević (1885–1913), Croatian painter As a title, it has been used to refer to: Prince Marko (c. 1335 – 1395), also known as Marko Kraljević, de jure the Serbian king from 1371 to 1395
|
Wikipedia:Hua's identity#0
|
In algebra, Hua's identity, named after Hua Luogeng, states that for any elements a, b in a division ring, a − ( a − 1 + ( b − 1 − a ) − 1 ) − 1 = a b a {\displaystyle a-\left(a^{-1}+\left(b^{-1}-a\right)^{-1}\right)^{-1}=aba} whenever a b ≠ 0 , 1 {\displaystyle ab\neq 0,1} . Replacing b {\displaystyle b} with − b − 1 {\displaystyle -b^{-1}} gives another equivalent form of the identity: ( a + a b − 1 a ) − 1 + ( a + b ) − 1 = a − 1 . {\displaystyle \left(a+ab^{-1}a\right)^{-1}+(a+b)^{-1}=a^{-1}.} == Hua's theorem == The identity is used in a proof of Hua's theorem, which states that if σ {\displaystyle \sigma } is a function between division rings satisfying σ ( a + b ) = σ ( a ) + σ ( b ) , σ ( 1 ) = 1 , σ ( a − 1 ) = σ ( a ) − 1 , {\displaystyle \sigma (a+b)=\sigma (a)+\sigma (b),\quad \sigma (1)=1,\quad \sigma (a^{-1})=\sigma (a)^{-1},} then σ {\displaystyle \sigma } is a homomorphism or an antihomomorphism. This theorem is connected to the fundamental theorem of projective geometry. == Proof of the identity == One has ( a − a b a ) ( a − 1 + ( b − 1 − a ) − 1 ) = 1 − a b + a b ( b − 1 − a ) ( b − 1 − a ) − 1 = 1. {\displaystyle (a-aba)\left(a^{-1}+\left(b^{-1}-a\right)^{-1}\right)=1-ab+ab\left(b^{-1}-a\right)\left(b^{-1}-a\right)^{-1}=1.} The proof is valid in any ring as long as a , b , a b − 1 {\displaystyle a,b,ab-1} are units. == References ==
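Since the quaternions form a division ring, Hua's identity can be checked numerically in a genuinely noncommutative setting. The minimal quaternion class below is an illustrative sketch of ours, not a standard library API:

```python
class Quaternion:
    """Minimal quaternion arithmetic; the quaternions form a division ring."""

    def __init__(self, w, x, y, z):
        self.w, self.x, self.y, self.z = w, x, y, z

    def __add__(self, o):
        return Quaternion(self.w + o.w, self.x + o.x, self.y + o.y, self.z + o.z)

    def __sub__(self, o):
        return Quaternion(self.w - o.w, self.x - o.x, self.y - o.y, self.z - o.z)

    def __mul__(self, o):  # Hamilton product (non-commutative)
        return Quaternion(
            self.w * o.w - self.x * o.x - self.y * o.y - self.z * o.z,
            self.w * o.x + self.x * o.w + self.y * o.z - self.z * o.y,
            self.w * o.y - self.x * o.z + self.y * o.w + self.z * o.x,
            self.w * o.z + self.x * o.y - self.y * o.x + self.z * o.w,
        )

    def inv(self):  # conjugate divided by the squared norm
        n2 = self.w ** 2 + self.x ** 2 + self.y ** 2 + self.z ** 2
        return Quaternion(self.w / n2, -self.x / n2, -self.y / n2, -self.z / n2)

    def close_to(self, o, tol=1e-9):
        return all(abs(s - t) < tol for s, t in
                   zip((self.w, self.x, self.y, self.z), (o.w, o.x, o.y, o.z)))

a = Quaternion(1.0, 2.0, -1.0, 0.5)
b = Quaternion(0.5, -1.0, 3.0, 2.0)

# Left side a - (a^-1 + (b^-1 - a)^-1)^-1 equals the right side aba:
lhs = a - (a.inv() + (b.inv() - a).inv()).inv()
rhs = a * b * a
assert lhs.close_to(rhs)
```

The sample values are chosen so that all the required inverses exist; note that a*b and b*a differ here, so the check exercises the identity beyond the commutative case.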
|
Wikipedia:Hubbard–Stratonovich transformation#0
|
The Hubbard–Stratonovich (HS) transformation is an exact mathematical transformation invented by Russian physicist Ruslan L. Stratonovich and popularized by British physicist John Hubbard. It is used to convert a particle theory into its respective field theory by linearizing the density operator in the many-body interaction term of the Hamiltonian and introducing an auxiliary scalar field. It is defined via the integral identity exp ( − a 2 x 2 ) = 1 2 π a ∫ − ∞ ∞ exp ( − y 2 2 a − i x y ) d y , {\displaystyle \exp \left(-{\frac {a}{2}}x^{2}\right)={\sqrt {\frac {1}{2\pi a}}}\;\int _{-\infty }^{\infty }\exp \left(-{\frac {y^{2}}{2a}}-ixy\right)\,dy,} where the real constant a > 0 {\displaystyle a>0} . The basic idea of the HS transformation is to reformulate a system of particles interacting through two-body potentials into a system of independent particles interacting with a fluctuating field. The procedure is widely used in polymer physics, classical particle physics, spin glass theory, and electronic structure theory. == Calculation of resulting field theories == The resulting field theories are well-suited for the application of effective approximation techniques, like the mean field approximation. A major difficulty arising in the simulation with such field theories is their highly oscillatory nature in case of strong interactions, which leads to the well-known numerical sign problem. The problem originates from the repulsive part of the interaction potential, which entails the introduction of a complex factor via the HS transformation.
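The integral identity can be verified numerically for sample values of a and x. The sketch below (an illustration of ours, not part of any derivation in the article) uses a plain midpoint rule on a truncated interval, which is ample because the Gaussian integrand decays rapidly:

```python
import cmath
import math

# Check exp(-a*x^2/2) = sqrt(1/(2*pi*a)) * Int exp(-y^2/(2a) - i*x*y) dy
# for one sample (a, x), with the infinite integral truncated to [-40, 40].
a, x = 0.7, 1.3
lo, hi, n = -40.0, 40.0, 100_000
dy = (hi - lo) / n

integral = 0j
for k in range(n):
    y = lo + (k + 0.5) * dy          # midpoint of the k-th subinterval
    integral += cmath.exp(-y * y / (2 * a) - 1j * x * y)
integral *= dy

rhs = math.sqrt(1 / (2 * math.pi * a)) * integral
lhs = math.exp(-a * x * x / 2)
print(lhs, rhs)  # rhs.real matches lhs; rhs.imag is negligible
```

The oscillatory factor exp(-i*x*y) in the integrand is the complex factor mentioned above; for larger x the integrand oscillates faster, a toy version of the cancellations behind the sign problem.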
|
Wikipedia:Hubert Bray#0
|
Hubert Lewis Bray is a mathematician and differential geometer. He is known for having proved the Riemannian Penrose inequality. He works as professor of mathematics and physics at Duke University. == Early life and education == He earned his B.A. and B.S. degrees in Mathematics and Physics in 1992 from Rice University and obtained his Ph.D. in 1997 from Stanford University under the mentorship of Richard Melvin Schoen. == Career == He was an invited speaker at the 2002 International Congress of Mathematicians in Beijing (in the section of differential geometry). He is one of the inaugural fellows of the American Mathematical Society. Bray was appointed professor of mathematics in 2004 and additionally professor of physics in 2019. In 2019, he was appointed director of undergraduate studies of Duke's mathematics department. == Personal life == Hubert is the grandson of Hubert Evelyn Bray, professor of mathematics at Rice University and the first person awarded a Ph.D. by the then Rice Institute. Hubert Bray and his brother Clark Bray share similar educations and jobs: both studied at Rice University (undergraduate) and Stanford University (graduate), and both are professors of mathematics at Duke University. == References ==
|
Wikipedia:Hudde's rules#0
|
Johannes (van Waveren) Hudde (23 April 1628 – 15 April 1704) was a mathematician, burgomaster (mayor) of Amsterdam between 1672 and 1703, and governor of the Dutch East India Company. Hudde initially studied law at the University of Leiden, until he turned to mathematics under the influence of Frans van Schooten. He contributed to the theory of equations in his posthumous De reductione aequationum of 1713, in which he was the first to take literal coefficients in algebra as indifferently positive or negative. In the Latin translation that Van Schooten made of Descartes' La Géométrie, Hudde, together with Johan de Witt and Hendrik van Heuraet, published work of their own. Hudde's contribution consisted of describing an algorithm for simplifying the calculations necessary to determine a double root of a polynomial equation, and establishing two properties of polynomial roots, known as Hudde's rules, that point toward algorithms of calculus. As a "burgemeester" of Amsterdam he ordered that the city canals should be flushed at high tide and that the polluted water of the town "secreten" should be diverted to pits outside the town instead of into the canals. He also promoted hygiene in and around the town's water supply. "Hudde's stones" were marker stones that were used to mark the summer high water level at several points in the city. They later were the foundation for the "NAP", the now Europe-wide system for measuring water levels. == Mathematical work == Hudde studied law at the University of Leiden, but turned to mathematics under the influence of his teacher Frans van Schooten. From 1654 to 1663 he worked under van Schooten. La Géométrie (1637) by René Descartes provided an introduction to analytic geometry in French, whereas Latin was still the international language of science. Schooten and his students including Hudde, Johan de Witt and Hendrik van Heuraet published a Latin translation of La Geometrie in 1659. Each of the students added to the work.
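Hudde's double-root rule states that if r is a double root of a polynomial, then r remains a root after the coefficient of each power x^k is multiplied by the k-th term of an arithmetic progression; choosing the progression 0, 1, 2, … yields x·p′(x), which connects the rule to the derivative. The sketch below is an illustrative modern rendering (function names are ours, not Hudde's notation):

```python
def hudde_transform(coeffs, start=0, step=1):
    """Multiply the coefficient of x^k by the arithmetic-progression term
    start + step*k (Hudde's rule); coeffs[k] is the coefficient of x^k."""
    return [c * (start + step * k) for k, c in enumerate(coeffs)]

def evaluate(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# p(x) = (x - 2)^2 (x + 1) = x^3 - 3x^2 + 4 has a double root at x = 2
# and a simple root at x = -1.
p = [4, 0, -3, 1]
h = hudde_transform(p)  # with progression 0, 1, 2, ... this equals x * p'(x)

assert evaluate(p, 2) == 0 and evaluate(h, 2) == 0    # the double root survives
assert evaluate(p, -1) == 0 and evaluate(h, -1) != 0  # the simple root does not
```

Comparing the roots of p with those of the transformed polynomial therefore isolates repeated roots, which is exactly the simplification Hudde's algorithm provided before derivatives were available.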
Hudde's contribution described Hudde's rules and made a study of maxima and minima. He added to the translation of La Geometrie two papers of his own: Epistola Prima de Reductione Aequationum on algebraic equations, and Epistola Secunda de Maximis et Minimis, in which he described an algorithm for simplifying the calculations necessary to determine a double root of a polynomial equation. Together with René-François de Sluse, Hudde provided general algorithms by which one could routinely construct tangents to curves given by polynomial equations. Hudde corresponded with Baruch Spinoza, Christiaan Huygens, Johann Bernoulli, Isaac Newton and Leibniz. Newton and Leibniz mention Hudde, and especially Hudde's rule, many times and used some of his ideas in their own work on infinitesimal calculus. == See also == History of group theory Mercator series Tangent == References == Karlheinz Haas (1956) "Die mathematischen Arbeiten von Johann Hudde (1628 to 1704) Bürgermeister von Amsterdam", Centaurus 4: 235–84 doi:10.1111/j.1600-0498.1956.tb00477.x J. Hudde (1656) Specilla circularia (circular Lens, in Dutch) == External links == O'Connor, John J.; Robertson, Edmund F., "Johannes Hudde", MacTutor History of Mathematics Archive, University of St Andrews Johannes Hudde at the Mathematics Genealogy Project
|
Wikipedia:Hugh Burkhardt#0
|
Hugh Burkhardt (4 April 1935 – 3 February 2024) was a British theoretical physicist and educational designer. He was Director of The Shell Centre for Mathematical Education at the University of Nottingham, UK from 1976 to 1992 and was the creator of ISDDE, the International Society for Design and Development in Education. == Life and career == Burkhardt had a bachelor's degree in Theoretical Physics from the University of Oxford, and a PhD in Mathematical Physics from the University of Birmingham. He joined the Shell Centre in Nottingham in 1976 as Director and Professor of Mathematical Education, serving until 1992. He subsequently led a series of international education design projects including Balanced Assessment and MARS (the Mathematics Assessment Resource Service) in the US, with visiting appointments at UC Berkeley and at Michigan State University. He was Emeritus Professor of Mathematical Education at the University of Nottingham. In 2005 Burkhardt recognised a need for an international organisation focused on the design and development process used to produce educational materials, and so set up ISDDE. Burkhardt died on 3 February 2024, at the age of 88. == Publications == Burkhardt published articles, papers and book chapters on mathematics education, in particular mathematical modelling, educational design (what Burkhardt termed "engineering research") and education policy. == Awards == Burkhardt and his Shell Centre colleague Malcolm Swan were the first recipients of the Emma Castelnuovo Award for Excellence in the Practice of Mathematics Education, awarded by the International Commission on Mathematical Instruction (ICMI). In 2013 he won the ISDDE Prize for Excellence in Educational Design (affectionately known as the Eddie) for 'his leadership of the Shell Centre for Mathematical Education, his contributions to a large number of its influential products, and the development of its engineering research methodology.'
== References ==
|
Wikipedia:Hugo Duminil-Copin#0
|
Hugo Duminil-Copin (born 26 August 1985) is a French mathematician specializing in probability theory. He was awarded the Fields Medal in 2022. == Biography == The son of a middle school sports teacher and a former dancer who became a primary school teacher, Duminil-Copin grew up in the outer suburbs of Paris, where he played a lot of sports as a child, and initially considered attending a sports-oriented high school to pursue his interest in handball. He decided to attend a school focused on mathematics and science, and enrolled at the Lycée Louis-le-Grand in Paris, then at the École normale supérieure (Paris) and Paris-Sud University. He decided to focus on math instead of physics, because he found the rigour of mathematical proof more satisfying, but developed an interest in percolation theory, which is used in mathematical physics to address issues in statistical mechanics. In 2008, he moved to the University of Geneva to write a PhD thesis under Stanislav Smirnov. Duminil-Copin and Smirnov used percolation theory, which studies the vertices of a lattice and the edges connecting them, to model fluid flow and with it phase transitions. The pair investigated the number of self-avoiding walks that were possible in hexagonal lattices, connecting combinatorics to percolation theory. This was published in the Annals of Mathematics in 2012, the same year in which Duminil-Copin was awarded his PhD at the age of 27. In 2013, after his postdoctorate, Duminil-Copin was appointed assistant professor, then full professor in 2014 at the University of Geneva. In 2016, he became permanent professor at the Institut des Hautes Études Scientifiques. Since 2019, he has been a member of the Academia Europaea. Since 2017, Duminil-Copin has been the principal investigator of the European Research Council – Starting Grant “Critical behavior of lattice models (CriBLam)”. He is a member of the Laboratory Alexander Grothendieck, a CNRS joint research unit with IHES.
Duminil-Copin's work focuses on the mathematical area of statistical physics. He uses ideas from probability theory to study the critical behavior of various models on networks. His work centres on identifying the critical point at which phase transitions occur, what happens at the critical point, and the behaviour of the system just above and below it. He has been working on dependent percolation models, in which the state of an edge in one part of a lattice affects the state of edges elsewhere, to shed light on Ising models, which are used to study phase transitions in ferromagnetic materials. In collaboration with Vincent Beffara in 2011, he produced a formula for determining the critical point of many two-dimensional dependent percolation models. In 2019, along with Vincent Tassion and Aran Raoufi, he published research on the size of connected components in the lattice when the system is just below or above the critical point. They showed that below the critical point, the probability of two vertices lying in the same connected component of the lattice decays exponentially with their separation distance, that a similar result applies above the critical point, and that there is an infinite connected component above the critical point. Duminil-Copin and his associates proved this characteristic, which they called "sharpness", using mathematical analysis and computer science. He has also shed more light on the nature of the phase transition at the critical point itself, and on whether the transition is continuous or discontinuous under various circumstances, with a focus on Potts models. Duminil-Copin is researching conformal invariance in dependent percolation models in two dimensions. He has said that proving the existence of these symmetries would allow a great deal of information about the models to be extracted. 
In 2020, he and his collaborators proved that rotational invariance exists at the boundary between phases in many physical systems. Duminil-Copin was awarded the 2017 New Horizons in Mathematics Prize for his work on Ising type models. Duminil-Copin was awarded the Fields Medal in 2022 for "solving longstanding problems in the probabilistic theory of phase transitions in statistical physics, especially in dimensions three and four". Wendelin Werner credited Duminil-Copin with generalising the field of percolation theory, saying that "Everything is easier, streamlined. The results are stronger. … The whole understanding of these physical phenomena has been transformed." Werner said that Duminil-Copin has solved "Basically half of the main open questions" in percolation theory. Duminil-Copin's hobbies include sports, which he has stated helps him find inspiration when working. He is married and has a daughter. == Awards == 2022 Fields Medal 2019 Dobrushin Prize 2018 Invited speaker (session Probability and session Mathematical Physics) at the International Congress of Mathematicians of Rio de Janeiro 2017 Loève Prize 2017 Jacques Herbrand Prize 2017 New Horizons in Mathematics Prize 2016 Prize of the European Mathematical Society 2015 Early Career Award of the International Association of Mathematical Physics 2015 Peccot Lectures at the Collège de France 2013 Oberwolfach Prize 2012 Rollo Davidson Prize (with Vincent Beffara) 2012 Vacheron-Constantin Prize == Selected publications == Duminil-Copin, Hugo (2016). Graphical representations of lattice spin models : cours Peccot, Collège de France : janvier-février 2015. Spartacus IDH. ISBN 978-2-36693-022-1. OCLC 987302673. Duminil-Copin, Hugo; Smirnov, Stanislav (1 May 2012). "The connective constant of the honeycomb lattice equals 2 + 2 {\displaystyle {\sqrt {2+{\sqrt {2}}}}} ". Annals of Mathematics. 175 (3): 1653–1665. arXiv:1007.0575. doi:10.4007/annals.2012.175.3.14. ISSN 0003-486X. S2CID 59164280. 
Beffara, Vincent; Duminil-Copin, Hugo (18 March 2011). "The self-dual point of the two-dimensional random-cluster model is critical for q ≥ 1 {\displaystyle q\geq 1} " (PDF). Probability Theory and Related Fields. 153 (3–4). Springer Science and Business Media LLC: 511–542. arXiv:1006.5073. doi:10.1007/s00440-011-0353-8. ISSN 0178-8051. S2CID 55391558. Duminil-Copin, Hugo; Hammond, Alan (13 October 2013). "Self-Avoiding Walk is Sub-Ballistic". Communications in Mathematical Physics. 324 (2). Springer Science and Business Media LLC: 401–423. arXiv:1205.0401. Bibcode:2013CMaPh.324..401D. doi:10.1007/s00220-013-1811-1. ISSN 0010-3616. S2CID 10829154. Bauerschmidt, Roland; Duminil-Copin, Hugo; Goodman, Jesse; Slade, Gordon (2012). "Lectures on Self-Avoiding Walks". MathProblems. arXiv:1206.2092. == References == == Further reading == Chang, Kenneth (5 July 2022). "Fields Medals in Mathematics Won by Four Under Age 40". The New York Times. Retrieved 5 July 2022. "Le Français Hugo Duminil-Copin remporte la médaille Fields, avec trois autres mathématiciens". leparisien.fr (in French). 5 July 2022. Retrieved 5 July 2022. Bischoff, Manon (5 July 2022). "Fields-Medaille: Mathematik-Preis für 24-dimensionale Kugeln". Spektrum der Wissenschaft (in German). Retrieved 5 July 2022. == External links == Hugo Duminil-Copin's papers on arxiv "Chemins auto-évitants sur le réseau en nid d'abeille, Lecture at the Institut Henri Poincaré, 19/11/2016". YouTube. 23 November 2016. "Page on IHES website". Hugo Duminil-Copin at the Mathematics Genealogy Project
|
Wikipedia:Hugo Scolnik#0
|
Hugo Scolnik is an Argentine mathematician and computer scientist. He is a professor at the University of Buenos Aires. Scolnik holds an honorary doctorate (PhD honoris causa) from the National University of Cuyo and was awarded a Platinum Konex Award in 2003. == Career == After receiving his PhD, Scolnik returned to Argentina to work on the Latin American World Model, of which he later became the deputy director. He then worked in Brazil at the Federal University of Rio de Janeiro and Cândido Mendes University. He was twice elected vice president at large of IFORS. He was the first director of the Department of Computing at the University of Buenos Aires. == References ==
|
Wikipedia:Hui-Hsiung Kuo#0
|
Hui-Hsiung Kuo (born October 21, 1941) is a Taiwanese-American mathematician, author, and academic. He is Nicholson Professor Emeritus at Louisiana State University and one of the founders of the field of white noise analysis. Kuo is most known for his research in stochastic analysis, with a focus on stochastic integration, white noise theory, and infinite dimensional analysis. He together with T. Hida, J. Potthoff, and L. Streit founded the field of white noise analysis. He has authored several books, including White Noise: An Infinite-Dimensional Calculus, Introduction to Stochastic Integration, Gaussian Measures in Banach Spaces, and White Noise Distribution Theory and served as an editor for books, such as White Noise Analysis: Mathematics and Applications and Stochastic Analysis on Infinite-Dimensional Spaces. He has received the Graduate Teaching Award and Distinguished Faculty Award from Louisiana State University. Kuo is a member of the Association of Quantum Probability and Infinite Dimensional Analysis. He has served on the Program Committee for the IFIP Conference and contributed to the Summer Institutes for the American Mathematical Society. He also serves as an associate editor for the Taiwanese Journal of Mathematics and holds the position of editor-in-chief for the Journal of Stochastic Analysis as well as Communications on Stochastic Analysis. He is an editor of Infinite Dimensional Analysis, Quantum Probability and Related Topics. == Early life and education == Kuo was born in Taichung, Taiwan, the son of Kin-sueh and Fan-Po Kuo. He graduated from Taichung First High School in 1961. He earned his bachelor's degree in mathematics from National Taiwan University in 1965, then earned a Master of Arts in mathematics in 1968 and a Ph.D. in the subject in 1970 from Cornell University. 
== Career == Kuo started his academic career as a Visiting Member at New York University's Courant Institute in 1970 and assumed the role of Assistant Professor at the University of Virginia in 1971. Following this, he became a Visiting Assistant Professor at the State University of New York and held the appointment of Associate Professor at Wayne State University in 1976. In 1977, he joined Louisiana State University as an Associate Professor, later assuming the role of full Professor in 1982 and serving as a Nicholson Professor of Mathematics from 2000 to 2014. Furthermore, he served as the NSC Chair Professor of Mathematics at National Cheng Kung University in 1998 and was appointed as a Fulbright Scholar Professor and Lecturer at the University of Rome Tor Vergata and the University of Tunis El Manar. He has been serving as the Nicholson Professor Emeritus at LSU since 2014. Kuo held various administrative positions, starting with his role as a Foreign Faculty Member at Meijo University's Graduate School, where he served from 1993 to 2013. In 1997, he served as an NSF Official Observer at Tulane University's Mathematical Sciences Research Institute, and he has been serving on the Board of Global Advisors for the International Federation of Nonlinear Analysis since 2010. == Works == === Gaussian measures in Banach spaces === Kuo addressed the question of whether the Lebesgue measure can be extended to separable Hilbert spaces by exploring the conditions necessary for a valid measure through a contradiction involving orthonormal bases and defined balls. In his review, Michael B. Marcus noted that the book is organized to emphasize the abstract Wiener space in its initial section, which entails a complex and multi-step development of the theory, yet the book meticulously guides the reader to ensure the material remains accessible. 
=== White noise distribution theory === Kuo's book White Noise Distribution Theory provided an introduction to the fundamentals of white noise theory and offered insights into its mathematical foundations and practical applications. The book showed the relevance of white noise analysis in the study of stochastic cable equations. === Introduction to stochastic integration === Kuo's book Introduction to Stochastic Integration served as an introductory guide to stochastic integration and Itô calculus, offering information about stochastic processes and stochastic differential equations, with applications in finance, signal processing, and electrical engineering. Ita Cirovic Donev, in a review for The Mathematical Sciences Digital Library, noted that the author does not delve deeply into real-world applications, focusing instead on demonstrating the practical application of theoretical concepts across various fields. == Research == Kuo has focused his research on the theory of stochastic integration, white noise theory, and infinite-dimensional analysis. He has published several research papers on stochastic differential equations featuring adapted integrands and a range of initial conditions, particularly focusing on their examination within the framework of Itô's theory. Kuo has conducted research on solving stochastic differential and infinite-dimensional equations within the framework of white noise spaces. In 1990, he explored the Lévy Laplacian operator, denoted ΔF(ξ), and established two key equivalences connecting the Lévy Laplacian Δ and the Gross Laplacian ΔGF(ξ), while also discussing an application of these findings in the context of white noise calculus. He conducted research on the S-transform, encompassing the characterization of its range through properties like analyticity and growth, with subsequent applications in white noise analysis extending the Gaussian L2-space. 
The research also featured examples of generalized functions and the introduction of new distribution classes marked by bounded growth through iterated exponentials. His research on stochastic differential equations primarily centered on understanding their behavior and exploring potential variations in solutions, exemplified by two illustrative scenarios: one demonstrating that a solution can explode in finite time almost surely, and another showcasing situations where multiple solutions to a stochastic differential equation may coexist. == Awards and honors == 1998 – NSC Chair Professor of Mathematics, Cheng Kung University 2001 – Special Visiting Professor, Hiroshima University 2003 – Distinguished Faculty Award, Louisiana State University 2004 – CIES Fulbright Lecturing Award, Fulbright Scholar Program 2010 – Graduate Teaching Award, LSU College of Basic Sciences 2011 – Fulbright Lecturing Award, Council for International Exchange of Scholars == Bibliography == === Books === Gaussian Measures in Banach Spaces (1975) ISBN 978-3540071733 White Noise: An Infinite Dimensional Calculus, co-authored with T. Hida, J. Potthoff and L. Streit (1993) ISBN 978-0792322337 White Noise Distribution Theory (1996) ISBN 978-0849380778 Introduction to Stochastic Integration (2005) ISBN 978-0387287201 === Selected articles === Kuo, H. H., Obata, N., & Saitô, K. (1990). Lévy Laplacian of generalized functions on a nuclear space. Journal of Functional Analysis, 94(1), 74-92. Kuo, H. H. (1992). Lectures on white noise analysis. Soochow J. Math, 18(3), 229-300. Cochran, W. G., Kuo, H. H., & Sengupta, A. (1998). A new class of white noise generalized functions. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 1(01), 43-67. Asai, N., Kubo, I., & Kuo, H. H. (2003). Multiplicative renormalization and generating functions I. Taiwanese Journal of Mathematics, 7(1), 89-101. Accardi, L., Kuo, H. H., & Stan, A. (2007). 
Moments and commutators of probability measures. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 10(04), 591-612. Kubo, I., Kuo, H. H., & Namli, S. (2007). The characterization of a class of probability measures by multiplicative renormalization. Communications on Stochastic Analysis, 1(3), 8. Ayed, W., & Kuo, H. H. (2008). An extension of the Itô integral. Communications on Stochastic Analysis, 2(3), 5. Asai, N., Kubo, I., & Kuo, H. H. (2013). The Brenke type generating functions and explicit forms of MRM-triples by means of q-hypergeometric series. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 16(02), 1350010. Kuo, H. H., Shrestha, P., & Sinha, S. (2021). An intrinsic proof of an extension of Itô's isometry for anticipating stochastic integrals. Journal of Stochastic Analysis, 2(4), 8. == References ==
|
Wikipedia:Hundred Fowls Problem#0
|
The Hundred Fowls Problem is a problem first discussed in the fifth century CE Chinese mathematics text Zhang Qiujian suanjing (The Mathematical Classic of Zhang Qiujian), a book of mathematical problems written by Zhang Qiujian. It is one of the best known examples of indeterminate problems in the early history of mathematics. The problem appears as the final problem in Zhang Qiujian suanjing (Problem 38 in Chapter 3). However, the problem and its variants have appeared in the medieval mathematical literature of India, Europe and the Arab world. The name "Hundred Fowls Problem" is due to the Belgian historian Louis van Hee. == Problem statement == The Hundred Fowls Problem as presented in Zhang Qiujian suanjing can be translated as follows: "Now one cock is worth 5 qian, one hen 3 qian and 3 chicks 1 qian. It is required to buy 100 fowls with 100 qian. In each case, find the number of cocks, hens and chicks bought." == Mathematical formulation == Let x be the number of cocks, y the number of hens, and z the number of chicks; then the problem is to find x, y and z satisfying the following equations: x + y + z = 100 5x + 3y + z/3 = 100 Obviously, only non-negative integer values are acceptable. Expressing y and z in terms of x we get y = 25 − (7/4)x z = 75 + (3/4)x Since x, y and z must all be integers, the expression for y shows that x must be a multiple of 4. Hence the general solution of the system of equations can be expressed using an integer parameter t as follows: x = 4t y = 25 − 7t z = 75 + 3t Since y should be a non-negative integer, the only possible values of t are 0, 1, 2 and 3. So the complete set of solutions is given by (x,y,z) = (0,25,75), (4,18,78), (8,11,81), (12,4,84), of which the last three are given in Zhang Qiujian suanjing. However, no general method for solving such problems is indicated, leading to the suspicion that the solutions were obtained by trial and error. 
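The enumeration above is easy to confirm by brute force. A minimal Python sketch (the function name and the use of exact fractions are illustrative choices, not from the source; exact arithmetic avoids rounding errors in the z/3 term):

```python
from fractions import Fraction

def hundred_fowls(a=5, b=3, c=Fraction(1, 3), fowls=100, money=100):
    """Enumerate non-negative integer solutions of
    x + y + z = fowls and a*x + b*y + c*z = money."""
    solutions = []
    for x in range(fowls + 1):
        for y in range(fowls + 1 - x):
            z = fowls - x - y           # forced by the head-count equation
            if a * x + b * y + c * z == money:
                solutions.append((x, y, z))
    return solutions

# The four solutions, including the (0, 25, 75) case omitted by Zhang Qiujian:
print(hundred_fowls())  # [(0, 25, 75), (4, 18, 78), (8, 11, 81), (12, 4, 84)]
```

The same function solves the general d-fowl variants by passing different prices a, b, c.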
The Hundred Fowls Problem found in Zhang Qiujian suanjing is a special case of the general problem of finding integer solutions of the following system of equations: x + y + z = d ax + by + cz = d Any problem of this type is sometimes referred to as a "Hundred Fowls problem". == Variations == Some variants of the Hundred Fowls Problem have appeared in the mathematical literature of several cultures. In the following, we present a few sample problems discussed in these cultures. === Indian mathematics === Mahavira's Ganita-sara-sangraha contains the following problem: Pigeons are sold at the rate of 5 for 3, sarasa-birds at the rate of 7 for 5, swans at the rate of 9 for 7, and peacocks at the rate of 3 for 9 (panas). A certain man was told to bring 100 birds for 100 panas. What does he give for each of the various kinds of birds he buys? The Bakshali manuscript gives the problem of solving the following equations: x + y + z = 20 3x + (3/2)y + (1/2)z = 20 === Medieval Europe === The English mathematician Alcuin of York (c. 735 – 19 May 804 AD) stated seven problems similar to the Hundred Fowls Problem in his Propositiones ad acuendos iuvenes. Here is a typical problem: If 100 bushels of corn be distributed among 100 people such that each man gets 3 bushels, each woman 2 bushels and each child half a bushel, then how many men, women and children were there? === Arabian mathematics === Abu Kamil (c. 850 – 930 CE) considered non-negative integer solutions of the following equations: x + y + z = 100 3x + (1/20)y + (1/3)z = 100. == References ==
|
Wikipedia:Hunter Snevily#0
|
Hunter Snevily (1956–2013) was an American mathematician with expertise and contributions in set theory, graph theory, discrete geometry, and Ramsey theory on the integers. == Education and career == Hunter received his undergraduate degree from Emory University in 1981, and his Ph.D. from the University of Illinois Urbana-Champaign under the supervision of Douglas West in 1991. After a postdoctoral fellowship at Caltech, where he mentored many students, Hunter took a faculty position at the University of Idaho in 1993, where he was a professor until 2010. He retired early while battling Parkinson's disease, but continued research in mathematics until his last days. == Mathematics research == The following are some of Hunter's most important contributions: Hunter formulated a conjecture (1991) bounding the size of a family of sets under intersection constraints. He conjectured that if L {\displaystyle {\mathcal {L}}} is a set of k {\displaystyle k} positive integers and { A 1 , A 2 , … , A m } {\displaystyle \{A_{1},A_{2},\ldots ,A_{m}\}} is a family of subsets of an n {\displaystyle n} -set satisfying | A i ∩ A j | ∈ L {\displaystyle |A_{i}\cap A_{j}|\in {\mathcal {L}}} whenever i ≠ j {\displaystyle i\neq j} , then m ≤ ∑ i = 0 k ( n − 1 i ) {\displaystyle m\leq \sum _{i=0}^{k}{n-1 \choose i}} . His conjecture was ambitious in that it would beautifully unify classical results of Nicolaas Govert de Bruijn and Paul Erdős (1948), Bose (1949), Majumdar (1953), H. J. Ryser (1968), Frankl and Füredi (1981), and Frankl and Wilson (1981). Hunter finally proved his conjecture in 2003. Hunter made an important contribution to the well-known Chvátal conjecture (1974), which states that every hereditary family F {\displaystyle {\mathcal {F}}} of sets has a largest intersecting subfamily consisting of sets with a common element. Schönheim proved this when the maximal members of F {\displaystyle {\mathcal {F}}} have a common element. 
Vašek Chvátal proved it when there is a linear order on the elements such that { b 1 , b 2 , … , b k } ∈ F {\displaystyle \{b_{1},b_{2},\ldots ,b_{k}\}\in {\mathcal {F}}} implies { a 1 , a 2 , … , a k } ∈ F {\displaystyle \{a_{1},a_{2},\ldots ,a_{k}\}\in {\mathcal {F}}} when a i ≤ b i {\displaystyle a_{i}\leq b_{i}} for 1 ≤ i ≤ k {\displaystyle 1\leq i\leq k} . A family F {\displaystyle {\mathcal {F}}} has x {\displaystyle x} as a dominant element if substituting x {\displaystyle x} for any element of a member of F {\displaystyle {\mathcal {F}}} not containing x {\displaystyle x} yields another member of F {\displaystyle {\mathcal {F}}} . Hunter's 1992 result greatly strengthened both Schönheim's result and Chvátal's result by proving the conjecture for all families having a dominant element; it was major progress on the problem. One of his most cited papers is with Lior Pachter and Bill Voxman on graph pebbling. This paper and Hunter's later paper with Foster added several conjectures on the subject and together have been cited in more than 50 papers. Hunter made important contributions to the snake-in-the-box problem and to the graceful labeling of graphs. One of Hunter's conjectures (1999) became known as Snevily's Conjecture: Given an abelian group G {\displaystyle G} of odd order, and subsets { a 1 , a 2 , … , a k } {\displaystyle \{a_{1},a_{2},\ldots ,a_{k}\}} and { b 1 , b 2 , … , b k } {\displaystyle \{b_{1},b_{2},\ldots ,b_{k}\}} of G {\displaystyle G} , there exists a permutation π {\displaystyle \pi } of [ k ] {\displaystyle [k]} such that a 1 + b π ( 1 ) , a 2 + b π ( 2 ) , … , a k + b π ( k ) {\displaystyle a_{1}+b_{\pi (1)},a_{2}+b_{\pi (2)},\ldots ,a_{k}+b_{\pi (k)}} are distinct. Noga Alon proved this for cyclic groups of prime order. Dasgupta et al. (2001) proved it for all cyclic groups. Finally, after a decade, the conjecture was proved for all groups by the young mathematician Arsovski. 
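For small cyclic groups, Snevily's Conjecture can be verified exhaustively. A minimal Python sketch (the function name is illustrative, not from the source; the even-order example shows why odd order is essential in the statement):

```python
from itertools import combinations, permutations

def snevily_holds(n, k):
    """Check Snevily's Conjecture in the cyclic group Z_n: for every pair
    of k-subsets A, B, some permutation pi of B makes the sums
    a_1 + b_pi(1), ..., a_k + b_pi(k) pairwise distinct mod n."""
    for A in combinations(range(n), k):
        for B in combinations(range(n), k):
            if not any(len({(a + b) % n for a, b in zip(A, perm)}) == k
                       for perm in permutations(B)):
                return False  # counterexample: no permutation gives distinct sums
    return True

print(snevily_holds(5, 3))  # True: Z_5 has odd order
print(snevily_holds(4, 2))  # False: fails for A = B = {0, 2} in Z_4
```

The Z_4 counterexample (A = B = {0, 2} yields sums {0, 0} or {2, 2} under both permutations) is the standard illustration of why even-order groups are excluded.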
Terence Tao devoted a section to Snevily's Conjecture in Additive Combinatorics, his well-known book with Van Vu. Hunter collaborated most with his long-term friend André Kézdy. After retirement, he became friends with Tanbir Ahmed and explored experimental mathematics, which resulted in several publications. == References ==
|
Wikipedia:Hurst exponent#0
|
The Hurst exponent is used as a measure of long-term memory of time series. It relates to the autocorrelations of the time series, and the rate at which these decrease as the lag between pairs of values increases. Studies involving the Hurst exponent were originally developed in hydrology for the practical matter of determining optimum dam sizing for the Nile river's volatile rain and drought conditions that had been observed over a long period of time. The name "Hurst exponent", or "Hurst coefficient", derives from Harold Edwin Hurst (1880–1978), who was the lead researcher in these studies; the use of the standard notation H for the coefficient also relates to his name. In fractal geometry, the generalized Hurst exponent has been denoted by H or Hq in honor of both Harold Edwin Hurst and Ludwig Otto Hölder (1859–1937) by Benoît Mandelbrot (1924–2010). H is directly related to fractal dimension, D, and is a measure of a data series' "mild" or "wild" randomness. The Hurst exponent is also referred to as the "index of dependence" or "index of long-range dependence". It quantifies the relative tendency of a time series either to regress strongly to the mean or to cluster in a direction. A value of H in the range 0.5–1 indicates a time series with long-term positive autocorrelation, meaning that the decay in autocorrelation is slower than exponential, following a power law; for the series it means that a high value tends to be followed by another high value and that values far into the future will also tend to be high. A value in the range 0–0.5 indicates a time series with long-term switching between high and low values in adjacent pairs, meaning that a single high value will probably be followed by a low value and that the value after that will tend to be high, with this tendency to switch between high and low values lasting a long time into the future, also following a power law. 
A value of H=0.5 indicates short-memory, with (absolute) autocorrelations decaying exponentially quickly to zero. == Definition == The Hurst exponent, H, is defined in terms of the asymptotic behaviour of the rescaled range as a function of the time span of a time series as follows: E [ R ( n ) S ( n ) ] = C n H as n → ∞ , {\displaystyle \mathbb {E} \left[{\frac {R(n)}{S(n)}}\right]=Cn^{H}{\text{ as }}n\to \infty \,,} where R ( n ) {\displaystyle R(n)} is the range of the first n {\displaystyle n} cumulative deviations from the mean, S ( n ) {\displaystyle S(n)} is the standard deviation of the first n {\displaystyle n} values, E [ x ] {\displaystyle \mathbb {E} \left[x\right]\,} is the expected value, n {\displaystyle n} is the time span of the observation (number of data points in the time series), and C {\displaystyle C} is a constant. == Relation to fractal dimension == For self-similar time series, H is directly related to fractal dimension, D, where 1 < D < 2, such that D = 2 − H. The values of the Hurst exponent vary between 0 and 1, with higher values indicating a smoother trend, less volatility, and less roughness. For more general time series or multi-dimensional processes, the Hurst exponent and fractal dimension can be chosen independently, as the Hurst exponent represents structure over asymptotically longer periods, while the fractal dimension represents structure over asymptotically shorter periods. == Estimating the exponent == A number of estimators of long-range dependence have been proposed in the literature. The oldest and best-known is the so-called rescaled range (R/S) analysis popularized by Mandelbrot and Wallis and based on previous hydrological findings of Hurst. Alternatives include detrended fluctuation analysis (DFA), periodogram regression, aggregated variances, the local Whittle estimator, and wavelet analysis, in both the time and frequency domains. 
=== Rescaled range (R/S) analysis === To estimate the Hurst exponent, one must first estimate the dependence of the rescaled range on the time span n of observation. A time series of full length N is divided into a number of nonoverlapping shorter time series of length n, where n takes values N, N/2, N/4, ... (in the convenient case that N is a power of 2). The average rescaled range is then calculated for each value of n. For each such time series of length n {\displaystyle n} , X = X 1 , X 2 , … , X n {\displaystyle X=X_{1},X_{2},\dots ,X_{n}\,} , the rescaled range is calculated as follows: Calculate the mean; m = 1 n ∑ i = 1 n X i . {\displaystyle m={\frac {1}{n}}\sum _{i=1}^{n}X_{i}\,.} Create a mean-adjusted series; Y t = X t − m for t = 1 , 2 , … , n . {\displaystyle Y_{t}=X_{t}-m\quad {\text{ for }}t=1,2,\dots ,n\,.} Calculate the cumulative deviate series Z {\displaystyle Z} ; Z t = ∑ i = 1 t Y i for t = 1 , 2 , … , n . {\displaystyle Z_{t}=\sum _{i=1}^{t}Y_{i}\quad {\text{ for }}t=1,2,\dots ,n\,.} Compute the range R {\displaystyle R} ; R ( n ) = max ( Z 1 , Z 2 , … , Z n ) − min ( Z 1 , Z 2 , … , Z n ) . {\displaystyle R(n)=\operatorname {max} \left(Z_{1},Z_{2},\dots ,Z_{n}\right)-\operatorname {min} \left(Z_{1},Z_{2},\dots ,Z_{n}\right).} Compute the standard deviation S {\displaystyle S} ; S ( n ) = 1 n ∑ i = 1 n ( X i − m ) 2 . {\displaystyle S(n)={\sqrt {{\frac {1}{n}}\sum _{i=1}^{n}\left(X_{i}-m\right)^{2}}}.} Calculate the rescaled range R ( n ) / S ( n ) {\displaystyle R(n)/S(n)} and average over all the partial time series of length n . {\displaystyle n.} The Hurst exponent is estimated by fitting the power law E [ R ( n ) / S ( n ) ] = C n H {\displaystyle \mathbb {E} [R(n)/S(n)]=Cn^{H}} to the data. This can be done by plotting log [ R ( n ) / S ( n ) ] {\displaystyle \log[R(n)/S(n)]} as a function of log n {\displaystyle \log n} , and fitting a straight line; the slope of the line gives H {\displaystyle H} . 
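The steps above translate directly into code. A minimal NumPy-based sketch (the function names and the halving schedule for n are illustrative choices, not from the source):

```python
import numpy as np

def rescaled_range(X):
    """R/S statistic of one series X (steps 1-6 above)."""
    X = np.asarray(X, dtype=float)
    Y = X - X.mean()              # 2. mean-adjusted series
    Z = np.cumsum(Y)              # 3. cumulative deviate series
    R = Z.max() - Z.min()         # 4. range of cumulative deviations
    S = X.std()                   # 5. standard deviation
    return R / S                  # 6. rescaled range

def hurst_rs(X, min_n=16):
    """Estimate H as the slope of log(R/S) versus log(n), averaging R/S
    over non-overlapping windows of length n = N, N/2, N/4, ..."""
    X = np.asarray(X, dtype=float)
    N = len(X)
    ns, rs = [], []
    n = N
    while n >= min_n:
        windows = [X[i:i + n] for i in range(0, N - n + 1, n)]
        ns.append(n)
        rs.append(np.mean([rescaled_range(w) for w in windows]))
        n //= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))  # near 0.5 for white noise (biased slightly upward)
```

As the next paragraph notes, the raw slope is biased for small n; the Anis–Lloyd correction subtracts the theoretical white-noise R/S values before fitting.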
A more principled approach would be to fit the power law in a maximum-likelihood fashion. Such a graph is called a pox plot. However, this approach is known to produce biased estimates of the power-law exponent. For small n {\displaystyle n} there is a significant deviation from the 0.5 slope. Anis and Lloyd estimated the theoretical (i.e., for white noise) values of the R/S statistic to be: E [ R ( n ) / S ( n ) ] = { Γ ( n − 1 2 ) π Γ ( n 2 ) ∑ i = 1 n − 1 n − i i , for n ≤ 340 1 n π 2 ∑ i = 1 n − 1 n − i i , for n > 340 {\displaystyle \mathbb {E} [R(n)/S(n)]={\begin{cases}{\frac {\Gamma ({\frac {n-1}{2}})}{{\sqrt {\pi }}\Gamma ({\frac {n}{2}})}}\sum \limits _{i=1}^{n-1}{\sqrt {\frac {n-i}{i}}},&{\text{for }}n\leq 340\\{\frac {1}{\sqrt {n{\frac {\pi }{2}}}}}\sum \limits _{i=1}^{n-1}{\sqrt {\frac {n-i}{i}}},&{\text{for }}n>340\end{cases}}} where Γ {\displaystyle \Gamma } is the Euler gamma function. The Anis-Lloyd corrected R/S Hurst exponent is calculated as 0.5 plus the slope of R ( n ) / S ( n ) − E [ R ( n ) / S ( n ) ] {\displaystyle R(n)/S(n)-\mathbb {E} [R(n)/S(n)]} . === Confidence intervals === No asymptotic distribution theory has been derived for most of the Hurst exponent estimators so far. However, Weron used bootstrapping to obtain approximate functional forms for confidence intervals of the two most popular methods, i.e., for the Anis-Lloyd corrected R/S analysis: and for DFA: Here M = log 2 N {\displaystyle M=\log _{2}N} and N {\displaystyle N} is the series length. In both cases only subseries of length n > 50 {\displaystyle n>50} were considered for estimating the Hurst exponent; subseries of smaller length lead to a high variance of the R/S estimates. == Generalized exponent == The basic Hurst exponent can be related to the expected size of changes, as a function of the lag between observations, as measured by E(|Xt+τ−Xt|2). For the generalized form of the coefficient, the exponent here is replaced by a more general term, denoted by q. 
There are a variety of techniques that exist for estimating H, however assessing the accuracy of the estimation can be a complicated issue. Mathematically, in one technique, the Hurst exponent can be estimated such that: H q = H ( q ) , {\displaystyle H_{q}=H(q),} for a time series g ( t ) , t = 1 , 2 , … {\displaystyle g(t),t=1,2,\dots } may be defined by the scaling properties of its structure functions S q {\displaystyle S_{q}} ( τ {\displaystyle \tau } ): S q = ⟨ | g ( t + τ ) − g ( t ) | q ⟩ t ∼ τ q H ( q ) , {\displaystyle S_{q}=\left\langle \left|g(t+\tau )-g(t)\right|^{q}\right\rangle _{t}\sim \tau ^{qH(q)},} where q > 0 {\displaystyle q>0} , τ {\displaystyle \tau } is the time lag and averaging is over the time window t ≫ τ , {\displaystyle t\gg \tau ,} usually the largest time scale of the system. Practically, in nature, there is no limit to time, and thus H is non-deterministic as it may only be estimated based on the observed data; e.g., the most dramatic daily move upwards ever seen in a stock market index can always be exceeded during some subsequent day. In the above mathematical estimation technique, the function H(q) contains information about averaged generalized volatilities at scale τ {\displaystyle \tau } (only q = 1, 2 are used to define the volatility). In particular, the H1 exponent indicates persistent (H1 > 1⁄2) or antipersistent (H1 < 1⁄2) behavior of the trend. For the BRW (brown noise, 1 / f 2 {\displaystyle 1/f^{2}} ) one gets H q = 1 2 , {\displaystyle H_{q}={\frac {1}{2}},} and for pink noise ( 1 / f {\displaystyle 1/f} ) H q = 0. {\displaystyle H_{q}=0.} The Hurst exponent for white noise is dimension dependent, and for 1D and 2D it is H q 1 D = − 1 2 , H q 2 D = − 1. 
{\displaystyle H_{q}^{1D}=-{\frac {1}{2}},\quad H_{q}^{2D}=-1.} For the popular Lévy stable processes and truncated Lévy processes with parameter α it has been found that H q = q / α , {\displaystyle H_{q}=q/\alpha ,} for q < α {\displaystyle q<\alpha } , and H q = 1 {\displaystyle H_{q}=1} for q ≥ α {\displaystyle q\geq \alpha } . Multifractal detrended fluctuation analysis is one method to estimate H ( q ) {\displaystyle H(q)} from non-stationary time series. When H ( q ) {\displaystyle H(q)} is a non-linear function of q the time series is a multifractal system. === Note === In the above definition two separate requirements are mixed together as if they would be one. Here are the two independent requirements: (i) stationarity of the increments, x(t+T) − x(t) = x(T) − x(0) in distribution. This is the condition that yields longtime autocorrelations. (ii) Self-similarity of the stochastic process then yields variance scaling, but is not needed for longtime memory. E.g., both Markov processes (i.e., memory-free processes) and fractional Brownian motion scale at the level of 1-point densities (simple averages), but neither scales at the level of pair correlations or, correspondingly, the 2-point probability density. An efficient market requires a martingale condition, and unless the variance is linear in the time this produces nonstationary increments, x(t+T) − x(t) ≠ x(T) − x(0). Martingales are Markovian at the level of pair correlations, meaning that pair correlations cannot be used to beat a martingale market. Stationary increments with nonlinear variance, on the other hand, induce the longtime pair memory of fractional Brownian motion that would make the market beatable at the level of pair correlations. Such a market would necessarily be far from "efficient". An analysis of economic time series by means of the Hurst exponent using rescaled range and Detrended fluctuation analysis is conducted by econophysicist A.F. Bariviera. 
This paper studies the time-varying character of long-range dependency and, thus, of informational efficiency. The Hurst exponent has also been applied to the investigation of long-range dependency in DNA and in photonic band gap materials. == See also == Long-range dependency – Phenomenon in linguistics and data analysis Anomalous diffusion – Diffusion process with a non-linear relationship to time Rescaled range – Statistical measure of time series variability Detrended fluctuation analysis – Method to detect power-law scaling in time series == Implementations == Matlab code for computing R/S, DFA, periodogram regression and wavelet estimates of the Hurst exponent and their corresponding confidence intervals is available from RePEc: https://ideas.repec.org/s/wuu/hscode.html Implementation of R/S in Python: https://github.com/Mottl/hurst and of DFA and MFDFA in Python: https://github.com/LRydin/MFDFA Matlab code for computing the real Hurst and complex Hurst exponents: https://www.mathworks.com/matlabcentral/fileexchange/49803-calculate-complex-hurst An Excel sheet can also be used: https://www.researchgate.net/publication/272792633_Excel_Hurst_Calculator == References ==
|
Wikipedia:Hurwitz determinant#0
|
In mathematics, Hurwitz determinants were introduced by Adolf Hurwitz (1895), who used them to give a criterion for all roots of a polynomial to have negative real part. == Definition == Consider a characteristic polynomial P in the variable λ of the form: P ( λ ) = a 0 λ n + a 1 λ n − 1 + ⋯ + a n − 1 λ + a n {\displaystyle P(\lambda )=a_{0}\lambda ^{n}+a_{1}\lambda ^{n-1}+\cdots +a_{n-1}\lambda +a_{n}} where a i {\displaystyle a_{i}} , i = 0 , 1 , … , n {\displaystyle i=0,1,\ldots ,n} , are real. The square Hurwitz matrix associated to P is given below: H = ( a 1 a 3 a 5 … … … 0 0 0 a 0 a 2 a 4 ⋮ ⋮ ⋮ 0 a 1 a 3 ⋮ ⋮ ⋮ ⋮ a 0 a 2 ⋱ 0 ⋮ ⋮ ⋮ 0 a 1 ⋱ a n ⋮ ⋮ ⋮ ⋮ a 0 ⋱ a n − 1 0 ⋮ ⋮ ⋮ 0 a n − 2 a n ⋮ ⋮ ⋮ ⋮ a n − 3 a n − 1 0 0 0 0 … … … a n − 4 a n − 2 a n ) . {\displaystyle H={\begin{pmatrix}a_{1}&a_{3}&a_{5}&\dots &\dots &\dots &0&0&0\\a_{0}&a_{2}&a_{4}&&&&\vdots &\vdots &\vdots \\0&a_{1}&a_{3}&&&&\vdots &\vdots &\vdots \\\vdots &a_{0}&a_{2}&\ddots &&&0&\vdots &\vdots \\\vdots &0&a_{1}&&\ddots &&a_{n}&\vdots &\vdots \\\vdots &\vdots &a_{0}&&&\ddots &a_{n-1}&0&\vdots \\\vdots &\vdots &0&&&&a_{n-2}&a_{n}&\vdots \\\vdots &\vdots &\vdots &&&&a_{n-3}&a_{n-1}&0\\0&0&0&\dots &\dots &\dots &a_{n-4}&a_{n-2}&a_{n}\end{pmatrix}}.} The i-th Hurwitz determinant is the i-th leading principal minor (minor is a determinant) of the above Hurwitz matrix H. There are n Hurwitz determinants for a characteristic polynomial of degree n. == See also == Transfer matrix == References == Hurwitz, A. (1895), "Ueber die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit negativen reellen Theilen besitzt", Mathematische Annalen, 46 (2): 273–284, doi:10.1007/BF01446812, S2CID 121036103 Wall, H. S. (1945), "Polynomials whose zeros have negative real parts", The American Mathematical Monthly, 52 (6): 308–322, doi:10.1080/00029890.1945.11991574, ISSN 0002-9890, JSTOR 2305291, MR 0012709
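The construction above can be sketched numerically. The following is a minimal illustration (the function names and the sample polynomial are chosen for this example, not taken from the references): it builds the Hurwitz matrix of a cubic with NumPy and computes its leading principal minors, i.e. the Hurwitz determinants.

```python
import numpy as np

def hurwitz_matrix(a):
    """Build the n x n Hurwitz matrix of a_0*x^n + a_1*x^(n-1) + ... + a_n.

    With 1-based row/column indices, entry (i, j) holds a_{2j - i},
    where coefficients with index outside 0..n are zero.
    """
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)   # coefficient index a_k
            if 0 <= k <= n:
                H[i, j] = a[k]
    return H

def hurwitz_determinants(a):
    """Return the n leading principal minors D_1, ..., D_n."""
    H = hurwitz_matrix(a)
    return [np.linalg.det(H[:i, :i]) for i in range(1, H.shape[0] + 1)]

# P(x) = x^3 + 6x^2 + 11x + 6 = (x+1)(x+2)(x+3): all roots have negative
# real part, so by the Routh-Hurwitz criterion every D_i should be positive.
print(hurwitz_determinants([1, 6, 11, 6]))   # [6.0, 60.0, 360.0]
```

Since all three determinants are positive, the criterion confirms that every root of this sample polynomial has negative real part.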
|
Wikipedia:Hutchinson operator#0
|
In mathematics, in the study of fractals, a Hutchinson operator is the collective action of a set of contractions, called an iterated function system. The iteration of the operator converges to a unique attractor, the (often self-similar) fixed set of the operator. == Definition == Let { f i : X → X | 1 ≤ i ≤ N } {\displaystyle \{f_{i}:X\to X\ |\ 1\leq i\leq N\}} be an iterated function system, or a set of contractions from a compact set X {\displaystyle X} to itself. The operator H {\displaystyle H} is defined over subsets S ⊂ X {\displaystyle S\subset X} as H ( S ) = ⋃ i = 1 N f i ( S ) . {\displaystyle H(S)=\bigcup _{i=1}^{N}f_{i}(S).\,} A key question is to describe the attractors A = H ( A ) {\displaystyle A=H(A)} of this operator, which are compact sets. One way of generating such a set is to start with an initial compact set S 0 ⊂ X {\displaystyle S_{0}\subset X} (which can be a single point, called a seed) and iterate H {\displaystyle H} as follows: S n + 1 = H ( S n ) = ⋃ i = 1 N f i ( S n ) {\displaystyle S_{n+1}=H(S_{n})=\bigcup _{i=1}^{N}f_{i}(S_{n})} Taking the limit, the iteration converges to the attractor A = lim n → ∞ S n . {\displaystyle A=\lim _{n\to \infty }S_{n}.} == Properties == Hutchinson showed in 1981 the existence and uniqueness of the attractor A {\displaystyle A} . The proof follows by showing that the Hutchinson operator is contractive on the set of compact subsets of X {\displaystyle X} in the Hausdorff distance. The collection of functions f i {\displaystyle f_{i}} together with composition forms a monoid. With N functions, one may visualize the monoid as a full N-ary tree or a Cayley tree. == References ==
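As an illustrative sketch (the interval representation and function names are assumptions of this example), the iteration S_{n+1} = H(S_n) can be carried out exactly for the classical Cantor-set IFS f1(x) = x/3, f2(x) = x/3 + 2/3 on the unit interval:

```python
from fractions import Fraction

def hutchinson(intervals):
    """Apply H(S) = f1(S) ∪ f2(S) for the Cantor IFS
    f1(x) = x/3, f2(x) = x/3 + 2/3 to a list of closed intervals (lo, hi)."""
    third = Fraction(1, 3)
    out = []
    for lo, hi in intervals:
        out.append((lo * third, hi * third))                          # f1
        out.append((lo * third + 2 * third, hi * third + 2 * third))  # f2
    return sorted(out)

S = [(Fraction(0), Fraction(1))]   # seed S_0: the whole unit interval
for _ in range(3):
    S = hutchinson(S)

# S_3 consists of 8 intervals of length 1/27, the third
# approximation of the Cantor-set attractor.
print(len(S), S[0], S[-1])
```

Each iteration doubles the number of intervals and shrinks them by a factor of 3; in the limit this converges (in Hausdorff distance) to the Cantor set.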
|
Wikipedia:Hyers–Ulam–Rassias stability#0
|
The stability problem of functional equations originated from a question of Stanisław Ulam, posed in 1940, concerning the stability of group homomorphisms. In the following year, Donald H. Hyers gave a partial affirmative answer to Ulam's question in the context of Banach spaces for additive mappings; this was the first significant breakthrough and a step toward further solutions in this area. Since then, a large number of papers have been published in connection with various generalizations of Ulam's problem and Hyers's theorem. In 1978, Themistocles M. Rassias succeeded in extending Hyers's theorem for mappings between Banach spaces by considering an unbounded Cauchy difference subject to a continuity condition upon the mapping. He was the first to prove the stability of the linear mapping. This result of Rassias attracted several mathematicians worldwide and stimulated them to investigate the stability problems of functional equations. Owing to the large influence of S. M. Ulam, D. H. Hyers, and Th. M. Rassias on the study of stability problems of functional equations, the stability phenomenon proved by Th. M. Rassias led to the development of what is now known as Hyers–Ulam–Rassias stability of functional equations. For an extensive presentation of the stability of functional equations in the context of Ulam's problem, the interested reader is referred to the books by S.-M. Jung, S. Czerwik, Y.J. Cho, C. Park, Th.M. Rassias and R. Saadati, Y.J. Cho, Th.M. Rassias and R. Saadati, and Pl. Kannappan, as well as to the following papers. In 1950, T. Aoki considered an unbounded Cauchy difference, which was later generalised by Rassias to the linear case. This result is known as Hyers–Ulam–Aoki stability of the additive mapping. Aoki (1950) had not considered continuity of the mapping, whereas Rassias (1978) imposed an extra continuity hypothesis, which yielded a formally stronger conclusion. == References == == See also == Th. M. 
Rassias, On the stability of functional equations and a problem of Ulam, Acta Applicandae Mathematicae, 62(1)(2000), 23-130. P. Gavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl. 184(1994), 431–436. P. Gavruta and L. Gavruta, A new method for the generalized Hyers–Ulam–Rassias stability, Int. J. Nonlinear Anal. Appl. 1(2010), No. 2, 6 pp. J. Chung, Hyers-Ulam-Rassias stability of Cauchy equation in the space of Schwartz distributions, J. Math. Anal. Appl. 300(2)(2004), 343 – 350. T. Miura, S.-E. Takahasi, and G. Hirasawa, Hyers-Ulam-Rassias stability of Jordan homomorphisms on Banach algebras, J. Inequal. Appl. 4(2005), 435–441. A. Najati and C. Park, Hyers–Ulam-Rassias stability of homomorphisms in quasi-Banach algebras associated to the Pexiderized Cauchy functional equation, J. Math. Anal. Appl. 335(2007), 763–778. Th. M. Rassias and J. Brzdek (eds.), Functional Equations in Mathematical Analysis, Springer, New York, 2012, ISBN 978-1-4614-0054-7. D. Zhang and J. Wang, On the Hyers-Ulam-Rassias stability of Jensen’s equation, Bull. Korean Math. Soc. 46(4)(2009), 645–656. T. Trif, Hyers-Ulam-Rassias stability of a Jensen type functional equation, J. Math. Anal. Appl. 250(2000), 579–588. Pl. Kannappan, Functional Equations and Inequalities with Applications, Springer, New York, 2009, ISBN 978-0-387-89491-1. P. K. Sahoo and Pl. Kannappan, Introduction to Functional Equations, CRC Press, Chapman & Hall Book, Florida, 2011, ISBN 978-1-4398-4111-2. W. W. Breckner and T. Trif, Convex Functions and Related Functional Equations. Selected Topics, Cluj University Press, Cluj, 2008.
|
Wikipedia:Hyman Bass#0
|
Hyman Bass (; born October 5, 1932) is an American mathematician, known for work in algebra and in mathematics education. From 1959 to 1998 he was Professor in the Mathematics Department at Columbia University. He is currently the Samuel Eilenberg Distinguished University Professor of Mathematics and Professor of Mathematics Education at the University of Michigan. == Life == Born to a Jewish family in Houston, Texas, he earned his B.A. in 1955 from Princeton University and his Ph.D. in 1959 from the University of Chicago. His thesis, titled Global dimensions of rings, was written under the supervision of Irving Kaplansky. He has held visiting appointments at the Institute for Advanced Study in Princeton, New Jersey, Institut des Hautes Études Scientifiques and École Normale Supérieure (Paris), Tata Institute of Fundamental Research (Bombay), University of Cambridge, University of California, Berkeley, University of Rome, IMPA (Rio), National Autonomous University of Mexico, Mittag-Leffler Institute (Stockholm), and the University of Utah. He was president of the American Mathematical Society. Bass formerly chaired the Mathematical Sciences Education Board (1992–2000) at the National Academy of Sciences, and the Committee on Education of the American Mathematical Society. He was the President of ICMI from 1999 to 2006. Since 1996 he has been collaborating with Deborah Ball and her research group at the University of Michigan on the mathematical knowledge and resources entailed in the teaching of mathematics at the elementary level. He has worked to build bridges between diverse professional communities and stakeholders involved in mathematics education. == Work == His research interests have been in algebraic K-theory, commutative algebra and algebraic geometry, algebraic groups, geometric methods in group theory, and ζ functions on finite simple graphs. == Awards and recognitions == Bass was elected as a member of the National Academy of Sciences in 1982. 
In 1983, he was elected a Fellow of the American Academy of Arts and Sciences. In 2002 he was elected a fellow of The World Academy of Sciences. He is a 2006 National Medal of Science laureate. In 2009 he was elected a member of the National Academy of Education. In 2012 he became a fellow of the American Mathematical Society. He was awarded the Mary P. Dolciani Award in 2013. == See also == Bass number Bass–Serre theory Bass–Quillen conjecture == References == == External links == Hyman Bass at the Mathematics Genealogy Project Directory page at University of Michigan Author profile in the database zbMATH
|
Wikipedia:Hyperbolic growth#0
|
When a quantity grows towards a singularity under a finite variation (a "finite-time singularity") it is said to undergo hyperbolic growth. More precisely, the reciprocal function 1 / x {\displaystyle 1/x} has a hyperbola as a graph, and has a singularity at 0, meaning that the limit as x → 0 {\displaystyle x\to 0} is infinite: any similar graph is said to exhibit hyperbolic growth. == Description == If the output of a function is inversely proportional to its input, or inversely proportional to the difference from a given value x 0 {\displaystyle x_{0}} , the function will exhibit hyperbolic growth, with a singularity at x 0 {\displaystyle x_{0}} . In the real world hyperbolic growth is created by certain non-linear positive feedback mechanisms. === Comparisons with other growth functions === Like exponential growth and logistic growth, hyperbolic growth is highly nonlinear, but differs in important respects. These functions can be confused, as exponential growth, hyperbolic growth, and the first half of logistic growth are all convex functions; however, their asymptotic behavior (behavior as input gets large) differs dramatically: logistic growth is constrained (has a finite limit, even as time goes to infinity), exponential growth grows to infinity as time goes to infinity (but is always finite for finite time), hyperbolic growth has a singularity in finite time (grows to infinity at a finite time). == Applications == === Global macrodevelopment === A 1960 issue of Science magazine included an article by Heinz von Foerster and his colleagues, P. M. Mora and L. W. Amiot, proposing an equation representing the best fit to the historical data on the Earth's population available in 1958: Fifty years ago, Science published a study with the provocative title "Doomsday: Friday, 13 November, A.D. 2026". It fitted world population during the previous two millennia with P = 179 × 10^9/(2026.9 − t)^0.99. 
This "quasi-hyperbolic" equation (hyperbolic having exponent 1.00 in the denominator) projected to infinite population in 2026—and to an imaginary one thereafter. —Taagepera, Rein. In 1975, von Hoerner suggested that von Foerster's doomsday equation can be written, without a significant loss of accuracy, in a simplified hyperbolic form (i.e. with the exponent in the denominator assumed to be 1.00): Global population = 179000000000 2026.9 − t , {\displaystyle {\text{Global population}}={\frac {179000000000}{2026.9-t}},} where 2026.9 (more precisely, 13 November 2026 AD) is the date of the so-called "demographic singularity" and the 115th anniversary of von Foerster's birth; t is the year in the Gregorian calendar. Despite its simplicity, von Foerster's equation is very accurate in the range from 4,000,000 BP to 1997 AD. For example, the doomsday equation (developed in 1958, when the Earth's population was 2,911,249,671) predicts a population of 5,986,622,074 for the beginning of the year 1997: 179000000000 2026.9 − 1997 = 5986622074. {\displaystyle {\frac {179000000000}{2026.9-1997}}=5986622074.} The actual figure was 5,924,787,816. Having analyzed the timing of the events plotted on Ray Kurzweil's "Countdown to Singularity" graph, Andrey Korotayev arrived at the following best-fit equation: Speed of sociotechnological progress = 2054 2029 − t , {\displaystyle {\text{Speed of sociotechnological progress}}={\frac {2054}{2029-t}},} where "speed of sociotechnological progress" is the number of sociotechnological phase transitions per unit of time; 2029 (more precisely, the beginning of the year 2029 AD) is the date of the sociotechnological singularity; t is the year in the Gregorian calendar. 
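The numerical check quoted above can be reproduced directly (a minimal sketch; the function name is chosen for illustration):

```python
# Von Foerster's doomsday equation in simplified hyperbolic form:
# N(t) = 179e9 / (2026.9 - t).
def doomsday_population(t):
    return 179_000_000_000 / (2026.9 - t)

# Prediction for the beginning of 1997, as cited in the text
# (the actual 1997 figure was 5,924,787,816):
print(round(doomsday_population(1997)))   # 5986622074
```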
Korotayev also analyzed the timing of the events on the list of sociotechnological phase transition points independently compiled by Alexander Panov, and arrived at the following best-fit equation: Speed of sociotechnological progress = 1886 2027 − t , {\displaystyle {\text{Speed of sociotechnological progress}}={\frac {1886}{2027-t}},} where "speed of sociotechnological progress" is the number of sociotechnological phase transitions per unit of time; 2027 (more precisely, the beginning of the year 2027 AD) is the date of the sociotechnological singularity; t is the year in the Gregorian calendar. Korotayev found that these two equations are identical in form to von Foerster's doomsday equation describing world population growth. Both empirical and mathematical analyses indicate that all three hyperbolic equations describe the same global macrodevelopmental process, in which demography is indivisibly combined with technology. It can be set forth as follows: technological advance → increase in the carrying capacity of the Earth → population growth → more potential inventors → acceleration of technological advance → faster growth of the Earth's carrying capacity → faster population growth → faster growth of the number of potential inventors → faster technological advance → faster growth of the Earth's carrying capacity, and so on. === Lorentz factor === The Lorentz factor γ is defined as γ = 1 1 − ( v c ) 2 = 1 1 − β 2 , {\displaystyle \gamma ={\frac {1}{\sqrt {1-\left({\frac {v}{c}}\right)^{2}}}}={\frac {1}{\sqrt {1-\beta ^{2}}}},} where v {\displaystyle v} is the relative velocity between inertial reference frames; c {\displaystyle c} is the speed of light in vacuum; β is the ratio of v {\displaystyle v} to c, or the speed in units of c {\displaystyle c} . Proxima Centauri is approximately 4.27 light-years away from the Earth. 
From a terrestrial observer's perspective, a traveller would cover the distance to Proxima Centauri in approximately 8.54 years at half the speed of light. However, due to the Lorentz factor, the time experienced by the traveller would be shorter: Proper time = Time γ = 8.54 years 1.155 ≈ 7.39 years . {\displaystyle {\text{Proper time}}={\frac {\text{Time}}{\gamma }}={\frac {\text{8.54 years}}{1.155}}\approx {\text{7.39 years}}.} The following graph shows the journey times for twenty runs to Proxima Centauri from the ship viewpoint. Notice that as speeds approach the speed of light, the journey times reduce dramatically, even though the actual increments in speed appear slight. On the 20th run, at 1048575/1048576 of the speed of light, the distance shrinks to 0.0059 light-years and the traveller experiences a journey time of 2.15 days, whereas to those on Earth the ship looks almost "frozen" and the journey still takes 4.27 years, plus a couple of days. The equation describing the growth of the Lorentz factor with speed is unmistakably hyperbolic, so the Lorentz factor of a spaceship, subjected to even a small but constant accelerating force, must become infinite in a finite proper time. This requirement is met by assuming that a translationally accelerating spaceship loses its rest mass (which is the spaceship's resistance to its further translational acceleration along the path of flight): γ = m rel m 0 , {\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}},} where m 0 {\displaystyle {m_{0}}} is rest mass (resistance to longitudinal acceleration); m rel {\displaystyle {m_{\text{rel}}}} is relativistic mass (resistance to transverse acceleration). At v {\displaystyle v} = 0, the magnitude of the Lorentz factor is γ = m rel m 0 = 1 1 = 1. {\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}}={\frac {1}{1}}=1.} At v {\displaystyle v} = 0.5 c, the magnitude of the Lorentz factor is γ = m rel m 0 = 1.072 0.928 = 1.155. 
{\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}}={\frac {1.072}{0.928}}=1.155.} At v {\displaystyle v} = 0.999 c, the magnitude of the Lorentz factor is γ = m rel m 0 = 1.914 0.086 = 22.366. {\displaystyle \gamma ={\frac {m_{\text{rel}}}{m_{0}}}={\frac {1.914}{0.086}}=22.366.} Following this pattern, the spaceship will, after a finite proper time, turn into a beam of photons: Photons may be regarded as limiting particles whose rest mass has become zero while their Lorentz factor has become infinite. —Rindler, W. The light-speed spaceship will then cover the remaining distance to its destination in zero proper time: Since when traveling at the speed of light no apparent time elapses, the spacecraft would arrive instantly and simultaneously at all locations along the path of flight. Thus to the crew on the spacecraft, all spatial separations would collapse to zero along this path‑of‑flight. There is no relativistic dilatation, as all spatial separations are transverse to a light-speed spacecraft's flight. <...> Thus the spacecraft would disappear after reaching light speed, followed immediately by its reappearance trillions of miles away in the proximity of the target star, when the spacecraft returns to sub-light speed, Figure 9.6. —Czysz, Paul A.; Bruno, Claudio. The universe's matter is falling into the universe's gravitational field: Gravity rules. The moon orbiting Earth, matter falling into black holes, and the overall structure of the universe are dominated by gravity. —Seeds, Michael A.; Backman, Dana. Consequently, the universe's matter accelerates to ever greater speeds, so that its Lorentz factor hyperbolically increases to infinity, while its rest mass hyperbolically vanishes: As we go forwards in time, material weight continually changes into radiation. Conversely, as we go backwards in time, the total material weight of the universe must continually increase. —Jeans, James Hopwood. 
At the end of the hyperbolic growth of its Lorentz factor, the universe's matter attains the speed of light: 'It all just seemed unbelievably boring to me,' Penrose says. Then he found something interesting within it: at the very end of the universe, the only remaining particles will be massless. That means everything that exists will travel at the speed of light, making the flow of time meaningless. —Brooks, Michael. So, the universe will eventually consist of relativistic kinetic energy, which is negative, i.e. hierarchically binding/enslaving: A beam of negative energy that travels into the past can be generated by the acceleration of the source to high speeds. —Skinner, Ray. It is seen that the relativistic kinetic energy is always negative and therefore will lower the energy levels of a bound system. —Ruei, K. H. This hierarchically binding/enslaving negative energy is the universe's spirit or information: Remember, more binding energy means the system is more bound—has greater negative energy. —Shu, Frank. The Spirit is the binding energy expressed by the word re-ligio/religion—a word that itself reflects the brokenness and fragmentation of the universe, that God is trying to heal. —Grey, Mary C. Szilard's explanation was accepted by the physics community, and information was accepted as a scientific concept, defined by its statistical‑mechanical properties as a kind of negative energy that introduced order into a system. —Aspray, William. Thus, the hyperbolic growth of the Lorentz factor of the universe's matter hierarchically binds/enslaves or, which is the same, animates/informs the universe's matter. 
The sociotechnological singularity of the terrestrial animated/informed matter, expected at the end of the year 2026 AD (see Global macrodevelopment) will signify that the Lorentz factor of the universe's matter has become infinite—since the end of the year 2026 AD, the universe's matter will be falling into the universe's animating/informing gravitational field (which is the funnel-shaped gradient of matter's negative-energiedness, animateness, informedness) at the speed of light: The negative energy of the gravitational field is what allows negative entropy, equivalent to information, to grow, making the Universe a more complicated and interesting place. —Gribbin, John. === Queuing theory === Another example of hyperbolic growth can be found in queueing theory: the average waiting time of randomly arriving customers grows hyperbolically as a function of the average load ratio of the server. The singularity in this case occurs when the average amount of work arriving to the server equals the server's processing capacity. If the processing needs exceed the server's capacity, then there is no well-defined average waiting time, as the queue can grow without bound. A practical implication of this particular example is that for highly loaded queuing systems the average waiting time can be extremely sensitive to the processing capacity. === Enzyme kinetics === A further practical example of hyperbolic growth can be found in enzyme kinetics. When the rate of reaction (termed velocity) between an enzyme and substrate is plotted against various concentrations of the substrate, a hyperbolic plot is obtained for many simpler systems. When this happens, the enzyme is said to follow Michaelis-Menten kinetics. == Mathematical example == The function x ( t ) = 1 t c − t {\displaystyle x(t)={\frac {1}{t_{c}-t}}} exhibits hyperbolic growth with a singularity at time t c {\displaystyle t_{c}} : in the limit as t → t c {\displaystyle t\to t_{c}} , the function goes to infinity. 
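As a minimal numeric sketch (illustrative values only, with an assumed t_c = 10), the blow-up of x(t) = 1/(t_c − t) near the singularity can be contrasted with exponential growth, which stays finite at every finite time:

```python
import math

t_c = 10.0
for t in [9.0, 9.9, 9.99, 9.999]:
    hyperbolic = 1.0 / (t_c - t)   # diverges as t -> t_c
    exponential = math.exp(t)      # large, but finite for every finite t
    print(f"t = {t}: hyperbolic = {hyperbolic:10.1f}, exponential = {exponential:10.1f}")
```

Each step a factor of 10 closer to t_c multiplies the hyperbolic value by roughly 10, while the exponential barely changes; the singularity is reached at the finite time t_c.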
More generally, the function x ( t ) = K t c − t {\displaystyle x(t)={\frac {K}{t_{c}-t}}} exhibits hyperbolic growth, where K {\displaystyle K} is a scale factor. Note that this algebraic function can be regarded as the analytical solution of the differential equation d x d t = K ( t c − t ) 2 = x 2 K . {\displaystyle {\frac {dx}{dt}}={\frac {K}{(t_{c}-t)^{2}}}={\frac {x^{2}}{K}}.} This means that with hyperbolic growth the absolute growth rate of the variable x at the moment t is proportional to the square of the value of x at the moment t. Accordingly, the quadratic-hyperbolic function is x ( t ) = K ( t c − t ) 2 . {\displaystyle x(t)={\frac {K}{(t_{c}-t)^{2}}}.} == See also == Exponential growth Logistic growth Mathematical singularity == References == == Bibliography == Alexander V. Markov and Andrey V. Korotayev (2007). "Phanerozoic marine biodiversity follows a hyperbolic trend". Palaeoworld. Volume 16. Issue 4. Pages 311–318. Kremer, Michael. 1993. "Population Growth and Technological Change: One Million B.C. to 1990," The Quarterly Journal of Economics 108(3): 681–716. Korotayev A., Malkov A., Khaltourina D. 2006. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS. ISBN 5-484-00414-4. Rein Taagepera (1979) People, skills, and resources: An interaction model for world population growth. Technological Forecasting and Social Change 13, 13–30.
|
Wikipedia:Hypercomplex analysis#0
|
In mathematics, hypercomplex analysis is the extension of complex analysis to the hypercomplex numbers. The first instance is functions of a quaternion variable, where the argument is a quaternion (in this case, the sub-field of hypercomplex analysis is called quaternionic analysis). A second instance involves functions of a motor variable where arguments are split-complex numbers. In mathematical physics, there are hypercomplex systems called Clifford algebras. The study of functions with arguments from a Clifford algebra is called Clifford analysis. A matrix may be considered a hypercomplex number. For example, the study of functions of 2 × 2 real matrices shows that the topology of the space of hypercomplex numbers determines the function theory. Functions such as square root of a matrix, matrix exponential, and logarithm of a matrix are basic examples of hypercomplex analysis. The function theory of diagonalizable matrices is particularly transparent since they have eigendecompositions. Suppose T = ∑ i = 1 N λ i E i {\displaystyle \textstyle T=\sum _{i=1}^{N}\lambda _{i}E_{i}} where the Ei are projections. Then for any polynomial f {\displaystyle f} , f ( T ) = ∑ i = 1 N f ( λ i ) E i . {\displaystyle f(T)=\sum _{i=1}^{N}f(\lambda _{i})E_{i}.} The modern terminology for a "system of hypercomplex numbers" is an algebra over the real numbers, and the algebras used in applications are often Banach algebras since Cauchy sequences can be taken to be convergent. Then the function theory is enriched by sequences and series. In this context the extension of holomorphic functions of a complex variable is developed as the holomorphic functional calculus. Hypercomplex analysis on Banach algebras is called functional analysis. == See also == Giovanni Battista Rizza == References == === Sources === Daniel Alpay (ed.) (2006) Wavelets, Multiscale systems and Hypercomplex Analysis, Springer, ISBN 9783764375881 . 
Enrique Ramírez de Arellano (1998) Operator theory for complex and hypercomplex analysis, American Mathematical Society (Conference proceedings from a meeting in Mexico City in December 1994). J. A. Emanuello (2015) Analysis of functions of split-complex, multi-complex, and split-quaternionic variables and their associated conformal geometries, Ph.D. Thesis, Florida State University. Sorin D. Gal (2004) Introduction to the Geometric Function theory of Hypercomplex variables, Nova Science Publishers, ISBN 1-59033-398-5. Lávička, Roman; O'Farrell, Anthony G.; Short, Ian (2007). "Reversible maps in the group of quaternionic Möbius transformations" (PDF). Mathematical Proceedings of the Cambridge Philosophical Society. 143 (1): 57–69. Bibcode:2007MPCPS.143...57L. doi:10.1017/S030500410700028X. Irene Sabadini and Franciscus Sommen (eds.) (2011) Hypercomplex Analysis and Applications, Birkhauser Mathematics. Irene Sabadini & Michael V. Shapiro & F. Sommen (editors) (2009) Hypercomplex Analysis, Birkhauser ISBN 978-3-7643-9892-7. Sabadini, Sommen, Struppa (eds.) (2012) Advances in Hypercomplex Analysis, Springer.
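The eigendecomposition formula f(T) = Σ f(λᵢ) Eᵢ from the article body can be sketched as follows (a minimal illustration with an assumed 2 × 2 example; for ill-conditioned or non-diagonalizable matrices, specialized algorithms are preferred in practice):

```python
import numpy as np

def matrix_function(T, f):
    """Apply f to a diagonalizable matrix via its eigendecomposition:
    if T = sum_i lambda_i E_i, then f(T) = sum_i f(lambda_i) E_i,
    computed here as V f(D) V^{-1}."""
    lam, V = np.linalg.eig(T)
    return V @ np.diag(f(lam)) @ np.linalg.inv(V)

T = np.array([[5.0, 4.0],
              [1.0, 2.0]])          # eigenvalues 6 and 1 (both positive)

R = matrix_function(T, np.sqrt)     # a square root of T
print(np.allclose(R @ R, T))        # True
```

The same helper applies any scalar function (exp, log, etc.) to the spectrum, which is the sense in which the function theory of diagonalizable matrices is "particularly transparent".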
|
Wikipedia:Hypergeometric identity#0
|
In mathematics, hypergeometric identities are equalities involving sums over hypergeometric terms, i.e. the coefficients occurring in hypergeometric series. These identities occur frequently in solutions to combinatorial problems, and also in the analysis of algorithms. These identities were traditionally found 'by hand'. There exist now several algorithms which can find and prove all hypergeometric identities. == Examples == ∑ i = 0 n ( n i ) = 2 n {\displaystyle \sum _{i=0}^{n}{n \choose i}=2^{n}} ∑ i = 0 n ( n i ) 2 = ( 2 n n ) {\displaystyle \sum _{i=0}^{n}{n \choose i}^{2}={2n \choose n}} ∑ k = 0 n k ( n k ) = n 2 n − 1 {\displaystyle \sum _{k=0}^{n}k{n \choose k}=n2^{n-1}} ∑ i = n N i ( i n ) = ( n + 1 ) ( N + 2 n + 2 ) − ( N + 1 n + 1 ) {\displaystyle \sum _{i=n}^{N}i{i \choose n}=(n+1){N+2 \choose n+2}-{N+1 \choose n+1}} == Definition == There are two definitions of hypergeometric terms, both used in different cases as explained below. See also hypergeometric series. A term tk is a hypergeometric term if t k + 1 t k {\displaystyle {\frac {t_{k+1}}{t_{k}}}} is a rational function in k. A term F(n,k) is a hypergeometric term if F ( n , k + 1 ) F ( n , k ) {\displaystyle {\frac {F(n,k+1)}{F(n,k)}}} is a rational function in k. There exist two types of sums over hypergeometric terms, the definite and indefinite sums. A definite sum is of the form ∑ k t k . {\displaystyle \sum _{k}t_{k}.} The indefinite sum is of the form ∑ k = 0 n F ( n , k ) . {\displaystyle \sum _{k=0}^{n}F(n,k).} == Proofs == Although in the past proofs have been found for many specific identities, there exist several general algorithms to find and prove identities. These algorithms first find a simple expression for a sum over hypergeometric terms and then provide a certificate which anyone can use to check and prove the correctness of the identity. For each of the hypergeometric sum types there exist one or more methods to find a simple expression. 
These methods also provide the certificate to check the identity's proof: Definite sums: Sister Celine's Method, Zeilberger's algorithm Indefinite sums: Gosper's algorithm The book A = B by Marko Petkovšek, Herbert Wilf and Doron Zeilberger describes the three main approaches mentioned above. == See also == Table of Newtonian series == External links == The book "A = B" is freely downloadable from the internet. Special-functions examples at exampleproblems.com
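The example identities listed earlier can be checked numerically for particular values of n and N (a quick sanity check for chosen values, not a proof):

```python
from math import comb

n, N = 7, 12

# Sum of binomial coefficients equals 2^n.
assert sum(comb(n, i) for i in range(n + 1)) == 2**n
# Sum of squared binomial coefficients equals the central binomial coefficient.
assert sum(comb(n, i)**2 for i in range(n + 1)) == comb(2 * n, n)
# Weighted sum identity.
assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2**(n - 1)
# Definite-sum identity with upper limit N.
assert (sum(i * comb(i, n) for i in range(n, N + 1))
        == (n + 1) * comb(N + 2, n + 2) - comb(N + 1, n + 1))

print("all four example identities hold for n = 7, N = 12")
```

Algorithms such as Zeilberger's go further: they produce a recurrence and certificate that prove such identities for all n, rather than verifying isolated cases.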
|
Wikipedia:Hyperplane#0
|
In geometry, a hyperplane is a generalization of a two-dimensional plane in three-dimensional space to mathematical spaces of arbitrary dimension. Like a plane in space, a hyperplane is a flat hypersurface, a subspace whose dimension is one less than that of the ambient space. Two lower-dimensional examples of hyperplanes are one-dimensional lines in a plane and zero-dimensional points on a line. Most commonly, the ambient space is n-dimensional Euclidean space, in which case the hyperplanes are the (n − 1)-dimensional "flats", each of which separates the space into two half-spaces. A reflection across a hyperplane is a kind of motion (geometric transformation preserving distance between points), and the group of all motions is generated by the reflections. A convex polytope is the intersection of finitely many half-spaces. In non-Euclidean geometry, the ambient space might be the n-dimensional sphere or hyperbolic space, or more generally a pseudo-Riemannian space form, and the hyperplanes are the hypersurfaces consisting of all geodesics through a point which are perpendicular to a specific normal geodesic. In other kinds of ambient spaces, some properties from Euclidean space are no longer relevant. For example, in affine space, there is no concept of distance, so there are no reflections or motions. In a non-orientable space such as elliptic space or projective space, there is no concept of half-planes. In greatest generality, the notion of hyperplane is meaningful in any mathematical space in which the concept of the dimension of a subspace is defined. The difference in dimension between a subspace and its ambient space is known as its codimension. A hyperplane has codimension 1. == Technical description == In geometry, a hyperplane of an n-dimensional space V is a subspace of dimension n − 1, or equivalently, of codimension 1 in V. 
The space V may be a Euclidean space or more generally an affine space, or a vector space or a projective space, and the notion of hyperplane varies correspondingly since the definition of subspace differs in these settings; in all cases however, any hyperplane can be given in coordinates as the solution set of a single algebraic equation of degree 1 (a single equation suffices because of the "codimension 1" constraint). If V is a vector space, one distinguishes "vector hyperplanes" (which are linear subspaces, and therefore must pass through the origin) and "affine hyperplanes" (which need not pass through the origin; they can be obtained by translation of a vector hyperplane). A hyperplane in a Euclidean space separates that space into two half spaces, and defines a reflection that fixes the hyperplane and interchanges those two half spaces. == Special types of hyperplanes == Several specific types of hyperplanes are defined with properties that are well suited for particular purposes. Some of these specializations are described here. === Affine hyperplanes === An affine hyperplane is an affine subspace of codimension 1 in an affine space. In Cartesian coordinates, such a hyperplane can be described with a single linear equation of the following form (where at least one of the a i {\displaystyle a_{i}} s is non-zero and b {\displaystyle b} is an arbitrary constant): a 1 x 1 + a 2 x 2 + ⋯ + a n x n = b . {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}=b.\ } In the case of a real affine space, in other words when the coordinates are real numbers, this hyperplane separates the space into two half-spaces, which are the connected components of the complement of the hyperplane, and are given by the inequalities a 1 x 1 + a 2 x 2 + ⋯ + a n x n < b {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}<b\ } and a 1 x 1 + a 2 x 2 + ⋯ + a n x n > b . 
{\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}>b.\ } As an example, a point is a hyperplane in 1-dimensional space, a line is a hyperplane in 2-dimensional space, and a plane is a hyperplane in 3-dimensional space. A line in 3-dimensional space is not a hyperplane, and does not separate the space into two parts (the complement of such a line is connected). Any hyperplane of a Euclidean space has exactly two unit normal vectors: ± n ^ {\displaystyle \pm {\hat {n}}} . In particular, if we consider R n + 1 {\displaystyle \mathbb {R} ^{n+1}} equipped with the conventional inner product (dot product), then one can define the affine subspace with normal vector n ^ {\displaystyle {\hat {n}}} and origin translation b ~ ∈ R n + 1 {\displaystyle {\tilde {b}}\in \mathbb {R} ^{n+1}} as the set of all x ∈ R n + 1 {\displaystyle x\in \mathbb {R} ^{n+1}} such that n ^ ⋅ ( x − b ~ ) = 0 {\displaystyle {\hat {n}}\cdot (x-{\tilde {b}})=0} . Affine hyperplanes are used to define decision boundaries in many machine learning algorithms such as linear-combination (oblique) decision trees and perceptrons. === Vector hyperplanes === In a vector space, a vector hyperplane is a subspace of codimension 1, only possibly shifted from the origin by a vector, in which case it is referred to as a flat. Such a hyperplane is the solution of a single linear equation. === Projective hyperplanes === Projective hyperplanes are used in projective geometry. A projective subspace is a set of points with the property that for any two points of the set, all the points on the line determined by the two points are contained in the set. Projective geometry can be viewed as affine geometry with vanishing points (points at infinity) added. An affine hyperplane together with the associated points at infinity forms a projective hyperplane. One special case of a projective hyperplane is the infinite or ideal hyperplane, which is defined with the set of all points at infinity. 
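The half-space inequalities and the normal-vector description above amount to a simple sign test on a·x − b. The following is a minimal sketch (the function name and the sample plane are illustrative, not from the article):

```python
def side_of_hyperplane(a, b, x):
    """Sign of a·x − b for the hyperplane a1*x1 + ... + an*xn = b:
    +1 and -1 mark the two open half-spaces, 0 means x lies on the
    hyperplane itself (a is a normal vector)."""
    s = sum(ai * xi for ai, xi in zip(a, x)) - b
    return (s > 0) - (s < 0)

# The plane x + 2y + 3z = 6 is a hyperplane of codimension 1 in R^3.
a, b = [1, 2, 3], 6
print(side_of_hyperplane(a, b, [0, 0, 0]))  # -1: the origin is in one half-space
print(side_of_hyperplane(a, b, [6, 0, 0]))  # 0: this point lies on the plane
print(side_of_hyperplane(a, b, [1, 1, 2]))  # 1: 1 + 2 + 6 = 9 > 6
```

This sign test is essentially what a perceptron applies to its weighted input when an affine hyperplane is used as a decision boundary.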
In projective space, a hyperplane does not divide the space into two parts; rather, it takes two hyperplanes to separate points and divide up the space. The reason for this is that the space essentially "wraps around" so that both sides of a lone hyperplane are connected to each other. == Applications == In convex geometry, two disjoint convex sets in n-dimensional Euclidean space are separated by a hyperplane, a result called the hyperplane separation theorem. In machine learning, hyperplanes are a key tool to create support vector machines for such tasks as computer vision and natural language processing. The graph of a linear model, pairing each datapoint with its predicted value, is a hyperplane. In astronomy, hyperplanes can be used to calculate the shortest distance between star systems, galaxies and celestial bodies, taking into account general relativity and the curvature of space-time, as optimized geodesics or paths influenced by gravitational fields. == Dihedral angles == The dihedral angle between two non-parallel hyperplanes of a Euclidean space is the angle between the corresponding normal vectors. The product of the reflections in the two hyperplanes is a rotation whose axis is the subspace of codimension 2 obtained by intersecting the hyperplanes, and whose angle is twice the angle between the hyperplanes. === Support hyperplanes === A hyperplane H is called a "support" hyperplane of the polyhedron P if P is contained in one of the two closed half-spaces bounded by H and H ∩ P ≠ ∅ {\displaystyle H\cap P\neq \varnothing } . The intersection of P and H is defined to be a "face" of the polyhedron. The theory of polyhedra and the dimension of the faces are analyzed by looking at these intersections involving hyperplanes. == See also == Hypersurface Decision boundary Ham sandwich theorem Arrangement of hyperplanes Supporting hyperplane theorem == References == Binmore, Ken G. (1980). The Foundations of Topological Analysis: A Straightforward Introduction: Book 2 Topological Ideas. 
Cambridge University Press. p. 13. ISBN 0-521-29930-6. Charles W. Curtis (1968) Linear Algebra, page 62, Allyn & Bacon, Boston. Heinrich Guggenheimer (1977) Applicable Geometry, page 7, Krieger, Huntington ISBN 0-88275-368-1 . Victor V. Prasolov & VM Tikhomirov (1997, 2001) Geometry, page 22, volume 200 in Translations of Mathematical Monographs, American Mathematical Society, Providence ISBN 0-8218-2038-9 . == External links == Weisstein, Eric W. "Hyperplane". MathWorld. Weisstein, Eric W. "Flat". MathWorld.
|
Wikipedia:Hyperreal number#0
|
In mathematics, hyperreal numbers are an extension of the real numbers to include certain classes of infinite and infinitesimal numbers. A hyperreal number x {\displaystyle x} is said to be finite if, and only if, | x | < n {\displaystyle |x|<n} for some integer n {\displaystyle n} . x {\displaystyle x} is said to be infinitesimal if, and only if, | x | < 1 / n {\displaystyle |x|<1/n} for all positive integers n {\displaystyle n} . The term "hyper-real" was introduced by Edwin Hewitt in 1948. The hyperreal numbers satisfy the transfer principle, a rigorous version of Leibniz's heuristic law of continuity. The transfer principle states that true first-order statements about R are also valid in *R. For example, the commutative law of addition, x + y = y + x, holds for the hyperreals just as it does for the reals; since R is a real closed field, so is *R. Since sin ( π n ) = 0 {\displaystyle \sin({\pi n})=0} for all integers n, one also has sin ( π H ) = 0 {\displaystyle \sin({\pi H})=0} for all hyperintegers H {\displaystyle H} . The transfer principle for ultrapowers is a consequence of Łoś's theorem of 1955. Concerns about the soundness of arguments involving infinitesimals date back to ancient Greek mathematics, with Archimedes replacing such proofs with ones using other techniques such as the method of exhaustion. In the 1960s, Abraham Robinson proved that the hyperreals were logically consistent if and only if the reals were. This put to rest the fear that any proof involving infinitesimals might be unsound, provided that they were manipulated according to the logical rules that Robinson delineated. The application of hyperreal numbers and in particular the transfer principle to problems of analysis is called nonstandard analysis. One immediate application is the definition of the basic concepts of analysis such as the derivative and integral in a direct fashion, without passing via logical complications of multiple quantifiers. 
Thus, the derivative of f(x) becomes f ′ ( x ) = st ( f ( x + Δ x ) − f ( x ) Δ x ) {\displaystyle f'(x)=\operatorname {st} \left({\frac {f(x+\Delta x)-f(x)}{\Delta x}}\right)} for an infinitesimal Δ x {\displaystyle \Delta x} , where st(⋅) denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Similarly, the integral is defined as the standard part of a suitable infinite sum. == Transfer principle == The idea of the hyperreal system is to extend the real numbers R to form a system *R that includes infinitesimal and infinite numbers, but without changing any of the elementary axioms of algebra. Any statement of the form "for any number x ..." that is true for the reals is also true for the hyperreals. For example, the axiom that states "for any number x, x + 0 = x" still applies. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." This ability to carry over statements from the reals to the hyperreals is called the transfer principle. However, statements of the form "for any set of numbers S ..." may not carry over. The only properties that differ between the reals and the hyperreals are those that rely on quantification over sets, or other higher-level structures such as functions and relations, which are typically constructed out of sets. Each real set, function, and relation has its natural hyperreal extension, satisfying the same first-order properties. The kinds of logical sentences that obey this restriction on quantification are referred to as statements in first-order logic. The transfer principle, however, does not mean that R and *R have identical behavior. For instance, in *R there exists an element ω such that 1 < ω , 1 + 1 < ω , 1 + 1 + 1 < ω , 1 + 1 + 1 + 1 < ω , … . {\displaystyle 1<\omega ,\quad 1+1<\omega ,\quad 1+1+1<\omega ,\quad 1+1+1+1<\omega ,\ldots .} but there is no such number in R. (In other words, *R is not Archimedean.) 
This is possible because the nonexistence of ω cannot be expressed as a first-order statement. == Use in analysis == Informal notations for non-real quantities have historically appeared in calculus in two contexts: as infinitesimals, like dx, and as the symbol ∞, used, for example, in limits of integration of improper integrals. As an example of the transfer principle, the statement that for any nonzero number x, 2x ≠ x, is true for the real numbers, and it is in the form required by the transfer principle, so it is also true for the hyperreal numbers. This shows that it is not possible to use a generic symbol such as ∞ for all the infinite quantities in the hyperreal system; infinite quantities differ in magnitude from other infinite quantities, and infinitesimals from other infinitesimals. Similarly, the casual use of 1/0 = ∞ is invalid, since the transfer principle applies to the statement that zero has no multiplicative inverse. The rigorous counterpart of such a calculation would be that if ε is a non-zero infinitesimal, then 1/ε is infinite. For any finite hyperreal number x, the standard part, st(x), is defined as the unique closest real number to x; it necessarily differs from x only infinitesimally. The standard part function can also be defined for infinite hyperreal numbers as follows: If x is a positive infinite hyperreal number, set st(x) to be the extended real number + ∞ {\displaystyle +\infty } , and likewise, if x is a negative infinite hyperreal number, set st(x) to be − ∞ {\displaystyle -\infty } (the idea is that an infinite hyperreal number should be smaller than the "true" absolute infinity but closer to it than any real number is). === Differentiation === One of the key uses of the hyperreal number system is to give a precise meaning to the differential operator d as used by Leibniz to define the derivative and the integral. 
For any real-valued function f , {\displaystyle f,} the differential d f {\displaystyle df} is defined as a map which sends every ordered pair ( x , d x ) {\displaystyle (x,dx)} (where x {\displaystyle x} is real and d x {\displaystyle dx} is nonzero infinitesimal) to an infinitesimal d f ( x , d x ) := st ( f ( x + d x ) − f ( x ) d x ) d x . {\displaystyle df(x,dx):=\operatorname {st} \left({\frac {f(x+dx)-f(x)}{dx}}\right)\ dx.} Note that the very notation " d x {\displaystyle dx} " used to denote any infinitesimal is consistent with the above definition of the operator d , {\displaystyle d,} for if one interprets x {\displaystyle x} (as is commonly done) to be the function f ( x ) = x , {\displaystyle f(x)=x,} then for every ( x , d x ) {\displaystyle (x,dx)} the differential d ( x ) {\displaystyle d(x)} will equal the infinitesimal d x {\displaystyle dx} . A real-valued function f {\displaystyle f} is said to be differentiable at a point x {\displaystyle x} if the quotient d f ( x , d x ) d x = st ( f ( x + d x ) − f ( x ) d x ) {\displaystyle {\frac {df(x,dx)}{dx}}=\operatorname {st} \left({\frac {f(x+dx)-f(x)}{dx}}\right)} is the same for all nonzero infinitesimals d x . {\displaystyle dx.} If so, this quotient is called the derivative of f {\displaystyle f} at x {\displaystyle x} . For example, to find the derivative of the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} , let d x {\displaystyle dx} be a non-zero infinitesimal. Then f ′ ( x ) = st ( ( x + d x ) 2 − x 2 d x ) = st ( 2 x d x + d x 2 d x ) = st ( 2 x + d x ) = 2 x . {\displaystyle f'(x)=\operatorname {st} \left({\frac {(x+dx)^{2}-x^{2}}{dx}}\right)=\operatorname {st} \left({\frac {2x\,dx+dx^{2}}{dx}}\right)=\operatorname {st} \left(2x+dx\right)=2x.} The use of the standard part in the definition of the derivative is a rigorous alternative to the traditional practice of neglecting the square of an infinitesimal quantity. Dual numbers are a number system based on this idea. At the step where the d x 2 {\displaystyle dx^{2}} term appears in the computation above, the typical method from Newton through the 19th century would have been simply to discard it. 
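Since the text notes that dual numbers are based on this idea, here is a minimal sketch of dual-number arithmetic in which ε² = 0 holds exactly, so the "discarded" square term never appears (the class and function names are illustrative, not from the article):

```python
class Dual:
    """Dual number a + b*eps with the rule eps**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps; the coefficient of eps is f'(x)."""
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x, 3.0))  # 6.0, matching d(x^2)/dx = 2x
```

Unlike the hyperreals, where dx² is nonzero but infinitesimally smaller than dx, the dual numbers make the square of the infinitesimal literally zero.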
In the hyperreal system, dx2 ≠ 0, since dx is nonzero, and the transfer principle can be applied to the statement that the square of any nonzero number is nonzero. However, the quantity dx2 is infinitesimally small compared to dx; that is, the hyperreal system contains a hierarchy of infinitesimal quantities. Using hyperreal numbers for differentiation allows for a more algebraically manipulable approach to derivatives. In standard differentiation, partial differentials and higher-order differentials are not independently manipulable through algebraic techniques. However, using the hyperreals, a system can be established for doing so, though resulting in a slightly different notation. === Integration === Another key use of the hyperreal number system is to give a precise meaning to the integral sign ∫ used by Leibniz to define the definite integral. For any infinitesimal function ε ( x ) , {\displaystyle \ \varepsilon (x),\ } one may define the integral ∫ ( ε ) {\displaystyle \int (\varepsilon )\ } as a map sending any ordered triple ( a , b , d x ) {\displaystyle (a,b,dx)} (where a {\displaystyle \ a\ } and b {\displaystyle \ b\ } are real, and d x {\displaystyle \ dx\ } is infinitesimal of the same sign as b − a {\displaystyle \,b-a} ) to the value ∫ a b ( ε , d x ) := st ( ∑ n = 0 N ε ( a + n d x ) ) , {\displaystyle \int _{a}^{b}(\varepsilon ,dx):=\operatorname {st} \left(\sum _{n=0}^{N}\varepsilon (a+n\ dx)\right),} where N {\displaystyle \ N\ } is any hyperinteger number satisfying st ( N d x ) = b − a . {\displaystyle \ \operatorname {st} (N\ dx)=b-a.} A real-valued function f {\displaystyle f} is then said to be integrable over a closed interval [ a , b ] {\displaystyle \ [a,b]\ } if for any nonzero infinitesimal d x , {\displaystyle \ dx,\ } the integral ∫ a b ( f d x , d x ) {\displaystyle \int _{a}^{b}(f\ dx,dx)} is independent of the choice of d x . 
{\displaystyle \ dx.} If so, this integral is called the definite integral (or antiderivative) of f {\displaystyle f} on [ a , b ] . {\displaystyle \ [a,b].} This shows that using hyperreal numbers, Leibniz's notation for the definite integral can actually be interpreted as a meaningful algebraic expression (just as the derivative can be interpreted as a meaningful quotient). == Properties == The hyperreals *R form an ordered field containing the reals R as a subfield. Unlike the reals, the hyperreals do not form a standard metric space, but by virtue of their order they carry an order topology. The use of the definite article the in the phrase the hyperreal numbers is somewhat misleading in that there is not a unique ordered field that is referred to in most treatments. However, a 2003 paper by Vladimir Kanovei and Saharon Shelah shows that there is a definable, countably saturated (meaning ω-saturated but not countable) elementary extension of the reals, which therefore has a good claim to the title of the hyperreal numbers. Furthermore, the field obtained by the ultrapower construction from the space of all real sequences, is unique up to isomorphism if one assumes the continuum hypothesis. The condition of being a hyperreal field is a stronger one than that of being a real closed field strictly containing R. It is also stronger than that of being a superreal field in the sense of Dales and Woodin. == Development == The hyperreals can be developed either axiomatically or by more constructively oriented methods. The essence of the axiomatic approach is to assert (1) the existence of at least one infinitesimal number, and (2) the validity of the transfer principle. In the following subsection we give a detailed outline of a more constructive approach. This method allows one to construct the hyperreals if given a set-theoretic object called an ultrafilter, but the ultrafilter itself cannot be explicitly constructed. 
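The hyperfinite sum in the Integration subsection above has a rough finite stand-in: replace the infinitesimal dx by a small real step and omit the standard-part operation. A numerical sketch under those assumptions (the function name is illustrative):

```python
def hyperfinite_sum(f, a, b, dx):
    """Finite analogue of st(sum_n f(a + n*dx) * dx) with N*dx close to b - a.
    With dx infinitesimal this would be the hyperreal definite integral;
    with dx merely small it is an ordinary left Riemann sum."""
    N = int(round((b - a) / dx))
    return sum(f(a + n * dx) * dx for n in range(N))

# The integral of x^2 over [0, 1] is 1/3; shrinking dx plays the role
# of taking the standard part of the hyperfinite sum.
for dx in (0.1, 0.001, 0.00001):
    print(hyperfinite_sum(lambda x: x * x, 0.0, 1.0, dx))
```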
=== From Leibniz to Robinson === When Newton and (more explicitly) Leibniz introduced differentials, they used infinitesimals and these were still regarded as useful by later mathematicians such as Euler and Cauchy. Nonetheless these concepts were from the beginning seen as suspect, notably by George Berkeley. Berkeley's criticism centered on a perceived shift in hypothesis in the definition of the derivative in terms of infinitesimals (or fluxions), where dx is assumed to be nonzero at the beginning of the calculation, and to vanish at its conclusion (see Ghosts of departed quantities for details). When in the 1800s calculus was put on a firm footing through the development of the (ε, δ)-definition of limit by Bolzano, Cauchy, Weierstrass, and others, infinitesimals were largely abandoned, though research in non-Archimedean fields continued (Ehrlich 2006). However, in the 1960s Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. Robinson developed his theory nonconstructively, using model theory; however it is possible to proceed using only algebra and topology, and proving the transfer principle as a consequence of the definitions. In other words hyperreal numbers per se, aside from their use in nonstandard analysis, have no necessary relationship to model theory or first order logic, although they were discovered by the application of model theoretic techniques from logic. Hyper-real fields were in fact originally introduced by Hewitt (1948) by purely algebraic techniques, using an ultrapower construction. === Ultrapower construction === We are going to construct a hyperreal field via sequences of reals. 
In fact we can add and multiply sequences componentwise; for example: ( a 0 , a 1 , a 2 , … ) + ( b 0 , b 1 , b 2 , … ) = ( a 0 + b 0 , a 1 + b 1 , a 2 + b 2 , … ) {\displaystyle (a_{0},a_{1},a_{2},\ldots )+(b_{0},b_{1},b_{2},\ldots )=(a_{0}+b_{0},a_{1}+b_{1},a_{2}+b_{2},\ldots )} and analogously for multiplication. This turns the set of such sequences into a commutative ring, which is in fact a real algebra A. We have a natural embedding of R in A by identifying the real number r with the sequence (r, r, r, …) and this identification preserves the corresponding algebraic operations of the reals. The intuitive motivation is, for example, to represent an infinitesimal number using a sequence that approaches zero. The inverse of such a sequence would represent an infinite number. As we will see below, the difficulties arise because of the need to define rules for comparing such sequences in a manner that, although inevitably somewhat arbitrary, must be self-consistent and well defined. For example, we may have two sequences that differ in their first n members, but are equal after that; such sequences should clearly be considered as representing the same hyperreal number. Similarly, most sequences oscillate randomly forever, and we must find some way of taking such a sequence and interpreting it as, say, 7 + ϵ {\displaystyle 7+\epsilon } , where ϵ {\displaystyle \epsilon } is a certain infinitesimal number. Comparing sequences is thus a delicate matter. We could, for example, try to define a relation between sequences in a componentwise fashion: ( a 0 , a 1 , a 2 , … ) ≤ ( b 0 , b 1 , b 2 , … ) ⟺ ( a 0 ≤ b 0 ) ∧ ( a 1 ≤ b 1 ) ∧ ( a 2 ≤ b 2 ) … {\displaystyle (a_{0},a_{1},a_{2},\ldots )\leq (b_{0},b_{1},b_{2},\ldots )\iff (a_{0}\leq b_{0})\wedge (a_{1}\leq b_{1})\wedge (a_{2}\leq b_{2})\ldots } but here we run into trouble, since some entries of the first sequence may be bigger than the corresponding entries of the second sequence, and some others may be smaller. 
It follows that the relation defined in this way is only a partial order. To get around this, we have to specify which positions matter. Since there are infinitely many indices, we don't want finite sets of indices to matter. A consistent choice of index sets that matter is given by any free ultrafilter U on the natural numbers; these can be characterized as ultrafilters that do not contain any finite sets. (The good news is that Zorn's lemma guarantees the existence of many such U; the bad news is that they cannot be explicitly constructed.) We think of U as singling out those sets of indices that "matter": We write (a0, a1, a2, ...) ≤ (b0, b1, b2, ...) if and only if the set of natural numbers { n : an ≤ bn } is in U. This is a total preorder and it turns into a total order if we agree not to distinguish between two sequences a and b if a ≤ b and b ≤ a. With this identification, the ordered field *R of hyperreals is constructed. From an algebraic point of view, U allows us to define a corresponding maximal ideal I in the commutative ring A (namely, the set of the sequences that vanish in some element of U), and then to define *R as A/I; as the quotient of a commutative ring by a maximal ideal, *R is a field. This is also notated A/U, directly in terms of the free ultrafilter U; the two are equivalent. The maximality of I follows from the possibility of, given a sequence a, constructing a sequence b inverting the non-null elements of a and not altering its null entries. If the set on which a vanishes is not in U, the product ab is identified with the number 1, and any ideal containing 1 must be A. In the resulting field, these a and b are inverses. The field A/U is an ultrapower of R. Since this field contains R it has cardinality at least that of the continuum. 
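The componentwise operations and the inverse construction just described can be illustrated on finite windows of sequences. This truncation is purely illustrative (the actual construction uses whole sequences modulo an ultrafilter), and the helper names are not from the article:

```python
from fractions import Fraction

def seq(f, n=8):
    """First n terms of a sequence representing a hyperreal."""
    return [f(k) for k in range(1, n + 1)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def mul(a, b):
    return [x * y for x, y in zip(a, b)]

eps   = seq(lambda k: Fraction(1, k))  # representative of an infinitesimal: 1, 1/2, 1/3, ...
omega = seq(lambda k: Fraction(k, 1))  # its reciprocal, an infinite number: 1, 2, 3, ...
one   = seq(lambda k: Fraction(1, 1))  # the real number 1 as a constant sequence

# Inverting the (non-null) entries of eps gives omega, and eps * omega = 1
# componentwise, exactly as in the maximality argument for the ideal I.
print(mul(eps, omega) == one)  # True
print(add(one, eps))           # a hyperreal infinitesimally close to 1
```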
Since A has cardinality ( 2 ℵ 0 ) ℵ 0 = 2 ℵ 0 2 = 2 ℵ 0 , {\displaystyle (2^{\aleph _{0}})^{\aleph _{0}}=2^{\aleph _{0}^{2}}=2^{\aleph _{0}},} it is also no larger than 2 ℵ 0 {\displaystyle 2^{\aleph _{0}}} , and hence has the same cardinality as R. One question we might ask is whether, if we had chosen a different free ultrafilter V, the quotient field A/U would be isomorphic as an ordered field to A/V. This question turns out to be equivalent to the continuum hypothesis; in ZFC with the continuum hypothesis we can prove this field is unique up to order isomorphism, and in ZFC with the negation of continuum hypothesis we can prove that there are non-order-isomorphic pairs of fields that are both countably indexed ultrapowers of the reals. For more information about this method of construction, see ultraproduct. === An intuitive approach to the ultrapower construction === The following is an intuitive way of understanding the hyperreal numbers. The approach taken here is very close to the one in the book by Goldblatt. Recall that the sequences converging to zero are sometimes called infinitely small. These are almost the infinitesimals in a sense; the true infinitesimals include certain classes of sequences that contain a sequence converging to zero. Let us see where these classes come from. Consider first the sequences of real numbers. They form a ring, that is, one can multiply, add and subtract them, but not necessarily divide by a non-zero element. The real numbers are considered as the constant sequences, the sequence is zero if it is identically zero, that is, an = 0 for all n. In our ring of sequences one can get ab = 0 with neither a = 0 nor b = 0. Thus, if for two sequences a , b {\displaystyle a,b} one has ab = 0, at least one of them should be declared zero. Surprisingly enough, there is a consistent way to do it. 
As a result, the equivalence classes of sequences that differ by some sequence declared zero will form a field, which is called a hyperreal field. It will contain the infinitesimals in addition to the ordinary real numbers, as well as infinitely large numbers (the reciprocals of infinitesimals, including those represented by sequences diverging to infinity). Also every hyperreal that is not infinitely large will be infinitely close to an ordinary real, in other words, it will be the sum of an ordinary real and an infinitesimal. This construction is parallel to the construction of the reals from the rationals given by Cantor. He started with the ring of the Cauchy sequences of rationals and declared all the sequences that converge to zero to be zero. The result is the reals. To continue the construction of hyperreals, consider the zero sets of our sequences, that is, the z ( a ) = { i : a i = 0 } {\displaystyle z(a)=\{i:a_{i}=0\}} , that is, z ( a ) {\displaystyle z(a)} is the set of indexes i {\displaystyle i} for which a i = 0 {\displaystyle a_{i}=0} . It is clear that if a b = 0 {\displaystyle ab=0} , then the union of z ( a ) {\displaystyle z(a)} and z ( b ) {\displaystyle z(b)} is N (the set of all natural numbers), so: One of the sequences that vanish on two complementary sets should be declared zero. If a {\displaystyle a} is declared zero, a b {\displaystyle ab} should be declared zero too, no matter what b {\displaystyle b} is. If both a {\displaystyle a} and b {\displaystyle b} are declared zero, then a + b {\displaystyle a+b} should also be declared zero. Now the idea is to single out a bunch U of subsets X of N and to declare that a = 0 {\displaystyle a=0} if and only if z ( a ) {\displaystyle z(a)} belongs to U. From the above conditions one can see that: From two complementary sets one belongs to U. Any set having a subset that belongs to U, also belongs to U. An intersection of any two sets belonging to U belongs to U. 
Finally, we do not want the empty set to belong to U because then everything would belong to U, as every set has the empty set as a subset. Any family of sets that satisfies (2–4) is called a filter (an example: the complements of the finite sets; this is called the Fréchet filter and it is used in the usual limit theory). If (1) also holds, U is called an ultrafilter (because you can add no more sets to it without breaking it). The only explicitly known example of an ultrafilter is the family of sets containing a given element (in our case, say, the number 10). Such ultrafilters are called trivial, and if we use one in our construction, we come back to the ordinary real numbers. Any ultrafilter containing a finite set is trivial. It is known that any filter can be extended to an ultrafilter, but the proof uses the axiom of choice. The existence of a nontrivial ultrafilter (the ultrafilter lemma) can be added as an extra axiom, as it is weaker than the axiom of choice. Now if we take a nontrivial ultrafilter (which is an extension of the Fréchet filter) and do our construction, we get the hyperreal numbers as a result. If f {\displaystyle f} is a real function of a real variable x {\displaystyle x} then f {\displaystyle f} naturally extends to a hyperreal function of a hyperreal variable by composition: f ( { x n } ) = { f ( x n ) } {\displaystyle f(\{x_{n}\})=\{f(x_{n})\}} where { … } {\displaystyle \{\dots \}} means "the equivalence class of the sequence … {\displaystyle \dots } relative to our ultrafilter", two sequences being in the same class if and only if the zero set of their difference belongs to our ultrafilter. All the arithmetical expressions and formulas make sense for hyperreals and hold true if they are true for the ordinary reals. 
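Conditions (1)–(4) above can be checked mechanically on a finite universe, where the only ultrafilters are the trivial (principal) ones, in line with the remark that nontrivial ultrafilters cannot be exhibited explicitly. A sketch (names are illustrative):

```python
from itertools import combinations

def powerset(universe):
    s = sorted(universe)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_ultrafilter(U, universe):
    """Check conditions (1)-(4) from the text: one set of each complementary
    pair belongs to U, supersets and intersections stay in U, and the empty
    set is excluded."""
    U = {frozenset(x) for x in U}
    whole = frozenset(universe)
    if frozenset() in U:                       # empty set must not belong to U
        return False
    for A in powerset(universe):
        if (A in U) == ((whole - A) in U):     # (1) exactly one of A, complement
            return False
    for A in U:
        for B in powerset(universe):
            if A <= B and B not in U:          # (2) closed under supersets
                return False
        for B in U:
            if (A & B) not in U:               # (3) closed under intersections
                return False
    return True

universe = {0, 1, 2}
trivial = [s for s in powerset(universe) if 0 in s]  # principal ultrafilter at 0
print(is_ultrafilter(trivial, universe))             # True
print(is_ultrafilter(powerset(universe), universe))  # False: contains the empty set
```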
It turns out that any finite (that is, such that | x | < a {\displaystyle |x|<a} for some ordinary real a {\displaystyle a} ) hyperreal x {\displaystyle x} will be of the form y + d {\displaystyle y+d} where y {\displaystyle y} is an ordinary (called standard) real and d {\displaystyle d} is an infinitesimal. It can be proven by the bisection method used in proving the Bolzano–Weierstrass theorem; the property (1) of ultrafilters turns out to be crucial. == Properties of infinitesimal and infinite numbers == The finite elements F of *R form a local ring, and in fact a valuation ring, with the unique maximal ideal S being the infinitesimals; the quotient F/S is isomorphic to the reals. Hence we have a homomorphic mapping, st(x), from F to R whose kernel consists of the infinitesimals and which sends every element x of F to a unique real number whose difference from x is in S; which is to say, is infinitesimal. Put another way, every finite nonstandard real number is "very close" to a unique real number, in the sense that if x is a finite nonstandard real, then there exists one and only one real number st(x) such that x – st(x) is infinitesimal. This number st(x) is called the standard part of x, conceptually the same as rounding x to the nearest real number. This operation is an order-preserving homomorphism and hence is well-behaved both algebraically and order theoretically. It is order-preserving though not isotonic; i.e. x ≤ y {\displaystyle x\leq y} implies st ( x ) ≤ st ( y ) {\displaystyle \operatorname {st} (x)\leq \operatorname {st} (y)} , but x < y {\displaystyle x<y} does not imply st ( x ) < st ( y ) {\displaystyle \operatorname {st} (x)<\operatorname {st} (y)} . 
We have, if both x and y are finite, st ( x + y ) = st ( x ) + st ( y ) {\displaystyle \operatorname {st} (x+y)=\operatorname {st} (x)+\operatorname {st} (y)} st ( x y ) = st ( x ) st ( y ) {\displaystyle \operatorname {st} (xy)=\operatorname {st} (x)\operatorname {st} (y)} If x is finite and not infinitesimal, st ( 1 / x ) = 1 / st ( x ) {\displaystyle \operatorname {st} (1/x)=1/\operatorname {st} (x)} x is real if and only if st ( x ) = x {\displaystyle \operatorname {st} (x)=x} The map st is continuous with respect to the order topology on the finite hyperreals; in fact it is locally constant. == Hyperreal fields == Suppose X is a Tychonoff space, also called a T3.5 space, and C(X) is the algebra of continuous real-valued functions on X. Suppose M is a maximal ideal in C(X). Then the factor algebra A = C(X)/M is a totally ordered field F containing the reals. If F strictly contains R then M is called a hyperreal ideal (terminology due to Hewitt (1948)) and F a hyperreal field. Note that no assumption is being made that the cardinality of F is greater than R; it can in fact have the same cardinality. An important special case is where the topology on X is the discrete topology; in this case X can be identified with a cardinal number κ and C(X) with the real algebra Rκ of functions from κ to R. The hyperreal fields we obtain in this case are called ultrapowers of R and are identical to the ultrapowers constructed via free ultrafilters in model theory. 
== See also == Constructive nonstandard analysis Hyperinteger – A hyperreal number that is equal to its own integer part Influence of nonstandard analysis Nonstandard calculus – Modern application of infinitesimals Real closed field – Non-algebraically closed field whose extension by sqrt(–1) is algebraically closed Real line – Line formed by the real numbers Surreal number – Generalization of the real numbers – Surreal numbers are a much larger class of numbers that contains the hyperreals as well as other classes of non-real numbers. == References == == Further reading == Ball, W.W. Rouse (1960), A Short Account of the History of Mathematics (4th ed., reprint of the 1908 London: Macmillan & Co. edition), New York: Dover Publications, pp. 50–62, ISBN 0-486-20630-0 Hatcher, William S. (1982) "Calculus is Algebra", American Mathematical Monthly 89: 362–370. Hewitt, Edwin (1948) Rings of real-valued continuous functions. I. Trans. Amer. Math. Soc. 64, 45–99. Jerison, Meyer; Gillman, Leonard (1976), Rings of continuous functions, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90198-5 Keisler, H. Jerome (1994) The hyperreal line. Real numbers, generalizations of the reals, and theories of continua, 207–237, Synthese Lib., 242, Kluwer Acad. Publ., Dordrecht. Kleinberg, Eugene M.; Henle, James M. (2003), Infinitesimal Calculus, New York: Dover Publications, ISBN 978-0-486-42886-4 == External links == Crowell, Brief Calculus. A text using infinitesimals. Hermoso, Nonstandard Analysis and the Hyperreals. A gentle introduction. Keisler, Elementary Calculus: An Approach Using Infinitesimals. Includes an axiomatic treatment of the hyperreals, and is freely available under a Creative Commons license
|
Wikipedia:Hyperstructure#0
|
Hyperstructures are algebraic structures equipped with at least one multi-valued operation, called a hyperoperation. The largest classes of the hyperstructures are the ones called H v {\displaystyle Hv} – structures. A hyperoperation ( ⋆ ) {\displaystyle (\star )} on a nonempty set H {\displaystyle H} is a mapping from H × H {\displaystyle H\times H} to the nonempty power set P ∗ ( H ) {\displaystyle P^{*}\!(H)} , meaning the set of all nonempty subsets of H {\displaystyle H} , i.e. ⋆ : H × H → P ∗ ( H ) {\displaystyle \star :H\times H\to P^{*}\!(H)} ( x , y ) ↦ x ⋆ y ⊆ H . {\displaystyle \quad \ (x,y)\mapsto x\star y\subseteq H.} For A , B ⊆ H {\displaystyle A,B\subseteq H} we define A ⋆ B = ⋃ a ∈ A , b ∈ B a ⋆ b {\displaystyle A\star B=\bigcup _{a\in A,\,b\in B}a\star b} and A ⋆ x = A ⋆ { x } , {\displaystyle A\star x=A\star \{x\},\,} x ⋆ B = { x } ⋆ B . {\displaystyle x\star B=\{x\}\star B.} ( H , ⋆ ) {\displaystyle (H,\star )} is a semihypergroup if ( ⋆ ) {\displaystyle (\star )} is an associative hyperoperation, i.e. x ⋆ ( y ⋆ z ) = ( x ⋆ y ) ⋆ z {\displaystyle x\star (y\star z)=(x\star y)\star z} for all x , y , z ∈ H . {\displaystyle x,y,z\in H.} Furthermore, a hypergroup is a semihypergroup ( H , ⋆ ) {\displaystyle (H,\star )} , where the reproduction axiom is valid, i.e. a ⋆ H = H ⋆ a = H {\displaystyle a\star H=H\star a=H} for all a ∈ H . {\displaystyle a\in H.} == References == AHA (Algebraic Hyperstructures & Applications). A scientific group at Democritus University of Thrace, School of Education, Greece. aha.eled.duth.gr Applications of Hyperstructure Theory, Piergiulio Corsini, Violeta Leoreanu, Springer, 2003, ISBN 1-4020-1222-5, ISBN 978-1-4020-1222-8 Functional Equations on Hypergroups, László, Székelyhidi, World Scientific Publishing, 2012, ISBN 978-981-4407-00-7
|
Wikipedia:Hypertranscendental function#0
|
A hypertranscendental function or transcendentally transcendental function is a transcendental analytic function which is not the solution of an algebraic differential equation with coefficients in Z {\displaystyle \mathbb {Z} } (the integers) and with algebraic initial conditions. == History == The term 'transcendentally transcendental' was introduced by E. H. Moore in 1896; the term 'hypertranscendental' was introduced by D. D. Morduhai-Boltovskoi in 1914. == Definition == One standard definition (there are slight variants) defines solutions of differential equations of the form F ( x , y , y ′ , ⋯ , y ( n ) ) = 0 {\displaystyle F\left(x,y,y',\cdots ,y^{(n)}\right)=0} , where F {\displaystyle F} is a polynomial with constant coefficients, as algebraically transcendental or differentially algebraic. Transcendental functions which are not algebraically transcendental are transcendentally transcendental. Hölder's theorem shows that the gamma function is in this category. Hypertranscendental functions usually arise as the solutions to functional equations, for example the gamma function. == Examples == === Hypertranscendental functions === The zeta functions of algebraic number fields, in particular, the Riemann zeta function The gamma function (cf. Hölder's theorem) === Transcendental but not hypertranscendental functions === The exponential function, logarithm, and the trigonometric and hyperbolic functions. The generalized hypergeometric functions, including special cases such as Bessel functions (except some special cases which are algebraic). === Non-transcendental (algebraic) functions === All algebraic functions, in particular polynomials. == See also == Hypertranscendental number == Notes == == References == Loxton, J.H., Poorten, A.J. van der, "A class of hypertranscendental functions", Aequationes Mathematicae, Periodical volume 16 Mahler, K., "Arithmetische Eigenschaften einer Klasse transzendental-transzendenter Funktionen", Math. Z. 32 (1930) 545-585. 
Morduhaĭ-Boltovskoĭ, D. (1949), "On hypertranscendental functions and hypertranscendental numbers", Doklady Akademii Nauk SSSR, New Series (in Russian), 64: 21–24, MR 0028347
|
Wikipedia:Hypoelliptic operator#0
|
In the theory of partial differential equations, a partial differential operator P {\displaystyle P} defined on an open subset U ⊂ R n {\displaystyle U\subset {\mathbb {R} }^{n}} is called hypoelliptic if for every distribution u {\displaystyle u} defined on an open subset V ⊂ U {\displaystyle V\subset U} such that P u {\displaystyle Pu} is C ∞ {\displaystyle C^{\infty }} (smooth), u {\displaystyle u} must also be C ∞ {\displaystyle C^{\infty }} . If this assertion holds with C ∞ {\displaystyle C^{\infty }} replaced by real-analytic, then P {\displaystyle P} is said to be analytically hypoelliptic. Every elliptic operator with C ∞ {\displaystyle C^{\infty }} coefficients is hypoelliptic. In particular, the Laplacian is an example of a hypoelliptic operator (the Laplacian is also analytically hypoelliptic). In addition, the operator for the heat equation ( P ( u ) = u t − k Δ u {\displaystyle P(u)=u_{t}-k\,\Delta u\,} ) P = ∂ t − k Δ x {\displaystyle P=\partial _{t}-k\,\Delta _{x}\,} (where k > 0 {\displaystyle k>0} ) is hypoelliptic but not elliptic. However, the operator for the wave equation ( P ( u ) = u t t − c 2 Δ u {\displaystyle P(u)=u_{tt}-c^{2}\,\Delta u\,} ) P = ∂ t 2 − c 2 Δ x {\displaystyle P=\partial _{t}^{2}-c^{2}\,\Delta _{x}\,} (where c ≠ 0 {\displaystyle c\neq 0} ) is not hypoelliptic. == References == Shimakura, Norio (1992). Partial differential operators of elliptic type: translated by Norio Shimakura. American Mathematical Society, Providence, R.I. ISBN 0-8218-4556-X. Egorov, Yu. V.; Schulze, Bert-Wolfgang (1997). Pseudo-differential operators, singularities, applications. Birkhäuser. ISBN 3-7643-5484-4. Vladimirov, V. S. (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 0-415-27356-0. Folland, G. B. (2009). Fourier Analysis and its applications. AMS. ISBN 978-0-8218-4790-9. 
This article incorporates material from Hypoelliptic on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
Wikipedia:Hypograph (mathematics)#0
|
In mathematics, the hypograph or subgraph of a function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} } is the set of points lying on or below its graph. A related definition is that of such a function's epigraph, which is the set of points on or above the function's graph. The domain (rather than the codomain) of the function is not particularly important for this definition; it can be an arbitrary set instead of R n {\displaystyle \mathbb {R} ^{n}} . == Definition == The definition of the hypograph was inspired by that of the graph of a function, where the graph of f : X → Y {\displaystyle f:X\to Y} is defined to be the set graph f := { ( x , y ) ∈ X × Y : y = f ( x ) } . {\displaystyle \operatorname {graph} f:=\left\{(x,y)\in X\times Y~:~y=f(x)\right\}.} The hypograph or subgraph of a function f : X → [ − ∞ , ∞ ] {\displaystyle f:X\to [-\infty ,\infty ]} valued in the extended real numbers [ − ∞ , ∞ ] = R ∪ { ± ∞ } {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}} is the set hyp f = { ( x , r ) ∈ X × R : r ≤ f ( x ) } = [ f − 1 ( ∞ ) × R ] ∪ ⋃ x ∈ f − 1 ( R ) ( { x } × ( − ∞ , f ( x ) ] ) . {\displaystyle {\begin{alignedat}{4}\operatorname {hyp} f&=\left\{(x,r)\in X\times \mathbb {R} ~:~r\leq f(x)\right\}\\&=\left[f^{-1}(\infty )\times \mathbb {R} \right]\cup \bigcup _{x\in f^{-1}(\mathbb {R} )}(\{x\}\times (-\infty ,f(x)]).\end{alignedat}}} Similarly, the set of points on or above the function is its epigraph. The strict hypograph is the hypograph with the graph removed: hyp S f = { ( x , r ) ∈ X × R : r < f ( x ) } = hyp f ∖ graph f = ⋃ x ∈ X ( { x } × ( − ∞ , f ( x ) ) ) . 
{\displaystyle {\begin{alignedat}{4}\operatorname {hyp} _{S}f&=\left\{(x,r)\in X\times \mathbb {R} ~:~r<f(x)\right\}\\&=\operatorname {hyp} f\setminus \operatorname {graph} f\\&=\bigcup _{x\in X}(\{x\}\times (-\infty ,f(x))).\end{alignedat}}} Despite the fact that f {\displaystyle f} might take one (or both) of ± ∞ {\displaystyle \pm \infty } as a value (in which case its graph would not be a subset of X × R {\displaystyle X\times \mathbb {R} } ), the hypograph of f {\displaystyle f} is nevertheless defined to be a subset of X × R {\displaystyle X\times \mathbb {R} } rather than of X × [ − ∞ , ∞ ] . {\displaystyle X\times [-\infty ,\infty ].} == Properties == The hypograph of a function f {\displaystyle f} is empty if and only if f {\displaystyle f} is identically equal to negative infinity. A function is concave if and only if its hypograph is a convex set. The hypograph of a real affine function g : R n → R {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} } is a halfspace in R n + 1 . {\displaystyle \mathbb {R} ^{n+1}.} A function is upper semicontinuous if and only if its hypograph is closed. == See also == Effective domain Epigraph (mathematics) – Region above a graph Proper convex function == Citations == == References == Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544.
|
Wikipedia:Hypostatic abstraction#0
|
Hypostatic abstraction in philosophy and mathematical logic, also known as hypostasis or subjectal abstraction, is a formal operation that transforms a predicate into a relation; for example "Honey is sweet" is transformed into "Honey has sweetness". The relation is created between the original subject and a new term that represents the property expressed by the original predicate. == Description == === Technical definition === Hypostasis changes a propositional formula of the form X is Y to another one of the form X has the property of being Y or X has Y-ness. The logical functioning of the second object Y-ness consists solely in the truth-values of those propositions that have the corresponding abstract property Y as the predicate. The object of thought introduced in this way may be called a hypostatic object and in some senses an abstract object and a formal object. The above definition is adapted from the one given by Charles Sanders Peirce. As Peirce describes it, the main point about the formal operation of hypostatic abstraction, insofar as it operates on formal linguistic expressions, is that it converts a predicative adjective or predicate into an extra subject, thus increasing by one the number of "subject" slots—called the arity or adicity—of the main predicate. The distinction between particular objects and a formal object is noted by Anthony Kenny: we might identify any object as having a certain property to which we respond, but the formal object of that response is the property which we implicitly ascribe to the particular object by virtue of us having that response: thus if a certain red rose is "lovely", the rose has the property of loveliness, and this loveliness is the formal object of our aesthetic appreciation of the rose. 
=== Application === The grammatical trace of this hypostatic transformation is a process that extracts the adjective "sweet" from the predicate "is sweet", replacing it by a new, increased-arity predicate "possesses", and as a by-product of the reaction, as it were, precipitating out the substantive "sweetness" as a second subject of the new predicate. The abstraction of hypostasis takes the concrete physical sense of "taste" found in "honey is sweet" and ascribes to it the formal metaphysical characteristics in "honey has sweetness". == See also == == References == === Sources ===
|
Wikipedia:Hölder's theorem#0
|
In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found. The theorem also generalizes to the q {\displaystyle q} -gamma function. == Statement of the theorem == For every n ∈ N 0 , {\displaystyle n\in \mathbb {N} _{0},} there is no non-zero polynomial P ∈ C [ X ; Y 0 , Y 1 , … , Y n ] {\displaystyle P\in \mathbb {C} [X;Y_{0},Y_{1},\ldots ,Y_{n}]} such that ∀ z ∈ C ∖ Z ≤ 0 : P ( z ; Γ ( z ) , Γ ′ ( z ) , … , Γ ( n ) ( z ) ) = 0 , {\displaystyle \forall z\in \mathbb {C} \setminus \mathbb {Z} _{\leq 0}:\qquad P\left(z;\Gamma (z),\Gamma '(z),\ldots ,{\Gamma ^{(n)}}(z)\right)=0,} where Γ {\displaystyle \Gamma } is the gamma function. For example, define P ∈ C [ X ; Y 0 , Y 1 , Y 2 ] {\displaystyle P\in \mathbb {C} [X;Y_{0},Y_{1},Y_{2}]} by P = df X 2 Y 2 + X Y 1 + ( X 2 − ν 2 ) Y 0 . {\displaystyle P~{\stackrel {\text{df}}{=}}~X^{2}Y_{2}+XY_{1}+(X^{2}-\nu ^{2})Y_{0}.} Then the equation P ( z ; f ( z ) , f ′ ( z ) , f ″ ( z ) ) = z 2 f ″ ( z ) + z f ′ ( z ) + ( z 2 − ν 2 ) f ( z ) ≡ 0 {\displaystyle P\left(z;f(z),f'(z),f''(z)\right)=z^{2}f''(z)+zf'(z)+\left(z^{2}-\nu ^{2}\right)f(z)\equiv 0} is called an algebraic differential equation, which, in this case, has the solutions f = J ν {\displaystyle f=J_{\nu }} and f = Y ν {\displaystyle f=Y_{\nu }} — the Bessel functions of the first and second kind respectively. Hence, we say that J ν {\displaystyle J_{\nu }} and Y ν {\displaystyle Y_{\nu }} are differentially algebraic (also algebraically transcendental). Most of the familiar special functions of mathematical physics are differentially algebraic. All algebraic combinations of differentially algebraic functions are differentially algebraic. Furthermore, all compositions of differentially algebraic functions are differentially algebraic. 
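What it means for J_ν to be differentially algebraic can be illustrated numerically. The sketch below computes J_0 (the ν = 0 case) from its standard power series rather than a library routine, differentiates the series term-wise, and checks that the polynomial P given above evaluates to (numerically) zero along the function; the helper name `j0_series` is a hypothetical choice.

```python
import math

def j0_series(z, terms=30):
    """J_0 and its first two derivatives from the power series
    J_0(z) = sum_{m>=0} (-1)^m (z/2)^{2m} / (m!)^2, differentiated
    term by term (truncated, so valid only for moderate |z|)."""
    f = fp = fpp = 0.0
    for m in range(terms):
        c = (-1) ** m / (math.factorial(m) ** 2 * 4 ** m)  # coeff of z^{2m}
        k = 2 * m
        f += c * z ** k
        if k >= 1:
            fp += c * k * z ** (k - 1)
        if k >= 2:
            fpp += c * k * (k - 1) * z ** (k - 2)
    return f, fp, fpp

z = 1.5
f, fp, fpp = j0_series(z)
# P(z; f, f', f'') = z^2 f'' + z f' + (z^2 - 0^2) f should vanish:
residual = z ** 2 * fpp + z * fp + z ** 2 * f
assert abs(residual) < 1e-10
```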
Hölder’s Theorem simply states that the gamma function, Γ {\displaystyle \Gamma } , is not differentially algebraic and is therefore transcendentally transcendental. == Proof == Let n ∈ N 0 , {\displaystyle n\in \mathbb {N} _{0},} and assume that a non-zero polynomial P ∈ C [ X ; Y 0 , Y 1 , … , Y n ] {\displaystyle P\in \mathbb {C} [X;Y_{0},Y_{1},\ldots ,Y_{n}]} exists such that ∀ z ∈ C ∖ Z ≤ 0 : P ( z ; Γ ( z ) , Γ ′ ( z ) , … , Γ ( n ) ( z ) ) = 0. {\displaystyle \forall z\in \mathbb {C} \setminus \mathbb {Z} _{\leq 0}:\qquad P\left(z;\Gamma (z),\Gamma '(z),\ldots ,{\Gamma ^{(n)}}(z)\right)=0.} As a non-zero polynomial in C [ X ] {\displaystyle \mathbb {C} [X]} can never give rise to the zero function on any non-empty open domain of C {\displaystyle \mathbb {C} } (by the fundamental theorem of algebra), we may suppose, without loss of generality, that P {\displaystyle P} contains a monomial term having a non-zero power of one of the indeterminates Y 0 , Y 1 , … , Y n {\displaystyle Y_{0},Y_{1},\ldots ,Y_{n}} . Assume also that P {\displaystyle P} has the lowest possible overall degree with respect to the lexicographic ordering Y 0 < Y 1 < ⋯ < Y n < X . {\displaystyle Y_{0}<Y_{1}<\cdots <Y_{n}<X.} For example, deg ( − 3 X 10 Y 0 2 Y 1 4 + i X 2 Y 2 ) < deg ( 2 X Y 0 3 − Y 1 4 ) {\displaystyle \deg \left(-3X^{10}Y_{0}^{2}Y_{1}^{4}+iX^{2}Y_{2}\right)<\deg \left(2XY_{0}^{3}-Y_{1}^{4}\right)} because the highest power of Y 0 {\displaystyle Y_{0}} in any monomial term of the first polynomial is smaller than that of the second polynomial. Next, observe that for all z ∈ C ∖ Z ≤ 0 {\displaystyle z\in \mathbb {C} \smallsetminus \mathbb {Z} _{\leq 0}} we have: P ( z + 1 ; Γ ( z + 1 ) , Γ ′ ( z + 1 ) , Γ ″ ( z + 1 ) , … , Γ ( n ) ( z + 1 ) ) = P ( z + 1 ; z Γ ( z ) , [ z Γ ( z ) ] ′ , [ z Γ ( z ) ] ″ , … , [ z Γ ( z ) ] ( n ) ) = P ( z + 1 ; z Γ ( z ) , z Γ ′ ( z ) + Γ ( z ) , z Γ ″ ( z ) + 2 Γ ′ ( z ) , … , z Γ ( n ) ( z ) + n Γ ( n − 1 ) ( z ) ) . 
{\displaystyle {\begin{aligned}&P\left(z+1;\Gamma (z+1),\Gamma '(z+1),\Gamma ''(z+1),\ldots ,\Gamma ^{(n)}(z+1)\right)\\[1ex]&=P\left(z+1;z\Gamma (z),[z\Gamma (z)]',[z\Gamma (z)]'',\ldots ,[z\Gamma (z)]^{(n)}\right)\\[1ex]&=P\left(z+1;z\Gamma (z),z\Gamma '(z)+\Gamma (z),z\Gamma ''(z)+2\Gamma '(z),\ldots ,z{\Gamma ^{(n)}}(z)+n{\Gamma ^{(n-1)}}(z)\right).\end{aligned}}} If we define a second polynomial Q ∈ C [ X ; Y 0 , Y 1 , … , Y n ] {\displaystyle Q\in \mathbb {C} [X;Y_{0},Y_{1},\ldots ,Y_{n}]} by the transformation Q = df P ( X + 1 ; X Y 0 , X Y 1 + Y 0 , X Y 2 + 2 Y 1 , … , X Y n + n Y n − 1 ) , {\displaystyle Q~{\stackrel {\text{df}}{=}}~P(X+1;XY_{0},XY_{1}+Y_{0},XY_{2}+2Y_{1},\ldots ,XY_{n}+nY_{n-1}),} then we obtain the following algebraic differential equation for Γ {\displaystyle \Gamma } : ∀ z ∈ C ∖ Z ≤ 0 : Q ( z ; Γ ( z ) , Γ ′ ( z ) , … , Γ ( n ) ( z ) ) ≡ 0. {\displaystyle \forall z\in \mathbb {C} \setminus \mathbb {Z} _{\leq 0}:\qquad Q\left(z;\Gamma (z),\Gamma '(z),\ldots ,{\Gamma ^{(n)}}(z)\right)\equiv 0.} Furthermore, if X h Y 0 h 0 Y 1 h 1 ⋯ Y n h n {\displaystyle X^{h}Y_{0}^{h_{0}}Y_{1}^{h_{1}}\cdots Y_{n}^{h_{n}}} is the highest-degree monomial term in P {\displaystyle P} , then the highest-degree monomial term in Q {\displaystyle Q} is X h + h 0 + h 1 + ⋯ + h n Y 0 h 0 Y 1 h 1 ⋯ Y n h n . {\displaystyle X^{h+h_{0}+h_{1}+\cdots +h_{n}}Y_{0}^{h_{0}}Y_{1}^{h_{1}}\cdots Y_{n}^{h_{n}}.} Consequently, the polynomial Q − X h 0 + h 1 + ⋯ + h n P {\displaystyle Q-X^{h_{0}+h_{1}+\cdots +h_{n}}P} has a smaller overall degree than P {\displaystyle P} , and as it clearly gives rise to an algebraic differential equation for Γ {\displaystyle \Gamma } , it must be the zero polynomial by the minimality assumption on P {\displaystyle P} . 
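As a sanity check on the recurrences driving this step, the functional equation Γ(z + 1) = zΓ(z) and the n = 1 case of [zΓ(z)]^(n) = zΓ^(n)(z) + nΓ^(n−1)(z) can be verified numerically. This is only a finite-difference spot check at one point, and the helper name `d` is a hypothetical choice.

```python
import math

def d(f, z, h=1e-6):
    """Central finite-difference approximation to f'(z)."""
    return (f(z + h) - f(z - h)) / (2 * h)

g = math.gamma
z = 2.7

# Functional equation used throughout the proof:
assert abs(g(z + 1) - z * g(z)) < 1e-9

# n = 1 case of [zGamma(z)]^{(n)} = z Gamma^{(n)}(z) + n Gamma^{(n-1)}(z):
lhs = d(lambda t: t * g(t), z)   # (z Gamma(z))'
rhs = z * d(g, z) + g(z)         # z Gamma'(z) + Gamma(z)
assert abs(lhs - rhs) < 1e-6
```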
Hence, defining R ∈ C [ X ] {\displaystyle R\in \mathbb {C} [X]} by R = df X h 0 + h 1 + ⋯ + h n , {\displaystyle R~{\stackrel {\text{df}}{=}}~X^{h_{0}+h_{1}+\cdots +h_{n}},} we get Q = P ( X + 1 ; X Y 0 , X Y 1 + Y 0 , X Y 2 + 2 Y 1 , … , X Y n + n Y n − 1 ) = R ( X ) ⋅ P ( X ; Y 0 , Y 1 , … , Y n ) . {\displaystyle Q=P(X+1;XY_{0},XY_{1}+Y_{0},XY_{2}+2Y_{1},\ldots ,XY_{n}+nY_{n-1})=R(X)\cdot P(X;Y_{0},Y_{1},\ldots ,Y_{n}).} Now, let X = 0 {\displaystyle X=0} in Q {\displaystyle Q} to obtain Q ( 0 ; Y 0 , Y 1 , … , Y n ) = P ( 1 ; 0 , Y 0 , 2 Y 1 , … , n Y n − 1 ) = R ( 0 ) ⋅ P ( 0 ; Y 0 , Y 1 , … , Y n ) = 0 C [ Y 0 , Y 1 , … , Y n ] . {\displaystyle Q(0;Y_{0},Y_{1},\ldots ,Y_{n})=P(1;0,Y_{0},2Y_{1},\ldots ,nY_{n-1})=R(0)\cdot P(0;Y_{0},Y_{1},\ldots ,Y_{n})=0_{\mathbb {C} [Y_{0},Y_{1},\ldots ,Y_{n}]}.} A change of variables then yields P ( 1 ; 0 , Y 1 , Y 2 , … , Y n ) = 0 C [ Y 0 , Y 1 , … , Y n ] , {\displaystyle P(1;0,Y_{1},Y_{2},\ldots ,Y_{n})=0_{\mathbb {C} [Y_{0},Y_{1},\ldots ,Y_{n}]},} and an application of mathematical induction (along with a change of variables at each induction step) to the earlier expression P ( X + 1 ; X Y 0 , X Y 1 + Y 0 , X Y 2 + 2 Y 1 , … , X Y n + n Y n − 1 ) = R ( X ) ⋅ P ( X ; Y 0 , Y 1 , … , Y n ) {\displaystyle P(X+1;XY_{0},XY_{1}+Y_{0},XY_{2}+2Y_{1},\ldots ,XY_{n}+nY_{n-1})=R(X)\cdot P(X;Y_{0},Y_{1},\ldots ,Y_{n})} reveals that ∀ m ∈ N : P ( m ; 0 , Y 1 , Y 2 , … , Y n ) = 0 C [ Y 0 , Y 1 , … , Y n ] . {\displaystyle \forall m\in \mathbb {N} :\qquad P(m;0,Y_{1},Y_{2},\ldots ,Y_{n})=0_{\mathbb {C} [Y_{0},Y_{1},\ldots ,Y_{n}]}.} This is possible only if P {\displaystyle P} is divisible by Y 0 {\displaystyle Y_{0}} , which contradicts the minimality assumption on P {\displaystyle P} . Therefore, no such P {\displaystyle P} exists, and so Γ {\displaystyle \Gamma } is not differentially algebraic. Q.E.D. == References ==
|
Wikipedia:IM 67118#0
|
IM 67118, also known as Db2-146, is an Old Babylonian clay tablet in the collection of the Iraq Museum that contains the solution to a problem in plane geometry concerning a rectangle with given area and diagonal. In the last part of the text, the solution is proved correct using the Pythagorean theorem. The steps of the solution are believed to represent cut-and-paste geometry operations involving a diagram from which, it has been suggested, ancient Mesopotamians might, at an earlier time, have derived the Pythagorean theorem. == Description == The tablet was excavated in 1962 at Tell edh-Dhiba'i, an Old Babylonian settlement near modern Baghdad that was once part of the kingdom of Eshnunna, and was published by Taha Baqir in the same year. It dates to approximately 1770 BCE (according to the middle chronology), during the reign of Ibal-pi-el II, who ruled Eshnunna at the same time that Hammurabi ruled Babylon. The tablet measures 11.5 cm × 6.8 cm × 3.3 cm (4+1⁄2 in × 2+3⁄4 in × 1+1⁄4 in). Its language is Akkadian, written in cuneiform script. There are 19 lines of text on the tablet's obverse and six on its reverse. The reverse also contains a diagram consisting of the rectangle of the problem and one of its diagonals. Along that diagonal is written its length in sexagesimal notation; the area of the rectangle is written in the triangular region below the diagonal. == Problem and its solution == In modern mathematical language, the problem posed on the tablet is the following: a rectangle has area A = 0.75 and diagonal c = 1.25. What are the lengths a and b of the sides of the rectangle? The solution can be understood as proceeding in two stages: in stage 1, the quantity c 2 − 2 A {\displaystyle {\sqrt {c^{2}-2A}}} is computed to be 0.25. In stage 2, the well-attested Old Babylonian method of completing the square is used to solve what is effectively the system of equations b − a = 0.25, ab = 0.75. 
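In modern arithmetic the two stages can be sketched as follows. This is a decimal reconstruction of the computation, not the scribe's sexagesimal procedure; all intermediate values happen to be exactly representable, so the comparisons below are exact.

```python
import math

A, c = 0.75, 1.25                 # given area and diagonal

# Stage 1: c^2 - 2A = (b - a)^2, since c^2 = a^2 + b^2 for a rectangle.
diff = math.sqrt(c * c - 2 * A)   # b - a
assert diff == 0.25

# Stage 2: complete the square to solve b - a = 0.25, ab = 0.75.
half = diff / 2                           # 0.125
side = math.sqrt(A + half * half)         # side of the completed square, 0.875
assert side == 0.875
b = side + half                           # 1.0  (the length)
a = side - half                           # 0.75 (the width)
assert (a, b) == (0.75, 1.0)
```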
Geometrically this is the problem of computing the lengths of the sides of a rectangle whose area A and side-length difference b−a are known, which was a recurring problem in Old Babylonian mathematics. In this case it is found that b = 1 and a = 0.75. The solution method suggests that whoever devised the solution was using the property c² − 2A = c² − 2ab = (b − a)². It must be emphasized, however, that the modern notation for equations and the practice of representing parameters and unknowns by letters were unheard of in ancient times. It is now widely accepted, as a result of Jens Høyrup's extensive analysis of the vocabulary of Old Babylonian mathematics, that underlying the procedures in texts such as IM 67118 was a set of standard cut-and-paste geometric operations, not a symbolic algebra. From the vocabulary of the solution Høyrup concludes that c², the square of the diagonal, is to be understood as a geometric square, from which an area equal to 2A is to be "cut off", that is, removed, leaving a square with side b − a. Høyrup suggests that the square on the diagonal was possibly formed by making four copies of the rectangle, each rotated by 90°, and that the area 2A was the area of the four right triangles contained in the square on the diagonal. The remainder is the small square in the center of the figure. The geometric procedure for computing the lengths of the sides of a rectangle of given area A and side-length difference b − a was to transform the rectangle into a gnomon of area A by cutting off a rectangular piece of dimensions a×½(b − a) and pasting this piece onto the side of the rectangle. The gnomon was then completed to a square by adding a smaller square of side ½(b − a) to it. In this problem, the side of the completed square is computed to be A + 1 4 ( b − a ) 2 = 0.75 + 0.015625 = 0.875 {\displaystyle {\sqrt {A+{\tfrac {1}{4}}(b-a)^{2}}}={\sqrt {0.75+0.015625}}=0.875} . 
The quantity ½(b − a)=0.125 is then added to the horizontal side of the square and subtracted from the vertical side. The resulting line segments are the sides of the desired rectangle. One difficulty in reconstructing Old Babylonian geometric diagrams is that known tablets never include diagrams in solutions (even in geometric solutions where explicit constructions are described in text), although diagrams are often included in formulations of problems. Høyrup argues that the cut-and-paste geometry would have been performed in some medium other than clay, perhaps in sand or on a "dust abacus", at least in the early stages of a scribe's training before mental facility with geometric calculation had been developed. Friberg does describe some tablets containing drawings of "figures within figures", including MS 2192, in which the band separating two concentric equilateral triangles is divided into three trapezoids. He writes, "The idea of computing the area of a triangular band as the area of a chain of trapezoids is a variation on the idea of computing the area of a square band as the area of a chain of four rectangles. This is a simple idea, and it is likely that it was known by Old Babylonian mathematicians, although no cuneiform mathematical text has yet been found where this idea enters in an explicit way." He argues that this idea is implicit in the text of IM 67118. He also invites a comparison with the diagram of YBC 7329, in which two concentric squares are shown. The band separating the squares is not subdivided into four rectangles on this tablet, but the numerical value of the area of one of the rectangles does appear next to the figure. == Checking the solution == The solution b = 1, a = 0.75 is proved correct by computing the areas of squares with the corresponding side-lengths, adding these areas, and computing the side-length of the square with the resulting area, that is, by taking the square root. 
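In modern terms the check can be reproduced directly; the last two assertions render the results in base 60, matching the diagonal 1,15 and the area 45 written on the tablet.

```python
import math

a, b, c, A = 0.75, 1.0, 1.25, 0.75

# Pythagorean check: the diagonal of the 0.75 x 1 rectangle ...
assert math.sqrt(a * a + b * b) == c
# ... and the area check:
assert a * b == A

# Sexagesimal rendering of the given values (1;15 and 0;45):
assert round((c - 1) * 60) == 15   # diagonal written 1,15
assert round(A * 60) == 45         # area written 45
```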
This is an application of the Pythagorean theorem, c = a 2 + b 2 {\displaystyle c={\sqrt {a^{2}+b^{2}}}} , and the result agrees with the given value, c = 1.25. That the area is also correct is verified by computing the product, ab. == Translation == The following translation is given by Britton, Proust, and Shnider and is based on the translation of Høyrup, which in turn is based on the hand copy and transliteration of Baqir, with some small corrections. Babylonian sexagesimal numbers are translated into decimal notation with base-60 digits separated by commas. Hence 1,15 means 1 + 15/60 = 5/4 = 1.25. Note that there was no "sexagesimal point" in the Babylonian system, so the overall power of 60 multiplying a number had to be inferred from context. The translation is "conformal", which, as described by Eleanor Robson, "involves consistently translating Babylonian technical terms with existing English words or neologisms which match the original meanings as closely as possible"; it also preserves Akkadian word order. Old Babylonian mathematics used different words for multiplication depending on the underlying geometric context and similarly for the other arithmetic operations. Obverse If, about a (rectangle with) diagonal, (somebody) asks you thus, 1,15 the diagonal, 45 the surface; length and width corresponding to what? You, by your proceeding, 1,15, your diagonal, its counterpart lay down: make them hold: 1,33,45 comes up, 1,33,45 may (?) your (?) hand hold (?) 45 your surface to two bring: 1,30 comes up. From 1,33,45 cut off: 3,45 the remainder. The equalside of 3,45 take: 15 comes up. Its half-part, 7,30 comes up, to 7,30 raise: 56,15 comes up 56,15 your hand. 45 your surface over your hand, 45,56,15 comes up. The equalside of 45,56,15 take: 52,30 comes up, 52,30 its counterpart lay down, 7,30 which you have made hold to one append: from one cut off. 1 your length, 45 the width. 
If 1 the length, 45 the width, the surface and the diagonal corresponding to what? (You by your) making, the length make hold: (1 comes up ...) may your head hold. Reverse [...]: 45, the width, make hold: 33,45 comes up. To your length append: 1,33,45 comes up. The equalside of 1,33,45 take: 1,15 comes up. 1,15 your diagonal. Your length to the width raise, 45 your surface. Thus the procedure. The problem statement is given in lines 1–3, stage 1 of the solution in lines 3–9, stage 2 of the solution in lines 9–16, and verification of the solution in lines 16–24. Note that "1,15 your diagonal, its counterpart lay down: make them hold" means to form a square by laying down perpendicular copies of the diagonal, the "equalside" is the side of a square, or the square root of its area, "may your head hold" means to remember, and "your hand" may refer to "a pad or a device for computation". == Relation to other texts == Problem 2 on the tablet MS 3971 in the Schøyen collection, published by Friberg, is identical to the problem on IM 67118. The solution is very similar but proceeds by adding 2A to c2, rather than subtracting it. The side of the resulting square equals b + a = 1.75 in this case. The system of equations b + a = 1.75, ab = 0.75 is again solved by completing the square. MS 3971 contains no diagram and does not perform the verification step. Its language is "terse" and uses many Sumerian logograms in comparison with the "verbose" IM 67118, which is in syllabic Akkadian. Friberg believes this text comes from Uruk, in southern Iraq, and dates it before 1795 BCE. Friberg points out a similar problem in a 3rd-century BCE Egyptian Demotic papyrus, P. Cairo, problems 34 and 35, published by Parker in 1972. Friberg also sees a possible connection to A.A. Vaiman's explanation of an entry in the Old Babylonian table of constants TMS 3, which reads, "57 36, constant of the šàr". 
Vaiman notes that the cuneiform sign for šàr resembles a chain of four right triangles arranged in a square, as in the proposed figure. The area of such a chain is 24/25 (equal to 57 36 in sexagesimal) if one assumes 3-4-5 right triangles with hypotenuse normalized to length 1. Høyrup writes that the problem of IM 67118 "turns up, solved in precisely the same way, in a Hebrew manual from 1116 ce". == Significance == Although the problem on IM 67118 is concerned with a specific rectangle, whose sides and diagonal form a scaled version of the 3-4-5 right triangle, the language of the solution is general, usually specifying the functional role of each number as it is used. In the later part of the text, an abstract formulation is seen in places, making no reference to particular values ("the length make hold", "Your length to the width raise."). Høyrup sees in this "an unmistakeable trace of the 'Pythagorean rule' in abstract formulation". The manner of discovery of the Pythagorean rule is unknown, but some scholars see a possible path in the method of solution used on IM 67118. The observation that subtracting 2A from c² yields (b − a)² need only be augmented by a geometric rearrangement of areas corresponding to a², b², and −2A = −2ab to obtain a rearrangement proof of the rule, one which is well known in modern times and which is also suggested in the third century CE in Zhao Shuang's commentary on the ancient Chinese Zhoubi Suanjing (Gnomon of the Zhou). The formulation of the solution in MS 3971, problem 2, having no subtracted areas, provides a possibly even more straightforward derivation. 
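The dual procedure of MS 3971, problem 2, and Vaiman's reading of the constant 57 36 can both be checked numerically; this is a sketch in modern decimal arithmetic, not the scribe's procedure.

```python
import math
from fractions import Fraction

A, c = 0.75, 1.25

# MS 3971, problem 2: add 2A to c^2 instead of subtracting it, giving
# (b + a)^2, then solve b + a = 1.75, ab = 0.75 by completing the square.
s = math.sqrt(c * c + 2 * A)       # b + a
assert s == 1.75
half = s / 2                       # 0.875, half the sum of the sides
side = math.sqrt(half * half - A)  # 0.125, half the difference of the sides
b, a = half + side, half - side
assert (a, b) == (0.75, 1.0)

# Vaiman's reading: a chain of four 3-4-5 right triangles with hypotenuse 1
# fills 24/25 of the unit square, and 24/25 is 57 36 in sexagesimal.
assert Fraction(24, 25) == Fraction(57, 60) + Fraction(36, 3600)
```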
Høyrup proposes the hypothesis, based in part on similarities among word problems that reappear over a broad range of times and places and on the language and numerical content of such problems, that much of the scribal Old Babylonian mathematical material was imported from the practical surveyor tradition, where solving riddle problems was used as a badge of professional skill. Høyrup believes that this surveyor culture survived the demise of Old Babylonian scribal culture that resulted from the Hittite conquest of Mesopotamia in the early 16th century BCE and that it influenced the mathematics of ancient Greece, of Babylon during the Seleucid period, of the Islamic empire, and of medieval Europe. Among the problems Høyrup ascribes to this practical surveyor tradition are several rectangle problems requiring completing the square, including the problem of IM 67118. On the basis that no third-millennium BCE references to the Pythagorean rule are known, and that the formulation of IM 67118 is already adapted to the scribal culture, Høyrup writes, "To judge from this evidence alone it is therefore likely that the Pythagorean rule was discovered within the lay surveyors' environment, possibly as a spin-off from the problem treated in Db2-146, somewhere between 2300 and 1825 BC." Thus the rule named after Pythagoras, who was born about 570 BCE and died c.495 BCE, is shown to have been discovered about 12 centuries before his birth. == See also == Plimpton 322 YBC 7289 == Notes == == References == Baqir, Taha (1962). "Tell Dhiba'i: New mathematical texts". Sumer. 18: 11–14, pl. 1–3. Baqir, Taha (2019). "P254557". Cuneiform Digital Library Initiative. Retrieved 6 August 2019. Britton, John P.; Proust, Christine; Shnider, Steve (2011). "Plimpton 322: a review and a different perspective". Archive for History of Exact Sciences. 65 (5): 519–566. doi:10.1007/s00407-011-0083-4. S2CID 120417550. 
Friberg, Jöran (2007), A Remarkable Collection of Babylonian Mathematical Texts: Manuscripts in the Schøyen Collection, Cuneiform Texts I, Sources and Studies in the History of Mathematics and Physical Sciences, Berlin: Springer, ISBN 978-0-387-48977-3 Guthrie, William Keith Chambers (1978). A history of Greek philosophy, Volume 1: The earlier Presocratics and the Pythagoreans. Cambridge University Press. p. 173. ISBN 978-0-521-29420-1. The dates of [Pythagoras'] life cannot be fixed exactly, but assuming the approximate correctness of the statement of Aristoxenus (ap. Porph. V.P. 9) that he left Samos to escape the tyranny of Polycrates at the age of forty, we may put his birth round about 570 BC, or a few years earlier. The length of his life was variously estimated in antiquity, but it is agreed that he lived to a fairly ripe old age, and most probably he died at about seventy-five or eighty. Høyrup, Jens (1990). "Algebra and naive geometry: an investigation of some basic aspects of Old Babylonian mathematical thought II". Altorientalische Forschungen. 17 (1–2): 262–354. doi:10.1524/aofo.1990.17.12.262. S2CID 201669080. Høyrup, Jens (1998). "Pythagorean 'Rule' and 'Theorem' – Mirror of the Relation Between Babylonian and Greek Mathematics". In Renger, Johannes (ed.). Babylon: Focus mesopotamischer Geschichte, Wiege früher Gelehrsamkeit, Mythos in der Moderne. 2. Internationales Colloquium der Deutschen Orient-Gesellschaft 24.–26. März 1998 in Berlin (PDF). Berlin: Deutsche Orient-Gesellschaft / Saarbrücken: SDV Saarbrücker Druckerei und Verlag. pp. 393–407. Høyrup, Jens (2002). Lengths, Widths and Surfaces. A portrait of old Babylonian algebra and its kin. Sources and Studies in the History of Mathematics and Physical Sciences. Springer. doi:10.1007/978-1-4757-3685-4. ISBN 978-1-4419-2945-7. Høyrup, Jens (2016). "Seleucid, Demotic and Mediterranean Mathematics versus Chapters VIII and IX of the Nine Chapters: Accidental or Significant Similarities?" (PDF). 
Studies in the History of Natural Sciences. 35 (4): 463–476. Høyrup, Jens (2017). Algebra in Cuneiform: Introduction to an Old Babylonian Geometrical Technique. Edition Open Access. ISBN 978-3-945561-15-7. Isma'el, Khalid Salim; Robson, Eleanor (2010). "Arithmetical tablets from Iraqi excavations in the Diyala". In Baker, H.D.; Robson, E.; Zólyomi, G.G. (eds.). Your praise is sweet: a memorial volume for Jeremy Black from students, colleagues and friends. London: British Institute for the Study of Iraq. pp. 151–164. ISBN 978-0-903472-28-9. Robson, Eleanor (22 May 2002). "MAA Review: Lengths, Widths, Surfaces: A Portrait of Old Babylonian Algebra and its Kin". Mathematical Association of America. Werr, Lamia Al-Gailani (2005). "Chapter 1: A Museum is Born". In Polk, Milbry; Schuster, Angela M. H. (eds.). The Looting of the Iraq Museum Baghdad: the Lost Legacy of Ancient Mesopotamia. New York: Harry N. Abrams. pp. 27–33. ISBN 9780810958722. == External links == The Cuneiform Digital Library Initiative (CDLI) catalog has entries for tablets discussed in this article: The entry for IM 67118 includes Taha Baqir's hand copy of the tablet and photographs of the tablet. MS 3179 MS 2192 MS 2192 at the Schøyen Collection. YBC 7359 at the Yale Babylonian Collection. Lion de Tell Harmal (IM 52560), début du IIe millénaire, containing a photograph of the reverse of the tablet and photographs of artifacts from nearby sites.
|
Wikipedia:Ian G. Macdonald#0
|
Ian Grant Macdonald (11 October 1928 – 8 August 2023) was a British mathematician known for his contributions to symmetric functions, special functions, Lie algebra theory and other aspects of algebra, algebraic combinatorics, and combinatorics. == Early life and education == Born in London, he was educated at Winchester College and Trinity College, Cambridge, graduating in 1952. == Career == He then spent five years as a civil servant. He was offered a position at Manchester University in 1957 by Max Newman, on the basis of work he had done while outside academia. In 1960 he moved to the University of Exeter, and in 1963 became a Fellow of Magdalen College, Oxford. Macdonald became Fielden Professor at Manchester in 1972, and professor at Queen Mary College, University of London, in 1976. He worked on symmetric products of algebraic curves, Jordan algebras and the representation theory of groups over local fields. In 1972 he proved the Macdonald identities, after a pattern known to Freeman Dyson. His 1979 book Symmetric Functions and Hall Polynomials has become a classic. Symmetric functions are an old theory, part of the theory of equations, to which both K-theory and representation theory lead. His was the first text to integrate much classical theory, such as Hall polynomials, Schur functions, the Littlewood–Richardson rule, with the abstract algebra approach. It was both an expository work and, in part, a research monograph, and had a major impact in the field. The Macdonald polynomials are now named after him. The Macdonald conjectures from 1982 also proved most influential. Macdonald was elected a Fellow of the Royal Society in 1979. He was an invited speaker in 1970 at the International Congress of Mathematicians (ICM) in Nice and a plenary speaker in 1998 at the ICM in Berlin. In 1991 he received the Pólya Prize of the London Mathematical Society. In 2002 he received an honorary doctorate from the University of Amsterdam. 
He was awarded the 2009 Steele Prize for Mathematical Exposition. In 2012 he became a fellow of the American Mathematical Society. == Personal life and death == Ian G. Macdonald died on 8 August 2023, at the age of 94. == Selected publications == Macdonald, I. G. Affine Hecke Algebras and Orthogonal Polynomials. Cambridge Tracts in Mathematics, 157. Cambridge University Press, Cambridge, 2003. x+175 pp. ISBN 0-521-82472-9 MR1976581 Macdonald, I. G. Symmetric Functions and Hall Polynomials. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp. ISBN 0-19-853489-2 MR1354144 1st edition. 1979. Macdonald, I. G. Symmetric Functions and Orthogonal Polynomials. Dean Jacqueline B. Lewis Memorial Lectures presented at Rutgers University, New Brunswick, New Jersey. University Lecture Series, 12. American Mathematical Society, Providence, Rhode Island, 1998. xvi+53 pp. ISBN 0-8218-0770-6 MR1488699 Atiyah, M. F.; Macdonald, I. G. Introduction to Commutative Algebra. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1969. ix+128 pp. ISBN 0-201-40751-5 MR0242802; 1994 pbk edition == References == == External links == Ian G. Macdonald at the Mathematics Genealogy Project Biographical notice
|
Wikipedia:Ian Goulden#0
|
Ian P. Goulden is a Canadian and British mathematician. He works as a professor at the University of Waterloo in the Department of Combinatorics and Optimization. He obtained his PhD from the University of Waterloo in 1979 under the supervision of David M. Jackson. His PhD thesis was titled Combinatorial Decompositions in the Theory of Algebraic Enumeration. Goulden is well known for his contributions to enumerative combinatorics, such as the Goulden–Jackson cluster method. Goulden was the dean of the University of Waterloo Faculty of Mathematics from 2010 to 2015 and served as chair of the Department of Combinatorics and Optimization three times. == Awards and honors == In 2010 Goulden was elected as a Fellow of the Royal Society of Canada. In 2009 he received the University of Waterloo Faculty of Mathematics Award for Distinction in Teaching, and in 1976 he received the Alumni Gold Medal for highest academic achievement at the University of Waterloo. == Contributions == Goulden and Jackson published the book Combinatorial Enumeration. Goulden also published the book Finite Mathematics with R.G. Dunkley, R.J. MacKay, K.S. Brown, R.F. de Peiza, D.A. DiFelice, and D.E. Matthews. He has written over 90 research articles in the fields of combinatorics, enumerative combinatorics, and algebraic geometry. == Selected publications == Goulden, I. P. and Jackson, D. M. (2004). Combinatorial Enumeration. ISBN 0486435970. Goulden, I. P.; Jackson, David M.; Vakil, R. (2005). "Towards the Geometry of Double Hurwitz Numbers". Advances in Mathematics. 198: 43–92. arXiv:math/0309440. doi:10.1016/j.aim.2005.01.008. S2CID 8872816. Goulden, I. P.; Jackson, D. M.; Vainshtein, A (2000). "The number of ramified coverings of the sphere by the torus and surfaces of higher genera". Annals of Combinatorics. 4: 27–46. arXiv:math/9902125. doi:10.1007/PL00001274. S2CID 16725623. Goulden, I. P.; Jackson, D. M. (1997). 
"Transitive factorisations into transpositions and holomorphic mappings on the sphere". Proc. Amer. Math. Soc. 125: 381–436. doi:10.1090/S0002-9939-97-03880-X. == See also == List of University of Waterloo people == References == == External links == Ian Goulden at the Mathematics Genealogy Project
|
Wikipedia:Ian Grojnowski#0
|
Ian Grojnowski is a mathematician working at the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge. == Awards and honours == Grojnowski was the first recipient of the Fröhlich Prize of the London Mathematical Society in 2004 for his work in representation theory and algebraic geometry. The citation reads: "Grojnowski's insights into geometric contexts for representation theory go back to his thesis with George Lusztig on character sheaves over homogeneous spaces. He has exploited these ideas to make breakthroughs in several completely unexpected areas, including representations of the affine Hecke algebras at roots of 1 (generalising results of Kazhdan and Lusztig), the representation theory of the symmetric groups Sn in characteristic p, the introduction (simultaneously with Nakajima) of vertex operators on the cohomology of the Hilbert schemes of finite subschemes of a complex algebraic surface, and (in joint work with Fishel and Teleman) the proof of the strong Macdonald conjecture of Hanlon and Feigin for reductive Lie algebras." == References ==
|
Wikipedia:Ian Wanless#0
|
Ian Murray Wanless (born 7 December 1969 in Canberra, Australia) is an Australian mathematician. He is a professor in the School of Mathematics at Monash University in Melbourne, Australia. His research area is combinatorics, principally Latin squares, graph theory and matrix permanents. Wanless completed his secondary education at Phillip College and represented Australia at the International Mathematical Olympiad in Cuba in 1987. Wanless received a Ph.D. in mathematics from the Australian National University in 1998. His thesis "Permanents, matchings and Latin rectangles" was supervised by Brendan McKay. He held a postdoctoral research position at Melbourne University (1998–1999), before becoming a junior research fellow at Christ Church, Oxford (1999–2003). He then had a research position at Australian National University (2003–2004) before spending 2005 as a senior lecturer at Charles Darwin University. Since 2006 he has been at Monash University, where he was promoted to professor in 2014. He has been awarded distinguished fellowships from the Australian Research Council including a QEII fellowship (2006–2010) and a Future Fellowship (2011–2014). The Institute of Combinatorics and its Applications awarded him its Kirkman Medal in 2002 and its Hall Medal in 2008. The Australian Institute of Policy and Science awarded him a Victorian Young Tall Poppy Award in 2008. The Australian Mathematical Society awarded him its medal in 2009. Wanless is a life member of the Combinatorial Mathematics Society of Australasia (CMSA). He has served two terms as the CMSA's President (2007–09 and 2014). He is an editor in chief of the Electronic Journal of Combinatorics and is on the editorial board of several other journals including the Journal of Combinatorial Designs. 
Wanless is the coauthor (with Charles Colbourn and Jeff Dinitz) of the chapter on Latin squares in the CRC Handbook of Combinatorial Designs and the author of the chapter on matrix permanents in the CRC Handbook of Linear Algebra. == References ==
|
Wikipedia:Ib Madsen#0
|
Ib Henning Madsen (born 12 April 1942, in Copenhagen) is a Danish mathematician, a professor of mathematics at the University of Copenhagen. He is known for (with Michael Weiss) proving the Mumford conjecture on the cohomology of the stable mapping class group, and for developing topological cyclic homology theory. == Professional career == Madsen earned a candidate degree from the University of Copenhagen in 1965, and a Ph.D. from the University of Chicago in 1970 under the supervision of J. Peter May. In 1971 he took a faculty position at Aarhus University, and he remained there until 2008, when he moved to Copenhagen. His doctoral students have included Søren Galatius and Lars Hesselholt. == Awards and honors == Madsen was elected as a member of the Royal Danish Academy of Science and Letters in 1978, as a foreign member of the Royal Swedish Academy of Sciences in 1998, and as a foreign member of the Royal Norwegian Society of Sciences and Letters in 2000. In 2012 he became a fellow of the American Mathematical Society. In 1992, he was awarded the Humboldt Prize. In 2011, he won the Ostrowski Prize for outstanding achievement in pure mathematics, shared with Kannan Soundararajan and David Preiss. == Publications == Madsen, Ib; Weiss, Michael (2007). "The stable moduli space of Riemann surfaces: Mumford's conjecture". Annals of Mathematics. (2). 165 (3): 843–941. arXiv:math/0212321. doi:10.4007/annals.2007.165.843. MR 2335797. Galatius, Søren; Tillmann, Ulrike; Madsen, Ib; Weiss, Michael (2009). "The homotopy type of the cobordism category". Acta Mathematica. 202 (2): 195–239. arXiv:math/0605249. doi:10.1007/s11511-009-0036-9. MR 2506750. == References ==
|
Wikipedia:Ibn Fallus#0
|
Shams ad-Dīn Abû’t-Tāhir Ismāʽīl ibn-Ibrāhīm ibn-Ġāzī ibn-ʽAlī ibn Muhammad al-Ḥanafī al-Māridīnī (1194–1252), often called Ismāʽīl ibn-Fullūs or Ibn Fallus, was an Arab Egyptian mathematician of the Islamic Golden Age. He tells of an epitome on number theory (extant in manuscript) that he wrote whilst on pilgrimage to Mecca, which, building on the work of Nicomachus, added three new perfect numbers (33,550,336; 8,589,869,056; and 137,438,691,328) to the four already discovered by the Greeks. His table also included seven other supposed perfect numbers which are now known to be incorrect. Roshdi Rashed believes the errors emerged from over-reliance on Nicomachus' method. This work did not reach Europe, and the three perfect numbers were only rediscovered there during the Renaissance, notably by Pietro Cataldi. == Further reading == Ismāʽīl ibn-Fullūs, Iʽdād al-asrār fī asrār al-iʽdād == See also == Regiomontanus Johannes Scheubel Pietro Cataldi == References ==
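The three numbers can be checked directly in modern terms (this is a verification, not Ibn Fallus's own method, which followed Nicomachus): each equals the sum of its proper divisors, and each has Euclid's form 2^(p−1)(2^p − 1) with 2^p − 1 prime.

```python
# Verify that Ibn Fallus's three additions are perfect numbers:
# a number is perfect when it equals the sum of its proper divisors.
def sum_proper_divisors(n: int) -> int:
    total, d = 1, 2          # 1 always divides; n itself is excluded
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid counting a square root twice
                total += n // d
        d += 1
    return total

# Euclid's form 2**(p-1) * (2**p - 1) for p = 13, 17, 19.
for p in (13, 17, 19):
    n = 2 ** (p - 1) * (2 ** p - 1)
    assert sum_proper_divisors(n) == n

# The three values named in the text match that form:
assert 2 ** 12 * (2 ** 13 - 1) == 33_550_336
assert 2 ** 16 * (2 ** 17 - 1) == 8_589_869_056
assert 2 ** 18 * (2 ** 19 - 1) == 137_438_691_328
```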
|
Wikipedia:Ibn Turk#0
|
ʿAbd al-Hamīd ibn Turk (fl. 830), known also as ʿAbd al-Hamīd ibn Wase ibn Turk al-Jili (Arabic: ابومحمد عبدالحمید بن واسع بن ترک الجیلی) was a ninth-century Turkic Muslim mathematician. Not much is known about his life. The two records of him, one by Ibn Nadim and the other by al-Qifti, are not identical. Al-Qifti mentions his name as ʿAbd al-Hamīd ibn Wase ibn Turk al-Jili. Jili means from Gilan. On the other hand, Ibn Nadim mentions his nisbah as khuttali (ختلی), which is a region located north of the Oxus and west of Badakhshan. In one of the two remaining manuscripts of his al-jabr wa al-muqabila, the recording of his nisbah is closer to al-Jili. David Pingree / Encyclopaedia Iranica states that he originally hailed from Khuttal or Gilan. He wrote a work on algebra entitled Logical Necessities in Mixed Equations, which is very similar to al-Khwarizmi's Al-Jabr and was published at around the same time as, or even possibly earlier than, Al-Jabr. Only a chapter called "Logical Necessities in Mixed Equations", on the solution of quadratic equations, has survived. The manuscript gives exactly the same geometric demonstration as is found in Al-Jabr, and in one case the same example as found in Al-Jabr, and even goes beyond Al-Jabr by giving a geometric proof that if the discriminant is negative then the quadratic equation has no solution. The similarity between these two works has led some historians to conclude that algebra may have been well developed by the time of al-Khwarizmi and 'Abd al-Hamid. == References == == Further reading == == External links == Pingree, David. "ʿABD-AL-ḤAMĪD B. VĀSEʿ". www.iranicaonline.org. Encyclopaedia Iranica.
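In modern notation, the "mixed equation" case in which a negative discriminant can arise is x² + c = bx, with solutions x = b/2 ± √((b/2)² − c); the geometric argument shows there is no solution when (b/2)² < c. A hedged sketch follows — the symbolic formulation is modern, not the ninth-century text's own:

```python
# Solve x^2 + c = b*x, the "squares and numbers equal roots" case;
# returns None when the discriminant (b/2)^2 - c is negative, the case
# 'Abd al-Hamid is credited with proving geometrically to be unsolvable.
import math

def solve_mixed(b: float, c: float):
    disc = (b / 2) ** 2 - c
    if disc < 0:
        return None          # no (real) solution
    r = math.sqrt(disc)
    return (b / 2 - r, b / 2 + r)

assert solve_mixed(10, 21) == (3.0, 7.0)   # x^2 + 21 = 10x: x = 3 or 7
assert solve_mixed(2, 5) is None           # (b/2)^2 = 1 < 5: unsolvable
```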
|
Wikipedia:Ibn al-Banna' al-Marrakushi#0
|
Ibn al‐Bannāʾ al‐Marrākushī (Arabic: ابن البناء المراكشي), full name: Abu'l-Abbas Ahmad ibn Muhammad ibn Uthman al-Azdi al-Marrakushi (Arabic: أبو العباس أحمد بن محمد بن عثمان الأزدي) (29 December 1256 – 31 July 1321), was an Arab Muslim polymath who was active as a mathematician, astronomer, Islamic scholar, Sufi and astrologer. == Biography == Ahmad ibn Muhammad ibn Uthman was born in the Qa'at Ibn Nahid Quarter of Marrakesh on 29 or 30 December 1256. His nisba al-Marrakushi refers to his birth and death in his hometown of Marrakesh, and al-Azdi means he was from the large Arab tribe of Azd. His father was a mason, hence the kunya Ibn al-Banna' (lit. the son of the mason). Ibn al-Banna' studied a variety of subjects under at least 17 masters: Quran under the Qaris Muhammad ibn al-Bashir and Shaykh al-Ahdab. ʻilm al-ḥadīth under the qadi al-Jama'a (chief judge) of Fez Abu al-Hajjaj Yusuf ibn Ahmad ibn Hakam al-Tujibi, Abu Yusuf Ya'qub ibn Abd al-Rahman al-Jazuli and Abu Abd Allah ibn. Fiqh and Usul al-Fiqh under Abu Imran Musa ibn Abi Ali az-Zanati al-Marrakushi and Abu al-Hasan Muhammad ibn Abd al-Rahman al-Maghili, who taught him al-Juwayni's Kitab al-Irshad. He also studied Arabic grammar under Abu Ishaq Ibrahim ibn Abd as-Salam as-Sanhaji and Muhammad ibn Ali ibn Yahya as-Sharif al-Marrakushi, who also taught him Euclid’s Elements. ʿArūḍ and ʿilm al-farāʾiḍ under Abu Bakr Muhammad ibn Idris ibn Malik al-Quda'i al-Qallusi. Arithmetic under Muhammad ibn Ali, known as Ibn Ḥajala. Ibn al-Banna' also studied astronomy under Abu 'Abdallah Muhammad ibn Makhluf as-Sijilmassi. He also studied medicine under al-Mirrīkh. He is known to have attached himself to the founder of the Hazmiriyya zawiya and Sufi saint of Aghmat, Abu Zayd Abd al-Rahman al-Hazmiri, who guided his arithmetic skills toward divinatory predictions. 
Ibn al-Banna' taught classes in Marrakesh and some of his students were: Abd al-Aziz ibn Ali al-Hawari al-Misrati (d. 1344), Abd al-Rahman ibn Sulayman al-Laja'i (d. 1369) and Muhammad ibn Ali ibn Ibrahim al-Abli (d. 1356). He died at Marrakesh on 31 July 1321. == Works == Ibn al-Banna' wrote over 100 works encompassing such varied topics as astronomy, astrology, the division of inheritances, linguistics, logic, mathematics, meteorology, rhetoric, tafsir, usūl al-dīn and usul al-fiqh. One of his works, called Talkhīṣ ʿamal al-ḥisāb (Arabic: تلخيص أعمال الحساب) (Summary of arithmetical operations), includes topics such as fractions and sums of squares and cubes. Another, called Tanbīh al-Albāb, covers topics related to: calculations regarding the drop in irrigation canal levels, arithmetical explanation of the Muslim laws of inheritance, determination of the hour of the Asr prayer, explanation of frauds linked to instruments of measurement, enumeration of delayed prayers which have to be said in a precise order, and calculation of legal tax in the case of a delayed payment. He also wrote an introduction to Euclid's Elements. He also wrote Rafʿ al-Ḥijāb 'an Wujuh A'mal al-Hisab (Lifting the Veil from Faces of the Workings of Calculations), which covered topics such as computing the square root of a number and the theory of simple continued fractions. == See also == Ibn Ghazi al-Miknasi == References == == Sources == Calvo, Emilia (2008). "Ibn al-Banna'". In Selin, Helaine (ed.). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures (2nd ed.). Springer Science & Business Media. ISBN 978-1-4020-4559-2. Cherkaoui, Ahmed Iqbal (1992). "Ibn al-Banna', Ahmad ibn Muhammad ibn Uthman". In Toufiq, Ahmed; Hajji, Mohamed (eds.). Ma'lamat al-Maghrib (in Arabic). Vol. 5. al-Jamī‘a al-Maghribiyya li-l-Ta’līf wa-l-Tarjama wa-l-Nashr. Oaks, Jeffrey (2017). "Ibn al- Bannāʾ al- Marrākushī". 
In Fleet, Kate; Krämer, Gudrun; Matringe, Denis; Nawas, John; Rowson, Everett (eds.). Encyclopaedia of Islam (3rd ed.). E. J. Brill. Samsó, Julio (2007). "Ibn al-Bannāʾ: Abū al-ʿAbbās Aḥmad ibn Muḥammad ibn ʿUthmān al-Azdī al-Marrākushī". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 551–552. ISBN 978-0-387-31022-0. Sarton, George (1931). Introduction to the History of Science. Vol. II. From Rabbi Ben Ezra to Roger Bacon. Carnegie Institution of Washington. Stearns, Justin (2012). Akyeampong, Emmanuel Kwaku; Gates Jr., Henry Louis (eds.). Dictionary of African Biography. Vol. 4. OUP USA. ISBN 978-0-19-538207-5. Suter, H.; Bencheneb, M. (1986) [1971]. "Ibn al- Bannāʾ al- Marrākus̲h̲ī". In Lewis, B.; Ménage, V. L.; Pellat, C.; Schacht, J. (eds.). Encyclopaedia of Islam. Vol. III (2nd ed.). Leiden, Netherlands: E. J. BRILL. ISBN 9004081186.
|
Wikipedia:Ibn al-Durayhim#0
|
ʿAlī ibn Muḥammad ibn ʿAbd al-ʿAzīz Ibn Futūḥ ibn Ibrahīm ibn Abū Bakr (Arabic: علي بن محمد بن عبد العزيز بن فتوح بن ابراهيم بن أبي بكر; 1312–1359/62 CE), known as Ibn Durayhim al-Mawsilī (Arabic: ابن الدريهم الموصلي) was an Arab writer, mathematician, cryptologist and scribe. == Cryptology == Ibn al-Durayhim gave detailed descriptions of eight cipher systems that discussed substitution ciphers, leading to the earliest suggestion of a "tableau" of the kind that two centuries later became known as the "Vigenère table". His book entitled Clear Chapters Goals and Solving Ciphers (مقاصد الفصول المترجمة عن حل الترجمة) was recently discovered, but has yet to be published. It includes the use of the statistical techniques pioneered by Al-Kindi and Ibn 'Adlan. == The Book on the Usefulness of Animals == In the year 1355 CE, Ibn al-Durayhim compiled the Book on the Usefulness of Animals (Arabic: كتاب منافع الحيوان). The work draws largely on the work of Ibn Bakhtīshūʿ and Aristotle. The manuscript is now housed in the Escorial Library (Ar.898). The text contains 91 illustrations of various animals. == References ==
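The "tableau" mentioned above can be illustrated with a short modern sketch: a 26 × 26 square of shifted alphabets in which a repeating keyword selects the row used for each letter. This shows the construction later associated with Vigenère, not a reconstruction of any of Ibn al-Durayhim's own eight systems:

```python
# A modern illustration of the polyalphabetic tableau idea.
import string

ALPHABET = string.ascii_uppercase

# Row i of the tableau is the alphabet shifted left by i places.
TABLEAU = [ALPHABET[i:] + ALPHABET[:i] for i in range(26)]

def encrypt(plaintext: str, key: str) -> str:
    """Letter j of the plaintext is substituted using the tableau row
    selected by letter j of the repeating key."""
    out = []
    for j, ch in enumerate(plaintext):
        row = TABLEAU[ALPHABET.index(key[j % len(key)])]
        out.append(row[ALPHABET.index(ch)])
    return "".join(out)

assert encrypt("ATTACK", "KEY") == "KXRKGI"
```

Because successive letters are enciphered with different rows, single-letter frequency counts of the kind pioneered by Al-Kindi and Ibn 'Adlan no longer apply directly, which is what makes the tableau historically significant.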
|
Wikipedia:Ibrahim Eltayeb#0
|
Ibrahim Abdelrazzak Eltayeb (Arabic: ابراهيم عبد الرزاق الطيب) is a Sudanese mathematician and professor of applied mathematics at the University of Nizwa in Oman. He is a member of the African Academy of Sciences, the Royal Astronomical Society, The World Academy of Sciences (TWAS) and the Royal Society of Edinburgh. == Early life and education == Eltayeb was born in a village on the Nile river, about 500 km north of Khartoum in Sudan, where he attended government schools. He studied in the United Kingdom on a scholarship and received a BSc in mathematics from the University of London in 1968. In 1972, he obtained his PhD in mathematics from the University of Newcastle upon Tyne. == Career and research == Eltayeb began as a lecturer at the University of Khartoum in 1972 and was promoted to professor in 1980. He served as the dean of the university's Faculty of Mathematical Sciences from 1982 to 1984. In 1986, he moved to the Sultan Qaboos University in Oman, where he established the Department of Mathematics and Computing and directed it from 1988 to 1998. Following this, he served as professor of applied mathematics at the University of Nizwa in Oman. Eltayeb has published more than 50 papers in journals, mainly in the fields of geophysics, geomagnetism, and aeronomy. He has also developed mathematical models and methods for studying the Earth's deep interior. == Honours and awards == Eltayeb was elected a fellow of the African Academy of Sciences in 1986, the Royal Astronomical Society of London in 1988, The World Academy of Sciences (TWAS) in 1996, and the Royal Society of Edinburgh in 2012. He received the TWAS Prize in Mathematics in 1995 and the COMSTECH Award in Mathematics in 2007. He served on the executive committee of the International Association of Geomagnetism and Aeronomy (IAGA) from 1991 to 1999 and chaired the scientific division I of IAGA on internal fields and secular variations from 1999 to 2003. 
He has been a member of the council of the Study of the Earth's Deep Interior (SEDI) since 1987 and was on the awards committee for mathematics international prize of TWAS from 1998 to 2006. == References == == External links == Profile at TWAS Profile at Royal Society of Edinburgh
|
Wikipedia:Ibrahim al-Astal#0
|
Ibrahim Hamed Hussein al-Astal (Arabic: إبراهيم حامد حسين الأسطل; 20 January 1961 – 23 October 2023) was a Palestinian educational theorist and researcher. He worked as a dean and professor at the Islamic University of Gaza's Faculty of Education. He was the editor-in-chief of the IUG Journal of Educational and Psychological Studies. == Biography == Ibrahim al-Astal was born on 20 January 1961. He was awarded a PhD in Curriculum and Mathematics Teaching Methods in 1996, after which he worked as a professor at Al Ain University and Ajman University in the United Arab Emirates, then at the Islamic University of Gaza. He held several positions at the Faculty of Education at the Islamic University of Gaza: he was the head of the Curriculum and Teaching Methods Department (September 2011–August 2013), then the Vice Dean (September 2013–August 2015), then the Dean of the South Branch (August 2015–August 2017), and finally the Dean from 1 September 2019. Al-Astal worked as a coordinator for several programs and projects, including "Improving the Quality of Technology Education Teacher Preparation Programs in the Universities and Colleges in Gaza Strip", funded by the International Development Association and the European Union in 2010, and "Teacher Education Improvement Program (TEIP)", funded by the World Bank and the Palestinian Ministry of Education and Higher Education, between 2011 and 2019. He was also a member of the steering committee of the project "Mathematics Education Pre-service Teacher Education Development Program", led by the Arab American University in partnership with the UCL Institute of Education (2010–2012), and a member of the steering committee of the project "Developing the Teacher Education Preparation for Early Children Education and Elementary School in Palestine and Norway", led by the University of South-Eastern Norway from January 2019. 
In 2005, he published a book entitled Mihnat altaelim 'adwar almuealimin fe madrasat almustaqbal (‘The teaching profession, the roles of teachers in the school of the future’ Arabic: مهنة التعليم أدوار المعلمين في مدرسة المستقبل) in cooperation with Faryal Younis Al-Khalidi. On 23 October 2023, he, his wife, his daughters, and a number of his relatives were killed during an airstrike attack fired by Israeli Air Force on his home, during the Gaza war. Al-Astal was 62. == References == == External links == Publications by Ibrahim al-Astal at ResearchGate Ibrahim al-Astal on Facebook Ibrahim al-Astal on LinkedIn
|
Wikipedia:Ichiro Tsuda#0
|
Ichiro Tsuda (born June 4, 1953) is a Japanese mathematical scientist, applied mathematician, physicist, and writer. His research career started at an early stage of chaos studies. His research interests cover chaotic dynamical systems, complex systems, brain dynamics and artificial intelligence. He has contributed to the development and promotion of complex systems research. He found several novel phenomena in chaotic dynamical systems, such as noise-induced order and chaotic itinerancy. He also proposed a chaotic theory of the dynamic behavior of the brain within a framework of cerebral hermeneutics. He received academic awards such as the 2010 HFSP Program Award for international collaborative research on the neural mechanisms of deliberative decision making in rats (joint awardees were David Redish, Jan Lauwereyns, Emma Wood, and Paul Dudchenko). He was also a plenary lecturer at prestigious international conferences such as the 6th ICIAM 2007, the international conference for industrial and applied mathematics. He is also known as a translator and an editor of the Japanese translation of a highly rated textbook in the field of chaotic dynamical systems. == References == == External links == Profile of Prof. Tsuda Chubu University
|
Wikipedia:Icosian calculus#0
|
The icosian calculus is a non-commutative algebraic structure discovered by the Irish mathematician William Rowan Hamilton in 1856. In modern terms, he gave a group presentation of the icosahedral rotation group by generators and relations. Hamilton's discovery derived from his attempts to find an algebra of "triplets" or 3-tuples that he believed would reflect the three Cartesian axes. The symbols of the icosian calculus correspond to moves between vertices on a dodecahedron. (Hamilton originally thought in terms of moves between the faces of an icosahedron, which is equivalent by duality. This is the origin of the name "icosian".) Hamilton's work in this area resulted indirectly in the terms Hamiltonian circuit and Hamiltonian path in graph theory. He also invented the icosian game as a means of illustrating and popularising his discovery. == Informal definition == The algebra is based on three symbols, ι {\displaystyle \iota } , κ {\displaystyle \kappa } , and λ {\displaystyle \lambda } , that Hamilton described as "roots of unity", by which he meant that repeated application of any of them a particular number of times yields the identity, which he denoted by 1. Specifically, they satisfy the relations, ι 2 = 1 , κ 3 = 1 , λ 5 = 1. {\displaystyle {\begin{aligned}\iota ^{2}&=1,\\\kappa ^{3}&=1,\\\lambda ^{5}&=1.\end{aligned}}} Hamilton gives one additional relation between the symbols, λ = ι κ , {\displaystyle \lambda =\iota \kappa ,} which is to be understood as application of κ {\displaystyle \kappa } followed by application of ι {\displaystyle \iota } . Hamilton points out that application in the reverse order produces a different result, implying that composition or multiplication of symbols is not generally commutative, although it is associative. The symbols generate a group of order 60, isomorphic to the group of rotations of a regular icosahedron or dodecahedron, and therefore to the alternating group of degree five. 
This, however, is not how Hamilton described them. Hamilton drew comparisons between the icosians and his system of quaternions, but noted that, unlike quaternions, which can be added and multiplied, obeying a distributive law, the icosians could only, as far as he knew, be multiplied. Hamilton understood his symbols by reference to the dodecahedron, which he represented in flattened form as a graph in the plane. The dodecahedron has 30 edges, and if arrows are placed on edges, there are two possible arrow directions for each edge, resulting in 60 directed edges. Each symbol corresponds to a permutation of the set of directed edges. The definitions below refer to the labeled diagram above. The notation ( A , B ) {\displaystyle (A,B)} represents a directed edge from vertex A {\displaystyle A} to vertex B {\displaystyle B} . Vertex A {\displaystyle A} is the tail of ( A , B ) {\displaystyle (A,B)} and vertex B {\displaystyle B} is its head. The icosian symbol ι {\displaystyle \iota } reverses the arrow on every directed edge, that is, it interchanges the head and tail. Hence ( B , C ) {\displaystyle (B,C)} is transformed into ( C , B ) {\displaystyle (C,B)} . Similarly, applying ι {\displaystyle \iota } to ( C , B ) {\displaystyle (C,B)} produces ( B , C ) {\displaystyle (B,C)} , and to ( R , S ) {\displaystyle (R,S)} produces ( S , R ) {\displaystyle (S,R)} . The icosian symbol κ {\displaystyle \kappa } , applied to a directed edge e {\displaystyle e} , produces the directed edge that (1) has the same head as e {\displaystyle e} and that (2) is encountered first as one moves around the head of e {\displaystyle e} in the anticlockwise direction. Hence applying κ {\displaystyle \kappa } to ( B , C ) {\displaystyle (B,C)} produces ( D , C ) {\displaystyle (D,C)} , to ( C , B ) {\displaystyle (C,B)} produces ( Z , B ) {\displaystyle (Z,B)} , and to ( R , S ) {\displaystyle (R,S)} produces ( N , S ) {\displaystyle (N,S)} . 
The icosian symbol λ {\displaystyle \lambda } applied to a directed edge e {\displaystyle e} produces the directed edge that results from making a right turn at the head of e {\displaystyle e} . Hence applying λ {\displaystyle \lambda } to ( B , C ) {\displaystyle (B,C)} produces ( C , D ) {\displaystyle (C,D)} , to ( C , B ) {\displaystyle (C,B)} produces ( B , A ) {\displaystyle (B,A)} , and to ( R , S ) {\displaystyle (R,S)} produces ( S , N ) {\displaystyle (S,N)} . Comparing the results of applying κ {\displaystyle \kappa } and λ {\displaystyle \lambda } to the same directed edge exhibits the rule λ = ι κ {\displaystyle \lambda =\iota \kappa } . It is useful to define the symbol μ {\displaystyle \mu } for the operation that produces the directed edge that results from making a left turn at the head of the directed edge to which the operation is applied. This symbol satisfies the relations μ = λ κ = ι κ 2 . {\displaystyle \mu =\lambda \kappa =\iota \kappa ^{2}.} For example, the directed edge obtained by making a left turn from ( B , C ) {\displaystyle (B,C)} is ( C , P ) {\displaystyle (C,P)} . Indeed, κ {\displaystyle \kappa } applied to ( B , C ) {\displaystyle (B,C)} produces ( D , C ) {\displaystyle (D,C)} and λ {\displaystyle \lambda } applied to ( D , C ) {\displaystyle (D,C)} produces ( C , P ) {\displaystyle (C,P)} . Also, κ 2 {\displaystyle \kappa ^{2}} applied to ( B , C ) {\displaystyle (B,C)} produces ( P , C ) {\displaystyle (P,C)} and ι {\displaystyle \iota } applied to ( P , C ) {\displaystyle (P,C)} produces ( C , P ) {\displaystyle (C,P)} . These permutations are not rotations of the dodecahedron. Nevertheless, the group of permutations generated by these symbols is isomorphic to the rotation group of the dodecahedron, a fact that can be deduced from a specific feature of symmetric cubic graphs, of which the dodecahedron graph is an example. 
The rotation group of the dodecahedron has the property that for a given directed edge there is a unique rotation that sends that directed edge to any other specified directed edge. Hence by choosing a reference edge, say ( B , C ) {\displaystyle (B,C)} , a one-to-one correspondence between directed edges and rotations is established: let g E {\displaystyle g_{E}} be the rotation that sends the reference edge R {\displaystyle R} to directed edge E {\displaystyle E} . (Indeed, there are 60 directed edges and 60 rotations.) The rotations are permutations of the set of directed edges of a different sort. Let g ( E ) {\displaystyle g(E)} denote the image of edge E {\displaystyle E} under the rotation g {\displaystyle g} . The icosian associated to g {\displaystyle g} sends the reference edge R {\displaystyle R} to the same directed edge as does g {\displaystyle g} , namely to g ( R ) {\displaystyle g(R)} . The result of applying that icosian to any other directed edge E {\displaystyle E} is g E g ( R ) = g E g g E − 1 ( E ) {\displaystyle g_{E}g(R)=g_{E}gg_{E}^{-1}(E)} . == Application to Hamiltonian circuits on the edges of the dodecahedron == A word consisting of the symbols λ {\displaystyle \lambda } and μ {\displaystyle \mu } corresponds to a sequence of right and left turns in the graph. Specifying such a word along with an initial directed edge therefore specifies a directed path along the edges of the dodecahedron. If the group element represented by the word equals the identity, then the path returns to the initial directed edge in the final step. If the additional requirement is imposed that every vertex of the graph be visited exactly once—specifically that every vertex occur exactly once as the head of a directed edge in the path—then a Hamiltonian circuit is obtained. Finding such a circuit was one of the challenges posed by Hamilton's icosian game. 
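Because the group is faithfully realised by permutations, a candidate turn word can be checked mechanically. The sketch below (Python; the concrete permutations are an illustrative realisation of ι and κ, with products applied right to left as in Hamilton's convention) evaluates the twenty-turn word (λ³μ³(λμ)²)² discussed next and confirms that it represents the identity.

```python
# An illustrative permutation realisation of the icosian generators on {0,...,4}.
def compose(p, q):
    """Product p*q: apply q first, then p (Hamilton's convention)."""
    return tuple(p[q[x]] for x in range(len(q)))

IDENTITY = (0, 1, 2, 3, 4)
IOTA = (1, 0, 3, 2, 4)                         # iota, order 2
KAPPA = (2, 1, 4, 3, 0)                        # kappa, order 3
LAM = compose(IOTA, KAPPA)                     # lambda = iota kappa (right turn)
MU = compose(IOTA, compose(KAPPA, KAPPA))      # mu = iota kappa^2 (left turn)

def word(letters):
    """Evaluate a product of turn symbols; the rightmost letter acts first."""
    result = IDENTITY
    for letter in letters:
        result = compose(result, letter)
    return result

# The word lambda^3 mu^3 (lambda mu)^2, repeated twice - twenty turns in all:
block = [LAM] * 3 + [MU] * 3 + [LAM, MU, LAM, MU]
assert word(block * 2) == IDENTITY
print("the twenty-turn word represents the identity")
```

The twenty letters correspond to the twenty vertices visited by the closed path, one turn per vertex.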
Hamilton exhibited the word ( λ 3 μ 3 ( λ μ ) 2 ) 2 {\displaystyle (\lambda ^{3}\mu ^{3}(\lambda \mu )^{2})^{2}} with the properties described above. Any of the 60 directed edges may serve as initial edge as a consequence of the symmetry of the dodecahedron, but only 30 distinct Hamiltonian circuits are obtained in this way, up to shift in starting point, because the word consists of the same sequence of 10 left and right turns repeated twice. The word with the roles of λ {\displaystyle \lambda } and μ {\displaystyle \mu } interchanged has the same properties, but these give the same Hamiltonian cycles, up to shift in initial edge and reversal of direction. Hence Hamilton's word accounts for all Hamiltonian cycles in the dodecahedron, whose number is known to be 30. == Legacy == The icosian calculus is one of the earliest examples of many mathematical ideas, including: presenting and studying a group by generators and relations; visualization of a group by a graph, which led to combinatorial group theory and later geometric group theory; Hamiltonian circuits and Hamiltonian paths in graph theory; dessin d'enfant – see dessin d'enfant: history for details. == See also == Icosian == References ==
|
Wikipedia:Ida Busbridge#0
|
Ida Winifred Busbridge (1908–1988) was a British mathematician who taught at the University of Oxford from 1935 until 1970. She was the first woman to be appointed to an Oxford fellowship in mathematics. == Early life and education == Ida Busbridge was born to Percival George Busbridge and May Edith Webb on 10 February 1908. She was the youngest of four children. Her father died of complications from influenza when she was 8 months old, leaving her mother, a primary school teacher, to care for the children. Busbridge started her schooling at 6 in a school in her mother's district. She moved to Christ's Hospital in 1918, having won a scholarship at the age of ten. Here she received a firm grounding in mathematics from Miss Mitchener. Busbridge became head girl of the school and was later described as the most brilliant pupil in its 400-year history. In 1926 she enrolled in Royal Holloway College, London, intending to specialise in physics, but switched to mathematics in 1928. She turned down a place at Newnham College, Cambridge, as the scholarship to Royal Holloway was significantly more generous. During her studies she was involved in the Choral Society and University Choir, along with the Science Discussion Society. Busbridge graduated in 1929 with first class honours and was awarded the Sir John Lubbock Prize for the best first class honours across all the University of London colleges. She continued her education at Royal Holloway, earning a master's degree with distinction in mathematics in 1933. == Career == She began teaching as a demonstrator in mathematics at University College, London in 1933. She moved to St Hugh's College, Oxford, in 1935 to teach mathematics alongside Dorothy Wrinch to undergraduates of the five women's colleges. Influenced by Madge Adam and Harry Plaskett, she shifted her interest to applications of mathematics in astronomy and physics. During the Second World War, her workload increased to take on the education of physicists and engineers at Oxford.
Her workload was especially great, not only because other mathematicians at the university were called up for special war service, but also because women formed a higher percentage of the undergraduate population during the war years. She was appointed to a Fellowship of St Hugh's College, Oxford, in 1946 – the first woman to be appointed to a college fellowship in mathematics. In 1962, she was awarded a Doctor of Science degree by Oxford. She was also a Fellow of the Royal Astronomical Society. She was president of the Mathematical Association for 1964. Busbridge's work included integral equations and radiative transfer. She was highly regarded as a lecturer and tutor, attending to her students' educational and personal needs. She retired from Oxford in 1970 and became one of the early tutors and developers of courses at the Open University, teaching Lebesgue integration and tutoring complex analysis. Ida Busbridge died on 27 December 1988. A funeral service was held on 9 January 1989 in the church in Keston, where her ashes were later interred, and a memorial service was held at St Hugh's College, Oxford, on 25 February 1989. The principal, Rachel Trickett, quoted Dorothy Wrinch in describing Ida Busbridge as ‘quite simply the best woman mathematician I’ve ever met: brilliant and yet so capable and unassuming’. == Commemoration == In 1983, a former student made a donation of £280,000 to endow the Ida Busbridge fellowship in mathematics at St Hugh's. Ida Busbridge's biography was published by the Oxford Dictionary of National Biography on 13 August 2020 as part of their collection of biographies of astronomers and mathematicians. == References == == Further reading == Friedman, E. Clare (2014). Strawberries and Nightingales with Buz: The Pioneering Mathematical Life of Ida Busbridge (1908–1988). CreateSpace Independent Publishing Platform. ISBN 978-1499693898.
|
Wikipedia:Idealizer#0
|
In abstract algebra, the idealizer of a subsemigroup T of a semigroup S is the largest subsemigroup of S in which T is an ideal. Such an idealizer is given by I S ( T ) = { s ∈ S ∣ s T ⊆ T and T s ⊆ T } . {\displaystyle \mathbb {I} _{S}(T)=\{s\in S\mid sT\subseteq T{\text{ and }}Ts\subseteq T\}.} In ring theory, if A is an additive subgroup of a ring R, then I R ( A ) {\displaystyle \mathbb {I} _{R}(A)} (defined in the multiplicative semigroup of R) is the largest subring of R in which A is a two-sided ideal. In Lie algebra theory, if L is a Lie ring (or Lie algebra) with Lie product [x,y], and S is an additive subgroup of L, then the set { r ∈ L ∣ [ r , S ] ⊆ S } {\displaystyle \{r\in L\mid [r,S]\subseteq S\}} is classically called the normalizer of S; however, this set is actually the Lie ring equivalent of the idealizer. It is not necessary to specify that [S,r] ⊆ S, because anticommutativity of the Lie product causes [s,r] = −[r,s] ∈ S. The Lie "normalizer" of S is the largest subring of L in which S is a Lie ideal. == Comments == Often, when right or left ideals are the additive subgroups of R of interest, the idealizer is defined more simply by taking advantage of the fact that multiplication by ring elements is already absorbed on one side. Explicitly, I R ( T ) = { r ∈ R ∣ r T ⊆ T } {\displaystyle \mathbb {I} _{R}(T)=\{r\in R\mid rT\subseteq T\}} if T is a right ideal, or I R ( L ) = { r ∈ R ∣ L r ⊆ L } {\displaystyle \mathbb {I} _{R}(L)=\{r\in R\mid Lr\subseteq L\}} if L is a left ideal. In commutative algebra, the idealizer is related to a more general construction. Given a commutative ring R, and given two subsets A and B of a right R-module M, the conductor or transporter is given by ( A : B ) := { r ∈ R ∣ B r ⊆ A } {\displaystyle (A:B):=\{r\in R\mid Br\subseteq A\}} . In terms of this conductor notation, an additive subgroup B of R has idealizer I R ( B ) = ( B : B ) {\displaystyle \mathbb {I} _{R}(B)=(B:B)} .
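The definitions can be exercised on a small finite ring. Below is a brute-force sketch (Python; the ring of 2×2 matrices over F2 and the left ideal of matrices with zero second column are illustrative choices, not taken from the article): the idealizer of that left ideal turns out to be the subring of lower triangular matrices, in which the ideal is indeed two-sided.

```python
from itertools import product

def mat_mul(x, y):
    """2x2 matrix product over F2."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

# All 16 matrices of M2(F2), and the left ideal L of matrices whose
# second column is zero (left multiplication clearly preserves it):
ring = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
L = [m for m in ring if m[0][1] == 0 and m[1][1] == 0]

# For a left ideal, the idealizer is { r in R : L r <= L }:
idealizer = [r for r in ring if all(mat_mul(x, r) in L for x in L)]

lower_triangular = [m for m in ring if m[0][1] == 0]
assert sorted(idealizer) == sorted(lower_triangular)

# L really is a two-sided ideal of its idealizer:
assert all(mat_mul(r, x) in L and mat_mul(x, r) in L
           for r in idealizer for x in L)
print(len(idealizer))  # 8 of the 16 matrices
```

No larger subring works: any matrix with a nonzero (1,2) entry pushes some element of L out of L on the right.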
When A and B are ideals of R, the conductor is part of the structure of the residuated lattice of ideals of R. == Examples == The multiplier algebra M(A) of a C*-algebra A is isomorphic to the idealizer of π(A) where π is any faithful nondegenerate representation of A on a Hilbert space H. == Notes == == References == Goodearl, K. R. (1976), Ring theory: Nonsingular rings and modules, Pure and Applied Mathematics, No. 33, New York: Marcel Dekker Inc., pp. viii+206, MR 0429962 Levy, Lawrence S.; Robson, J. Chris (2011), Hereditary Noetherian prime rings and idealizers, Mathematical Surveys and Monographs, vol. 174, Providence, RI: American Mathematical Society, pp. iv+228, ISBN 978-0-8218-5350-4, MR 2790801 Mikhalev, Alexander V.; Pilz, Günter F., eds. (2002), The concise handbook of algebra, Dordrecht: Kluwer Academic Publishers, pp. xvi+618, ISBN 0-7923-7072-4, MR 1966155
|
Wikipedia:Idempotence#0
|
Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra (in particular, in the theory of projectors and closure operators) and functional programming (in which it is connected to the property of referential transparency). The term was introduced by American mathematician Benjamin Peirce in 1870 in the context of elements of algebras that remain invariant when raised to a positive integer power, and literally means "(the quality of having) the same power", from idem + potence (same + power). == Definition == An element x {\displaystyle x} of a set S {\displaystyle S} equipped with a binary operator ⋅ {\displaystyle \cdot } is said to be idempotent under ⋅ {\displaystyle \cdot } if x ⋅ x = x {\displaystyle x\cdot x=x} . The binary operation ⋅ {\displaystyle \cdot } is said to be idempotent if x ⋅ x = x {\displaystyle x\cdot x=x} for all x ∈ S {\displaystyle x\in S} . == Examples == In the monoid ( N , × ) {\displaystyle (\mathbb {N} ,\times )} of the natural numbers with multiplication, only 0 {\displaystyle 0} and 1 {\displaystyle 1} are idempotent. Indeed, 0 × 0 = 0 {\displaystyle 0\times 0=0} and 1 × 1 = 1 {\displaystyle 1\times 1=1} . In the monoid ( N , + ) {\displaystyle (\mathbb {N} ,+)} of the natural numbers with addition, only 0 {\displaystyle 0} is idempotent. Indeed, 0 + 0 = 0. In a magma ( M , ⋅ ) {\displaystyle (M,\cdot )} , an identity element e {\displaystyle e} or an absorbing element a {\displaystyle a} , if it exists, is idempotent. Indeed, e ⋅ e = e {\displaystyle e\cdot e=e} and a ⋅ a = a {\displaystyle a\cdot a=a} . In a group ( G , ⋅ ) {\displaystyle (G,\cdot )} , the identity element e {\displaystyle e} is the only idempotent element.
Indeed, if x {\displaystyle x} is an element of G {\displaystyle G} such that x ⋅ x = x {\displaystyle x\cdot x=x} , then x ⋅ x = x ⋅ e {\displaystyle x\cdot x=x\cdot e} and finally x = e {\displaystyle x=e} by multiplying on the left by the inverse element of x {\displaystyle x} . In the monoids ( P ( E ) , ∪ ) {\displaystyle ({\mathcal {P}}(E),\cup )} and ( P ( E ) , ∩ ) {\displaystyle ({\mathcal {P}}(E),\cap )} of the power set P ( E ) {\displaystyle {\mathcal {P}}(E)} of the set E {\displaystyle E} with set union ∪ {\displaystyle \cup } and set intersection ∩ {\displaystyle \cap } respectively, ∪ {\displaystyle \cup } and ∩ {\displaystyle \cap } are idempotent. Indeed, x ∪ x = x {\displaystyle x\cup x=x} for all x ∈ P ( E ) {\displaystyle x\in {\mathcal {P}}(E)} , and x ∩ x = x {\displaystyle x\cap x=x} for all x ∈ P ( E ) {\displaystyle x\in {\mathcal {P}}(E)} . In the monoids ( { 0 , 1 } , ∨ ) {\displaystyle (\{0,1\},\vee )} and ( { 0 , 1 } , ∧ ) {\displaystyle (\{0,1\},\wedge )} of the Boolean domain with logical disjunction ∨ {\displaystyle \vee } and logical conjunction ∧ {\displaystyle \wedge } respectively, ∨ {\displaystyle \vee } and ∧ {\displaystyle \wedge } are idempotent. Indeed, x ∨ x = x {\displaystyle x\vee x=x} for all x ∈ { 0 , 1 } {\displaystyle x\in \{0,1\}} , and x ∧ x = x {\displaystyle x\wedge x=x} for all x ∈ { 0 , 1 } {\displaystyle x\in \{0,1\}} . In a GCD domain (for instance in Z {\displaystyle \mathbb {Z} } ), the operations of GCD and LCM are idempotent. In a Boolean ring, multiplication is idempotent. In a tropical semiring, addition is idempotent. In a ring of square matrices, the determinant of an idempotent matrix is either 0 or 1. If the determinant is 1, the matrix is necessarily the identity matrix.
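For a quick concrete check, the idempotent elements of Z/n under multiplication can be enumerated by brute force (a minimal sketch; the choice n = 12 is arbitrary):

```python
from math import gcd

# Idempotents of Z/12 under multiplication: x with x*x congruent to x (mod 12).
n = 12
idempotents = [x for x in range(n) if (x * x) % n == x]
print(idempotents)  # [0, 1, 4, 9]

# gcd is an idempotent binary operation: gcd(x, x) = x for every x.
assert all(gcd(x, x) == x for x in range(1, 100))
```

The nontrivial idempotents 4 and 9 reflect the factorisation 12 = 4 × 3, by the Chinese remainder theorem.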
=== Idempotent functions === In the monoid ( E E , ∘ ) {\displaystyle (E^{E},\circ )} of the functions from a set E {\displaystyle E} to itself (see set exponentiation) with function composition ∘ {\displaystyle \circ } , idempotent elements are the functions f : E → E {\displaystyle f\colon E\to E} such that f ∘ f = f {\displaystyle f\circ f=f} , that is such that f ( f ( x ) ) = f ( x ) {\displaystyle f(f(x))=f(x)} for all x ∈ E {\displaystyle x\in E} (in other words, the image f ( x ) {\displaystyle f(x)} of each element x ∈ E {\displaystyle x\in E} is a fixed point of f {\displaystyle f} ). For example: the absolute value is idempotent. Indeed, abs ∘ abs = abs {\displaystyle \operatorname {abs} \circ \operatorname {abs} =\operatorname {abs} } , that is abs ( abs ( x ) ) = abs ( x ) {\displaystyle \operatorname {abs} (\operatorname {abs} (x))=\operatorname {abs} (x)} for all x {\displaystyle x} ; constant functions are idempotent; the identity function is idempotent; the floor, ceiling and fractional part functions are idempotent; the real part function R e ( z ) {\displaystyle \mathrm {Re} (z)} of a complex number is idempotent; the subgroup generated function from the power set of a group to itself is idempotent; the convex hull function from the power set of an affine space over the reals to itself is idempotent; the closure and interior functions of the power set of a topological space to itself are idempotent; the Kleene star and Kleene plus functions of the power set of a monoid to itself are idempotent; the idempotent endomorphisms of a vector space are its projections. If the set E {\displaystyle E} has n {\displaystyle n} elements, we can partition it into k {\displaystyle k} chosen fixed points and n − k {\displaystyle n-k} non-fixed points under f {\displaystyle f} , and then k n − k {\displaystyle k^{n-k}} is the number of different idempotent functions with that set of fixed points.
Hence, taking into account all possible partitions, ∑ k = 0 n ( n k ) k n − k {\displaystyle \sum _{k=0}^{n}{n \choose k}k^{n-k}} is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, ... starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, ... (sequence A000248 in the OEIS). Neither idempotence nor its absence is preserved under function composition. As an example for the former, f ( x ) = x {\displaystyle f(x)=x} mod 3 and g ( x ) = max ( x , 5 ) {\displaystyle g(x)=\max(x,5)} are both idempotent, but f ∘ g {\displaystyle f\circ g} is not, although g ∘ f {\displaystyle g\circ f} happens to be. As an example for the latter, the negation function ¬ {\displaystyle \neg } on the Boolean domain is not idempotent, but ¬ ∘ ¬ {\displaystyle \neg \circ \neg } is. Similarly, unary negation − ( ⋅ ) {\displaystyle -(\cdot )} of real numbers is not idempotent, but − ( ⋅ ) ∘ − ( ⋅ ) {\displaystyle -(\cdot )\circ -(\cdot )} is. In both cases, the composition is simply the identity function, which is idempotent. == Computer science meaning == In computer science, the term idempotence may have a different meaning depending on the context in which it is applied: in imperative programming, a subroutine with side effects is idempotent if multiple calls to the subroutine have the same effect on the system state as a single call, in other words if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense given in the definition; in functional programming, a pure function is idempotent if it is idempotent in the mathematical sense given in the definition. This is a very useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects.
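The retry-safety point can be illustrated with a minimal sketch (the state-updating functions are hypothetical, not from any real API): setting a value is idempotent on the state, while incrementing it is not.

```python
# Illustrative sketch: "set the value to 5" is idempotent, "add 5" is not.

def set_to_five(state):
    state["value"] = 5
    return state

def add_five(state):
    state["value"] += 5
    return state

set_once = set_to_five({"value": 3})
set_twice = set_to_five(set_to_five({"value": 3}))
assert set_once == set_twice        # repeating the call changes nothing

add_once = add_five({"value": 3})
add_twice = add_five(add_five({"value": 3}))
assert add_once != add_twice        # each retry moves the state further
print(set_twice, add_twice)         # {'value': 5} {'value': 13}
```

A client may therefore blindly retry the first operation after a timeout, but must deduplicate the second.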
With non-idempotent operations, the algorithm may have to keep track of whether the operation was already performed or not. === Computer science examples === A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, a request for changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times the request is submitted. However, a customer request for placing an order is typically not idempotent since multiple requests will lead to multiple orders being placed. A request for canceling a particular order is idempotent because no matter how many requests are made the order remains canceled. A sequence of idempotent subroutines in which at least one subroutine differs from the others, however, is not necessarily idempotent if a later subroutine in the sequence changes a value that an earlier subroutine depends on—idempotence is not closed under sequential composition. For example, suppose the initial value of a variable is 3 and there is a subroutine sequence that reads the variable, then changes it to 5, and then reads it again. Each step in the sequence is idempotent: both steps reading the variable have no side effects and the step changing the variable to 5 will always have the same effect no matter how many times it is executed. Nonetheless, executing the entire sequence once produces the output (3, 5), but executing it a second time produces the output (5, 5), so the sequence is not idempotent. In the Hypertext Transfer Protocol (HTTP), idempotence and safety are the major attributes that separate HTTP methods. Of the major HTTP methods, GET, PUT, and DELETE should be implemented in an idempotent manner according to the standard, but POST need not be. GET retrieves the state of a resource; PUT updates the state of a resource; and DELETE deletes a resource.
As in the example above, reading data usually has no side effects, so it is idempotent (in fact nullipotent). Updating and deleting given data are each usually idempotent as long as the request uniquely identifies the resource, and only that resource, again in the future. PUT and DELETE with unique identifiers reduce to the simple case of assignment to a variable of either a value or the null-value, respectively, and are idempotent for the same reason; the end result is always the same as the result of the initial execution, even if the response differs. Violating the unique identification requirement in storage or deletion typically breaks idempotence. For example, storing or deleting a given set of content without specifying a unique identifier: POST requests, which do not need to be idempotent, often do not contain unique identifiers, so the creation of the identifier is delegated to the receiving system, which then creates a corresponding new record. Similarly, PUT and DELETE requests with nonspecific criteria may result in different outcomes depending on the state of the system (for example, a request to delete the most recent record). In each case, subsequent executions will further modify the state of the system, so they are not idempotent. In event stream processing, idempotence refers to the ability of a system to produce the same outcome, even if the same file, event or message is received more than once. In a load–store architecture, instructions that might possibly cause a page fault are idempotent. So if a page fault occurs, the operating system can load the page from disk and then simply re-execute the faulted instruction. In a processor where such instructions are not idempotent, dealing with page faults is much more complex. When reformatting output, pretty-printing is expected to be idempotent. In other words, if the output is already "pretty", there should be nothing to do for the pretty-printer.
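The PUT/POST distinction above can be sketched with an in-memory store (a hypothetical model, not a real HTTP framework): PUT writes to a client-chosen identifier, while POST lets the server mint a new one each time.

```python
import itertools

# Hypothetical in-memory resource store; identifiers are just strings/ints.
store = {}
ids = itertools.count(1)

def put(resource_id, body):
    store[resource_id] = body   # same end state however often it is repeated

def post(body):
    store[next(ids)] = body     # every call creates a fresh record

for _ in range(3):
    put("customer/42", {"city": "Oslo"})
assert len(store) == 1          # three PUTs, still one record

for _ in range(3):
    post({"item": "book"})
assert len(store) == 4          # three POSTs, three new records
print(len(store))
```

Retrying the PUT after a lost response is harmless; retrying the POST duplicates the record, which is why deduplication keys are often added to POST-based APIs.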
In service-oriented architecture (SOA), a multiple-step orchestration process composed entirely of idempotent steps can be replayed without side-effects if any part of that process fails. Operations that are idempotent often have ways to "resume" a process if it is interrupted – ways that finish much faster than starting all over from the beginning. For example, resuming a file transfer, synchronizing files, creating a software build, installing an application and all of its dependencies with a package manager, etc. == Applied examples == Applied examples that many people could encounter in their day-to-day lives include elevator call buttons and crosswalk buttons. The initial activation of the button moves the system into a requesting state, until the request is satisfied. Subsequent activations of the button between the initial activation and the request being satisfied have no effect, unless the system is designed to adjust the time for satisfying the request based on the number of activations. Similarly, the elevator "close" button may be pressed many times with the same effect as pressing it once, since the doors close on a fixed schedule. Pressing the "open" button, by contrast, is not idempotent, because each press adds further delay. == See also == Biordered set Closure operator Fixed point (mathematics) Idempotent of a code Idempotent analysis Idempotent matrix Idempotent relation – a generalization of idempotence to binary relations Idempotent (ring theory) Involution (mathematics) Iterated function List of matrices Nilpotent Pure function Referential transparency == Notes == == References == == Further reading == == External links == "idempotent" at the Free On-line Dictionary of Computing
|
Wikipedia:Idempotent matrix#0
|
In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix A {\displaystyle A} is idempotent if and only if A 2 = A {\displaystyle A^{2}=A} . For this product A 2 {\displaystyle A^{2}} to be defined, A {\displaystyle A} must necessarily be a square matrix. Viewed this way, idempotent matrices are idempotent elements of matrix rings. == Example == Examples of 2 × 2 {\displaystyle 2\times 2} idempotent matrices are: [ 1 0 0 1 ] [ 3 − 6 1 − 2 ] {\displaystyle {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\qquad {\begin{bmatrix}3&-6\\1&-2\end{bmatrix}}} Examples of 3 × 3 {\displaystyle 3\times 3} idempotent matrices are: [ 1 0 0 0 1 0 0 0 1 ] [ 2 − 2 − 4 − 1 3 4 1 − 2 − 3 ] {\displaystyle {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\qquad {\begin{bmatrix}2&-2&-4\\-1&3&4\\1&-2&-3\end{bmatrix}}} == Real 2 × 2 case == If a matrix ( a b c d ) {\displaystyle {\begin{pmatrix}a&b\\c&d\end{pmatrix}}} is idempotent, then a = a 2 + b c , {\displaystyle a=a^{2}+bc,} b = a b + b d , {\displaystyle b=ab+bd,} implying b ( 1 − a − d ) = 0 {\displaystyle b(1-a-d)=0} so b = 0 {\displaystyle b=0} or d = 1 − a , {\displaystyle d=1-a,} c = c a + c d , {\displaystyle c=ca+cd,} implying c ( 1 − a − d ) = 0 {\displaystyle c(1-a-d)=0} so c = 0 {\displaystyle c=0} or d = 1 − a , {\displaystyle d=1-a,} d = b c + d 2 . {\displaystyle d=bc+d^{2}.} Thus, a necessary condition for a 2 × 2 {\displaystyle 2\times 2} matrix to be idempotent is that either it is diagonal or its trace equals 1. For idempotent diagonal matrices, a {\displaystyle a} and d {\displaystyle d} must be either 1 or 0. 
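The example matrices above can be checked directly. A minimal pure-Python sketch (no external libraries; the matrix-product helper is written out by hand):

```python
# Verify A^2 = A for the article's non-identity example matrices.

def mat_mul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = [[3, -6],
      [1, -2]]
A3 = [[2, -2, -4],
      [-1, 3, 4],
      [1, -2, -3]]

assert mat_mul(A2, A2) == A2
assert mat_mul(A3, A3) == A3

# Their traces are 1 and 2 respectively, matching their ranks.
print(sum(A2[i][i] for i in range(2)), sum(A3[i][i] for i in range(3)))  # 1 2
```

That the trace equals the rank for these examples is an instance of the general trace property discussed later in the article.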
If b = c {\displaystyle b=c} , the matrix ( a b b 1 − a ) {\displaystyle {\begin{pmatrix}a&b\\b&1-a\end{pmatrix}}} will be idempotent provided a 2 + b 2 = a , {\displaystyle a^{2}+b^{2}=a,} so a satisfies the quadratic equation a 2 − a + b 2 = 0 , {\displaystyle a^{2}-a+b^{2}=0,} or ( a − 1 2 ) 2 + b 2 = 1 4 {\displaystyle \left(a-{\frac {1}{2}}\right)^{2}+b^{2}={\frac {1}{4}}} which is a circle with center (1/2, 0) and radius 1/2. In terms of an angle θ, A = 1 2 ( 1 − cos θ sin θ sin θ 1 + cos θ ) {\displaystyle A={\frac {1}{2}}{\begin{pmatrix}1-\cos \theta &\sin \theta \\\sin \theta &1+\cos \theta \end{pmatrix}}} is idempotent. However, b = c {\displaystyle b=c} is not a necessary condition: any matrix ( a b c 1 − a ) {\displaystyle {\begin{pmatrix}a&b\\c&1-a\end{pmatrix}}} with a 2 + b c = a {\displaystyle a^{2}+bc=a} is idempotent. == Properties == === Singularity and regularity === The only non-singular idempotent matrix is the identity matrix; that is, if a non-identity matrix is idempotent, its number of independent rows (and columns) is less than its number of rows (and columns). This can be seen from writing A 2 = A {\displaystyle A^{2}=A} , assuming that A has full rank (is non-singular), and pre-multiplying by A − 1 {\displaystyle A^{-1}} to obtain A = I A = A − 1 A 2 = A − 1 A = I {\displaystyle A=IA=A^{-1}A^{2}=A^{-1}A=I} . When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent. This holds since ( I − A ) ( I − A ) = I − A − A + A 2 = I − A − A + A = I − A . {\displaystyle (I-A)(I-A)=I-A-A+A^{2}=I-A-A+A=I-A.} If a matrix A is idempotent then for all positive integers n, A n = A {\displaystyle A^{n}=A} . This can be shown using proof by induction. Clearly we have the result for n = 1 {\displaystyle n=1} , as A 1 = A {\displaystyle A^{1}=A} . Suppose that A k − 1 = A {\displaystyle A^{k-1}=A} . Then, A k = A k − 1 A = A A = A {\displaystyle A^{k}=A^{k-1}A=AA=A} , since A is idempotent. 
Hence by the principle of induction, the result follows. === Eigenvalues === An idempotent matrix is always diagonalizable. Its eigenvalues are either 0 or 1: if x {\displaystyle \mathbf {x} } is a non-zero eigenvector of some idempotent matrix A {\displaystyle A} and λ {\displaystyle \lambda } its associated eigenvalue, then λ x = A x = A 2 x = A λ x = λ A x = λ 2 x , {\textstyle \lambda \mathbf {x} =A\mathbf {x} =A^{2}\mathbf {x} =A\lambda \mathbf {x} =\lambda A\mathbf {x} =\lambda ^{2}\mathbf {x} ,} which implies λ ∈ { 0 , 1 } . {\displaystyle \lambda \in \{0,1\}.} This further implies that the determinant of an idempotent matrix is always 0 or 1. As stated above, if the determinant is equal to one, the matrix is invertible and is therefore the identity matrix. === Trace === The trace of an idempotent matrix — the sum of the elements on its main diagonal — equals the rank of the matrix and thus is always an integer. This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in statistics, for example, in establishing the degree of bias in using a sample variance as an estimate of a population variance). === Relationships between idempotent matrices === In regression analysis, the matrix M = I − X ( X ′ X ) − 1 X ′ {\displaystyle M=I-X(X'X)^{-1}X'} is known to produce the residuals e {\displaystyle e} from the regression of the vector of dependent variables y {\displaystyle y} on the matrix of covariates X {\displaystyle X} . (See the section on Applications.) Now, let X 1 {\displaystyle X_{1}} be a matrix formed from a subset of the columns of X {\displaystyle X} , and let M 1 = I − X 1 ( X 1 ′ X 1 ) − 1 X 1 ′ {\displaystyle M_{1}=I-X_{1}(X_{1}'X_{1})^{-1}X_{1}'} . It is easy to show that both M {\displaystyle M} and M 1 {\displaystyle M_{1}} are idempotent, but a somewhat surprising fact is that M M 1 = M {\displaystyle MM_{1}=M} . 
This is because M X 1 = 0 {\displaystyle MX_{1}=0} , or in other words, the residuals from the regression of the columns of X 1 {\displaystyle X_{1}} on X {\displaystyle X} are 0, since X 1 {\displaystyle X_{1}} can be perfectly interpolated as it is a subset of X {\displaystyle X} (by direct substitution it is also straightforward to show that M X = 0 {\displaystyle MX=0} ). This leads to two other important results: one is that ( M 1 − M ) {\displaystyle (M_{1}-M)} is symmetric and idempotent, and the other is that ( M 1 − M ) M = 0 {\displaystyle (M_{1}-M)M=0} , i.e., ( M 1 − M ) {\displaystyle (M_{1}-M)} is orthogonal to M {\displaystyle M} . These results play a key role, for example, in the derivation of the F test. Any matrix similar to an idempotent matrix is also idempotent; that is, idempotency is preserved under a change of basis. This can be shown by squaring the transformed matrix S A S − 1 {\displaystyle SAS^{-1}} , with A {\displaystyle A} idempotent: ( S A S − 1 ) 2 = ( S A S − 1 ) ( S A S − 1 ) = S A ( S − 1 S ) A S − 1 = S A 2 S − 1 = S A S − 1 {\displaystyle (SAS^{-1})^{2}=(SAS^{-1})(SAS^{-1})=SA(S^{-1}S)AS^{-1}=SA^{2}S^{-1}=SAS^{-1}} . == Applications == Idempotent matrices arise frequently in regression analysis and econometrics. For example, in ordinary least squares, the regression problem is to choose a vector β of coefficient estimates so as to minimize the sum of squared residuals (mispredictions) e_i: in matrix form, Minimize ( y − X β ) T ( y − X β ) {\displaystyle (y-X\beta )^{\textsf {T}}(y-X\beta )} where y {\displaystyle y} is a vector of dependent variable observations, and X {\displaystyle X} is a matrix each of whose columns is a column of observations on one of the independent variables.
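A minimal numerical sketch of this least-squares problem, using the residual maker M = I − X(X′X)⁻¹X′ introduced earlier (NumPy assumed; the simulated data, seed, and coefficients are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
obs = 50
X = np.column_stack([np.ones(obs), rng.standard_normal(obs)])  # intercept + one regressor
y = X @ np.array([2.0, -1.5]) + 0.1 * rng.standard_normal(obs)

# Coefficients minimizing (y - Xb)'(y - Xb).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual maker M = I - X (X'X)^{-1} X' is symmetric and idempotent,
# and M y reproduces the least-squares residuals y - X beta_hat.
M = np.eye(obs) - X @ np.linalg.inv(X.T @ X) @ X.T
residuals = y - X @ beta_hat
assert np.allclose(M @ M, M)
assert np.allclose(M @ y, residuals)
assert np.isclose(residuals @ residuals, y @ M @ y)  # e'e = y'My
```

The last assertion is exactly the simplification of the sum of squared residuals that the idempotency and symmetry of M make possible.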
The resulting estimator is β ^ = ( X T X ) − 1 X T y {\displaystyle {\hat {\beta }}=\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}y} where superscript T indicates a transpose, and the vector of residuals is e ^ = y − X β ^ = y − X ( X T X ) − 1 X T y = [ I − X ( X T X ) − 1 X T ] y = M y . {\displaystyle {\hat {e}}=y-X{\hat {\beta }}=y-X\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}y=\left[I-X\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}\right]y=My.} Here both M {\displaystyle M} and X ( X T X ) − 1 X T {\displaystyle X\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}} (the latter being known as the hat matrix) are idempotent and symmetric matrices, a fact which allows simplification when the sum of squared residuals is computed: e ^ T e ^ = ( M y ) T ( M y ) = y T M T M y = y T M M y = y T M y . {\displaystyle {\hat {e}}^{\textsf {T}}{\hat {e}}=(My)^{\textsf {T}}(My)=y^{\textsf {T}}M^{\textsf {T}}My=y^{\textsf {T}}MMy=y^{\textsf {T}}My.} The idempotency of M {\displaystyle M} plays a role in other calculations as well, such as in determining the variance of the estimator β ^ {\displaystyle {\hat {\beta }}} . An idempotent linear operator P {\displaystyle P} is a projection operator on the range space R ( P ) {\displaystyle R(P)} along its null space N ( P ) {\displaystyle N(P)} . P {\displaystyle P} is an orthogonal projection operator if and only if it is idempotent and symmetric. == See also == Idempotence Nilpotent Projection (linear algebra) Hat matrix == References ==
|
Wikipedia:Identity (mathematics)#0
|
In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain domain of discourse. In other words, A = B is an identity if A and B define the same functions, and an identity is an equality between functions that are differently defined. For example, ( a + b ) 2 = a 2 + 2 a b + b 2 {\displaystyle (a+b)^{2}=a^{2}+2ab+b^{2}} and cos 2 θ + sin 2 θ = 1 {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1} are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign. Formally, an identity is a universally quantified equality. == Common identities == === Algebraic identities === Certain identities, such as a + 0 = a {\displaystyle a+0=a} and a + ( − a ) = 0 {\displaystyle a+(-a)=0} , form the basis of algebra, while other identities, such as ( a + b ) 2 = a 2 + 2 a b + b 2 {\displaystyle (a+b)^{2}=a^{2}+2ab+b^{2}} and a 2 − b 2 = ( a + b ) ( a − b ) {\displaystyle a^{2}-b^{2}=(a+b)(a-b)} , can be useful in simplifying algebraic expressions and expanding them. === Trigonometric identities === Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. 
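The algebraic identities above hold for every value of the variables, which can be spot-checked numerically; a quick sketch in Python (the sampling ranges, seed, and trial count are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    a = random.uniform(-10.0, 10.0)
    b = random.uniform(-10.0, 10.0)
    # (a + b)^2 = a^2 + 2ab + b^2
    assert math.isclose((a + b) ** 2, a * a + 2 * a * b + b * b, abs_tol=1e-9)
    # a^2 - b^2 = (a + b)(a - b)
    assert math.isclose(a * a - b * b, (a + b) * (a - b), abs_tol=1e-9)
    # a + 0 = a and a + (-a) = 0
    assert a + 0 == a and a + (-a) == 0
```

An identity must survive every such sample; a single counterexample is enough to show an equation is not an identity.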
One of the most prominent examples of trigonometric identities involves the equation sin 2 θ + cos 2 θ = 1 , {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1,} which is true for all real values of θ {\displaystyle \theta } . On the other hand, the equation cos θ = 1 {\displaystyle \cos \theta =1} is only true for certain values of θ {\displaystyle \theta } , not all. For example, this equation is true when θ = 0 , {\displaystyle \theta =0,} but false when θ = 2 {\displaystyle \theta =2} . Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity sin ( 2 θ ) = 2 sin θ cos θ {\displaystyle \sin(2\theta )=2\sin \theta \cos \theta } , the addition formula for tan ( x + y ) {\displaystyle \tan(x+y)} ), which can be used to break down expressions of larger angles into those with smaller constituents. === Exponential identities === The following identities hold for all integer exponents, provided that the base is non-zero: b m + n = b m ⋅ b n ( b m ) n = b m ⋅ n ( b ⋅ c ) n = b n ⋅ c n {\displaystyle {\begin{aligned}b^{m+n}&=b^{m}\cdot b^{n}\\(b^{m})^{n}&=b^{m\cdot n}\\(b\cdot c)^{n}&=b^{n}\cdot c^{n}\end{aligned}}} Unlike addition and multiplication, exponentiation is not commutative. For example, 2 + 3 = 3 + 2 = 5 and 2 · 3 = 3 · 2 = 6, but 2^3 = 8 whereas 3^2 = 9. Also unlike addition and multiplication, exponentiation is not associative either. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 · 3) · 4 = 2 · (3 · 4) = 24, but (2^3)^4 = 8^4 (or 4,096) whereas 2^(3^4) = 2^81 (or 2,417,851,639,229,258,349,412,352). When no parentheses are written, by convention the order is top-down, not bottom-up: b p q := b ( p q ) , {\displaystyle b^{p^{q}}:=b^{(p^{q})},} whereas ( b p ) q = b p ⋅ q .
{\displaystyle (b^{p})^{q}=b^{p\cdot q}.} === Logarithmic identities === Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another: ==== Product, quotient, power and root ==== The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the pth power of a number is p times the logarithm of the number itself; the logarithm of a pth root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions x = b log b x , {\displaystyle x=b^{\log _{b}x},} and/or y = b log b y , {\displaystyle y=b^{\log _{b}y},} in the left hand sides. ==== Change of base ==== The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula: log b ( x ) = log k ( x ) log k ( b ) . {\displaystyle \log _{b}(x)={\frac {\log _{k}(x)}{\log _{k}(b)}}.} Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula: log b ( x ) = log 10 ( x ) log 10 ( b ) = log e ( x ) log e ( b ) . {\displaystyle \log _{b}(x)={\frac {\log _{10}(x)}{\log _{10}(b)}}={\frac {\log _{e}(x)}{\log _{e}(b)}}.} Given a number x and its logarithm logb(x) to an unknown base b, the base is given by: b = x 1 log b ( x ) . {\displaystyle b=x^{\frac {1}{\log _{b}(x)}}.} === Hyperbolic function identities === The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. 
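The exponential and logarithmic identities above can be spot-checked the same way; a sketch in Python, where the particular bases, exponents, and arguments are arbitrary illustrative choices:

```python
import math

# Integer-exponent laws (exact in integer arithmetic).
b, c, m, n = 3, 5, 4, 6
assert b ** (m + n) == b ** m * b ** n
assert (b ** m) ** n == b ** (m * n)
assert (b * c) ** n == b ** n * c ** n
# Exponentiation is neither commutative nor associative; Python's **
# is right-associative, matching the top-down convention.
assert 2 ** 3 != 3 ** 2
assert (2 ** 3) ** 4 == 4096 and 2 ** 3 ** 4 == 2 ** 81

# Log laws and change of base.
x, y, base, k, p = 100.0, 7.0, 2.0, 10.0, 3.0
assert math.isclose(math.log(x * y, base), math.log(x, base) + math.log(y, base))
assert math.isclose(math.log(x / y, base), math.log(x, base) - math.log(y, base))
assert math.isclose(math.log(x ** p, base), p * math.log(x, base))
assert math.isclose(math.log(x, base), math.log(x, k) / math.log(base, k))
# Recovering the base from x and log_b(x): b = x^(1 / log_b(x))
assert math.isclose(x ** (1.0 / math.log(x, base)), base)
```

The floating-point checks use `math.isclose` because the log laws hold exactly only in real arithmetic, not in binary floating point.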
In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integer powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of an even number of hyperbolic sines. The Gudermannian function gives a direct relationship between the trigonometric functions and the hyperbolic ones that does not involve complex numbers. == Logic and universal algebra == Formally, an identity is a true universally quantified formula of the form ∀ x 1 , … , x n : s = t , {\displaystyle \forall x_{1},\ldots ,x_{n}:s=t,} where s and t are terms with no other free variables than x 1 , … , x n . {\displaystyle x_{1},\ldots ,x_{n}.} The quantifier prefix ∀ x 1 , … , x n {\displaystyle \forall x_{1},\ldots ,x_{n}} is often left implicit, when it is stated that the formula is an identity. For example, the axioms of a monoid are often given as the formulas ∀ x , y , z : x ∗ ( y ∗ z ) = ( x ∗ y ) ∗ z , ∀ x : x ∗ 1 = x , ∀ x : 1 ∗ x = x , {\displaystyle \forall x,y,z:x*(y*z)=(x*y)*z,\quad \forall x:x*1=x,\quad \forall x:1*x=x,} or, shortly, x ∗ ( y ∗ z ) = ( x ∗ y ) ∗ z , x ∗ 1 = x , 1 ∗ x = x . {\displaystyle x*(y*z)=(x*y)*z,\qquad x*1=x,\qquad 1*x=x.} So, these formulas are identities in every monoid. As for any equality, the formulas without quantifier are often called equations. In other words, an identity is an equation that is true for all values of the variables. == See also == Accounting identity List of mathematical identities Law (mathematics) == References == === Notes === === Citations === === Sources === == External links == The Encyclopedia of Equation Online encyclopedia of mathematical identities (archived) A Collection of Algebraic Identities Archived 2011-10-01 at the Wayback Machine
|
Wikipedia:Identity channel#0
|
In quantum information theory, the identity channel is a noise-free quantum channel. That is, the channel outputs exactly what was put in. The identity channel is commonly denoted as I {\displaystyle I} , i d {\displaystyle {\mathsf {id}}} or I {\displaystyle \mathbb {I} } . == References ==
|
Wikipedia:Idun Reiten#0
|
Idun Reiten (born 1 January 1942) is a Norwegian professor of mathematics. She is considered to be one of Norway's greatest mathematicians today. With national and international honors and recognition, she has supervised 11 students and has 28 academic descendants as of March 2024. She is an expert in representation theory, and is known for work in tilting theory and Artin algebras. == Career == She took her PhD degree at the University of Illinois in 1971, becoming the second Norwegian woman to earn a PhD in mathematics. She was appointed as a professor at the University of Trondheim in 1982, now named the Norwegian University of Science and Technology. Her research area is representation theory for Artinian algebras, commutative algebra, and homological algebra. Her work with Maurice Auslander now forms the part of the study of Artinian algebras known as Auslander–Reiten theory. This theory utilizes such concepts as almost-split sequences and Auslander-Reiten quivers, which were developed in a series of papers. == Awards and Honors == In 2005, Reiten received the Humboldt Research Award. In 2007, Reiten was awarded the Möbius prize. In 2009 she was awarded Fridtjof Nansen's award for successful researchers, (in the field of mathematics and the natural sciences), and the Nansen Medal for Outstanding Research. In 2007, she was elected a foreign member of the Royal Swedish Academy of Sciences. She is also a member of the Norwegian Academy of Science and Letters, the Royal Norwegian Society of Sciences and Letters, and Academia Europaea. In 2012, she became a fellow of the American Mathematical Society. She was named MSRI Clay Senior Scholar and Simons Professor for 2012-13. She delivered the Emmy Noether Lecture at the International Congress of Mathematicians (ICM) in 2010 in Hyderabad and was an Invited Speaker at the ICM in 1998 in Berlin. In 2014, the Norwegian King appointed Reiten as commander of the Order of St. Olav "for her work as a mathematician". 
She is the namesake of the IDUN: From PhD to Professor program at the Norwegian University of Science and Technology Faculty of Information Technology and Electrical Engineering, which aimed at "increasing the number of female scientists in top positions at NTNU's Faculty of Computer Science and Electrical Engineering." == See also == Krull–Schmidt category == References == == External links == Idun Reiten at the Mathematics Genealogy Project Publikasjonsliste Publication List at the Mathematical Reviews Angeleri Hügel, Lidia (2006), An Introduction to Auslander-Reiten Theory (PDF)
|
Wikipedia:Ietje Paalman-de Miranda#0
|
Aïda Beatrijs “Ietje” Paalman-de Miranda (20 February 1936 – 11 May 2020) was a Surinamese-born Dutch mathematician and full professor. She was born in Uitvlugt, Paramaribo. When she was 17 years old she moved from Suriname to the Netherlands to study mathematics at the University of Amsterdam. In that era, it was very unusual to study mathematics, and she was the only woman at the faculty. She graduated cum laude on 23 November 1960. She started a PhD with Johannes de Groot as her supervisor. She defended her PhD thesis "Topological Semigroups" and obtained her degree in 1960, also cum laude. In 1980 she became a full professor in pure mathematics, becoming the first female full professor of mathematics in Amsterdam. == Research == Paalman's research was primarily in topology and set theory. She was the PhD advisor of three students, co-advised by Jan van Mill. She published a book (Topological semigroups - Mathematical Centre Tracts, 1964, Mathematisch Centrum, Amsterdam) and 11 research papers about W-groups, topological representations of semigroups, and compact groups. Paalman was also the author of numerous lecture notes for the courses she taught at the Korteweg-de Vries Institute for Mathematics at the University of Amsterdam. == Personal == She married Dolf Paalman (chief pharmacist). They had two children and three grandchildren. == References ==
|
Wikipedia:Igon value#0
|
Malcolm Timothy Gladwell (born 3 September 1963) is a Canadian journalist, author, and public speaker. He has been a staff writer for The New Yorker since 1996. He has published eight books. He is also the host of the podcast Revisionist History and co-founder of the podcast company Pushkin Industries. Gladwell's writings often deal with the unexpected implications of research in the social sciences, such as sociology and psychology, and make frequent and extended use of academic work. Gladwell was appointed to the Order of Canada in 2011. == Early life and education == Gladwell was born in Fareham, Hampshire, United Kingdom. His mother, Joyce (née Nation) Gladwell, is a Jamaican psychotherapist. His father, Graham Gladwell, was a mathematics professor from Kent, England. When he was six his family moved from Southampton to the Mennonite community of Elmira, Ontario, Canada. He has two brothers. Throughout his childhood, Malcolm lived in rural Ontario Mennonite country, where he attended a Mennonite church. Research done by historian Henry Louis Gates Jr. revealed that one of Gladwell's maternal ancestors was a Jamaican free woman of colour (mixed black and white) who was a slaveowner. His great-great-great-grandmother was of Igbo ethnicity from Nigeria. In the epilogue of his 2008 book Outliers he describes many lucky circumstances that came to his family over the course of several generations, contributing to his path towards success. Gladwell has said that his mother is his role model as a writer. Gladwell's father noted that Malcolm was an unusually single-minded and ambitious boy. When Malcolm was 11, his father, a professor of mathematics and engineering at the University of Waterloo, allowed his son to wander around the offices at his university, which stoked the boy's interest in reading and libraries. In the spring of 1982, Gladwell interned with the National Journalism Center in Washington, D.C.
He graduated with a bachelor's degree in history from Trinity College of the University of Toronto in 1984. == Career == Gladwell decided to pursue advertising as a career after college. After being rejected by every advertising agency he applied to, he accepted a journalism position at conservative magazine The American Spectator and moved to Indiana. He subsequently wrote for Insight on the News, a conservative magazine owned by Sun Myung Moon's Unification Church. In 1987, Gladwell began covering business and science for The Washington Post, where he worked until 1996. In a personal elucidation of the 10,000-hour rule he popularized in Outliers, Gladwell notes, "I was a basket case at the beginning, and I felt like an expert at the end. It took 10 years—exactly that long." When Gladwell started at The New Yorker in 1996, he wanted to "mine current academic research for insights, theories, direction, or inspiration". His first assignment was to write a piece about fashion. Instead of writing about high-class fashion, Gladwell opted to write a piece about a man who manufactured T-shirts, saying: "[I]t was much more interesting to write a piece about someone who made a T-shirt for $8 than it was to write about a dress that costs $100,000. I mean, you or I could make a dress for $100,000, but to make a T-shirt for $8—that's much tougher." Gladwell gained popularity with two New Yorker articles, both written in 1996: "The Tipping Point" and "The Coolhunt". These two pieces would become the basis for Gladwell's first book, The Tipping Point, for which he received a $1 million advance. He continues to write for The New Yorker. Gladwell also served as a contributing editor for Grantland, a sports journalism website founded by former ESPN columnist Bill Simmons. In a July 2002 article in The New Yorker, Gladwell introduced the concept of the "talent myth" that companies and organizations, in his view, incorrectly follow. 
This work examines different managerial and administrative techniques that companies, both winners and losers, have used. He states that the misconception seems to be that management and executives are all too ready to classify employees without ample performance records and thus make hasty decisions. Many companies believe in disproportionately rewarding "stars" over other employees with bonuses and promotions. However, with the quick rise of inexperienced workers with little in-depth performance review, promotions are often incorrectly made, putting employees into positions they should not have and keeping other, more experienced employees from rising. He also points out that under this system, narcissistic personality types are more likely to climb the ladder, since they are more likely to take more credit for achievements and take less blame for failure. He states both that narcissists make the worst managers and that the system of rewarding "stars" eventually worsens a company's position. Gladwell states that the most successful long-term companies are those who reward experience above all else and require greater time for promotions. == Works == With the release of Revenge of the Tipping Point: Overstories, Superspreaders, and the Rise of Social Engineering in 2024, Gladwell has had eight books published. When asked for the process behind his writing, he said: "I have two parallel things I'm interested in. One is, I'm interested in collecting interesting stories, and the other is I'm interested in collecting interesting research. What I'm looking for is cases where they overlap". === The Tipping Point === The initial inspiration for his first book, The Tipping Point, which was published in 2000, came from the sudden drop of crime in New York City. He wanted the book to have a broader appeal than just crime, however, and sought to explain similar phenomena through the lens of epidemiology. 
While Gladwell was a reporter for The Washington Post, he covered the AIDS epidemic. He began to take note of "how strange epidemics were", saying epidemiologists have a "strikingly different way of looking at the world". The term "tipping point" comes from the moment in an epidemic when the virus reaches critical mass and begins to spread at a much higher rate. Gladwell's theories of crime were heavily influenced by the "broken windows theory" of policing, and Gladwell is credited for packaging and popularizing the theory in a way that was implementable in New York City. Gladwell's theoretical implementation bears a striking resemblance to the "stop-and-frisk" policies of the NYPD. However, in the decade and a half since its publication, The Tipping Point and Gladwell have both come under fire for the tenuous link between "broken windows" and New York City's drop in violent crime. During a 2013 interview with BBC journalist Jon Ronson for The Culture Show, Gladwell admitted that he was "too in love with the broken-windows notion". He went on to say that he was "so enamored by the metaphorical simplicity of that idea that I overstated its importance". === Blink === After The Tipping Point, Gladwell published Blink in 2005. The book explains how the human unconscious interprets events or cues as well as how past experiences can lead people to make informed decisions very rapidly. Gladwell uses examples like the Getty kouros and psychologist John Gottman's research on the likelihood of divorce in married couples. Gladwell's hair was the inspiration for Blink. He stated that once he allowed his hair to get longer, he started to get speeding tickets all the time, an oddity considering that he had never gotten one before and that he started getting pulled out of airport security lines for special attention. 
In a particular incident, he was apprehended by three police officers while walking in downtown Manhattan because his curly hair matched the profile of a rapist, despite the fact the suspect looked nothing like him otherwise. Gladwell's The Tipping Point (2000) and Blink (2005) were international bestsellers. The Tipping Point sold more than two million copies in the United States. Blink sold equally well. As of November 2008, the two books had sold a combined 4.5 million copies. === Outliers === Gladwell's third book, Outliers, published in 2008, examines how a person's environment, in conjunction with personal drive and motivation, affects his or her possibility and opportunity for success. Gladwell's original question revolved around lawyers: "We take it for granted that there's this guy in New York who's the corporate lawyer, right? I just was curious: Why is it all the same guy?", referring to the fact that "a surprising number of the most powerful and successful corporate lawyers in New York City have almost the exact same biography". In another example given in the book, Gladwell noticed that people ascribe Bill Gates's success to being "really smart" or "really ambitious". He noted that he knew a lot of people who are really smart and really ambitious, but not worth $60 billion. "It struck me that our understanding of success was really crude—and there was an opportunity to dig down and come up with a better set of explanations." === What the Dog Saw === Gladwell's fourth book, What the Dog Saw: And Other Adventures, was published in 2009. What the Dog Saw bundles together Gladwell's favourites of his articles from The New Yorker since he joined the magazine as a staff writer in 1996. The stories share a common theme, namely that Gladwell tries to show us the world through the eyes of others, even if that other happens to be a dog. 
=== David and Goliath === Gladwell's fifth book, David and Goliath, was released in October 2013, and examines the struggle of underdogs versus favourites. The book is partially inspired by an article Gladwell wrote for The New Yorker in 2009 titled "How David Beats Goliath". The book was a bestseller but received mixed reviews. === Talking to Strangers === Gladwell's sixth book, Talking to Strangers, was released September 2019. The book examines interactions with strangers, covers examples that include the deceptions of Bernie Madoff, the trial of Amanda Knox, the suicide of Sylvia Plath, the Jerry Sandusky pedophilia case at Penn State, and the death of Sandra Bland. Gladwell explained what inspired him to write the book as being "struck by how many high profile cases in the news were about the same thing—strangers misunderstanding each other." It challenges the assumptions we are programmed to make when encountering strangers, and the potentially dangerous consequences of misreading people we do not know. === The Bomber Mafia === Gladwell's seventh book, The Bomber Mafia: A Dream, a Temptation, and the Longest Night of the Second World War, was released in April 2021. === Revenge of the Tipping Point === Gladwell's eighth book, Revenge of the Tipping Point was released in October 2024. The book is a sequel to his best seller The Tipping Point, which was released in 2000. The book discusses social epidemics and tipping points, this time with the aim of explaining the dark side of contagious phenomena, and offers an alternate history of two of the biggest epidemics of our day: COVID and the opioid crisis. == Reception == The Tipping Point was named as one of the best books of the decade by The A.V. Club, The Guardian, and The Times. It was also Barnes & Noble's fifth-best-selling non-fiction book of the decade. Blink was named to Fast Company's list of the best business books of 2005. 
It was also number 5 on Amazon customers' favourite books of 2005, named to The Christian Science Monitor's best non-fiction books of 2005, and in the top 50 of Amazon customers' favourite books of the decade. Outliers was a number 1 New York Times bestseller for 11 straight weeks and was Time's number 10 non-fiction book of 2008 as well as named to the San Francisco Chronicle's list of the 50 best non-fiction books of 2008. Fortune described The Tipping Point as "a fascinating book that makes you see the world in a different way". The Daily Telegraph called it "a wonderfully offbeat study of that little-understood phenomenon, the social epidemic". Reviewing Blink, The Baltimore Sun dubbed Gladwell "the most original American journalist since the young Tom Wolfe." Farhad Manjoo at Salon described the book as "a real pleasure. As in the best of Gladwell's work, Blink brims with surprising insights about our world and ourselves." The Economist called Outliers "a compelling read with an important message". David Leonhardt wrote in The New York Times Book Review: "In the vast world of nonfiction writing, Malcolm Gladwell is as close to a singular talent as exists today" and Outliers "leaves you mulling over its inventive theories for days afterward". Ian Sample wrote in The Guardian: "Brought together, the pieces form a dazzling record of Gladwell's art. There is depth to his research and clarity in his arguments, but it is the breadth of subjects he applies himself to that is truly impressive." Gladwell's critics have described him as prone to oversimplification. The New Republic called the final chapter of Outliers, "impervious to all forms of critical thinking" and said Gladwell believes "a perfect anecdote proves a fatuous rule". Gladwell has also been criticized for his emphasis on anecdotal evidence over research to support his conclusions. Maureen Tkacik and Steven Pinker have challenged the integrity of Gladwell's approach. 
Even while praising Gladwell's writing style and content, Pinker summed up Gladwell as "a minor genius who unwittingly demonstrates the hazards of statistical reasoning", while accusing him of "cherry-picked anecdotes, post-hoc sophistry and false dichotomies" in his book Outliers. Referencing a Gladwell reporting mistake in which Gladwell refers to "eigenvalue" as "Igon Value", Pinker criticizes his lack of expertise: "I will call this the Igon Value Problem: when a writer's education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong." A writer in The Independent accused Gladwell of posing "obvious" insights. The British website The Register has accused Gladwell of making arguments by weak analogy and commented Gladwell has an "aversion for fact", adding: "Gladwell has made a career out of handing simple, vacuous truths to people and dressing them up with flowery language and an impressionistic take on the scientific method." In that regard, The New Republic has called him "America's Best-Paid Fairy-Tale Writer". His approach was satirized by the online site "The Malcolm Gladwell Book Generator". In 2005, Gladwell commanded a $45,000 speaking fee. In 2008, he was making "about 30 speeches a year—most for tens of thousands of dollars, some for free", according to a profile in New York magazine. In 2011, he gave three talks to groups of small businessmen as part of a three-city speaking tour put on by Bank of America. The program was titled "Bank of America Small Business Speaker Series: A Conversation with Malcolm Gladwell". Paul Starobin, writing in the Columbia Journalism Review, said the engagement's "entire point seemed to be to forge a public link between a tarnished brand (the bank), and a winning one (a journalist often described in profiles as the epitome of cool)". An article by Melissa Bell of The Washington Post posed the question: "Malcolm Gladwell: Bank of America's new spokesman?" 
Mother Jones editor Clara Jeffery said Gladwell's job for Bank of America had "terrible ethical optics". However, Gladwell says he was unaware that Bank of America was "bragging about his speaking engagements" until the Atlantic Wire emailed him. Gladwell explained: I did a talk about innovation for a group of entrepreneurs in Los Angeles a while back, sponsored by Bank of America. They liked the talk, and asked me to give the same talk at two more small business events—in Dallas and yesterday in D.C. That's the extent of it. No different from any other speaking gig. I haven't been asked to do anything else and imagine that's it. In 2012, CBS's 60 Minutes attributed the trend of American parents "redshirting" their five-year-olds (postponing entrance into kindergarten to give them an advantage) to a section in Gladwell's Outliers. Sociology professor Shayne Lee referenced Outliers in a CNN editorial commemorating Martin Luther King Jr.'s birthday. Lee discussed the strategic timing of King's ascent from a "Gladwellian perspective". Gladwell gives credit to Richard Nisbett and Lee Ross for inventing the Gladwellian genre. Gladwell has provided blurbs for "scores of book covers", leading The New York Times to ask, "Is it possible that Mr. Gladwell has been spreading the love a bit too thinly?" Gladwell, who said he did not know how many blurbs he had written, acknowledged, "The more blurbs you give, the lower the value of the blurb. It's the tragedy of the commons." == Podcast == Gladwell is host of the podcast Revisionist History, initially produced through Panoply Media and now through Gladwell's own podcast company. It began in 2016 and has aired seven 10-episode seasons. Each episode begins with an inquiry about a person, event, or idea, and proceeds to question the received wisdom about the subject. Gladwell was recruited to create a podcast by Jacob Weisberg, editor-in-chief of The Slate Group, which also includes the podcast network Panoply Media. 
In September 2018, Gladwell announced he was co-founding a podcast company, later named Pushkin Industries, with Weisberg. About this decision, Gladwell told the Los Angeles Times: "There is a certain kind of whimsy and emotionality that can only be captured on audio." He also has a music podcast with Bruce Headlam and Rick Rubin, titled Broken Record, where they interview musicians. It has two seasons, 2018–2019 and 2020, with a total of 49 episodes. The Unusual Suspects with Kenya Barris and Malcolm Gladwell premiered on January 30, 2025. The podcast features candid interviews with influential figures across a spectrum of disciplines. A common thread throughout these interviews is a discussion of each subject's path to success. Interview subjects have ranged from trailblazing Fortune 500 CEO Ursula Burns to hip hop recording artist and producer Dr. Dre. == Personal life == Gladwell is a Christian. His family attended Above Bar Church in Southampton, U.K., and later Gale Presbyterian in Elmira when they moved to Canada. His parents and siblings are part of the Mennonite community in Southwestern Ontario. Gladwell wandered away from his Christian roots when he moved to New York, only to rediscover his faith during the writing of David and Goliath and his encounter with Wilma Derksen regarding the death of her child. Gladwell was a national class runner and an Ontario High School (Ontario Federation of School Athletic Associations – OFSAA) champion. He was among Canada's fastest teenagers at 1500 metres, running 4:14 at the age of 13 and 4:05 when aged 14. At university, Gladwell ran 1500 metres in 3:55. In 2014, at the age of 51, he ran a 4:54 at the Fifth Avenue Mile. At 57 he ran a 5:15 mile. He had his first child, a daughter, in 2022. In 2024 it was reported that "In a span of five years, he got engaged, had two children, turned 61, and moved from Manhattan to pastoral Hudson, New York." 
Gladwell is passionate about cars and reading car magazines, particularly the British magazine Car. == Awards and honours == In 2005, Time named Gladwell one of its 100 most influential people. In 2007, he received the American Sociological Association's first Award for Excellence in the Reporting of Social Issues. The same year, he received an honorary degree from the University of Waterloo. In 2011, he was named a Member of the Order of Canada, the second highest honour for merit in the system of orders, decorations, and medals of Canada. He has received honorary degrees from the University of Waterloo (2007) and the University of Toronto (2011). He is a recipient of the 2024 Audio Vanguard Award presented by On Air Fest. == Bibliography == === Books === Gladwell, Malcolm (2000). The Tipping Point: How Little Things Can Make a Big Difference. Boston: Little, Brown. ISBN 0-316-31696-2. — (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown. ISBN 0-316-17232-4. — (2008). Outliers: The Story of Success. New York: Little, Brown & Co. ISBN 978-0-316-01792-3. — (2009). What the Dog Saw: And Other Adventures. New York: Little, Brown & Co. ISBN 978-0-316-07584-8. — (2013). David and Goliath: Underdogs, Misfits, and the Art of Battling Giants. New York: Little, Brown & Co. ISBN 978-0-316-20436-1. — (2019). Talking to Strangers: What We Should Know About the People We Don't Know. New York: Little, Brown & Co. ISBN 978-0-316-47852-6. — (2021). The Bomber Mafia: A Dream, a Temptation, and the Longest Night of the Second World War. New York: Little, Brown & Co. ISBN 978-0-316-29661-8. — (2024). Revenge of the Tipping Point: Overstories, Superspreaders, and the Rise of Social Engineering. New York: Little, Brown & Co. ISBN 978-0-316-57580-5. === Audiobooks === Miracle and Wonder: Conversations with Paul Simon I Hate the Ivy League: Riffs and Rants on Elite Education === Essays and reporting === Gladwell, Malcolm (6 September 2004). 
"The Ketchup Conundrum". The New Yorker. Retrieved 29 June 2022. — (12 September 2005). "Letter from Saddleback: The Cellular Church: How Rick Warren's congregation grew". The New Yorker. Retrieved 7 January 2019. — (13 February 2006). "Million-Dollar Murray: why problems like homelessness may be easier to solve than to manage". The New Yorker. Archived from the original on 18 March 2015. Retrieved 14 June 2015. — (5 November 2007). "Dangerous Minds". The New Yorker. Retrieved 19 June 2020. — (20 October 2008). "Late Bloomers". The New Yorker. Retrieved 4 January 2016. — (4 October 2010). "Small Change: Why the revolution will not be tweeted". The New Yorker. Archived from the original on 10 January 2011. Retrieved 1 May 2024. — (14 November 2011). "The Tweaker". Annals of Technology. The New Yorker. Vol. 87, no. 36. pp. 32–35. Retrieved 23 April 2014. — (31 March 2014). "Sacred and profane: how not to negotiate with believers". Annals of Religion. The New Yorker. Vol. 90, no. 6. pp. 22–28. — (28 July 2014). "Trust No One: Kim Philby and the hazards of mistrust". The Critics. A Critic at Large. The New Yorker. Vol. 90, no. 21. pp. 70–75. Archived from the original on 23 July 2014. Retrieved 30 September 2014. Includes review of MacIntyre, Ben (2014). A Spy Among Friends: Kim Philby and the Great Betrayal. Crown. ISBN 978-0-80413663-1. — (4 May 2015). "The engineer's lament: two ways of thinking about automotive safety". Dept. of Transportation. The New Yorker. Vol. 91, no. 11. pp. 46–55. Retrieved 1 July 2015. — (26 December 2016). "The outside man: what's the difference between Daniel Ellsberg and Edward Snowden?". The Critics. A Critic at Large. The New Yorker. Vol. 92, no. 42. pp. 119–125. === Podcasts === Gladwell, Malcolm (2016). Revisionist History. The Slate Group. Gladwell, Malcolm & Rubin, Rick (2018). Broken Record. Pushkin Industries. Barris, Kenya & Gladwell, Malcolm (2025). The Unusual Suspects with Kenya Barris and Malcolm Gladwell. 
Khalabo Productions, Inc. === Book reviews === === Filmography === The Missionary (2013, TV movie) === Other appearances === Gladwell was a featured storyteller for the Moth podcast. He told a story about a well-intentioned wedding toast for a young man and his friends that went wrong. Gladwell was featured in General Motors "EVerybody in." campaign. Gladwell is the only guest to have been featured as a headliner at every OZY Fest festival—an annual music and ideas festival produced by OZY Media—other than OZY co-founder and CEO Carlos Watson. Gladwell has also appeared on several television shows for OZY Media, including the Carlos Watson Show (YouTube) and Third Rail With OZY (PBS). Gladwell has a chapter giving advice in Tim Ferriss's book Tools of Titans. Gladwell was voiced by Colton Dunn in Solar Opposites S3.E1 The Extremity Triangulator. == References == == External links == Appearances on C-SPAN Malcolm Gladwell on Charlie Rose Malcolm Gladwell at IMDb Revisionist History podcast Malcolm Gladwell collected news and commentary at The Guardian Articles and Essays by Malcolm Gladwell
|
Wikipedia:Igor Dolgachev#0
|
Igor V. Dolgachev (born 7 April 1944) is a Russian–American mathematician specializing in algebraic geometry. He has been a professor at the University of Michigan since 1978. He introduced Dolgachev surfaces in 1981. Dolgachev completed his Ph.D. at Moscow State University in 1970, with thesis On the purity of the degeneration locus of families of curves written under the supervision of Igor Shafarevich. == References == == External links == Dolgachev's website at University of Michigan
|
Wikipedia:Igor Kluvánek#0
|
Igor Kluvánek (27 January 1931 – 24 July 1993) was a Slovak-Australian mathematician. == Academic career == Igor Kluvánek obtained his first degree in electrical engineering from the Slovak Polytechnic University, Bratislava, in 1953. His first appointment was in the Department of Mathematics of the same institution. At the same time he worked for his C.Sc. degree obtained from the Slovak Academy of Sciences. In the early 1960s he joined the Department of Mathematical Analysis of the University of Pavol Jozef Šafárik in Košice. During 1967–68 he held a visiting position at The Flinders University of South Australia. The events of 1968 in Czechoslovakia made it impossible for him and his family to return to their homeland. The Flinders University of South Australia was able to create a chair in applied mathematics, to which he was appointed in January 1969 and which he occupied until his resignation in 1986. == Early years == Kluvánek graduated in 1953 from the Slovak Polytechnic University with a degree in electrical engineering specialising in vacuum technology. That year, he married a former classmate from the gymnasium at Rimavská Sobota. To support himself, he became a part-time tutor/lecturer in the Department of Mathematics at the Faculty of Electrical Engineering, where he remained after completing his studies. At the same time, he worked for his C.Sc. degree obtained from the Slovak Academy of Sciences. In 1961, it became known at the polytechnic that he was a practising Catholic, which was deemed to be incompatible with the position of a socialist teacher. At that time, an attempt was made to minimise ideological confrontations in the interests of economic development. The affair blew over when he joined the Department of Mathematical Analysis of Šafárik University in his birthplace, Košice. 
== To Australia == With the approval of the Czechoslovak authorities, he arrived with his wife and five children in Adelaide in March 1967 to take up a two-year visiting position at the newly established Flinders University of South Australia. His wife and children departed Australia on 20 August 1968, in time for the children to start the new school year in September. While they were on their way, the 1968 Warsaw Pact invasion of Czechoslovakia took place. They landed in Zürich, but all communications with Czechoslovakia were severed. They had no entry visa to any country, so the Swiss authorities put them on a plane back to Australia that day. Thus started his twenty-year sojourn in Australia. It seems that he would have returned with his family if he had not been sentenced in Czechoslovakia, in absentia, to two years in prison after his unexpected stay in Australia was deemed illegal by the Czechoslovak authorities. The enquiries conducted by his family in Czechoslovakia on his behalf suggest that this penalty was only quashed in the late 1980s. Besides his prison sentence, his wife was given a one-year prison sentence and all his property at home was forfeited, so they were effectively destitute and stateless. Fortunately, after his contract expired, Flinders created a chair in applied mathematics, which they offered to him. His wife died in 1981. == Back To Slovakia == He resigned his chair at Flinders in 1986 and after some unsuccessful attempts to study at seminaries in Sydney (1982) and Melbourne (1987–88) followed by temporary positions at the Centre for Mathematical Analysis in Canberra, he eventually left Australia in 1989 to settle in Bratislava. His children have remained in Australia. The gradual process of liberalisation in Czechoslovakia had facilitated his departure. The velvet revolution heralded his return home, and so his third life began. He became a member of the Slovak Academy of Sciences and remarried. 
His persecution by the old régime had conferred upon him the status of something of a celebrity. He declined an invitation to become minister of education. There was some disillusionment with the nature and pace of the institutional reform in Slovakia and he held several positions in quick succession. It was as he was about to leave his last position at the Slovak Technical University that he died. His five children stayed in Australia after his death, initially living in Adelaide and Melbourne. == Research == Igor Kluvánek made significant contributions to applied mathematics, functional analysis, operator theory and vector-valued integration. One needs only to consult his book Vector Measures and Control Systems written with Greg Knowles or examine the contents and historical notes of the monograph Vector Measures by J. Diestel and J.J. Uhl, Jr., to see that his penetrating studies into this area, of which he is one of the pioneers, pervade the subject. He has also made important contributions to various topics in harmonic analysis. For a sample of his influence in this area, see the excellent survey article "Five short stories about the cardinal series", Bull. Amer. Math. Soc., 12 (1985), 45–89, by J.R. Higgins which highlights the essential role played by just one of Kluvánek's paper in the "story" of the sampling theorem. Kluvánek introduced the concept of a closed vector measure. This notion was crucial for his investigations of the range of a vector measure and led to the extension to infinite dimensional spaces of the classical Liapunov convexity theorem, together with many consequences and applications. This work was in collaboration with G. Knowles and settled many of the major problems in this area. The notion of a closed vector measure stimulated much research, especially by W. Graves and his students at Chapel Hill, North Carolina. 
In the intervening years it turned out that this notion is not only a basic tool in the study of algebras of operators generated by Boolean algebras of projections but lies at the very core of the major theorems in this area, even throwing a new perspective on the classical results in this field. As successful as the theory of integration with respect to countably additive vector measures has been in various branches of mathematics, such as mathematical physics, functional analysis and operator theory, for example, it is also known that there are fundamental problems which cannot be treated in this way. Nevertheless, these problems still seem to require for their solution "some sort of integration process" that Kluvánek pursued to the end of his career. Some of his galaxy of ideas about integration appeared in his book Integration Structures. As well as his research publications, it should be mentioned that Igor Kluvánek co-authored, with L. Mišík and M. Švec, a two-volume textbook (in Slovak) on Differential and Integral calculus, Analytic geometry, Differential equations and Complex variables, which has seen two editions and been widely used in Czechoslovakia. He also wrote lecture notes (in Slovak) with M. Kováříková and Z. Kovářík on first year university analysis and a popular book (also in Slovak) with L. Bukovský on the pigeonhole principle. He spent a great deal of time during his appointment at Flinders developing course material for a basic foundation in mathematics. His presentation of the material changed over time as he developed new research ideas. He could not get it published in English but two volumes have been translated and published in Slovak, with the third volume to appear in 2008. In addition, he wrote various articles of a pedagogic nature. == Notes == == External links == Slovak Biography Rodney Nillsen, Igor Kluvánek His life, achievements and influence in Australia == References == Jefferies, B., McIntosh A., Ricker W. 
(eds), "Miniconference on Operator Theory and Partial Differential Equations", Proc. Centre Math. Anal., Vol 14, 1986, (ii)–(vii). Kluvánek, I., Knowles, G, "Vector Measures and Control Systems", North-Holland Mathematics Studies, Vol 20, Amsterdam, 1976. Kluvánek, I., "Integration Structures", Proc. Centre Math. Anal. Vol 18, 1988.
|
Wikipedia:Igor Krichever#0
|
Igor Moiseevich Krichever (Russian: Игорь Моисеевич Кричевер; 8 October 1950 – 1 December 2022) was a Russian academic and mathematician. == Biography == Krichever was born in Kuybyshev to aviation engineer Moisey Solomonovich Krichever and Maria Leyzerovna Arlievskaya. He received a silver medal at the 1967 International Mathematical Olympiad. He graduated from the MSU Faculty of Mechanics and Mathematics in 1972. From 1975 to 1988, Krichever was a researcher at the G. M. Krzhizhanovsky Energy Institute. He was then a senior researcher at the Institute for Problems in Mechanics of the Russian Academy of Sciences. In 1990, he became a senior researcher for the Landau Institute for Theoretical Physics. From 1992 to 1996, he was a professor at the Independent University of Moscow. In 1997, he became a professor at Columbia University in New York City, where he served as dean of the mathematics department from 2008 to 2011. In 2013, he became a professor at the Higher School of Economics in Moscow. That same year, he became deputy director of the Institute for Information Transmission Problems of the Russian Academy of Sciences. In 2011, Krichever was awarded the Medal of the Order "For Merit to the Fatherland" II class. Igor Krichever died in New York City on 1 December 2022, at the age of 72. == References == == External links == Braverman, Alexander; Etingof, Pavel; Okounkov, Andrei; Phong, Duong; Wiegmann, Pavel; Arbarello, Enrico; Brazovski, Serguei; Drinfeld, Vladimir; Dzhamay, Anton; Grushevsky, Samuel; Nekrasov, Nikita; Novikov, Sergei; Takhtajan, Leon; Varchenko, Alexander; Veselov, Alexander; Zabrodin, Anton; Smoliarova, Tatiana (April 2024). "In Memory of Igor Krichever" (PDF). Notices of the American Mathematical Society. 71 (4): 483–499. doi:10.1090/noti2909.
|
Wikipedia:Igor Lomov#0
|
Igor Lomov (Russian: Игорь Серге́евич Ло́мов) (born 1956) is a Russian mathematician, Professor, Dr.Sc., a professor at the Faculty of Computer Science at the Moscow State University. He defended the thesis «Mathematical modeling and computer analysis of liquid metal systems» for the degree of Doctor of Physical and Mathematical Sciences (2009). Author of 13 books and more than 90 scientific articles. == References == == Bibliography == Grigoriev, Evgeny (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory. Moscow: Publishing house of Moscow University. pp. 167–168. ISBN 978-5-211-05838-5. == External links == Annals of the Moscow University (in Russian) MSU CMC (in Russian) Scientific works of Igor Lomov Scientific works of Igor Lomov (in English)
|
Wikipedia:Igor Mashechkin#0
|
Igor Mashechkin (Russian: Игорь Вале́рьевич Ма́шечкин) (born 1956) is a Russian mathematician, Professor, Dr.Sc., a professor at the Faculty of Computer Science at the Moscow State University. He defended the thesis «Multifunctional cross-programming system» for the degree of Doctor of Physical and Mathematical Sciences (1996). Author of 8 books and more than 100 scientific articles. == References == == Bibliography == Grigoriev, Evgeny (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory. Moscow: Publishing house of Moscow University. pp. 440–442. ISBN 978-5-211-05838-5. == External links == MSU CMC (in Russian) Scientific works of Igor Mashechkin Scientific works of Igor Mashechkin (in English)
|
Wikipedia:Igor Rivin#0
|
Igor Rivin (born 1961 in Moscow, USSR) is a Russian-Canadian mathematician, working in various fields of pure and applied mathematics, computer science, and materials science. He was the Regius Professor of Mathematics at the University of St. Andrews from 2015 to 2017, and was the chief research officer at Cryptos Fund until 2019. He was the principal of a couple of small hedge funds, and later did research for Edgestream LP, in addition to his academic work. == Career == He received his B.Sc. (Hon) in mathematics from the University of Toronto in 1981, and his Ph.D. in 1986 from Princeton University under the direction of William Thurston. Following his doctorate, Rivin directed development of QLISP and the Mathematica kernel, before returning to academia in 1992, where he held positions at the Institut des Hautes Études Scientifiques, the Institute for Advanced Study, the University of Melbourne, Warwick, and Caltech. Since 1999, Rivin has been professor of mathematics at Temple University. Between 2015 and 2017 he was Regius Professor of Mathematics at the University of St. Andrews. == Major accomplishments == Rivin's PhD thesis and a series of extensions characterized hyperbolic 3-dimensional polyhedra in terms of their dihedral angles, resolving a long-standing open question of Jakob Steiner on the inscribable combinatorial types. These, and some related results in convex geometry, have been used in 3-manifold topology, theoretical physics, computational geometry, and the recently developed field of discrete differential geometry. Rivin has also made advances in counting geodesics on surfaces, the study of generic elements of discrete subgroups of Lie groups, and in the theory of dynamical systems. Rivin is also active in applied areas, having written large parts of the Mathematica 2.0 kernel, and he developed a database of hypothetical zeolites in collaboration with M. M. J. Treacy. Rivin is a frequent contributor to MathOverflow. 
Igor Rivin is the co-creator, with economist Carlo Scevola, of Cryptocurrencies Index 30 (CCi30), an index of the top 30 cryptocurrencies weighted by market capitalization. CCi30 is sometimes used by academic economists as a market index when comparing the cryptocurrency trading market as a whole with individual currencies. == Honors == First prize, Canadian Mathematical Olympiad, 1977 Whitehead prize of the London Mathematical Society, 1998 Advanced Research Fellowship of the EPSRC, 1998 Lady Davis Fellowship at the Hebrew University, 2006 Berlin Mathematical School professorship, 2011. Fellow of the American Mathematical Society, 2014. == References == == External links == Igor Rivin's author profile at MathSciNet Igor Rivin's Google Scholar profile Archived 2016-10-25 at the Wayback Machine Igor Rivin at Math Overflow Igor Rivin at the Mathematics Genealogy Project Official website
|
Wikipedia:Igor Simonenko#0
|
Igor Borisovich Simonenko (Russian: Игорь Борисович Симоненко; 16 August 1935, Kiev — 22 March 2008, Rostov-on-Don) was a Russian mathematician. Professor, Doctor of Physical and Mathematical Sciences, Honoured Scientist of the Russian Federation. == Biography == Igor Borisovich Simonenko was born on 16 August 1935 in Kiev. In 1947 he entered Luhansk Machine-Building Technical School and later worked at a factory. From 1953 he studied at the Faculty of Mechanics of Rostov State University, graduating with honours in 1959. The results of his diploma work were published in the reports of the USSR Academy of Sciences. He then continued his studies in graduate school; his scientific supervisor was Professor Fyodor Gakhov. In 1961 he defended his candidate thesis. Six years later, at the age of 32, he wrote his doctoral dissertation. Professor since 1970. In 1972 he was appointed head of the newly created Department of Algebra and Discrete Mathematics of Rostov State University. Simonenko wrote two monographs on the application of the local principle. During his research he made discoveries in the field of one-dimensional and multidimensional singular integral equations. The results were published in the academic press and were later translated into English by the American Mathematical Society. Igor Simonenko supervised the defense of 22 candidate and 4 doctoral dissertations. He died on 22 March 2008 in Rostov-on-Don. == References ==
|
Wikipedia:Ihara zeta function#0
|
In mathematics, the Ihara zeta function is a zeta function associated with a finite graph. It closely resembles the Selberg zeta function, and is used to relate closed walks to the spectrum of the adjacency matrix. The Ihara zeta function was first defined by Yasutaka Ihara in the 1960s in the context of discrete subgroups of the two-by-two p-adic special linear group. Jean-Pierre Serre suggested in his book Trees that Ihara's original definition can be reinterpreted graph-theoretically. It was Toshikazu Sunada who put this suggestion into practice in 1985. As observed by Sunada, a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis. == Definition == The Ihara zeta function is defined as the analytic continuation of the infinite product

ζ_G(u) = ∏_p 1/(1 − u^{L(p)}),

where the product is taken over all prime closed geodesics p of the graph G = (V, E), L(p) denotes the length of p, and geodesics which differ by a cyclic rotation are considered equal. A closed geodesic p on G (known in graph theory as a "reduced closed walk"; it is not a graph geodesic) is a finite sequence of vertices p = (v_0, …, v_{k−1}) such that

(v_i, v_{(i+1) mod k}) ∈ E and v_i ≠ v_{(i+2) mod k}.

The integer k is the length L(p). The closed geodesic p is prime if it cannot be obtained by repeating a closed geodesic m times, for an integer m > 1. This graph-theoretic formulation is due to Sunada. 
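The rationality of the zeta function can be checked concretely on a small example. The sketch below (in Python; the choice of the complete graph K4 and all variable names are illustrative, not from the source) builds Hashimoto's non-backtracking edge adjacency operator T for the 3-regular graph K4 and verifies numerically that det(I − uT) agrees with the expression from Ihara's formula given in the next section, both being 1/ζ_G(u).

```python
# Check 1/zeta_G(u) = det(I - uT) = (1 - u^2)^{r(G)-1} det(I - Au + q u^2 I)
# for the complete graph K4 (3-regular, so q = 2). Illustrative sketch only.

def det(matrix):
    """Determinant by Gaussian elimination with partial pivoting (floats)."""
    mat = [row[:] for row in matrix]
    size = len(mat)
    sign = 1.0
    for i in range(size):
        pivot = max(range(i, size), key=lambda r: abs(mat[r][i]))
        if abs(mat[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            mat[i], mat[pivot] = mat[pivot], mat[i]
            sign = -sign
        for r in range(i + 1, size):
            f = mat[r][i] / mat[i][i]
            for c in range(i, size):
                mat[r][c] -= f * mat[i][c]
    result = sign
    for i in range(size):
        result *= mat[i][i]
    return result

vertices = range(4)
arcs = [(a, b) for a in vertices for b in vertices if a != b]  # 12 directed edges of K4
index = {arc: k for k, arc in enumerate(arcs)}
m = len(arcs)

# Hashimoto edge adjacency operator: arc (a,b) may be followed by (b,c) unless c == a
T = [[0.0] * m for _ in range(m)]
for (a, b) in arcs:
    for (b2, c) in arcs:
        if b2 == b and c != a:
            T[index[(a, b)]][index[(b2, c)]] = 1.0

u, q, n = 0.1, 2, 4
A = [[1.0 if a != b else 0.0 for b in vertices] for a in vertices]  # adjacency matrix of K4

# One side: det(I - uT)
lhs = det([[(1.0 if i == j else 0.0) - u * T[i][j] for j in range(m)] for i in range(m)])

# Other side: (1 - u^2)^{r(G)-1} det(I - Au + q u^2 I), with r(G) - 1 = (q-1)n/2 = 2
M = [[(1.0 + q * u * u if i == j else 0.0) - u * A[i][j] for j in range(n)] for i in range(n)]
rhs = (1.0 - u * u) ** ((q - 1) * n // 2) * det(M)

print(lhs, rhs)  # the two expressions agree up to rounding
```

Since the radius of convergence of the product defining ζ_G is 1/q, any |u| < 1/2 works here; at such u both determinants give the same positive value of 1/ζ_G(u).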
== Ihara's formula == Ihara (and Sunada in the graph-theoretic setting) showed that for regular graphs the zeta function is a rational function. If G is a (q + 1)-regular graph with adjacency matrix A, then

ζ_G(u) = 1 / ((1 − u²)^{r(G)−1} det(I − Au + qu²I)),

where r(G) is the circuit rank of G. If G is connected and has n vertices, then r(G) − 1 = (q − 1)n/2. The Ihara zeta function is in fact always the reciprocal of a graph polynomial:

ζ_G(u) = 1 / det(I − Tu),

where T is Ki-ichiro Hashimoto's edge adjacency operator. Hyman Bass gave a determinant formula involving the adjacency operator. == Applications == The Ihara zeta function plays an important role in the study of free groups, spectral graph theory, and dynamical systems, especially symbolic dynamics, where the Ihara zeta function is an example of a Ruelle zeta function. == References == Ihara, Yasutaka (1966). "On discrete subgroups of the two by two projective linear group over p-adic fields". Journal of the Mathematical Society of Japan. 18: 219–235. doi:10.2969/jmsj/01830219. MR 0223463. Zbl 0158.27702. Sunada, Toshikazu (1986). "L-functions in geometry and some applications". Curvature and Topology of Riemannian Manifolds. Lecture Notes in Mathematics. Vol. 1201. pp. 266–284. doi:10.1007/BFb0075662. ISBN 978-3-540-16770-9. Zbl 0605.58046. Bass, Hyman (1992). "The Ihara-Selberg zeta function of a tree lattice". International Journal of Mathematics. 3 (6): 717–797. doi:10.1142/S0129167X92000357. MR 1194071. Zbl 0767.11025. Stark, Harold M. (1999). "Multipath zeta functions of graphs". 
In Hejhal, Dennis A.; Friedman, Joel; Gutzwiller, Martin C.; et al. (eds.). Emerging Applications of Number Theory. IMA Vol. Math. Appl. Vol. 109. Springer. pp. 601–615. ISBN 0-387-98824-6. Zbl 0988.11040. Terras, Audrey (1999). "A survey of discrete trace formulas". In Hejhal, Dennis A.; Friedman, Joel; Gutzwiller, Martin C.; et al. (eds.). Emerging Applications of Number Theory. IMA Vol. Math. Appl. Vol. 109. Springer. pp. 643–681. ISBN 0-387-98824-6. Zbl 0982.11031. Terras, Audrey (2010). Zeta Functions of Graphs: A Stroll through the Garden. Cambridge Studies in Advanced Mathematics. Vol. 128. Cambridge University Press. ISBN 978-0-521-11367-0. Zbl 1206.05003.
|
Wikipedia:Ilan Amit#0
|
Ilan Amit (Hebrew: אִילָן עָמִית; January 19, 1935 – March 11, 2013) was an Israeli mathematician, spiritual philosopher, and defence consultant. He worked as a strategist and senior advisor to Israel's defence establishment, including the Mossad. == Biography == Ilan Kroch (later Amit) was born in Haifa. His father, a mathematics teacher, was deputy principal of the Hebrew Reali School and a founder of the Hebrew Scouts Movement in Israel. Amit studied at the Reali School, where he was a student of Josef Schächter. In 1960, Amit was one of the founders of the moshav shitufi Yodfat, where he became a proponent of the teachings of mystic George Gurdjieff. After completing his undergraduate studies in mathematics at the Technion, Amit worked at Mekorot, soon becoming head of the company's operations research department. He completed his Ph.D. in mathematics from the Technion in 1967, under the supervision of Elisha Netanyahu. Amit joined the military research department at Rafael in the late 1970s, not long after which he became blind as the result of illness. In the late 1980s Amit joined a team in Mossad's intelligence division that aimed to engage in intelligence estimates and formulate recommendations in the area of policy and strategy. In 2009, he became a member of the Prime Minister's National Security Council. He died at the age of 78 following a stroke, survived by his wife and four children. == Awards and commemoration == Presence: Ilan Amit's Journey, a film about Amit's life, was released in 2018. == Published works == Amit has translated Kierkegaard into Hebrew and published essays on Emily Dickinson and on therapy of the absurd, along with many classified research papers. His published books include: Amit, Ilan (2013). חידת הנוכחות [The Mystery of Presence] (in Hebrew). Tel Aviv: Sifrey Aliyat HaGag. ISBN 978-965-545-667-7. Amit, Ilan (2009). The Lamp: A (Not Quite) Spiritual Biography. Eureka Editions. ISBN 978-90-72395-64-1. 
Amit, Ilan (2005). גורדייף והעבודה הפנימית [Gurdjieff and the Inner Work] (in Hebrew). Mapa Publishing. ISBN 978-965-521-015-6. == External links == Sviri, Sara (3 August 2013). "מה כתב הפילוסוף של המוסד לפני מותו" [What the Mossad's Philosopher Wrote Before His Death]. Haaretz (in Hebrew). == References ==
|
Wikipedia:Ilaria Perugia#0
|
Ilaria Perugia (born 1969) is an Italian applied mathematician and numerical analyst whose research concerns numerical methods for partial differential equations, especially Galerkin methods. She works at the University of Vienna in Austria as Professor of the Numerics of Partial Differential Equations. == Early life and education == Perugia was born on 23 October 1969 in Milan. She studied mathematics at the University of Pavia, working there with Gianni Arrigo Pozzi and Franco Brezzi. She earned a laurea in 1993, winning the university's triennial Berzolari Prize for best laurea thesis. Next, she moved to the University of Milan for graduate study in computational mathematics and operations research, completing her Ph.D. in 1998. Her doctoral dissertation, Discretization of Linearly Constrained Problems And Applications In Scientific Computing, was again supervised by Brezzi. == Career == Perugia held a research appointment at the University of Pavia from 1995 to 2001, during which time she also visited the University of Minnesota as a postdoctoral researcher. In 2001 she was given an associate professorship at the University of Pavia, and in 2011 she became Professor of Numerical Analysis there. She moved to her present position in Vienna in 2013, and became deputy director of the Erwin Schrödinger International Institute for Mathematics and Physics at the University of Vienna in 2016. She has also held a research associate position with the Italian National Research Council (CNR), in their Pavia-based Institute for Applied Mathematics and Information Technologies (IMATI), since 2001. == References == == External links == Home page Ilaria Perugia publications indexed by Google Scholar
|
Wikipedia:Ilijas Farah#0
|
Ilijas Farah (born 18 February 1966) is a Canadian-Serbian mathematician and a professor of mathematics at York University in Toronto and at the Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, Serbia. His research focuses on applications of logic to operator algebras. == Career == Farah was born in Sremska Mitrovica, Serbia. He received his BSc and MSc in 1988 and 1992 respectively from the University of Belgrade and his PhD in 1997 from the University of Toronto. He is a Research Chair in Logic and Operator Algebras at York University, Toronto. Before moving to York University he was an NSERC Postdoctoral Fellow at York University (1997–99), a Hill Assistant Professor at Rutgers University (1999–2000), and a professor at the CUNY Graduate Center and the College of Staten Island (2000–02). Farah was an invited speaker at the International Congress of Mathematicians, Seoul 2014, in the section on Logic and Foundations, where he presented his work on applications of logic to operator algebras. == Awards, distinctions, and recognitions == Sacks Prize for the best doctorate in mathematical logic, 1997 Governor General's Gold Medal for one of the two best doctorates at the University of Toronto, 1998 The Canadian Association for Graduate Studies/University Microfilms International Distinguished Dissertation Award, for the best dissertation in engineering, medicine and the natural sciences in Canada, 1998 Dean's Award for Outstanding Research, York University, 2006 Faculty Excellence in Research Award (Established Research Award), Faculty of Science, York University, 2017 == Sources == == External links == Ilijas Farah publications indexed by Google Scholar Farah, Ilijas (2019). Combinatorial set theory of C*-algebras. Cham: Springer. ISBN 978-3-030-27093-3. OCLC 1134074666. Farah, Ilijas (2000). Analytic quotients: theory of liftings for quotients over analytic ideals on the integers. Providence, R.I.: American Mathematical Society. ISBN 978-1-4704-0293-8.
OCLC 851086221. Ilijas Farah: Krajnja proširenja modela [End Extensions of Models] (in Serbian), MSc thesis, University of Belgrade, 1992.
|
Wikipedia:Ilkka Niiniluoto#0
|
Ilkka Maunu Olavi Niiniluoto (born 12 March 1946) is a Finnish philosopher and mathematician, serving as a professor of philosophy at the University of Helsinki since 1981. He was appointed rector of the University of Helsinki on 1 August 2003 for five years. On 25 April 2008, he was chosen to succeed Kari Raivio as chancellor of the University of Helsinki, beginning 1 June 2008. == Work == A significant contribution to the philosophy of science, particularly to the topic of verisimilitude or truth approximation, is his Truthlikeness (Synthese Library, Springer, 1987). Another notable publication is Critical Scientific Realism (Oxford University Press, 2002). In the 1990s, Niiniluoto, together with other university employees, organized a petition to the consistory of the university to abolish ecclesiastical remnants from university ceremonies. Ilkka Niiniluoto is editor-in-chief of Acta Philosophica Fennica, the leading philosophical journal of Finland. He served as chancellor of the University of Helsinki from 2008 to 2013. In 2011 he gave a Turku Agora Lecture. == Books == Niiniluoto, Ilkka; Woleński, Jan; Sintonen, Matti (2004). Handbook of epistemology. Dordrecht, Boston: Kluwer Academic Publishers. ISBN 9781402019852. == References == == External links == helsinki.fi Ilkka Niiniluoto in 375 humanists 14.04.2015, Faculty of Arts, University of Helsinki
|
Wikipedia:Ilse Fischer#0
|
Ilse Fischer (born 29 June 1975) is an Austrian mathematician whose research concerns enumerative combinatorics and algebraic combinatorics, connecting these topics to representation theory and statistical mechanics. She is a professor of mathematics at the University of Vienna. == Education and career == Fischer was born in Klagenfurt. She studied at the University of Vienna beginning in 1993, earning a master's degree (mag. rer. nat.), doctorate (dr. rer. nat.), and habilitation there respectively in 1998, 2000, and 2006. Her doctoral dissertation, Enumeration of perfect matchings: Rhombus tilings and Pfaffian graphs, was jointly supervised by Christian Krattenthaler and Franz Rendl, and her habilitation thesis was A polynomial method for the enumeration of plane partitions and alternating sign matrices. She worked as an assistant at Alpen-Adria-Universität Klagenfurt from 1999 to 2004, with a year of postdoctoral research at the Massachusetts Institute of Technology in 2001. She moved to the University of Vienna in 2004, and at Vienna she was promoted to associate professor in 2011 and to full professor in 2017. == Recognition == Fischer won the 2006 Dr. Maria Schaumayer Prize, and the 2009 Start-Preis of the Austrian Science Fund. With Roger Behrend and Matjaž Konvalinka, Fischer is a winner of the 2019 David P. Robbins Prize of the American Mathematical Society and Mathematical Association of America, for their joint research on alternating sign matrices. == References == == External links == Official website
|
Wikipedia:Ilya M. Sobol'#0
|
Ilya Meyerovich Sobol’ (Russian: Илья Меерович Соболь; born 15 August 1926) is a Russian mathematician, known for his work on Monte Carlo methods. His research spans several applications, from nuclear studies to astrophysics, and has contributed significantly to the field of sensitivity analysis. == Biography == Ilya Meyerovich Sobol’ was born on August 15, 1926, in Panevėžys (Lithuania). When World War II reached Lithuania, his family was evacuated to Izhevsk, where Sobol’ attended high school, finishing with distinction in 1943. Sobol’ then moved to Moscow to study at the Faculty of Mechanics and Mathematics of Moscow State University, where he graduated with distinction in 1948. Ilya Meyerovich Sobol’ names Aleksandr Khinchin, Viktor Vladimirovich Nemytskii, and Andrey Kolmogorov as his teachers. In 1949, Sobol’ joined a laboratory of the Geophysical Complex Expedition at the Institute of Geophysics of the USSR Academy of Sciences led by Andrey Nikolayevich Tikhonov. This laboratory was subsequently merged into the Institute of Applied Mathematics of the USSR Academy of Sciences. For many years he was a professor at the Department of Mathematical Physics of the Moscow Engineering Physics Institute and an active contributor to the Journal of Computational Mathematics and Mathematical Physics. == Contribution == I.M. Sobol’ has contributed about one hundred and seventy scientific papers and several textbooks to the scientific literature. In his student years, Sobol’ was actively engaged in solving various mathematical problems. His first scientific works, concerning ordinary differential equations, were published in renowned mathematical journals in 1948. Some of his subsequent studies were also devoted to this subject. During his years at the Institute of Applied Mathematics, Sobol’ took part in the computations for the first Soviet atomic and hydrogen bombs. He also worked with Alexander Samarskii on the computation of temperature waves.
In 1958, Sobol’ started to work on pseudo-random numbers, and then moved on to developing new approaches that were later called quasi-Monte Carlo methods (QMC). He was the first to use Haar functions in mathematical applications. Sobol’ defended his D.Sc. dissertation "The Method of Haar Series in the Theory of Quadrature Formulas" in 1972. The results had previously been published in his well-known monograph "Multidimensional Quadrature Formulas and Haar Functions". Sobol’ applied Monte Carlo methods in various scientific fields, including astrophysics. He worked actively with the prominent physicist Rashid Sunyaev on Monte Carlo calculations of X-ray source spectra, which led to the discovery of the Sunyaev–Zel'dovich effect, which is due to electrons associated with gas in galaxy clusters scattering the cosmic microwave background radiation. He is especially known for developing a new quasi-random number sequence known as the LPτ sequence, or Sobol’ sequences. These are now known as digital (t,s)-sequences in base 2, and they can be used to construct digital (t,m,s)-nets. Sobol’ demonstrated that these sequences are superior to many existing competing methods (see the review in Bratley and Fox, 1988). For this reason Sobol’ sequences are widely used in many fields for the evaluation of integrals, optimization, experimental design, sensitivity analysis, and finance. The key property of Sobol’ sequences is that they provide a greatly accelerated convergence rate in Monte Carlo integration compared with what can be obtained using pseudo-random numbers. His achievements in astrophysics include the application of Monte Carlo methods to the mathematical simulation of X-ray and gamma spectra of compact relativistic objects. He studied particle transmission (neutrons, photons).
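The base-2 digital construction can be illustrated in its simplest case: the first coordinate of a Sobol’ sequence reduces to the van der Corput sequence in base 2 (the binary digits of the index, mirrored about the radix point). A minimal Python sketch (illustrative only; the full multi-dimensional construction additionally uses direction numbers, which are omitted here):

```python
def van_der_corput_base2(n):
    """Return the n-th element (n >= 1) of the van der Corput sequence in base 2.

    The binary digits of n are mirrored about the radix point:
    n = 6 = 110_2  ->  0.011_2 = 0.375.
    """
    x, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        x += (n & 1) / denom   # least-significant bit contributes first
        n >>= 1
    return x

# Successive points fill [0, 1) evenly, halving the largest gap at each power of 2:
points = [van_der_corput_base2(n) for n in range(1, 8)]
# points == [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

This even gap-filling behaviour is what yields the low discrepancy, and hence the accelerated convergence of quasi-Monte Carlo integration, compared with pseudo-random points.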
His contributions to sensitivity analysis include the development of the variance-based sensitivity indices which bear his name (Sobol’ indices) and of Derivative-based Global Sensitivity Measures (DGSM). Sobol’, together with R. Statnikov, proposed a new approach to the problems of multi-objective optimization and multi-objective decision making. This approach allows researchers and practitioners to solve problems with non-differentiable objective functions and non-linear constraints. These results are described in their monograph. His book, Monte Carlo Methods, originally published in Russian in 1968, appeared in a US edition in 1994. He also contributed to the first multi-author book on sensitivity analysis. == Legacy == Sobol’s work is cited in textbooks on uncertainty quantification, financial engineering, and quasi-Monte Carlo methods. His work on sensitivity analysis has been a source of inspiration for many scholars. == References == == External links == Link to the Monte Carlo Primer Google books page for I.M. Sobol’ primer on Monte Carlo Methods A page devoted to I.M. Sobol’ from Wilmott magazine Dedication to I.M. Sobol’ of a recent book on sensitivity analysis Sobol’ entry at the ACM Digital Library
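The variance-based indices mentioned above admit simple Monte Carlo estimators. A minimal sketch of the classical "pick-freeze" form of Sobol’s estimator, applied to an illustrative toy model f(x1, x2) = x1 + 2·x2 with independent uniform inputs, whose exact first-order indices are S1 = 0.2 and S2 = 0.8 (the model, sample size, and tolerances are chosen for illustration only):

```python
import random

def f(x1, x2):
    # Additive toy model: Var(f) = 1/12 + 4/12 = 5/12, so S1 = 0.2 and S2 = 0.8.
    return x1 + 2.0 * x2

random.seed(0)
N = 100_000
A = [(random.random(), random.random()) for _ in range(N)]
B = [(random.random(), random.random()) for _ in range(N)]

f_A = [f(*a) for a in A]
mean = sum(f_A) / N
var = sum((y - mean) ** 2 for y in f_A) / N

def first_order_index(i):
    """Pick-freeze estimate of S_i = Var(E[f | x_i]) / Var(f).

    Uses E[f(A) f(AB_i)] - mean^2, where AB_i keeps coordinate i from A
    and resamples the remaining coordinates from B.
    """
    acc = 0.0
    for a, ya, b in zip(A, f_A, B):
        ab = list(b)
        ab[i] = a[i]
        acc += ya * f(*ab)
    return (acc / N - mean ** 2) / var

s1, s2 = first_order_index(0), first_order_index(1)
# s1 is close to 0.2 and s2 is close to 0.8
```

Because the toy model is additive, the first-order indices sum to 1; for models with interactions, the shortfall from 1 measures the variance carried by interaction terms.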
|
Wikipedia:Immanant#0
|
In mathematics, the immanant of a matrix was defined by Dudley E. Littlewood and Archibald Read Richardson as a generalisation of the concepts of determinant and permanent. Let λ = ( λ 1 , λ 2 , … ) {\displaystyle \lambda =(\lambda _{1},\lambda _{2},\ldots )} be a partition of an integer n {\displaystyle n} and let χ λ {\displaystyle \chi _{\lambda }} be the corresponding irreducible representation-theoretic character of the symmetric group S n {\displaystyle S_{n}} . The immanant of an n × n {\displaystyle n\times n} matrix A = ( a i j ) {\displaystyle A=(a_{ij})} associated with the character χ λ {\displaystyle \chi _{\lambda }} is defined as the expression Imm λ ( A ) = ∑ σ ∈ S n χ λ ( σ ) a 1 σ ( 1 ) a 2 σ ( 2 ) ⋯ a n σ ( n ) = ∑ σ ∈ S n χ λ ( σ ) ∏ i = 1 n a i σ ( i ) . {\displaystyle \operatorname {Imm} _{\lambda }(A)=\sum _{\sigma \in S_{n}}\chi _{\lambda }(\sigma )a_{1\sigma (1)}a_{2\sigma (2)}\cdots a_{n\sigma (n)}=\sum _{\sigma \in S_{n}}\chi _{\lambda }(\sigma )\prod _{i=1}^{n}a_{i\sigma (i)}.} == Examples == The determinant is a special case of the immanant, where χ λ {\displaystyle \chi _{\lambda }} is the alternating character sgn {\displaystyle \operatorname {sgn} } , of Sn, defined by the parity of a permutation. The permanent is the case where χ λ {\displaystyle \chi _{\lambda }} is the trivial character, which is identically equal to 1. 
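The defining sum runs over all n! permutations, so for small matrices it can be evaluated directly. A minimal Python sketch (illustrative; it recovers the determinant from the alternating character and the permanent from the trivial character):

```python
from itertools import permutations

def immanant(A, char):
    """Sum over permutations s of char(s) * prod_i A[i][s(i)]."""
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][s[i]]
        total += char(s) * prod
    return total

def sign(s):
    # Alternating character: parity of the permutation via its inversion count.
    inv = sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])
    return -1 if inv % 2 else 1

A = [[1, 2], [3, 4]]
det = immanant(A, sign)            # alternating character -> determinant: -2
perm = immanant(A, lambda s: 1)    # trivial character -> permanent: 10
```

For a general irreducible character, `char` would return the character value of the conjugacy class (cycle type) of `s`.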
For example, for 3 × 3 {\displaystyle 3\times 3} matrices, there are three irreducible representations of S 3 {\displaystyle S_{3}} , whose characters on the three conjugacy classes (the identity, the transpositions, and the 3-cycles, respectively) are χ 1 = ( 1 , 1 , 1 ) {\displaystyle \chi _{1}=(1,1,1)} , χ 2 = ( 1 , − 1 , 1 ) {\displaystyle \chi _{2}=(1,-1,1)} , and χ 3 = ( 2 , 0 , − 1 ) {\displaystyle \chi _{3}=(2,0,-1)} . As stated above, χ 1 {\displaystyle \chi _{1}} produces the permanent and χ 2 {\displaystyle \chi _{2}} produces the determinant, but χ 3 {\displaystyle \chi _{3}} produces the operation that maps as follows: ( a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 ) ⇝ 2 a 11 a 22 a 33 − a 12 a 23 a 31 − a 13 a 21 a 32 {\displaystyle {\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}}\rightsquigarrow 2a_{11}a_{22}a_{33}-a_{12}a_{23}a_{31}-a_{13}a_{21}a_{32}} == Properties == The immanant shares several properties with the determinant and permanent. In particular, the immanant is multilinear in the rows and columns of the matrix; and the immanant is invariant under simultaneous permutations of the rows or columns by the same element of the symmetric group. Littlewood and Richardson studied the relation of the immanant to Schur functions in the representation theory of the symmetric group. The necessary and sufficient conditions for the immanant of a Gram matrix to be 0 {\displaystyle 0} are given by Gamas's Theorem. == References == D. E. Littlewood; A.R. Richardson (1934). "Group characters and algebras". Philosophical Transactions of the Royal Society A. 233 (721–730): 99–124. Bibcode:1934RSPTA.233...99L. doi:10.1098/rsta.1934.0015. D. E. Littlewood (1950). The Theory of Group Characters and Matrix Representations of Groups (2nd ed.). Oxford Univ. Press (reprinted by AMS, 2006). p. 81.
|
Wikipedia:Immanuel Bomze#0
|
Immanuel Bomze is an Austrian mathematician. In his doctoral thesis, he completely classified all (more than 100 topologically different) possible flows of the generalized Lotka–Volterra dynamics (generalized Lotka–Volterra equation) on the plane, employing the equivalence of this dynamics to the 3-type replicator equation. In “Non-cooperative two-person games in biology: a classification” (1986) and his book jointly authored with B. M. Pötscher (Game theoretic foundations of evolutionary stability, Springer 1989), he popularized the field of evolutionary game theory, which at that time received most attention within theoretical biology, among researchers in economics and the social sciences. Around the turn of the millennium, he coined, together with his co-authors, the now widely used terms "Standard Quadratic Optimization" and "Copositive Optimization" or "Copositive Programming". While the former deals with one of the simplest problem classes in non-linear optimization that is nonetheless NP-hard, copositive optimization allows a conic reformulation of these hard problems as linear optimization problems over a closed convex cone of symmetric matrices, so-called conic optimization problems. In this class of problems, the full extent of the complexity is put into the cone constraint, while the structural constraints and the objective function are linear and therefore easy to handle. == Research interests == Bomze's research interests are in the areas of nonlinear optimization, qualitative theory of dynamical systems, game theory, mathematical modeling and statistics, where he has edited one and published four books, as well as over 100 peer-reviewed articles in scientific journals and monographs. The list of his coauthors comprises over seventy scientists from more than a dozen countries on four continents. == Education and professional career == Bomze was born in Vienna, Austria, in 1958.
He received the degree Magister rerum naturalium in Mathematics at the University of Vienna in 1981. After a postgraduate scholarship at the Institute for Advanced Studies, Vienna, from 1981 to 1982, he received the degree Doctor rerum naturalium in Mathematics at the University of Vienna. He has held several visiting research positions at research institutions across Europe, the Americas, Asia and Australia. Since 2004, he has held a chair (full professor) in Applied Mathematics and Statistics at the University of Vienna. In 2014, he was elected a EUROPT fellow. EUROPT is the Continuous Optimization Working Group of EURO. == Professional activities == Bomze is currently past president of EURO, having served on its Executive Committee from 2018 to 2021 and as president in 2019 and 2020. As a member of program and/or organizing committees, he has co-organized various scientific events. He is an associate editor for five international journals. He has acted as a reporting referee for several science foundations and councils (based in Germany, Great Britain, Hong Kong, Israel, Italy, the Netherlands, Portugal, Spain, and the USA) and for over 50 scientific journals. Between 2011 and 2017 he served as an editor of the European Journal of Operational Research, one of the worldwide leading journals in the field. He is currently editor-in-chief of the EURO Journal on Computational Optimization. == References == == External links == EURO Journal on Computational Optimization European Journal of Operational Research The Mathematics Genealogy Project University of Vienna, Immanuel Bomze Zentralblatt für Mathematik EWG EUROPT, EURO working group on Continuous Optimization
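The connection described in this article between replicator dynamics and standard quadratic optimization (maximizing a quadratic form over the probability simplex) can be sketched numerically. The discrete replicator update below monotonically increases x^T A x for symmetric matrices with nonnegative entries (by the Baum–Eagon inequality); it is only a local heuristic for this NP-hard problem class, and the matrix, starting point, and iteration count are purely illustrative:

```python
def replicator_step(x, A):
    """Discrete replicator update: x_i <- x_i * (A x)_i / (x^T A x)."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * Ax[i] for i in range(n))
    return [x[i] * Ax[i] / avg for i in range(n)]

# Maximize x^T A x over the simplex for A = identity: the maximum value 1
# is attained at a vertex, and an asymmetric interior start drifts to the
# vertex of its largest coordinate.
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
x = [0.5, 0.3, 0.2]
for _ in range(200):
    x = replicator_step(x, A)
# x is now concentrated on the first coordinate
```

Fixed points of this dynamics over the simplex correspond to the critical points of the standard quadratic optimization problem, which is the link exploited in Bomze's work.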
|
Wikipedia:Implicit function#0
|
In mathematics, an implicit equation is a relation of the form R ( x 1 , … , x n ) = 0 , {\displaystyle R(x_{1},\dots ,x_{n})=0,} where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x 2 + y 2 − 1 = 0. {\displaystyle x^{2}+y^{2}-1=0.} An implicit function is a function that is defined by an implicit equation, that relates one of the variables, considered as the value of the function, with the others considered as the arguments.: 204–206 For example, the equation x 2 + y 2 − 1 = 0 {\displaystyle x^{2}+y^{2}-1=0} of the unit circle defines y as an implicit function of x if −1 ≤ x ≤ 1, and y is restricted to nonnegative values. The implicit function theorem provides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zero multivariable functions that are continuously differentiable. == Examples == === Inverse functions === A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If g is a function of x that has a unique inverse, then the inverse function of g, called g−1, is the unique function giving a solution of the equation y = g ( x ) {\displaystyle y=g(x)} for x in terms of y. This solution can then be written as x = g − 1 ( y ) . {\displaystyle x=g^{-1}(y)\,.} Defining g−1 as the inverse of g is an implicit definition. For some functions g, g−1(y) can be written out explicitly as a closed-form expression — for instance, if g(x) = 2x − 1, then g−1(y) = 1/2(y + 1). However, this is often not possible, or only by introducing a new notation (as in the product log example below). Intuitively, an inverse function is obtained from g by interchanging the roles of the dependent and independent variables. Example: The product log is an implicit function giving the solution for x of the equation y − xex = 0. 
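Although the product log has no closed form in elementary functions, the implicit equation y − xe^x = 0 can be solved for x numerically. A minimal Newton-iteration sketch (the starting guess and tolerance are illustrative; this covers the principal branch for y > 0):

```python
import math

def product_log(y, x0=0.5, tol=1e-12):
    """Solve y = x * exp(x) for x by Newton's method (principal branch, y > 0)."""
    x = x0
    for _ in range(100):
        residual = x * math.exp(x) - y
        derivative = (x + 1.0) * math.exp(x)   # d/dx [x e^x]
        step = residual / derivative
        x -= step
        if abs(step) < tol:
            break
    return x

w = product_log(1.0)   # the omega constant, approximately 0.567143
```

This is the usual situation with implicitly defined inverses: the defining equation is easy to state, while values of the function must be obtained numerically or via special notation (here, the Lambert W function).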
=== Algebraic functions === An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable x gives a solution for y of an equation a n ( x ) y n + a n − 1 ( x ) y n − 1 + ⋯ + a 0 ( x ) = 0 , {\displaystyle a_{n}(x)y^{n}+a_{n-1}(x)y^{n-1}+\cdots +a_{0}(x)=0\,,} where the coefficients ai(x) are polynomial functions of x. This algebraic function can be written as the right side of the solution equation y = f(x). Written like this, f is a multi-valued implicit function. Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation: x 2 + y 2 − 1 = 0 . {\displaystyle x^{2}+y^{2}-1=0\,.} Solving for y gives an explicit solution: y = ± 1 − x 2 . {\displaystyle y=\pm {\sqrt {1-x^{2}}}\,.} But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation as y = f(x), where f is the multi-valued implicit function. While explicit solutions can be found for equations that are quadratic, cubic, and quartic in y, the same is not in general true for quintic and higher degree equations, such as y 5 + 2 y 4 − 7 y 3 + 3 y 2 − 6 y − x = 0 . {\displaystyle y^{5}+2y^{4}-7y^{3}+3y^{2}-6y-x=0\,.} Nevertheless, one can still refer to the implicit solution y = f(x) involving the multi-valued implicit function f. == Caveats == Not every equation R(x, y) = 0 implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given by x − C(y) = 0 where C is a cubic polynomial having a "hump" in its graph. Thus, for an implicit function to be a true (single-valued) function it might be necessary to use just part of the graph. 
An implicit function can sometimes be successfully defined as a true function only after "zooming in" on some part of the x-axis and "cutting away" some unwanted function branches. Then an equation expressing y as an implicit function of the other variables can be written. The defining equation R(x, y) = 0 can also have other pathologies. For example, the equation x = 0 does not imply a function f(x) giving solutions for y at all; it is a vertical line. In order to avoid a problem like this, various constraints are frequently imposed on the allowable sorts of equations or on the domain. The implicit function theorem provides a uniform way of handling these sorts of pathologies. == Implicit differentiation == In calculus, a method called implicit differentiation makes use of the chain rule to differentiate implicitly defined functions. To differentiate an implicit function y(x), defined by an equation R(x, y) = 0, it is not generally possible to solve it explicitly for y and then differentiate. Instead, one can totally differentiate R(x, y) = 0 with respect to x and y and then solve the resulting linear equation for dy/dx to explicitly get the derivative in terms of x and y. Even when it is possible to explicitly solve the original equation, the formula resulting from total differentiation is, in general, much simpler and easier to use. === Examples === ==== Example 1 ==== Consider y + x + 5 = 0 . {\displaystyle y+x+5=0\,.} This equation is easy to solve for y, giving y = − x − 5 , {\displaystyle y=-x-5\,,} where the right side is the explicit form of the function y(x). Differentiation then gives dy/dx = −1. Alternatively, one can totally differentiate the original equation: d y d x + d x d x + d d x ( 5 ) = 0 ; d y d x + 1 + 0 = 0 . 
{\displaystyle {\begin{aligned}{\frac {dy}{dx}}+{\frac {dx}{dx}}+{\frac {d}{dx}}(5)&=0\,;\\[6px]{\frac {dy}{dx}}+1+0&=0\,.\end{aligned}}} Solving for dy/dx gives d y d x = − 1 , {\displaystyle {\frac {dy}{dx}}=-1\,,} the same answer as obtained previously. ==== Example 2 ==== An example of an implicit function for which implicit differentiation is easier than using explicit differentiation is the function y(x) defined by the equation x 4 + 2 y 2 = 8 . {\displaystyle x^{4}+2y^{2}=8\,.} To differentiate this explicitly with respect to x, one has first to get y ( x ) = ± 8 − x 4 2 , {\displaystyle y(x)=\pm {\sqrt {\frac {8-x^{4}}{2}}}\,,} and then differentiate this function. This creates two derivatives: one for y ≥ 0 and another for y < 0. It is substantially easier to implicitly differentiate the original equation: 4 x 3 + 4 y d y d x = 0 , {\displaystyle 4x^{3}+4y{\frac {dy}{dx}}=0\,,} giving d y d x = − 4 x 3 4 y = − x 3 y . {\displaystyle {\frac {dy}{dx}}={\frac {-4x^{3}}{4y}}=-{\frac {x^{3}}{y}}\,.} ==== Example 3 ==== Often, it is difficult or impossible to solve explicitly for y, and implicit differentiation is the only feasible method of differentiation. An example is the equation y 5 − y = x . {\displaystyle y^{5}-y=x\,.} It is impossible to algebraically express y explicitly as a function of x, and therefore one cannot find dy/dx by explicit differentiation. Using the implicit method, dy/dx can be obtained by differentiating the equation to obtain 5 y 4 d y d x − d y d x = d x d x , {\displaystyle 5y^{4}{\frac {dy}{dx}}-{\frac {dy}{dx}}={\frac {dx}{dx}}\,,} where dx/dx = 1. Factoring out dy/dx shows that ( 5 y 4 − 1 ) d y d x = 1 , {\displaystyle \left(5y^{4}-1\right){\frac {dy}{dx}}=1\,,} which yields the result d y d x = 1 5 y 4 − 1 , {\displaystyle {\frac {dy}{dx}}={\frac {1}{5y^{4}-1}}\,,} which is defined for y ≠ ± 1 5 4 and y ≠ ± i 5 4 . 
{\displaystyle y\neq \pm {\frac {1}{\sqrt[{4}]{5}}}\quad {\text{and}}\quad y\neq \pm {\frac {i}{\sqrt[{4}]{5}}}\,.} === General formula for derivative of implicit function === If R(x, y) = 0, the derivative of the implicit function y(x) is given by: §11.5 d y d x = − ∂ R ∂ x ∂ R ∂ y = − R x R y , {\displaystyle {\frac {dy}{dx}}=-{\frac {\,{\frac {\partial R}{\partial x}}\,}{\frac {\partial R}{\partial y}}}=-{\frac {R_{x}}{R_{y}}}\,,} where Rx and Ry indicate the partial derivatives of R with respect to x and y. The above formula comes from using the generalized chain rule to obtain the total derivative — with respect to x — of both sides of R(x, y) = 0: ∂ R ∂ x d x d x + ∂ R ∂ y d y d x = 0 , {\displaystyle {\frac {\partial R}{\partial x}}{\frac {dx}{dx}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,} hence ∂ R ∂ x + ∂ R ∂ y d y d x = 0 , {\displaystyle {\frac {\partial R}{\partial x}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,} which, when solved for dy/dx, gives the expression above. == Implicit function theorem == Let R(x, y) be a differentiable function of two variables, and (a, b) be a pair of real numbers such that R(a, b) = 0. If ∂R/∂y ≠ 0, then R(x, y) = 0 defines an implicit function that is differentiable in some small enough neighbourhood of (a, b); in other words, there is a differentiable function f that is defined and differentiable in some neighbourhood of a, such that R(x, f(x)) = 0 for x in this neighbourhood. The condition ∂R/∂y ≠ 0 means that (a, b) is a regular point of the implicit curve of implicit equation R(x, y) = 0 where the tangent is not vertical. In a less technical language, implicit functions exist and can be differentiated, if the curve has a non-vertical tangent.: §11.5 == In algebraic geometry == Consider a relation of the form R(x1, …, xn) = 0, where R is a multivariable polynomial. 
The set of the values of the variables that satisfy this relation is called an implicit curve if n = 2 and an implicit surface if n = 3. The implicit equations are the basis of algebraic geometry, whose basic subjects of study are the simultaneous solutions of several implicit equations whose left-hand sides are polynomials. These sets of simultaneous solutions are called affine algebraic sets. == In differential equations == The solutions of differential equations generally appear expressed by an implicit function. == Applications in economics == === Marginal rate of substitution === In economics, when the level set R(x, y) = 0 is an indifference curve for the quantities x and y consumed of two goods, the absolute value of the implicit derivative dy/dx is interpreted as the marginal rate of substitution of the two goods: how much more of y one must receive in order to be indifferent to a loss of one unit of x. === Marginal rate of technical substitution === Similarly, sometimes the level set R(L, K) is an isoquant showing various combinations of utilized quantities L of labor and K of physical capital each of which would result in the production of the same given quantity of output of some good. In this case the absolute value of the implicit derivative dK/dL is interpreted as the marginal rate of technical substitution between the two factors of production: how much more capital the firm must use to produce the same amount of output with one less unit of labor. === Optimization === Often in economic theory, some function such as a utility function or a profit function is to be maximized with respect to a choice vector x even though the objective function has not been restricted to any specific functional form. The implicit function theorem guarantees that the first-order conditions of the optimization define an implicit function for each element of the optimal vector x* of the choice vector x. 
When profit is being maximized, typically the resulting implicit functions are the labor demand function and the supply functions of various goods. When utility is being maximized, typically the resulting implicit functions are the labor supply function and the demand functions for various goods. Moreover, the influence of the problem's parameters on x* — the partial derivatives of the implicit function — can be expressed as total derivatives of the system of first-order conditions found using total differentiation. == See also == == References == == Further reading == Binmore, K. G. (1983). "Implicit Functions". Calculus. New York: Cambridge University Press. pp. 198–211. ISBN 0-521-28952-1. Rudin, Walter (1976). Principles of Mathematical Analysis. Boston: McGraw-Hill. pp. 223–228. ISBN 0-07-054235-X. Simon, Carl P.; Blume, Lawrence (1994). "Implicit Functions and Their Derivatives". Mathematics for Economists. New York: W. W. Norton. pp. 334–371. ISBN 0-393-95733-0. == External links == Archived at Ghostarchive and the Wayback Machine: "Implicit Differentiation, What's Going on Here?". 3Blue1Brown. Essence of Calculus. May 3, 2017 – via YouTube.
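The general formula dy/dx = −Rx/Ry from the implicit-differentiation section above can be checked numerically; a minimal sketch using the unit circle, comparing the formula against a finite-difference slope of the explicit upper branch (the evaluation point and step size are illustrative):

```python
import math

def dydx_implicit(Rx, Ry, x, y):
    """General implicit-derivative formula dy/dx = -R_x / R_y."""
    return -Rx(x, y) / Ry(x, y)

# R(x, y) = x^2 + y^2 - 1 (unit circle); upper branch y = sqrt(1 - x^2)
def Rx(x, y):
    return 2.0 * x

def Ry(x, y):
    return 2.0 * y

x0 = 0.6
y0 = math.sqrt(1.0 - x0 ** 2)               # 0.8
implicit = dydx_implicit(Rx, Ry, x0, y0)    # -x/y = -0.75

# Central finite difference of the explicit branch g(x) = sqrt(1 - x^2)
h = 1e-6
def g(x):
    return math.sqrt(1.0 - x ** 2)

numeric = (g(x0 + h) - g(x0 - h)) / (2.0 * h)
# implicit and numeric agree closely (both are -0.75)
```

The same check works for equations with no explicit solution, such as y^5 − y = x, provided y is first obtained numerically at the chosen x.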
|
Wikipedia:Implicit function theorem#0
|
In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of m equations fi (x1, ..., xn, y1, ..., ym) = 0, i = 1, ..., m (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each yi ) at a point, the m variables yi are differentiable functions of the xj in some neighborhood of the point. As these functions generally cannot be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem. In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function. == History == Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables. == Two variables case == Let f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } be a continuously differentiable function defining the implicit equation of a curve f ( x , y ) = 0 {\displaystyle f(x,y)=0} . Let ( x 0 , y 0 ) {\displaystyle (x_{0},y_{0})} be a point on the curve, that is, a point such that f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} . In this simple case, the implicit function theorem can be stated as follows: if ∂ f ∂ y ( x 0 , y 0 ) ≠ 0 {\textstyle {\frac {\partial f}{\partial y}}(x_{0},y_{0})\neq 0} , then there exist an open interval around x 0 {\displaystyle x_{0}} and a unique continuously differentiable function φ {\displaystyle \varphi } defined on this interval such that φ ( x 0 ) = y 0 {\displaystyle \varphi (x_{0})=y_{0}} and f ( x , φ ( x ) ) = 0 {\displaystyle f(x,\varphi (x))=0} for all x {\displaystyle x} in the interval. Proof.
By differentiating the equation f ( x , φ ( x ) ) = 0 {\displaystyle f(x,\varphi (x))=0} , one gets ∂ f ∂ x ( x , φ ( x ) ) + φ ′ ( x ) ∂ f ∂ y ( x , φ ( x ) ) = 0. {\displaystyle {\frac {\partial f}{\partial x}}(x,\varphi (x))+\varphi '(x)\,{\frac {\partial f}{\partial y}}(x,\varphi (x))=0.} and thus φ ′ ( x ) = − ∂ f ∂ x ( x , φ ( x ) ) ∂ f ∂ y ( x , φ ( x ) ) . {\displaystyle \varphi '(x)=-{\frac {{\frac {\partial f}{\partial x}}(x,\varphi (x))}{{\frac {\partial f}{\partial y}}(x,\varphi (x))}}.} This gives an ordinary differential equation for φ {\displaystyle \varphi } , with the initial condition φ ( x 0 ) = y 0 {\displaystyle \varphi (x_{0})=y_{0}} . Since ∂ f ∂ y ( x 0 , y 0 ) ≠ 0 , {\textstyle {\frac {\partial f}{\partial y}}(x_{0},y_{0})\neq 0,} the right-hand side of the differential equation is continuous, upper bounded and lower bounded on some closed interval around x 0 {\displaystyle x_{0}} . It is therefore Lipschitz continuous, and the Cauchy-Lipschitz theorem applies for proving the existence of a unique solution. == First example == If we define the function f(x, y) = x2 + y2, then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y) | f(x, y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely ± 1 − x 2 {\displaystyle \pm {\sqrt {1-x^{2}}}} . However, it is possible to represent part of the circle as the graph of a function of one variable. If we let g 1 ( x ) = 1 − x 2 {\displaystyle g_{1}(x)={\sqrt {1-x^{2}}}} for −1 ≤ x ≤ 1, then the graph of y = g1(x) provides the upper half of the circle. Similarly, if g 2 ( x ) = − 1 − x 2 {\displaystyle g_{2}(x)=-{\sqrt {1-x^{2}}}} , then the graph of y = g2(x) gives the lower half of the circle. 
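The local solvability that the theorem guarantees can be seen numerically in this example: near a point of the circle where ∂f/∂y ≠ 0, a one-dimensional Newton iteration in y recovers the branch g1; at (1, 0), where ∂f/∂y = 0, the construction breaks down. A minimal sketch (the starting point and iteration count are illustrative):

```python
def solve_y(x, y_start, iters=50):
    """Newton's method in y for f(x, y) = x^2 + y^2 - 1 = 0 at fixed x."""
    y = y_start
    for _ in range(iters):
        fy = 2.0 * y                  # df/dy, the quantity the theorem requires to be nonzero
        if fy == 0.0:
            raise ZeroDivisionError("df/dy = 0: the theorem's hypothesis fails here")
        y -= (x * x + y * y - 1.0) / fy
    return y

# Near (0.6, 0.8) the hypothesis df/dy != 0 holds, and Newton recovers g1:
y = solve_y(0.6, y_start=1.0)    # converges to sqrt(1 - 0.36) = 0.8
# At (1, 0), df/dy vanishes and the update is undefined, matching the
# failure of the theorem at points with a vertical tangent.
```

The same starting value y_start = 1.0 works for every x in a neighbourhood of 0.6, which is exactly the local, branch-wise existence the theorem asserts.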
The purpose of the implicit function theorem is to tell us that functions like g1(x) and g2(x) almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that g1(x) and g2(x) are differentiable, and it even works in situations where we do not have a formula for f(x, y). == Definitions == Let f : R n + m → R m {\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m}} be a continuously differentiable function. We think of R n + m {\displaystyle \mathbb {R} ^{n+m}} as the Cartesian product R n × R m , {\displaystyle \mathbb {R} ^{n}\times \mathbb {R} ^{m},} and we write a point of this product as ( x , y ) = ( x 1 , … , x n , y 1 , … y m ) . {\displaystyle (\mathbf {x} ,\mathbf {y} )=(x_{1},\ldots ,x_{n},y_{1},\ldots y_{m}).} Starting from the given function f {\displaystyle f} , our goal is to construct a function g : R n → R m {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} whose graph ( x , g ( x ) ) {\displaystyle ({\textbf {x}},g({\textbf {x}}))} is precisely the set of all ( x , y ) {\displaystyle ({\textbf {x}},{\textbf {y}})} such that f ( x , y ) = 0 {\displaystyle f({\textbf {x}},{\textbf {y}})={\textbf {0}}} . As noted above, this may not always be possible. We will therefore fix a point ( a , b ) = ( a 1 , … , a n , b 1 , … , b m ) {\displaystyle ({\textbf {a}},{\textbf {b}})=(a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} which satisfies f ( a , b ) = 0 {\displaystyle f({\textbf {a}},{\textbf {b}})={\textbf {0}}} , and we will ask for a g {\displaystyle g} that works near the point ( a , b ) {\displaystyle ({\textbf {a}},{\textbf {b}})} . 
In other words, we want an open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} containing a {\displaystyle {\textbf {a}}} , an open set V ⊂ R m {\displaystyle V\subset \mathbb {R} ^{m}} containing b {\displaystyle {\textbf {b}}} , and a function g : U → V {\displaystyle g:U\to V} such that the graph of g {\displaystyle g} satisfies the relation f = 0 {\displaystyle f={\textbf {0}}} on U × V {\displaystyle U\times V} , and that no other points within U × V {\displaystyle U\times V} do so. In symbols, { ( x , g ( x ) ) ∣ x ∈ U } = { ( x , y ) ∈ U × V ∣ f ( x , y ) = 0 } . {\displaystyle \{(\mathbf {x} ,g(\mathbf {x} ))\mid \mathbf {x} \in U\}=\{(\mathbf {x} ,\mathbf {y} )\in U\times V\mid f(\mathbf {x} ,\mathbf {y} )=\mathbf {0} \}.} To state the implicit function theorem, we need the Jacobian matrix of f {\displaystyle f} , which is the matrix of the partial derivatives of f {\displaystyle f} . Abbreviating ( a 1 , … , a n , b 1 , … , b m ) {\displaystyle (a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} to ( a , b ) {\displaystyle ({\textbf {a}},{\textbf {b}})} , the Jacobian matrix is ( D f ) ( a , b ) = [ ∂ f 1 ∂ x 1 ( a , b ) ⋯ ∂ f 1 ∂ x n ( a , b ) ∂ f 1 ∂ y 1 ( a , b ) ⋯ ∂ f 1 ∂ y m ( a , b ) ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ∂ f m ∂ x 1 ( a , b ) ⋯ ∂ f m ∂ x n ( a , b ) ∂ f m ∂ y 1 ( a , b ) ⋯ ∂ f m ∂ y m ( a , b ) ] = [ X Y ] {\displaystyle (Df)(\mathbf {a} ,\mathbf {b} )=\left[{\begin{array}{ccc|ccc}{\frac {\partial f_{1}}{\partial x_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{1}}{\partial x_{n}}}(\mathbf {a} ,\mathbf {b} )&{\frac {\partial f_{1}}{\partial y_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{1}}{\partial y_{m}}}(\mathbf {a} ,\mathbf {b} )\\\vdots &\ddots &\vdots &\vdots &\ddots &\vdots \\{\frac {\partial f_{m}}{\partial x_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{m}}{\partial x_{n}}}(\mathbf {a} ,\mathbf {b} )&{\frac {\partial f_{m}}{\partial y_{1}}}(\mathbf {a} ,\mathbf {b} )&\cdots &{\frac {\partial f_{m}}{\partial y_{m}}}(\mathbf {a} ,\mathbf {b} )\end{array}}\right]=\left[{\begin{array}{c|c}X&Y\end{array}}\right]}
where X {\displaystyle X} is the matrix of partial derivatives in the variables x i {\displaystyle x_{i}} and Y {\displaystyle Y} is the matrix of partial derivatives in the variables y j {\displaystyle y_{j}} . The implicit function theorem says that if Y {\displaystyle Y} is an invertible matrix, then there are U {\displaystyle U} , V {\displaystyle V} , and g {\displaystyle g} as desired. Writing all the hypotheses together gives the following statement. == Statement of the theorem == Let f : R n + m → R m {\displaystyle f:\mathbb {R} ^{n+m}\to \mathbb {R} ^{m}} be a continuously differentiable function, and let R n + m {\displaystyle \mathbb {R} ^{n+m}} have coordinates ( x , y ) {\displaystyle ({\textbf {x}},{\textbf {y}})} . Fix a point ( a , b ) = ( a 1 , … , a n , b 1 , … , b m ) {\displaystyle ({\textbf {a}},{\textbf {b}})=(a_{1},\dots ,a_{n},b_{1},\dots ,b_{m})} with f ( a , b ) = 0 {\displaystyle f({\textbf {a}},{\textbf {b}})=\mathbf {0} } , where 0 ∈ R m {\displaystyle \mathbf {0} \in \mathbb {R} ^{m}} is the zero vector. If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section): J f , y ( a , b ) = [ ∂ f i ∂ y j ( a , b ) ] {\displaystyle J_{f,\mathbf {y} }(\mathbf {a} ,\mathbf {b} )=\left[{\frac {\partial f_{i}}{\partial y_{j}}}(\mathbf {a} ,\mathbf {b} )\right]} is invertible, then there exists an open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} containing a {\displaystyle {\textbf {a}}} such that there exists a unique function g : U → R m {\displaystyle g:U\to \mathbb {R} ^{m}} such that g ( a ) = b {\displaystyle g(\mathbf {a} )=\mathbf {b} } , and f ( x , g ( x ) ) = 0 for all x ∈ U {\displaystyle f(\mathbf {x} ,g(\mathbf {x} ))=\mathbf {0} ~{\text{for all}}~\mathbf {x} \in U} .
Moreover, g {\displaystyle g} is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as: J f , x ( a , b ) = [ ∂ f i ∂ x j ( a , b ) ] , {\displaystyle J_{f,\mathbf {x} }(\mathbf {a} ,\mathbf {b} )=\left[{\frac {\partial f_{i}}{\partial x_{j}}}(\mathbf {a} ,\mathbf {b} )\right],} the Jacobian matrix of partial derivatives of g {\displaystyle g} in U {\displaystyle U} is given by the matrix product: [ ∂ g i ∂ x j ( x ) ] m × n = − [ J f , y ( x , g ( x ) ) ] m × m − 1 [ J f , x ( x , g ( x ) ) ] m × n {\displaystyle \left[{\frac {\partial g_{i}}{\partial x_{j}}}(\mathbf {x} )\right]_{m\times n}=-\left[J_{f,\mathbf {y} }(\mathbf {x} ,g(\mathbf {x} ))\right]_{m\times m}^{-1}\,\left[J_{f,\mathbf {x} }(\mathbf {x} ,g(\mathbf {x} ))\right]_{m\times n}} For a proof, see Inverse function theorem#Implicit_function_theorem. Here, the two-dimensional case is detailed. === Higher derivatives === If, moreover, f {\displaystyle f} is analytic or continuously differentiable k {\displaystyle k} times in a neighborhood of ( a , b ) {\displaystyle ({\textbf {a}},{\textbf {b}})} , then one may choose U {\displaystyle U} in order that the same holds true for g {\displaystyle g} inside U {\displaystyle U} . In the analytic case, this is called the analytic implicit function theorem. == The circle example == Let us go back to the example of the unit circle. In this case n = m = 1 and f ( x , y ) = x 2 + y 2 − 1 {\displaystyle f(x,y)=x^{2}+y^{2}-1} . The matrix of partial derivatives is just a 1 × 2 matrix, given by ( D f ) ( a , b ) = [ ∂ f ∂ x ( a , b ) ∂ f ∂ y ( a , b ) ] = [ 2 a 2 b ] {\displaystyle (Df)(a,b)={\begin{bmatrix}{\dfrac {\partial f}{\partial x}}(a,b)&{\dfrac {\partial f}{\partial y}}(a,b)\end{bmatrix}}={\begin{bmatrix}2a&2b\end{bmatrix}}} Thus, here, the Y in the statement of the theorem is just the number 2b; the linear map defined by it is invertible if and only if b ≠ 0. 
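The function g whose existence the theorem asserts can also be approximated numerically: for each x near a, solve f(x, y) = 0 for y by iteration starting from b, which works precisely because the block Y (here ∂f/∂y) is invertible. A minimal Python sketch for the scalar case of the circle example (the function names are illustrative, not a standard library API):

```python
import math

def implicit_g(f, df_dy, x, y0, tol=1e-12, max_iter=50):
    """Solve f(x, y) = 0 for y by Newton's method, starting from y0.

    Converges to the locally unique solution promised by the
    implicit function theorem as long as df_dy stays away from zero.
    """
    y = y0
    for _ in range(max_iter):
        step = f(x, y) / df_dy(x, y)
        y -= step
        if abs(step) < tol:
            break
    return y

f = lambda x, y: x * x + y * y - 1.0   # the circle example, m = n = 1
df_dy = lambda x, y: 2.0 * y           # the 1x1 block "Y" of the Jacobian

# Near (a, b) = (0.6, 0.8) we have df_dy = 1.6 != 0, so the theorem applies
# and the iteration recovers g(x) = sqrt(1 - x^2).
for x in [0.5, 0.6, 0.7]:
    assert abs(implicit_g(f, df_dy, x, y0=0.8) - math.sqrt(1 - x * x)) < 1e-9
```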
By the implicit function theorem we see that we can locally write the circle in the form y = g(x) for all points where y ≠ 0. For (±1, 0) we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing x as a function of y, that is, x = h ( y ) {\displaystyle x=h(y)} ; now the graph of the function will be ( h ( y ) , y ) {\displaystyle \left(h(y),y\right)} , since where b = 0 we have a = ±1, and the conditions to locally express the function in this form are satisfied. The implicit derivative of y with respect to x, and that of x with respect to y, can be found by totally differentiating the implicit function x 2 + y 2 − 1 {\displaystyle x^{2}+y^{2}-1} and equating to 0: 2 x d x + 2 y d y = 0 , {\displaystyle 2x\,dx+2y\,dy=0,} giving d y d x = − x y {\displaystyle {\frac {dy}{dx}}=-{\frac {x}{y}}} and d x d y = − y x . {\displaystyle {\frac {dx}{dy}}=-{\frac {y}{x}}.} == Application: change of coordinates == Suppose we have an m-dimensional space, parametrised by a set of coordinates ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} . We can introduce a new coordinate system ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} by supplying m functions h 1 … h m {\displaystyle h_{1}\ldots h_{m}} each being continuously differentiable. These functions allow us to calculate the new coordinates ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} of a point, given the point's old coordinates ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} using x 1 ′ = h 1 ( x 1 , … , x m ) , … , x m ′ = h m ( x 1 , … , x m ) {\displaystyle x'_{1}=h_{1}(x_{1},\ldots ,x_{m}),\ldots ,x'_{m}=h_{m}(x_{1},\ldots ,x_{m})} . One might want to verify if the opposite is possible: given coordinates ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} , can we 'go back' and calculate the same point's original coordinates ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} ?
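Near (1, 0), where x must be written as a function of y, the same derivatives can be checked numerically. A short Python sketch (the name h follows the text; the sample values are illustrative):

```python
import math

def h(y):
    # near (1, 0), solve x^2 + y^2 = 1 for x instead: x = h(y)
    return math.sqrt(1 - y * y)

# dx/dy = -y/x from implicit differentiation, checked by central differences
y = 0.3
x = h(y)
eps = 1e-6
dx_dy = (h(y + eps) - h(y - eps)) / (2 * eps)
assert abs(dx_dy - (-y / x)) < 1e-6

# at (1, 0) itself the tangent is vertical: dy/dx = -x/y blows up,
# while dx/dy = -y/x = 0 is perfectly well behaved
assert abs((h(eps) - h(-eps)) / (2 * eps)) < 1e-9
```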
The implicit function theorem will provide an answer to this question. The (new and old) coordinates ( x 1 ′ , … , x m ′ , x 1 , … , x m ) {\displaystyle (x'_{1},\ldots ,x'_{m},x_{1},\ldots ,x_{m})} are related by f = 0, with f ( x 1 ′ , … , x m ′ , x 1 , … , x m ) = ( h 1 ( x 1 , … , x m ) − x 1 ′ , … , h m ( x 1 , … , x m ) − x m ′ ) . {\displaystyle f(x'_{1},\ldots ,x'_{m},x_{1},\ldots ,x_{m})=(h_{1}(x_{1},\ldots ,x_{m})-x'_{1},\ldots ,h_{m}(x_{1},\ldots ,x_{m})-x'_{m}).} Now the Jacobian matrix of f at a certain point (a, b) [ where a = ( x 1 ′ , … , x m ′ ) , b = ( x 1 , … , x m ) {\displaystyle a=(x'_{1},\ldots ,x'_{m}),b=(x_{1},\ldots ,x_{m})} ] is given by ( D f ) ( a , b ) = [ − 1 ⋯ 0 ⋮ ⋱ ⋮ 0 ⋯ − 1 | ∂ h 1 ∂ x 1 ( b ) ⋯ ∂ h 1 ∂ x m ( b ) ⋮ ⋱ ⋮ ∂ h m ∂ x 1 ( b ) ⋯ ∂ h m ∂ x m ( b ) ] = [ − I m | J ] . {\displaystyle (Df)(a,b)=\left[{\begin{matrix}-1&\cdots &0\\\vdots &\ddots &\vdots \\0&\cdots &-1\end{matrix}}\left|{\begin{matrix}{\frac {\partial h_{1}}{\partial x_{1}}}(b)&\cdots &{\frac {\partial h_{1}}{\partial x_{m}}}(b)\\\vdots &\ddots &\vdots \\{\frac {\partial h_{m}}{\partial x_{1}}}(b)&\cdots &{\frac {\partial h_{m}}{\partial x_{m}}}(b)\\\end{matrix}}\right.\right]=[-I_{m}|J].} where Im denotes the m × m identity matrix, and J is the m × m matrix of partial derivatives, evaluated at (a, b). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on a.) The implicit function theorem now states that we can locally express ( x 1 , … , x m ) {\displaystyle (x_{1},\ldots ,x_{m})} as a function of ( x 1 ′ , … , x m ′ ) {\displaystyle (x'_{1},\ldots ,x'_{m})} if J is invertible. Demanding J is invertible is equivalent to det J ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian J is non-zero. This statement is also known as the inverse function theorem. 
=== Example: polar coordinates === As a simple application of the above, consider the plane, parametrised by polar coordinates (R, θ). We can go to a new coordinate system (cartesian coordinates) by defining functions x(R, θ) = R cos(θ) and y(R, θ) = R sin(θ). This makes it possible given any point (R, θ) to find corresponding Cartesian coordinates (x, y). When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have det J ≠ 0, with J = [ ∂ x ( R , θ ) ∂ R ∂ x ( R , θ ) ∂ θ ∂ y ( R , θ ) ∂ R ∂ y ( R , θ ) ∂ θ ] = [ cos θ − R sin θ sin θ R cos θ ] . {\displaystyle J={\begin{bmatrix}{\frac {\partial x(R,\theta )}{\partial R}}&{\frac {\partial x(R,\theta )}{\partial \theta }}\\{\frac {\partial y(R,\theta )}{\partial R}}&{\frac {\partial y(R,\theta )}{\partial \theta }}\\\end{bmatrix}}={\begin{bmatrix}\cos \theta &-R\sin \theta \\\sin \theta &R\cos \theta \end{bmatrix}}.} Since det J = R, conversion back to polar coordinates is possible if R ≠ 0. So it remains to check the case R = 0. It is easy to see that in case R = 0, our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined. == Generalizations == === Banach space version === Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings. Let X, Y, Z be Banach spaces. Let the mapping f : X × Y → Z be continuously Fréchet differentiable. If ( x 0 , y 0 ) ∈ X × Y {\displaystyle (x_{0},y_{0})\in X\times Y} , f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} , and y ↦ D f ( x 0 , y 0 ) ( 0 , y ) {\displaystyle y\mapsto Df(x_{0},y_{0})(0,y)} is a Banach space isomorphism from Y onto Z, then there exist neighbourhoods U of x0 and V of y0 and a Fréchet differentiable function g : U → V such that f(x, g(x)) = 0 and f(x, y) = 0 if and only if y = g(x), for all ( x , y ) ∈ U × V {\displaystyle (x,y)\in U\times V} . 
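The computation det J = R can be confirmed with finite differences. A minimal Python sketch (function names are illustrative):

```python
import math

def to_cartesian(R, theta):
    # the coordinate change (R, theta) -> (x, y)
    return R * math.cos(theta), R * math.sin(theta)

def jacobian(R, theta, h=1e-6):
    # 2x2 Jacobian of the map, estimated by central differences
    d_dR = [(a - b) / (2 * h) for a, b in
            zip(to_cartesian(R + h, theta), to_cartesian(R - h, theta))]
    d_dt = [(a - b) / (2 * h) for a, b in
            zip(to_cartesian(R, theta + h), to_cartesian(R, theta - h))]
    return [[d_dR[0], d_dt[0]],
            [d_dR[1], d_dt[1]]]

def det2(J):
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

# det J = R: the map is locally invertible exactly when R != 0
for R, theta in [(2.0, 0.3), (0.5, 1.2), (1.0, -2.0)]:
    assert abs(det2(jacobian(R, theta)) - R) < 1e-6

# at the origin the determinant vanishes; theta cannot be recovered there
assert abs(det2(jacobian(0.0, 0.7))) < 1e-9
```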
=== Implicit functions from non-differentiable functions === Various forms of the implicit function theorem exist for the case when the function f is not differentiable. It is standard that local strict monotonicity suffices in one dimension. The following more general form was proven by Kumagai based on an observation by Jittorntrum. Consider a continuous function f : R n × R m → R n {\displaystyle f:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\to \mathbb {R} ^{n}} such that f ( x 0 , y 0 ) = 0 {\displaystyle f(x_{0},y_{0})=0} . If there exist open neighbourhoods A ⊂ R n {\displaystyle A\subset \mathbb {R} ^{n}} and B ⊂ R m {\displaystyle B\subset \mathbb {R} ^{m}} of x0 and y0, respectively, such that, for all y in B, f ( ⋅ , y ) : A → R n {\displaystyle f(\cdot ,y):A\to \mathbb {R} ^{n}} is locally one-to-one, then there exist open neighbourhoods A 0 ⊂ R n {\displaystyle A_{0}\subset \mathbb {R} ^{n}} and B 0 ⊂ R m {\displaystyle B_{0}\subset \mathbb {R} ^{m}} of x0 and y0, such that, for all y ∈ B 0 {\displaystyle y\in B_{0}} , the equation f(x, y) = 0 has a unique solution x = g ( y ) ∈ A 0 , {\displaystyle x=g(y)\in A_{0},} where g is a continuous function from B0 into A0. === Collapsing manifolds === Perelman’s collapsing theorem for 3-manifolds, the capstone of his proof of Thurston's geometrization conjecture, can be understood as an extension of the implicit function theorem. == See also == Inverse function theorem Constant rank theorem: Both the implicit function theorem and the inverse function theorem can be seen as special cases of the constant rank theorem. == Notes == == References == == Further reading == Allendoerfer, Carl B. (1974). "Theorems about Differentiable Functions". Calculus of Several Variables and Differentiable Manifolds. New York: Macmillan. pp. 54–88. ISBN 0-02-301840-2. Binmore, K. G. (1983). "Implicit Functions". Calculus. New York: Cambridge University Press. pp. 198–211. ISBN 0-521-28952-1. Loomis, Lynn H.; Sternberg, Shlomo (1990). 
Advanced Calculus (Revised ed.). Boston: Jones and Bartlett. pp. 164–171. ISBN 0-86720-122-3. Protter, Murray H.; Morrey, Charles B. Jr. (1985). "Implicit Function Theorems. Jacobians". Intermediate Calculus (2nd ed.). New York: Springer. pp. 390–420. ISBN 0-387-96058-9.
Wikipedia:Ina Kersten#0
Ina Kersten (born 1946) is a German mathematician and former president of the German Mathematical Society. Her research concerns abstract algebra including the theory of field extensions and algebraic groups. She is a professor emerita at the University of Göttingen. Kersten was born in Hamburg, and earned her Ph.D. at the University of Hamburg in 1977. Her dissertation, p-Algebren über semilokalen Ringen, was supervised by Ernst Witt. She completed a habilitation at the University of Regensburg in 1983. Kersten was president of the German Mathematical Society from 1995 to 1997, making her the first woman to head the society. Under her leadership, the society founded the journal Documenta Mathematica. == References == == External links == Home page Archived 2021-10-27 at the Wayback Machine
Wikipedia:Inayatullah Khan Mashriqi#0
Inayatullah Khan Mashriqi (Punjabi: عنایت اللہ خاں مشرقی 'Ināyatullāh Khān Maśriqī; August 1888 – 27 August 1963), also known by the honorary title Allama Mashriqi (علامہ مشرقی 'Allāmah Maśriqī), was a British Indian, and later, Pakistani mathematician, logician, political theorist, Islamic scholar and the founder of the Khaksar movement. Around 1930, he founded the Khaksar Movement, aiming both to revive Islam among Muslims as well as to advance the condition of the masses irrespective of any faith, sect, or religion. == Early years == === Background === Inayatullah Khan Mashriqi was born on 25 August 1888 to a Punjabi Rajput family from Amritsar. Mashriqi's father Khan Ata Muhammad Khan was an educated man of wealth who owned a bi-weekly publication, Vakil, in Amritsar. His forefathers had held high government positions during the Mughal and Sikh Empires. Because of his father's position he came into contact with a range of well-known luminaries including Jamāl al-Dīn al-Afghānī, Sir Syed Ahmad Khan, and Shibli Nomani as a young man. === Education === Mashriqi was educated initially at home before attending schools in Amritsar. From an early age, he showed a passion for mathematics. After completing his Bachelor of Arts degree with First Class honours at Forman Christian College in Lahore, he completed his master's degree in mathematics from the University of the Punjab, taking a First Class for the first time in the history of the university. In 1907 he moved to England, where he matriculated at Christ's College, Cambridge, to read for the mathematics tripos. He was awarded a college foundation scholarship in May 1908. In June 1909 he was awarded first class honours in Mathematics Part I, being placed joint 27th out of 31 on the list of wranglers. For the next two years, he read for the oriental languages tripos in parallel to the natural sciences tripos, gaining first class honours in the former, and third class in the latter. 
After three years' residence at Cambridge he had qualified for a Bachelor of Arts degree, which he took in 1910. In 1912 he completed a fourth tripos in mechanical sciences, and was placed in the second class. At the time he was believed to be the first man of any nationality to achieve honours in four different Triposes, and was lauded in national newspapers across the UK. The next year, Mashriqi was conferred with a DPhil in mathematics receiving a gold medal at his doctoral graduation ceremony. He left Cambridge and returned to India in December 1912. During his stay in Cambridge his religious and scientific conviction was inspired by the works and concepts of Professor Sir James Jeans. == Early career == On his return to India, Mashriqi was offered the premiership of Alwar, a princely state, by the Maharaja. He declined owing to his interest in education. At the age of 25, and only a few months after arriving in India, he was appointed vice principal of Islamia College, Peshawar, by Chief Commissioner Sir George Roos-Keppel and was made principal of the same college two years later. In October 1917 he was appointed under secretary to the Government of India in the Education Department in succession to Sir George Anderson. He became headmaster of the High School, Peshawar on 21 October 1919. In 1920, the British government offered Mashriqi the ambassadorship of Afghanistan, and a year later he was offered a knighthood. However, he refused both awards. In 1930, he was passed over for a promotion in the government service, following which he went on medical leave. In 1932 he resigned, taking his pension, and settled down in Ichhra, Lahore. === Nobel nomination === In 1924, at the age of 36, Mashriqi completed the first volume of his book, Tazkirah. It is a commentary on the Qur'an in the light of science. It was nominated for the Nobel Prize in 1925, subject to the condition it was translated into one of the European languages. 
However, Mashriqi declined the suggestion of translation. == Political life == === Mashriqi's philosophy === A theistic evolutionist who accepted some of Darwin's ideas while criticizing others, he declared that the science of religions was essentially the science of collective evolution of mankind; all prophets came to unite mankind, not to disrupt it; the basic law of all faiths is the law of unification and consolidation of the entire humanity. According to Markus Daeschel, the philosophical ruminations of Mashriqi offer an opportunity to re-evaluate the meaning of colonial modernity and notion of post-colonial nation-building in modern times. Mashriqi is often portrayed as a controversial figure, a religious activist, a revolutionary, and an anarchist; while at the same time he is described as a visionary, a reformer, a leader, and a scientist-philosopher who was born ahead of his time. After Mashriqi resigned from government service, he laid the foundation of the Khaksar Tehrik (also known as Khaksar Movement) around 1930. Mashriqi and his Khaskar Tehrik opposed the partition of India. He stated that the "last remedy under the present circumstances is that one and all rise against this conspiracy as one man. Let there be a common Hindu-Muslim Revolution. ... it is time that we should sacrifice…in order to uphold Truth, Honour and Justice." Mashriqi opposed the partition of India because he felt that if Muslims and Hindus had largely lived peacefully together in India for centuries, they could also do so in a free and united India. Mashriqi saw the two-nation theory as a plot of the British to maintain control of the region more easily, if India was divided into two countries that were pitted against one another. He reasoned that a division of India along religious lines would breed fundamentalism and extremism on both sides of the border. 
Mashriqi thought that "Muslim majority areas were already under Muslim rule, so if any Muslims wanted to move to these areas, they were free to do so without having to divide the country." To him, separatist leaders "were power hungry and misleading Muslims in order to bolster their own power by serving the British agenda." === Imprisonments and allegations === On 20 July 1943, an assassination attempt was made on Muhammad Ali Jinnah by an individual named Rafiq Sabir Mazangavi who was assumed to be a Khaksar worker. The attack was deplored by Mashriqi, who denied any involvement. Later, Justice Blagden of the Bombay High Court in his ruling on 4 November 1943 dismissed any association between the attack and the Khaksars. In Pakistan, Mashriqi was imprisoned at least four times: in 1958 for alleged complicity in the murder of republican leader Khan Abdul Jabbar Khan (popularly known as Dr. Khan Sahib); and, in 1962 for suspicion of attempting to overthrow President Ayub's government. However, none of the charges were proven, and he was acquitted in each case. In 1957, Mashriqi allegedly led 300,000 of his followers to the borders of Kashmir, intending, it is said, to launch a fight for its liberation. However, the Pakistan government persuaded the group to withdraw and the organisation was later disbanded. == Death == Mashriqi died at the Mayo Hospital in Lahore on 27 August 1963 following a short battle with cancer. His funeral prayers were held at the Badshahi Mosque and he was buried in Ichhra. He was survived by his wife and seven children. 
== Mashriqi's works == Mashriqi's prominent works include: Armughan-i-Hakeem, a poetical work Dahulbab, a poetical work Isha’arat, the Manifesto of the Khaksar movement Khitab-e-Misr (The Egypt Address), based on his 1925 speech in Cairo as a delegate to the Motmar-e-Khilafat Maulvi Ka Ghalat Mazhab Tazkirah Volume I, 1924, discussions on conflicts between religions, between religion and science, and the need to resolve these conflicts Tazkirah Volume II. Posthumously published in 1964 Tazkirah Volume III. === Fellowships === Mashriqi's fellowships included: Fellow of the Royal Society of Arts, 1923 Fellow of the Geographical Society (F.G.S), Paris Fellow of Society of Arts (F.S.A), Paris Member of the Board at Delhi University President of the Mathematical Society, Islamia College, Peshawar Member of the International Congress of Orientalists (Leiden), 1930 President of the All World's Faiths Conference, 1937 === Edited works === God, Man, and Universe: As Conceived by a Mathematician (works of Inayatullah Khan el-Mashriqi), Akhuwat Publications, Rawalpindi, 1980 (edited by Syed Shabbir Hussain). == See also == All India Azad Muslim Conference Teilhard de Chardin Karl Marx == References ==
Wikipedia:Inca animal husbandry#0
Inca animal husbandry refers to the raising of animals in the pre-Hispanic Andes, where camelids played a very important role in the economy. In particular, the llama and alpaca—the only camelids domesticated by Andean people—were raised in large herds and used for different purposes within the production system of the Incas. Likewise, two other species of undomesticated camelids were used: the vicuña and the guanaco. The vicuñas were hunted by means of chacos (collective hunts). The Inca people used tools such as stones, knives or tumis, axes that, according to chroniclers, were made of stone and bronze, and ropes that they made in their leisure time. Many of these tools were used to shear the camelids, which were then set free; in this way, they ensured that their numbers were maintained. Guanacos, on the other hand, were hunted for their meat, which was highly prized. == Camelid raising == The South American camelids were a valuable resource. Their meat was consumed fresh or as charqui and chalona; their wool was used to make threads and fabrics; and their bones, hide, fat and excrement had diverse applications, such as musical instruments, footwear, medicines and fertilizer respectively. They were also preferred animals for religious sacrifices. The communal camelid herds were under the care of young people, whose ages ranged from twelve to sixteen years old. In areas where the communal herds were large, such as the altiplano region, where pastures were far away, it is likely that their care was in the hands of a full-time specialist. The chroniclers mention two Quechua names for the shepherds: llama michi—which Garcilaso associates with low social status—and llama camayos, which designated the llamas' caretaker or employee responsible for the herds. The state herdsmen were responsible for the animals under their charge, whose accounting and supervision were done by officials appointed by the state.
=== Classification === The jesuit José de Acosta mentions that in Ancient Peru, the division of camelid herds was made according to the colors of the animals. There were white, black, brown and moromoros, as they called those of various colors. In addition, the chronicler said that the colors were taken into account for the various sacrifices, according to their traditions and beliefs. Garcilaso de la Vega adds that in the herds, when a calf was of a different color, once it had grown up, it was sent to its corresponding herd. This division by shades facilitated their counting in the quipus, which were made with wool of the same color as that of the animals they wanted to count. ==== Domesticated ==== The llama and alpaca were especially important in the Andean economy. Llama: the resources provided by the llama were used to the maximum. Thus, its wool was spun to transform it into clothing for the people of the sierra, as the inhabitants of the coast used the cotton to make their clothing. Their meat was consumed fresh as well as sun-dried and dehydrated (charqui); the latter allowed its preservation and storage in warehouses. In addition, they were bled through a vein in the jaw to prepare a special meal with the blood. The hides were used to prepare ropes, sandals and other objects, while their dried excrement was an excellent fuel, particularly at high altitudes where there were no trees to obtain firewood. Perhaps one of the most prized uses of the llama was as a draft animal, as it could carry up to 40 kilograms in weight and move easily up the steepest heights. Llama caravans were mainly made up of males. For longer journeys, such as between the Collao and the coast, "new males" of about two years of age were preferred. The herd traveled from early morning to midday, stopping at places with water and pasture. The maintenance of the animals was not difficult, since they were not provided with any other forage than the grasses found along the route. 
The animals were fed during the afternoon and chewed their cud at night. Finally, they were also sacrificed as offerings and their organs were used to read omens. Alpaca: they basically provided wool—of inferior quality to that of the vicuña—for the finest and most luxurious fabrics. The pastures necessary for their farming followed similar patterns to those of agricultural land tenure. The ayllus had pastures for their animals, as did the curacas, the great lords of the macro-ethnicities, the huacas and the special pastures of the Inca. Both archaeological research and archival documents refer to the existence of camelid herds on the coast long before the Inca conquest: from pre-ceramic times. They must have fed in the hilly region and in the carob forests, which today are almost totally depredated. When the hills dried out, the animals fed on the pods of the carob trees. ==== Non-domesticated ==== The vicuña and the guanaco had not been domesticated in Inca times. Vicuña: the chroniclers affirm that the vicuñas were never killed. They were used to obtain their wool, which was highly prized. The clothing of the Incas and that which would be used for offerings was made from this wool. It was hunted by means of the chacos (collective hunts) to be sheared and then set free; in this way they ensured that their numbers would be maintained. Guanaco: the most widespread camelid in geographical terms was the guanaco, as it was found from the subequatorial areas to the Tierra del Fuego. About guanacos, the chronicler Pedro Cieza de León says, they were hunted to make charqui, which was stored in warehouses "to feed the army". They were hunted for their meat, which was highly prized. === Consumption Overview === The visit of Garci Diez de San Miguel to the province of Chucuito is a document that provides interesting information regarding the livestock wealth of that region. 
From it we know that a common Inca, for example, could own up to a thousand head of camelids, while a principal lord could own up to fifty thousand. Cattle raising certainly constituted an important source of wealth in pre-Hispanic times. The chroniclers point out that the meat of all camelids was eaten, but due to the restrictions that existed on its slaughter, its consumption must have been a luxury. The population probably had access to fresh meat only in the army or on ceremonial occasions, when the slaughtered animals were widely distributed. In colonial times, the pastures were disappearing or becoming poorer due exclusively to the massive presence of the animals introduced by the Spaniards and their eating habits. The Andean environment underwent a considerable change with the domestic animals that arrived with the Spanish conquest. == Other animal farming == The animals domesticated in the Inca Empire were mainly camelids. They also domesticated the cuy or guinea pig. Although no significant samples of guinea pigs have been found in the Andes, it is believed that their domestication was minor or in small proportions. Currently, the guinea pig is part of the diet of the Andean peoples. Likewise, ducks were raised at home in the Inca Empire because their meat was highly valued. According to chronicles of the Spanish colonization, the inhabitants of the high jungle raised tame and domestic animals such as cuyes and turkeys. == See also == Inca Empire Inca agriculture Mathematics of the Incas History of Peru == Notes == == References == == Bibliography == Rostworowski, María (2004). Enciclopedia temática del Perú: Incas (in Spanish). Lima: Orbis Ventures. ISBN 9972-752-01-1. Culturas Prehispánicas (in Spanish). Muxica Editores. 2001. ISBN 9972-617-10-6. Historia Universal: América precolombina (in Spanish). Editorial Sol. 2003. ISBN 9972-891-79-8.
|
Wikipedia:Inclusion map#0
|
In mathematics, if A {\displaystyle A} is a subset of B , {\displaystyle B,} then the inclusion map is the function ι {\displaystyle \iota } that sends each element x {\displaystyle x} of A {\displaystyle A} to x , {\displaystyle x,} treated as an element of B : {\displaystyle B:} ι : A → B , ι ( x ) = x . {\displaystyle \iota :A\rightarrow B,\qquad \iota (x)=x.} An inclusion map may also be referred to as an inclusion function, an insertion, or a canonical injection. A "hooked arrow" (U+21AA ↪ RIGHTWARDS ARROW WITH HOOK) is sometimes used in place of the function arrow above to denote an inclusion map; thus: ι : A ↪ B . {\displaystyle \iota :A\hookrightarrow B.} (However, some authors use this hooked arrow for any embedding.) This and other analogous injective functions from substructures are sometimes called natural injections. Given any morphism f {\displaystyle f} between objects X {\displaystyle X} and Y {\displaystyle Y} , if there is an inclusion map ι : A → X {\displaystyle \iota :A\to X} into the domain X {\displaystyle X} , then one can form the restriction f ∘ ι {\displaystyle f\circ \iota } of f . {\displaystyle f.} In many instances, one can also construct a canonical inclusion into the codomain R → Y {\displaystyle R\to Y} known as the range of f . {\displaystyle f.} == Applications of inclusion maps == Inclusion maps tend to be homomorphisms of algebraic structures; thus, such inclusion maps are embeddings. More precisely, given a substructure closed under some operations, the inclusion map will be an embedding for tautological reasons. For example, for some binary operation ⋆ , {\displaystyle \star ,} to require that ι ( x ⋆ y ) = ι ( x ) ⋆ ι ( y ) {\displaystyle \iota (x\star y)=\iota (x)\star \iota (y)} is simply to say that ⋆ {\displaystyle \star } is consistently computed in the sub-structure and the large structure. The case of a unary operation is similar; but one should also look at nullary operations, which pick out a constant element. 
Here the point is that closure means such constants must already be given in the substructure. Inclusion maps are seen in algebraic topology where if A {\displaystyle A} is a strong deformation retract of X , {\displaystyle X,} the inclusion map yields an isomorphism between all homotopy groups (that is, it is a homotopy equivalence). Inclusion maps in geometry come in different kinds: for example embeddings of submanifolds. Contravariant objects (which is to say, objects that have pullbacks; these are called covariant in an older and unrelated terminology) such as differential forms restrict to submanifolds, giving a mapping in the other direction. Another example, more sophisticated, is that of affine schemes, for which the inclusions Spec ( R / I ) → Spec ( R ) {\displaystyle \operatorname {Spec} \left(R/I\right)\to \operatorname {Spec} (R)} and Spec ( R / I 2 ) → Spec ( R ) {\displaystyle \operatorname {Spec} \left(R/I^{2}\right)\to \operatorname {Spec} (R)} may be different morphisms, where R {\displaystyle R} is a commutative ring and I {\displaystyle I} is an ideal of R . {\displaystyle R.} == See also == Cofibration – continuous mapping between topological spaces Identity function – In mathematics, a function that always returns the same value that was used as its argument == References ==
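The set-theoretic definition above can be made concrete with a short sketch. This is a minimal illustration in Python; the helper `make_inclusion` is hypothetical, not from any library. It builds the inclusion map of a finite subset and forms the restriction f ∘ ι of a function defined on the larger set.

```python
# Minimal sketch for finite sets; make_inclusion is a hypothetical helper,
# not from any library.

def make_inclusion(A, B):
    """Return the inclusion map iota: A -> B; requires A to be a subset of B."""
    assert A <= B, "A must be a subset of B"
    return lambda x: x  # iota sends each element of A to itself, viewed in B

A = {1, 2}
B = {1, 2, 3, 4}
iota = make_inclusion(A, B)

f = lambda x: x * x                 # a morphism defined on all of B
restrict = lambda x: f(iota(x))     # the restriction f ∘ iota, defined only on A

print(sorted(restrict(x) for x in A))  # [1, 4]
```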
|
Wikipedia:Inclusion order#0
|
In the mathematical field of order theory, an inclusion order is the partial order that arises as the subset-inclusion relation on some collection of objects. In fact, every poset P = (X,≤) is (isomorphic to) an inclusion order (just as every group is isomorphic to a permutation group – see Cayley's theorem). To see this, associate to each element x of X the set X ≤ ( x ) = { y ∈ X ∣ y ≤ x } ; {\displaystyle X_{\leq (x)}=\{y\in X\mid y\leq x\};} then the transitivity of ≤ ensures that for all a and b in X, we have X ≤ ( a ) ⊆ X ≤ ( b ) precisely when a ≤ b . {\displaystyle X_{\leq (a)}\subseteq X_{\leq (b)}{\text{ precisely when }}a\leq b.} There can be sets S {\displaystyle S} of cardinality less than | X | {\displaystyle |X|} such that P is isomorphic to the inclusion order on S. The size of the smallest possible S is called the 2-dimension of P. Several important classes of poset arise as inclusion orders for some natural collections, like the Boolean lattice Qn, which is the collection of all 2n subsets of an n-element set, the interval-containment orders, which are precisely the orders of order dimension at most two, and the dimension-n orders, which are the containment orders on collections of n-boxes anchored at the origin. Other containment orders that are interesting in their own right include the circle orders, which arise from disks in the plane, and the angle orders. == See also == Birkhoff's representation theorem Intersection graph Interval order == References == Fishburn, P.C.; Trotter, W.T. (1998). "Geometric containment orders: a survey". Order. 15 (2): 167–182. doi:10.1023/A:1006110326269. S2CID 14411154. Santoro, N.; Sidney, J.B.; Sidney, S.J.; Urrutia, J. (1989). "Geometric containment and partial orders". SIAM Journal on Discrete Mathematics. 2 (2): 245–254. CiteSeerX 10.1.1.65.1927. doi:10.1137/0402021.
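The down-set construction used in the proof sketch above can be checked mechanically on a small example. The following Python sketch uses divisibility on {1, 2, 3, 6} (an illustrative choice of poset) and verifies that a ≤ b holds exactly when X≤(a) ⊆ X≤(b):

```python
# Sketch: the poset is divisibility on {1, 2, 3, 6} (an illustrative choice).
X = [1, 2, 3, 6]

def leq(a, b):
    return b % a == 0                    # a <= b  means  a divides b

def down(x):
    return {y for y in X if leq(y, x)}   # the set X_<=(x) from the text

# a <= b holds precisely when down(a) is a subset of down(b)
for a in X:
    for b in X:
        assert leq(a, b) == (down(a) <= down(b))

print({x: sorted(down(x)) for x in X})
```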
|
Wikipedia:Incomplete Bessel K function/generalized incomplete gamma function#0
|
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation x 2 d 2 y d x 2 + x d y d x + ( x 2 − α 2 ) y = 0 {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0} for an arbitrary complex number α {\displaystyle \alpha } , which represents the order of the Bessel function. Although α {\displaystyle \alpha } and − α {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of α {\displaystyle \alpha } . The most important cases are when α {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer α {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates. == Applications == Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). 
For example: Electromagnetic waves in a cylindrical waveguide Pressure amplitudes of inviscid rotational flows Heat conduction in a cylindrical object Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory) Diffusion problems on a lattice Solutions to the radial Schrödinger equation (in spherical and cylindrical coordinates) for a free particle Position space representation of the Feynman propagator in quantum field theory Solving for patterns of acoustical radiation Frequency-dependent friction in circular pipelines Dynamics of floating bodies Angular resolution Diffraction from helical objects, including DNA Probability density function of product of two normally distributed random variables Analysis of the surface waves generated by microtremors, in geophysics and seismology. Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter). == Definitions == Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized below and described in the following sections. The subscript n is typically used in place of α {\displaystyle \alpha } when α {\displaystyle \alpha } is known to be an integer. 
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn. === Bessel functions of the first kind: Jα === Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by x α {\displaystyle x^{\alpha }} times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation: J α ( x ) = ∑ m = 0 ∞ ( − 1 ) m m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , {\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },} where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by 2 {\displaystyle 2} in x / 2 {\displaystyle x/2} ; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to x − 1 / 2 {\displaystyle x^{-{1}/{2}}} (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.) 
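The ascending series for Jα converges quickly for small and moderate arguments, so it can be translated almost verbatim into code. A minimal Python sketch (suitable only for modest |x|, where rounding in the alternating series is benign; the term count is an illustrative choice):

```python
import math

# Direct translation of the ascending series; alpha may be any real
# number that is not a negative integer (math.gamma has poles there).
def bessel_j(alpha, x, terms=30):
    return sum((-1) ** m
               / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha)
               for m in range(terms))

print(bessel_j(0, 0.0))   # J_0(0) = 1
print(bessel_j(1, 1.0))   # J_1(1) ≈ 0.4400505857
```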
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers): J − n ( x ) = ( − 1 ) n J n ( x ) . {\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).} This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below. ==== Bessel's integrals ==== Another definition of the Bessel function, for integer values of n, is possible using an integral representation: J n ( x ) = 1 π ∫ 0 π cos ( n τ − x sin τ ) d τ = 1 π Re ( ∫ 0 π e i ( n τ − x sin τ ) d τ ) , {\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),} which is also called Hansen-Bessel formula. This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0: J α ( x ) = 1 π ∫ 0 π cos ( α τ − x sin τ ) d τ − sin ( α π ) π ∫ 0 ∞ e − x sinh t − α t d t . {\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.} ==== Relation to hypergeometric series ==== The Bessel functions can be expressed in terms of the generalized hypergeometric series as J α ( x ) = ( x 2 ) α Γ ( α + 1 ) 0 F 1 ( α + 1 ; − x 2 4 ) . {\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).} This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function. 
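Bessel's integral for integer order, given above, can be checked numerically against tabulated values. A hedged sketch using the composite trapezoidal rule (an illustrative choice of quadrature; the integrand's derivative vanishes at both endpoints, which makes the rule converge very rapidly here):

```python
import math

# Trapezoidal evaluation of J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin t) dt.
# The step count is an illustrative choice.
def bessel_j_integral(n, x, steps=2000):
    h = math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0   # trapezoidal endpoint weights
        total += w * math.cos(n * t - x * math.sin(t))
    return h * total / math.pi

print(bessel_j_integral(1, 1.0))   # J_1(1) ≈ 0.4400505857
```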
==== Relation to Laguerre polynomials ==== In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as J α ( x ) ( x 2 ) α = e − t Γ ( α + 1 ) ∑ k = 0 ∞ L k ( α ) ( x 2 4 t ) ( k + α k ) t k k ! . {\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.} === Bessel functions of the second kind: Yα === The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann. For non-integer α, Yα(x) is related to Jα(x) by Y α ( x ) = J α ( x ) cos ( α π ) − J − α ( x ) sin ( α π ) . {\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.} In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n: Y n ( x ) = lim α → n Y α ( x ) . {\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).} If n is a nonnegative integer, we have the series Y n ( z ) = − ( z 2 ) − n π ∑ k = 0 n − 1 ( n − k − 1 ) ! k ! ( z 2 4 ) k + 2 π J n ( z ) ln z 2 − ( z 2 ) n π ∑ k = 0 ∞ ( ψ ( k + 1 ) + ψ ( n + k + 1 ) ) ( − z 2 4 ) k k ! ( n + k ) ! 
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}} where ψ ( z ) {\displaystyle \psi (z)} is the digamma function, the logarithmic derivative of the gamma function. There is also a corresponding integral formula (for Re(x) > 0): Y n ( x ) = 1 π ∫ 0 π sin ( x sin θ − n θ ) d θ − 1 π ∫ 0 ∞ ( e n t + ( − 1 ) n e − n t ) e − x sinh t d t . {\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.} In the case where n = 0: (with γ {\displaystyle \gamma } being Euler's constant) Y 0 ( x ) = 4 π 2 ∫ 0 1 2 π cos ( x cos θ ) ( γ + ln ( 2 x sin 2 θ ) ) d θ . {\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .} Yα(x) is necessary as the second linearly independent solution of the Bessel's equation when α is an integer. But Yα(x) has more meaning than that. It can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below. When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid: Y − n ( x ) = ( − 1 ) n Y n ( x ) . {\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).} Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α. 
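The limit definition of Yn can be illustrated numerically: evaluating the non-integer-order formula at α very close to an integer approaches the integer-order value. A sketch, assuming the ascending series for Jα is accurate at the chosen point:

```python
import math

# Evaluate Y_alpha at alpha = 1e-6 via the non-integer formula; the result
# approaches the tabulated value Y_0(1) ≈ 0.0882569642.
def bessel_j(alpha, x, terms=30):
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

def bessel_y(alpha, x):               # valid only for non-integer alpha
    return ((bessel_j(alpha, x) * math.cos(alpha * math.pi)
             - bessel_j(-alpha, x)) / math.sin(alpha * math.pi))

print(bessel_y(1e-6, 1.0))            # ≈ Y_0(1)
```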
The Bessel function of the second kind with integer α is an example of the second kind of solution in Fuchs's theorem. === Hankel functions: H(1)α, H(2)α === Another important formulation of the two linearly independent solutions to Bessel's equation is the pair of Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as H α ( 1 ) ( x ) = J α ( x ) + i Y α ( x ) , H α ( 2 ) ( x ) = J α ( x ) − i Y α ( x ) , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}} where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel. These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form ei f(x). For real x > 0 {\displaystyle x>0} where J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e ± i x {\displaystyle e^{\pm ix}} and J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} for cos ( x ) {\displaystyle \cos(x)} , sin ( x ) {\displaystyle \sin(x)} , as explicitly shown in the asymptotic expansion. The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency). 
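The definition H(1)α = Jα + iYα can be exercised directly at a half-integer order, where an elementary closed form is available for comparison: H(1)1/2(x) = −i √(2/(πx)) e^(ix), which follows from the half-integer formulas for J and Y. A hedged Python sketch:

```python
import math, cmath

# Build H^(1)_alpha = J_alpha + i*Y_alpha from the ascending series and
# compare with the elementary closed form at alpha = 1/2.
def bessel_j(alpha, x, terms=30):
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

def bessel_y(alpha, x):               # non-integer alpha only
    return ((bessel_j(alpha, x) * math.cos(alpha * math.pi)
             - bessel_j(-alpha, x)) / math.sin(alpha * math.pi))

def hankel1(alpha, x):
    return complex(bessel_j(alpha, x), bessel_y(alpha, x))

x = 1.0
print(hankel1(0.5, x))
print(-1j * math.sqrt(2 / (math.pi * x)) * cmath.exp(1j * x))  # agrees with the line above
```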
Using the previous relationships, they can be expressed as H α ( 1 ) ( x ) = J − α ( x ) − e − α π i J α ( x ) i sin α π , H α ( 2 ) ( x ) = J − α ( x ) − e α π i J α ( x ) − i sin α π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}} If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not: H − α ( 1 ) ( x ) = e α π i H α ( 1 ) ( x ) , H − α ( 2 ) ( x ) = e − α π i H α ( 2 ) ( x ) . {\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}} In particular, if α = m + 1/2 with m a nonnegative integer, the above relations imply directly that J − ( m + 1 2 ) ( x ) = ( − 1 ) m + 1 Y m + 1 2 ( x ) , Y − ( m + 1 2 ) ( x ) = ( − 1 ) m J m + 1 2 ( x ) . {\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}} These are useful in developing the spherical Bessel functions (see below). The Hankel functions admit the following integral representations for Re(x) > 0: H α ( 1 ) ( x ) = 1 π i ∫ − ∞ + ∞ + π i e x sinh t − α t d t , H α ( 2 ) ( x ) = − 1 π i ∫ − ∞ + ∞ − π i e x sinh t − α t d t , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}} where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis. 
=== Modified Bessel functions: Iα, Kα === The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as I α ( x ) = i − α J α ( i x ) = ∑ m = 0 ∞ 1 m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , K α ( x ) = π 2 I − α ( x ) − I α ( x ) sin α π , {\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}} when α is not an integer; when α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor. K α {\displaystyle K_{\alpha }} can be expressed in terms of Hankel functions: K α ( x ) = { π 2 i α + 1 H α ( 1 ) ( i x ) − π < arg x ≤ π 2 π 2 ( − i ) α + 1 H α ( 2 ) ( − i x ) − π 2 < arg x ≤ π {\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}} Using these two formulae, an expression for J α 2 ( z ) {\displaystyle J_{\alpha }^{2}(z)} + Y α 2 ( z ) {\displaystyle Y_{\alpha }^{2}(z)} , commonly known as Nicholson's integral or Nicholson's formula, can be obtained: J α 2 ( x ) + Y α 2 ( x ) = 8 π 2 ∫ 0 ∞ cosh ( 2 α t ) K 0 ( 2 x sinh t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,} given that the condition Re(x) > 0 is met. 
It can also be shown that J α 2 ( x ) + Y α 2 ( x ) = 8 cos ( α π ) π 2 ∫ 0 ∞ K 2 α ( 2 x sinh t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,} only when |Re(α)| < 1/2 and Re(x) ≥ 0 but not when x = 0. We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ π/2): J α ( i z ) = e α π i 2 I α ( z ) , Y α ( i z ) = e ( α + 1 ) π i 2 I α ( z ) − 2 π e − α π i 2 K α ( z ) . {\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}} Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation: x 2 d 2 y d x 2 + x d y d x − ( x 2 + α 2 ) y = 0. {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.} Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and 1/2Γ(|α|)(2/x)|α| otherwise. Two integral formulas for the modified Bessel functions are (for Re(x) > 0): I α ( x ) = 1 π ∫ 0 π e x cos θ cos α θ d θ − sin α π π ∫ 0 ∞ e − x cosh t − α t d t , K α ( x ) = ∫ 0 ∞ e − x cosh t cosh α t d t . 
{\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}} Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0): 2 K 0 ( ω ) = ∫ − ∞ ∞ e i ω t t 2 + 1 d t . {\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.} It can be proven by showing equality to the above integral definition for K0. This is done by integrating a closed curve in the first quadrant of the complex plane. Modified Bessel functions of the second kind may be represented with Bassett's integral K n ( x z ) = Γ ( n + 1 2 ) ( 2 z ) n π x n ∫ 0 ∞ cos ( x t ) d t ( t 2 + z 2 ) n + 1 2 . {\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.} Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals K 1 3 ( ξ ) = 3 ∫ 0 ∞ exp ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x , K 2 3 ( ξ ) = 1 3 ∫ 0 ∞ 3 + 2 x 2 1 + x 2 3 exp ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x . {\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}} The modified Bessel function K 1 2 ( ξ ) = ( 2 ξ / π ) − 1 / 2 exp ( − ξ ) {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an Exponential-scale mixture of normal distributions. 
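Both representations above lend themselves to a short numerical sketch: Iα from its ascending series (the Jα series without the (−1)^m factor) and Kα from the integral over a truncated range. The truncation point and step count below are illustrative choices; K1/2 has the closed form √(π/(2x)) e^(−x), which gives a convenient check.

```python
import math

# I_alpha from the ascending series; K_alpha from
# K_alpha(x) = integral_0^inf exp(-x cosh t) cosh(alpha t) dt  (Re(x) > 0),
# truncated where the integrand is negligible.
def bessel_i(alpha, x, terms=30):
    return sum((x / 2) ** (2 * m + alpha)
               / (math.factorial(m) * math.gamma(m + alpha + 1))
               for m in range(terms))

def bessel_k(alpha, x, upper=20.0, steps=4000):
    h = upper / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0   # trapezoidal weights
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(alpha * t)
    return h * total

print(bessel_i(0, 1.0))    # I_0(1) ≈ 1.266066
print(bessel_k(0.5, 1.0))  # ≈ sqrt(pi/2) * e^{-1} ≈ 0.461069
```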
The modified Bessel function of the second kind has also been called by the following names (now rare): Basset function after Alfred Barnard Basset Modified Bessel function of the third kind Modified Hankel function Macdonald function after Hector Munro Macdonald === Spherical Bessel functions: jn, yn === When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form x 2 d 2 y d x 2 + 2 x d y d x + ( x 2 − n ( n + 1 ) ) y = 0. {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.} The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by j n ( x ) = π 2 x J n + 1 2 ( x ) , y n ( x ) = π 2 x Y n + 1 2 ( x ) = ( − 1 ) n + 1 π 2 x J − n − 1 2 ( x ) . {\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}} yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions. From the relations to the ordinary Bessel functions it is directly seen that: j n ( x ) = ( − 1 ) n y − n − 1 ( x ) y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) {\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}} The spherical Bessel functions can also be written as (Rayleigh's formulas) j n ( x ) = ( − x ) n ( 1 x d d x ) n sin x x , y n ( x ) = − ( − x ) n ( 1 x d d x ) n cos x x . {\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}} The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. 
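The closed forms produced by Rayleigh's formulas can be spot-checked in a few lines. The recurrence used for the check, j_{n+1}(x) = ((2n+1)/x) j_n(x) − j_{n−1}(x), is the standard spherical Bessel recurrence (an assumption here: it is not stated in the text above):

```python
import math

# Closed forms for n = 0, 1, 2, verified against the three-term recurrence.
def j0(x): return math.sin(x) / x
def j1(x): return math.sin(x) / x**2 - math.cos(x) / x
def j2(x): return (3 / x**2 - 1) * math.sin(x) / x - 3 * math.cos(x) / x**2

x = 1.5
assert abs(j2(x) - ((3 / x) * j1(x) - j0(x))) < 1e-12  # j_2 = (3/x) j_1 - j_0
print(j0(x), j1(x), j2(x))
```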
The first few spherical Bessel functions are: j 0 ( x ) = sin x x , j 1 ( x ) = sin x x 2 − cos x x , j 2 ( x ) = ( 3 x 2 − 1 ) sin x x − 3 cos x x 2 , j 3 ( x ) = ( 15 x 3 − 6 x ) sin x x − ( 15 x 2 − 1 ) cos x x {\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}},\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}} and y 0 ( x ) = − j − 1 ( x ) = − cos x x , y 1 ( x ) = j − 2 ( x ) = − cos x x 2 − sin x x , y 2 ( x ) = − j − 3 ( x ) = ( − 3 x 2 + 1 ) cos x x − 3 sin x x 2 , y 3 ( x ) = j − 4 ( x ) = ( − 15 x 3 + 6 x ) cos x x − ( 15 x 2 − 1 ) sin x x . {\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}} The first few non-zero roots of the low-order spherical Bessel functions are tabulated in standard references. ==== Generating function ==== The spherical Bessel functions have the generating functions 1 z cos ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! j n − 1 ( z ) , 1 z sin ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! y n − 1 ( z ) . 
{\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}} ==== Finite series expansions ==== In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression: j n ( x ) = π 2 x J n + 1 2 ( x ) = = 1 2 x [ e i x ∑ r = 0 n i r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = 1 x [ sin ( x − n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r + cos ( x − n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! ( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) = ( − 1 ) n + 1 π 2 x J − ( n + 1 2 ) ( x ) = = ( − 1 ) n + 1 2 x [ e i x ∑ r = 0 n i r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = = ( − 1 ) n + 1 x [ cos ( x + n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r − sin ( x + n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! ( n − 2 r − 1 ) ! 
( 2 x ) 2 r + 1 ] {\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\frac {\pi }{2x}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}} ==== Differential relations ==== In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ... ( 1 z d d z ) m ( z n + 1 f n ( z ) ) = z n − m + 1 f n − m ( z ) , ( 1 z d d z ) m ( z − n f n ( z ) ) = ( − 1 ) m z − n − m f n + m ( z ) . {\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}} === Spherical Hankel functions: h(1)n, h(2)n === There are also spherical analogues of the Hankel functions: h n ( 1 ) ( x ) = j n ( x ) + i y n ( x ) , h n ( 2 ) ( x ) = j n ( x ) − i y n ( x ) . 
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}} There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n: h n ( 1 ) ( x ) = ( − i ) n + 1 e i x x ∑ m = 0 n i m m ! ( 2 x ) m ( n + m ) ! ( n − m ) ! , {\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},} and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = sin x/x and y0(x) = −cos x/x, and so on. The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field. === Riccati–Bessel functions: Sn, Cn, ξn, ζn === Riccati–Bessel functions only slightly differ from spherical Bessel functions: S n ( x ) = x j n ( x ) = π x 2 J n + 1 2 ( x ) C n ( x ) = − x y n ( x ) = − π x 2 Y n + 1 2 ( x ) ξ n ( x ) = x h n ( 1 ) ( x ) = π x 2 H n + 1 2 ( 1 ) ( x ) = S n ( x ) − i C n ( x ) ζ n ( x ) = x h n ( 2 ) ( x ) = π x 2 H n + 1 2 ( 2 ) ( x ) = S n ( x ) + i C n ( x ) {\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}} They satisfy the differential equation x 2 d 2 y d x 2 + ( x 2 − n ( n + 1 ) ) y = 0. 
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.} For example, this kind of differential equation appears in quantum mechanics when solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation and the Riccati–Bessel solutions also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See, e.g., Du (2004) for recent developments and references. Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn. == Asymptotic forms == The Bessel functions have the following asymptotic forms. For small arguments 0 < z ≪ α + 1 {\displaystyle 0<z\ll {\sqrt {\alpha +1}}} , one obtains, when α {\displaystyle \alpha } is not a negative integer: J α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α . {\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.} When α is a negative integer, we have J α ( z ) ∼ ( − 1 ) α ( − α ) ! ( 2 z ) α . 
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.} For the Bessel function of the second kind we have three cases: Y α ( z ) ∼ { 2 π ( ln ( z 2 ) + γ ) if α = 0 − Γ ( α ) π ( 2 z ) α + 1 Γ ( α + 1 ) ( z 2 ) α cot ( α π ) if α is a positive integer (one term dominates unless α is imaginary) , − ( − 1 ) α Γ ( − α ) π ( z 2 ) α if α is a negative integer, {\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}} where γ is the Euler–Mascheroni constant (0.5772...). For large real arguments z ≫ |α2 − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1: J α ( z ) = 2 π z ( cos ( z − α π 2 − π 4 ) + e | Im ( z ) | O ( | z | − 1 ) ) for | arg z | < π , Y α ( z ) = 2 π z ( sin ( z − α π 2 − π 4 ) + e | Im ( z ) | O ( | z | − 1 ) ) for | arg z | < π . 
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}} (For α = 1/2, the last terms in these formulas drop out completely; see the spherical Bessel functions above.) The asymptotic forms for the Hankel functions are: H α ( 1 ) ( z ) ∼ 2 π z e i ( z − α π 2 − π 4 ) for − π < arg z < 2 π , H α ( 2 ) ( z ) ∼ 2 π z e − i ( z − α π 2 − π 4 ) for − 2 π < arg z < π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}} These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z). It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). 
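As an illustrative numerical check (a Python sketch, not part of the article; the series truncation is an ad hoc choice), the leading large-argument form for J0 can be compared with values computed from the defining power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2:

```python
import math

def J0_series(x, terms=60):
    """J0 from its defining power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def J0_asymptotic(x):
    """Leading large-argument form: sqrt(2/(pi x)) * cos(x - pi/4)."""
    return math.sqrt(2 / (math.pi * x)) * math.cos(x - math.pi / 4)

x = 20.0
print(J0_series(x), J0_asymptotic(x))
```

For x = 20 the two values agree to a few parts in a thousand, consistent with the O(|z|^-1) correction term inside the parentheses.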
But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part): J α ( z ) ∼ 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg z < 0 , J α ( z ) ∼ 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg z < π , Y α ( z ) ∼ − i 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg z < 0 , Y α ( z ) ∼ i 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg z < π . {\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}} For the modified Bessel functions, Hankel developed asymptotic expansions as well: I α ( z ) ∼ e z 2 π z ( 1 − 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 − ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg z | < π 2 , K α ( z ) ∼ π 2 z e − z ( 1 + 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg z | < 3 π 2 . 
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}} There is also the asymptotic form (for large real z {\displaystyle z} ) I α ( z ) = 1 2 π z 1 + α 2 z 2 4 exp ( − α arcsinh ( α z ) + z 1 + α 2 z 2 ) ( 1 + O ( 1 z 1 + α 2 z 2 ) ) . {\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}} When α = 1/2, all the terms except the first vanish, and we have I 1 / 2 ( z ) = 2 π sinh ( z ) z ∼ e z 2 π z for | arg z | < π 2 , K 1 / 2 ( z ) = π 2 e − z z . 
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}} For small arguments 0 < | z | ≪ α + 1 {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}} , we have I α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α , K α ( z ) ∼ { − ln ( z 2 ) − γ if α = 0 Γ ( α ) 2 ( 2 z ) α if α > 0 {\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}} == Properties == For integer order α = n, Jn is often defined via a Laurent series for a generating function: e x 2 ( t − 1 t ) = ∑ n = − ∞ ∞ J n ( x ) t n {\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}} an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.) Infinite series of Bessel functions in the form ∑ ν = − ∞ ∞ J N ν + p ( x ) {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where ν , p ∈ Z , N ∈ Z + {\textstyle \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+}} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: ∑ ν = − ∞ ∞ J 3 ν + p ( x ) = 1 3 [ 1 + 2 cos ( x 3 / 2 − 2 π p / 3 ) ] {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]} . 
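Both Hansen's generating function and the N = 3 Sung series can be verified numerically with truncated sums (an illustrative Python sketch; the cutoffs and sample values are ad hoc, and `Jn` uses the ascending series with J_{-n}(x) = (-1)^n J_n(x)):

```python
import math

def Jn(n, x, terms=40):
    """Integer-order J_n from its ascending series; J_{-n}(x) = (-1)^n J_n(x)."""
    if n < 0:
        return (-1) ** (-n) * Jn(-n, x, terms)
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

x, t = 1.7, 0.6
# Hansen's generating function: exp((x/2)(t - 1/t)) = sum_n J_n(x) t^n.
lhs = math.exp((x / 2) * (t - 1 / t))
rhs = sum(Jn(n, x) * t ** n for n in range(-25, 26))
print(lhs, rhs)

# Sung series with N = 3 and p = 1.
p = 1
lhs2 = sum(Jn(3 * v + p, x) for v in range(-12, 13))
rhs2 = (1 + 2 * math.cos(x * math.sqrt(3) / 2 - 2 * math.pi * p / 3)) / 3
print(lhs2, rhs2)
```

Both pairs agree to near machine precision; the rapid decay of J_n(x) in n makes the bilateral sums converge quickly.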
More generally, the Sung series and the alternating Sung series are written as: ∑ ν = − ∞ ∞ J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin 2 π q / N e − i 2 π p q / N {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}} ∑ ν = − ∞ ∞ ( − 1 ) ν J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ( 2 q + 1 ) π / N e − i ( 2 q + 1 ) π p / N {\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}} A series expansion using Bessel functions (Kapteyn series) is 1 1 − z = 1 + 2 ∑ n = 1 ∞ J n ( n z ) . {\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).} Another important relation for integer orders is the Jacobi–Anger expansion: e i z cos ϕ = ∑ n = − ∞ ∞ i n J n ( z ) e i n ϕ {\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }} and e ± i z sin ϕ = J 0 ( z ) + 2 ∑ n = 1 ∞ J 2 n ( z ) cos ( 2 n ϕ ) ± 2 i ∑ n = 0 ∞ J 2 n + 1 ( z ) sin ( ( 2 n + 1 ) ϕ ) {\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )} which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal. More generally, a series f ( z ) = a 0 ν J ν ( z ) + 2 ⋅ ∑ k = 1 ∞ a k ν J ν + k ( z ) {\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)} is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form a k 0 = 1 2 π i ∫ | z | = c f ( z ) O k ( z ) d z {\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz} where Ok is Neumann's polynomial. 
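The Jacobi–Anger expansion above lends itself to the same kind of numerical check (an illustrative Python sketch with ad hoc truncations; helper names are not from the article):

```python
import cmath
import math

def Jn(n, x, terms=40):
    """Integer-order J_n from its ascending series; J_{-n}(x) = (-1)^n J_n(x)."""
    if n < 0:
        return (-1) ** (-n) * Jn(-n, x, terms)
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

z, phi = 2.3, 0.9
# Jacobi-Anger: exp(i z cos(phi)) = sum_n i^n J_n(z) exp(i n phi).
lhs = cmath.exp(1j * z * math.cos(phi))
rhs = sum(1j ** n * Jn(n, z) * cmath.exp(1j * n * phi) for n in range(-30, 31))
print(lhs, rhs)
```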
Selected functions admit the special representation f ( z ) = ∑ k = 0 ∞ a k ν J ν + 2 k ( z ) {\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)} with a k ν = 2 ( ν + 2 k ) ∫ 0 ∞ f ( z ) J ν + 2 k ( z ) z d z {\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz} due to the orthogonality relation ∫ 0 ∞ J α ( z ) J β ( z ) d z z = 2 π sin ( π 2 ( α − β ) ) α 2 − β 2 {\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}} More generally, if f has a branch-point near the origin of such a nature that f ( z ) = ∑ k = 0 a k J ν + k ( z ) {\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)} then L { ∑ k = 0 a k J ν + k } ( s ) = 1 1 + s 2 ∑ k = 0 a k ( s + 1 + s 2 ) ν + k {\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}} or ∑ k = 0 a k ξ ν + k = 1 + ξ 2 2 ξ L { f } ( 1 − ξ 2 2 ξ ) {\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)} where L { f } {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f. 
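For the special case ν = 0 with a single term a_0 = 1, the Laplace-transform identity above reduces to L{J0}(s) = 1/sqrt(1 + s^2), which can be checked by direct numerical integration (an illustrative Python sketch; the truncation point T and Simpson step are ad hoc choices):

```python
import math

def J0(t, terms=60):
    """J0 from its power series."""
    return sum((-1) ** m * (t / 2) ** (2 * m) / math.factorial(m) ** 2
               for m in range(terms))

def laplace_J0(s, T=30.0, n=3000):
    """Simpson's rule for integral_0^T exp(-s t) J0(t) dt; the tail beyond T
    is exponentially small for s > 0."""
    h = T / n
    f = [math.exp(-s * i * h) * J0(i * h) for i in range(n + 1)]
    return h / 3 * (f[0] + f[-1] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]))

s = 1.5
print(laplace_J0(s), 1 / math.sqrt(1 + s * s))
```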
Another way to define the Bessel functions is the Poisson representation formula and the Mehler-Sonine formula: J ν ( z ) = ( z 2 ) ν Γ ( ν + 1 2 ) π ∫ − 1 1 e i z s ( 1 − s 2 ) ν − 1 2 d s = 2 ( z 2 ) ν ⋅ π ⋅ Γ ( 1 2 − ν ) ∫ 1 ∞ sin z u ( u 2 − 1 ) ν + 1 2 d u {\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}} where ν > −1/2 and z ∈ C. This formula is useful especially when working with Fourier transforms. Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that: ∫ 0 1 x J α ( x u α , m ) J α ( x u α , n ) d x = δ m , n 2 [ J α + 1 ( u α , m ) ] 2 = δ m , n 2 [ J α ′ ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}} where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m. 
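The orthogonality relation can be illustrated numerically (a Python sketch, not part of the article). It uses the ascending series for J0 and J1, Simpson's rule with an ad hoc step, and the first two zeros of J0 (approximately 2.40483 and 5.52008, written here to full double precision):

```python
import math

def Jn(n, x, terms=40):
    """Integer-order J_n from its ascending series (n >= 0)."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    vals = [f(a + i * h) for i in range(n + 1)]
    return h / 3 * (vals[0] + vals[-1]
                    + 4 * sum(vals[1:-1:2]) + 2 * sum(vals[2:-1:2]))

u1, u2 = 2.404825557695773, 5.520078110286311  # first two zeros of J0
cross = simpson(lambda x: x * Jn(0, u1 * x) * Jn(0, u2 * x), 0.0, 1.0)
norm = simpson(lambda x: x * Jn(0, u1 * x) ** 2, 0.0, 1.0)
print(cross)                      # ~ 0, the m != n case
print(norm, Jn(1, u1) ** 2 / 2)   # the two agree, the m == n case
```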
An analogous relationship for the spherical Bessel functions follows immediately: ∫ 0 1 x 2 j α ( x u α , m ) j α ( x u α , n ) d x = δ m , n 2 [ j α + 1 ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}} If one defines a boxcar function of x that depends on a small parameter ε as: f ε ( x ) = 1 ε rect ( x − 1 ε ) {\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)} (where rect is the rectangle function) then the Hankel transform of it (of any given order α > −1/2), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x): ∫ 0 ∞ k J α ( k x ) g ε ( k ) d k = f ε ( x ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)} which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense): ∫ 0 ∞ k J α ( k x ) J α ( k ) d k = δ ( x − 1 ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)} A change of variables then yields the closure equation: ∫ 0 ∞ x J α ( u x ) J α ( v x ) d x = 1 u δ ( u − v ) {\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)} for α > −1/2. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is: ∫ 0 ∞ x 2 j α ( u x ) j α ( v x ) d x = π 2 u v δ ( u − v ) {\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)} for α > −1. 
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions: A α ( x ) d B α d x − d A α d x B α ( x ) = C α x {\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}} where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular, J α ( x ) d Y α d x − d J α d x Y α ( x ) = 2 π x {\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}} and I α ( x ) d K α d x − d I α d x K α ( x ) = − 1 x , {\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},} for α > −1. For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let 0 < j α , 1 < j α , 2 < ⋯ < j α , n < ⋯ {\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots } be all its positive zeros, then J α ( z ) = ( z 2 ) α Γ ( α + 1 ) ∏ n = 1 ∞ ( 1 − z 2 j α , n 2 ) {\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)} (There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.) === Recurrence relations === The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations 2 α x Z α ( x ) = Z α − 1 ( x ) + Z α + 1 ( x ) {\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)} and 2 d Z α ( x ) d x = Z α − 1 ( x ) − Z α + 1 ( x ) , {\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),} where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. 
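These recurrences can be illustrated numerically for integer order (an illustrative Python sketch; the series truncation and the central-difference step used for the derivative are ad hoc approximations):

```python
import math

def Jn(n, x, terms=40):
    """Integer-order J_n from its ascending series; J_{-n}(x) = (-1)^n J_n(x)."""
    if n < 0:
        return (-1) ** (-n) * Jn(-n, x, terms)
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def dJn(n, x, h=1e-6):
    """Central-difference approximation to J_n'(x) (an ad hoc stand-in)."""
    return (Jn(n, x + h) - Jn(n, x - h)) / (2 * h)

x, a = 3.7, 2
print(2 * a / x * Jn(a, x), Jn(a - 1, x) + Jn(a + 1, x))  # three-term recurrence
print(2 * dJn(a, x), Jn(a - 1, x) - Jn(a + 1, x))         # derivative recurrence
```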
In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that ( 1 x d d x ) m [ x α Z α ( x ) ] = x α − m Z α − m ( x ) , ( 1 x d d x ) m [ Z α ( x ) x α ] = ( − 1 ) m Z α + m ( x ) x α + m . {\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}} Modified Bessel functions follow similar relations: e ( x 2 ) ( t + 1 t ) = ∑ n = − ∞ ∞ I n ( x ) t n {\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}} and e z cos θ = I 0 ( z ) + 2 ∑ n = 1 ∞ I n ( z ) cos n θ {\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta } and 1 2 π ∫ 0 2 π e z cos ( m θ ) + y cos θ d θ = I 0 ( z ) I 0 ( y ) + 2 ∑ n = 1 ∞ I n ( z ) I m n ( y ) . {\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).} The recurrence relation reads C α − 1 ( x ) − C α + 1 ( x ) = 2 α x C α ( x ) , C α − 1 ( x ) + C α + 1 ( x ) = 2 d d x C α ( x ) , {\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}} where Cα denotes Iα or eαiπKα. These recurrence relations are useful for discrete diffusion problems. === Transcendence === In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative J'ν(x)/Jν(x) are transcendental numbers when ν is rational and x is algebraic and nonzero. 
The same proof also implies that Γ ( v + 1 ) ( 2 / x ) v J v ( x ) {\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)} is transcendental under the same assumptions. === Sums with Bessel functions === The product of two Bessel functions admits the following sum: ∑ ν = − ∞ ∞ J ν ( x ) J n − ν ( y ) = J n ( x + y ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),} ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( y ) = J n ( y − x ) . {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).} From these equalities it follows that ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( x ) = δ n , 0 {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}} and as a consequence ∑ ν = − ∞ ∞ J ν 2 ( x ) = 1. {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.} These sums can be extended to include a term multiplier that is a polynomial function of the index. For example, ∑ ν = − ∞ ∞ ν J ν ( x ) J ν + n ( x ) = x 2 ( δ n , 1 + δ n , − 1 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),} ∑ ν = − ∞ ∞ ν J ν 2 ( x ) = 0 , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,} ∑ ν = − ∞ ∞ ν 2 J ν ( x ) J ν + n ( x ) = x 2 ( δ n , − 1 − δ n , 1 ) + x 2 4 ( δ n , − 2 + 2 δ n , 0 + δ n , 2 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),} ∑ ν = − ∞ ∞ ν 2 J ν 2 ( x ) = x 2 2 . {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.} == Multiplication theorem == The Bessel functions obey a multiplication theorem λ − ν J ν ( λ z ) = ∑ n = 0 ∞ 1 n ! 
( ( 1 − λ 2 ) z 2 ) n J ν + n ( z ) , {\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),} where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are λ − ν I ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( λ 2 − 1 ) z 2 ) n I ν + n ( z ) {\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)} and λ − ν K ν ( λ z ) = ∑ n = 0 ∞ ( − 1 ) n n ! ( ( λ 2 − 1 ) z 2 ) n K ν + n ( z ) . {\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).} == Zeros of the Bessel function == === Bourget's hypothesis === Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929. === Transcendence === Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). 
It is also known that all roots of the higher derivatives J ν ( n ) ( x ) {\displaystyle J_{\nu }^{(n)}(x)} for n ≤ 18 are transcendental, except for the special values J 1 ( 3 ) ( ± 3 ) = 0 {\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0} and J 0 ( 4 ) ( ± 3 ) = 0 {\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0} . === Numerical approaches === For numerical studies of the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004). === Numerical values === The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively. == History == === Waves and elasticity problems === The first appearance of a Bessel function is in the work of Daniel Bernoulli in 1732, in his analysis of a vibrating string, a problem tackled earlier by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function that is now considered J 0 ( x ) {\displaystyle J_{0}(x)} . Bernoulli also developed a method to find the zeros of the function. Leonhard Euler, in 1736, found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also introduced a non-uniform chain that led to the introduction of functions now related to the modified Bessel functions I n ( x ) {\displaystyle I_{n}(x)} . In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute among Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings. Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for J ± 1 / 3 ( x ) {\displaystyle J_{\pm 1/3}(x)} . 
Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with J n ( x ) {\displaystyle J_{n}(x)} , for integer n. During the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents to the Bessel functions. Parseval, for example, found an integral representation of J 0 ( x ) {\displaystyle J_{0}(x)} using cosine. At the beginning of the 1800s, Joseph Fourier used J 0 ( x ) {\displaystyle J_{0}(x)} to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions). === Astronomical problems === In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of the work of Fourier, which was published later. In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions. == See also == == Notes == == References == == External links ==
|
Wikipedia:Indefinite product#0
|
In mathematics, the indefinite product operator is the inverse operator of Q ( f ( x ) ) = f ( x + 1 ) f ( x ) {\textstyle Q(f(x))={\frac {f(x+1)}{f(x)}}} . It is a discrete version of the geometric integral of geometric calculus, one of the non-Newtonian calculi. Thus Q ( ∏ x f ( x ) ) = f ( x ) . {\displaystyle Q\left(\prod _{x}f(x)\right)=f(x)\,.} More explicitly, if ∏ x f ( x ) = F ( x ) {\textstyle \prod _{x}f(x)=F(x)} , then F ( x + 1 ) F ( x ) = f ( x ) . {\displaystyle {\frac {F(x+1)}{F(x)}}=f(x)\,.} If F(x) is a solution of this functional equation for a given f(x), then so is CF(x) for any constant C. Therefore, each indefinite product actually represents a family of functions, differing by a multiplicative constant. == Period rule == If T {\displaystyle T} is a period of the function f ( x ) {\displaystyle f(x)} , then ∏ x f ( T x ) = C f ( T x ) x − 1 {\displaystyle \prod _{x}f(Tx)=Cf(Tx)^{x-1}} == Connection to indefinite sum == The indefinite product can be expressed in terms of the indefinite sum: ∏ x f ( x ) = exp ( ∑ x ln f ( x ) ) {\displaystyle \prod _{x}f(x)=\exp \left(\sum _{x}\ln f(x)\right)} == Alternative usage == Some authors use the phrase "indefinite product" in a slightly different but related way to describe a product in which the numerical value of the upper limit is not given, e.g. ∏ k = 1 n f ( k ) {\displaystyle \prod _{k=1}^{n}f(k)} . == Rules == ∏ x f ( x ) g ( x ) = ∏ x f ( x ) ∏ x g ( x ) {\displaystyle \prod _{x}f(x)g(x)=\prod _{x}f(x)\prod _{x}g(x)} ∏ x f ( x ) a = ( ∏ x f ( x ) ) a {\displaystyle \prod _{x}f(x)^{a}=\left(\prod _{x}f(x)\right)^{a}} ∏ x a f ( x ) = a ∑ x f ( x ) {\displaystyle \prod _{x}a^{f(x)}=a^{\sum _{x}f(x)}} == List of indefinite products == This is a list of indefinite products ∏ x f ( x ) {\textstyle \prod _{x}f(x)} . Not all functions have an indefinite product that can be expressed in terms of elementary functions. 
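As a minimal numerical sketch (Python, standard library only; not part of the article), the defining relation F(x+1)/F(x) = f(x) can be verified for two standard entries, ∏x x = C Γ(x) and ∏x a = C a^x:

```python
import math

def Q(F, x):
    """The operator Q(F)(x) = F(x+1) / F(x) that the indefinite product inverts."""
    return F(x + 1) / F(x)

# Gamma is an indefinite product of f(x) = x, since Gamma(x+1)/Gamma(x) = x.
for x in (0.5, 1.0, 2.5, 4.0):
    print(x, Q(math.gamma, x))

# C * a**x is an indefinite product of the constant function f(x) = a.
a, C = 3.0, 7.0
print(Q(lambda x: C * a ** x, 1.234))  # recovers a = 3.0
```

The multiplicative constant C drops out of the ratio, which is exactly why each indefinite product is only determined up to such a constant.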
∏ x a = C a x {\displaystyle \prod _{x}a=Ca^{x}} ∏ x x = C Γ ( x ) {\displaystyle \prod _{x}x=C\,\Gamma (x)} ∏ x x + 1 x = C x {\displaystyle \prod _{x}{\frac {x+1}{x}}=Cx} ∏ x x + a x = C Γ ( x + a ) Γ ( x ) {\displaystyle \prod _{x}{\frac {x+a}{x}}={\frac {C\,\Gamma (x+a)}{\Gamma (x)}}} ∏ x x a = C Γ ( x ) a {\displaystyle \prod _{x}x^{a}=C\,\Gamma (x)^{a}} ∏ x a x = C a x Γ ( x ) {\displaystyle \prod _{x}ax=Ca^{x}\Gamma (x)} ∏ x a x = C a x 2 ( x − 1 ) {\displaystyle \prod _{x}a^{x}=Ca^{{\frac {x}{2}}(x-1)}} ∏ x a 1 x = C a Γ ′ ( x ) Γ ( x ) {\displaystyle \prod _{x}a^{\frac {1}{x}}=Ca^{\frac {\Gamma '(x)}{\Gamma (x)}}} ∏ x x x = C e ζ ′ ( − 1 , x ) − ζ ′ ( − 1 ) = C e ψ ( − 2 ) ( z ) + z 2 − z 2 − z 2 ln ( 2 π ) = C K ( x ) {\displaystyle \prod _{x}x^{x}=C\,e^{\zeta ^{\prime }(-1,x)-\zeta ^{\prime }(-1)}=C\,e^{\psi ^{(-2)}(z)+{\frac {z^{2}-z}{2}}-{\frac {z}{2}}\ln(2\pi )}=C\,\operatorname {K} (x)} (see K-function) ∏ x Γ ( x ) = C Γ ( x ) x − 1 K ( x ) = C Γ ( x ) x − 1 e z 2 ln ( 2 π ) − z 2 − z 2 − ψ ( − 2 ) ( z ) = C G ( x ) {\displaystyle \prod _{x}\Gamma (x)={\frac {C\,\Gamma (x)^{x-1}}{\operatorname {K} (x)}}=C\,\Gamma (x)^{x-1}e^{{\frac {z}{2}}\ln(2\pi )-{\frac {z^{2}-z}{2}}-\psi ^{(-2)}(z)}=C\,\operatorname {G} (x)} (see Barnes G-function) ∏ x sexp a ( x ) = C ( sexp a ( x ) ) ′ sexp a ( x ) ( ln a ) x {\displaystyle \prod _{x}\operatorname {sexp} _{a}(x)={\frac {C\,(\operatorname {sexp} _{a}(x))'}{\operatorname {sexp} _{a}(x)(\ln a)^{x}}}} (see super-exponential function) ∏ x x + a = C Γ ( x + a ) {\displaystyle \prod _{x}x+a=C\,\Gamma (x+a)} ∏ x a x + b = C a x Γ ( x + b a ) {\displaystyle \prod _{x}ax+b=C\,a^{x}\Gamma \left(x+{\frac {b}{a}}\right)} ∏ x a x 2 + b x = C a x Γ ( x ) Γ ( x + b a ) {\displaystyle \prod _{x}ax^{2}+bx=C\,a^{x}\Gamma (x)\Gamma \left(x+{\frac {b}{a}}\right)} ∏ x x 2 + 1 = C Γ ( x − i ) Γ ( x + i ) {\displaystyle \prod _{x}x^{2}+1=C\,\Gamma (x-i)\Gamma (x+i)} ∏ x x + 1 x = C Γ ( x − i ) Γ ( x + i ) Γ ( x ) {\displaystyle \prod 
_{x}x+{\frac {1}{x}}={\frac {C\,\Gamma (x-i)\Gamma (x+i)}{\Gamma (x)}}} ∏ x csc x sin ( x + 1 ) = C sin x {\displaystyle \prod _{x}\csc x\sin(x+1)=C\sin x} ∏ x sec x cos ( x + 1 ) = C cos x {\displaystyle \prod _{x}\sec x\cos(x+1)=C\cos x} ∏ x cot x tan ( x + 1 ) = C tan x {\displaystyle \prod _{x}\cot x\tan(x+1)=C\tan x} ∏ x tan x cot ( x + 1 ) = C cot x {\displaystyle \prod _{x}\tan x\cot(x+1)=C\cot x} == See also == Indefinite sum Product integral List of derivatives and integrals in alternative calculi Fractal derivative == References == == Further reading == http://reference.wolfram.com/mathematica/ref/Product.html -Indefinite products with Mathematica [1] - bug in Maple V to Maple 8 handling of indefinite product Markus Müller. How to Add a Non-Integer Number of Terms, and How to Produce Unusual Infinite Summations Markus Mueller, Dierk Schleicher. Fractional Sums and Euler-like Identities == External links == Non-Newtonian calculus website
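The identities above can be spot-checked numerically: by definition, an indefinite product F(x) = ∏_x f(x) satisfies F(x+1)/F(x) = f(x). A minimal Python sketch (the language and the helper name `ratio` are illustrative choices, not part of the article) verifying ∏_x x = C Γ(x) and ∏_x a^x = C a^(x(x−1)/2):

```python
import math

def ratio(F, x):
    """F(x + 1) / F(x) -- recovers f(x) when F is an indefinite product of f."""
    return F(x + 1) / F(x)

# Candidate antidifference-product for f(x) = x is F(x) = Gamma(x),
# since Gamma(x + 1) / Gamma(x) = x.
for x in (1.5, 3.0, 7.25):
    assert math.isclose(ratio(math.gamma, x), x)

# Candidate for f(x) = a**x is F(x) = a**(x * (x - 1) / 2),
# since the exponent increases by x when x -> x + 1.
a = 2.0
def F(x):
    return a ** (x * (x - 1) / 2)

for x in (1.0, 2.5, 4.0):
    assert math.isclose(ratio(F, x), a ** x)
```

The multiplicative constant C cancels in the ratio F(x+1)/F(x), which is why these checks do not need to fix it.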
|
Wikipedia:Independent equation#0
|
An independent equation is an equation in a system of simultaneous equations which cannot be derived algebraically from the other equations. The concept typically arises in the context of linear equations. If it is possible to duplicate one of the equations in a system by multiplying each of the other equations by some number (potentially a different number for each equation) and summing the resulting equations, then that equation is dependent on the others. But if this is not possible, then that equation is independent of the others. If an equation is independent of the other equations in its system, then it provides information beyond that which is provided by the other equations. In contrast, if an equation is dependent on the others, then it provides no information not contained in the others collectively, and the equation can be dropped from the system without any information loss. The number of independent equations in a system equals the rank of the augmented matrix of the system—the system's coefficient matrix with one additional column appended, that column being the column vector of constants. The number of independent equations in a system of consistent equations (a system that has at least one solution) can never be greater than the number of unknowns. Equivalently, if a system has more independent equations than unknowns, it is inconsistent and has no solutions. The concepts of dependence and independence of systems are partially generalized in numerical linear algebra by the condition number, which (roughly) measures how close a system of equations is to being dependent (a condition number of infinity indicates a dependent system, while a system of orthogonal equations is maximally independent, with a condition number close to 1). == See also == Linear algebra Indeterminate system Independent variable == References ==
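The rank criterion above can be illustrated directly; the following sketch (pure Python, with all function names my own) counts the independent equations in a system by row-reducing its augmented matrix exactly over the rationals:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via exact Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0  # number of pivots found so far
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# System: x + y = 2, 2x + 2y = 4, x - y = 0.
# The second equation is twice the first, so only two are independent.
augmented = [[1, 1, 2], [2, 2, 4], [1, -1, 0]]
assert rank(augmented) == 2
```

Exact `Fraction` arithmetic avoids the floating-point tolerance questions that the condition number is designed to address.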
|
Wikipedia:Indeterminate (variable)#0
|
In mathematics, an indeterminate or formal variable is a variable (a symbol, usually a letter) that is used purely formally in a mathematical expression, but does not stand for any value. In analysis, a mathematical expression such as 3 x 2 + 4 x {\displaystyle 3x^{2}+4x} is usually taken to represent a quantity whose value is a function of its variable x {\displaystyle x} , and the variable itself is taken to represent an unknown or changing quantity. Two such functional expressions are considered equal whenever their value is equal for every possible value of x {\displaystyle x} within the domain of the functions. In algebra, however, expressions of this kind are typically taken to represent objects in themselves, elements of some algebraic structure – here a polynomial, element of a polynomial ring. A polynomial can be formally defined as the sequence of its coefficients, in this case [ 0 , 4 , 3 ] {\displaystyle [0,4,3]} , and the expression 3 x 2 + 4 x {\displaystyle 3x^{2}+4x} or more explicitly 0 x 0 + 4 x 1 + 3 x 2 {\displaystyle 0x^{0}+4x^{1}+3x^{2}} is just a convenient alternative notation, with powers of the indeterminate x {\displaystyle x} used to indicate the order of the coefficients. Two such formal polynomials are considered equal whenever their coefficients are the same. Sometimes these two concepts of equality disagree. Some authors reserve the word variable to mean an unknown or changing quantity, and strictly distinguish the concepts of variable and indeterminate. Other authors indiscriminately use the name variable for both. Indeterminates occur in polynomials, rational fractions (ratios of polynomials), formal power series, and, more generally, in expressions that are viewed as independent objects. A fundamental property of an indeterminate is that it can be substituted with any mathematical expressions to which the same operations apply as the operations applied to the indeterminate. 
Some authors of abstract algebra textbooks define an indeterminate over a ring R as an element of a larger ring that is transcendental over R. This uncommon definition implies that every transcendental number and every nonconstant polynomial must be considered as indeterminates. == Polynomials == A polynomial in an indeterminate X {\displaystyle X} is an expression of the form a 0 + a 1 X + a 2 X 2 + … + a n X n {\displaystyle a_{0}+a_{1}X+a_{2}X^{2}+\ldots +a_{n}X^{n}} , where the a i {\displaystyle a_{i}} are called the coefficients of the polynomial. Two such polynomials are equal only if the corresponding coefficients are equal. In contrast, two polynomial functions in a variable x {\displaystyle x} may be equal or not at a particular value of x {\displaystyle x} . For example, the functions f ( x ) = 2 + 3 x , g ( x ) = 5 + 2 x {\displaystyle f(x)=2+3x,\quad g(x)=5+2x} are equal when x = 3 {\displaystyle x=3} and not equal otherwise. But the two polynomials 2 + 3 X , 5 + 2 X {\displaystyle 2+3X,\quad 5+2X} are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact, 2 + 3 X = a + b X {\displaystyle 2+3X=a+bX} does not hold unless a = 2 {\displaystyle a=2} and b = 3 {\displaystyle b=3} . This is because X {\displaystyle X} is not, and does not designate, a number. The distinction is subtle, since a polynomial in X {\displaystyle X} can be changed to a function in x {\displaystyle x} by substitution. But the distinction is important because information may be lost when this substitution is made. For example, when working in modulo 2, we have that: 0 − 0 2 = 0 , 1 − 1 2 = 0 , {\displaystyle 0-0^{2}=0,\quad 1-1^{2}=0,} so the polynomial function x − x 2 {\displaystyle x-x^{2}} is identically equal to 0 for x {\displaystyle x} having any value in the modulo-2 system. However, the polynomial X − X 2 {\displaystyle X-X^{2}} is not the zero polynomial, since the coefficients, 0, 1 and −1, respectively, are not all zero. 
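The modulo-2 example above can be made concrete by comparing coefficient lists with function values; a small Python sketch (illustrative, with invented names):

```python
# The polynomial X - X**2 over GF(2), stored as its coefficient list
# [a0, a1, a2] -- here 0 + 1*X + (-1)*X**2, with -1 == 1 mod 2.
p = [0, 1, -1 % 2]          # coefficient list [0, 1, 1]
zero = [0, 0, 0]            # the zero polynomial

# As formal polynomials they differ: the coefficients are not all zero.
assert p != zero

# As functions on GF(2) = {0, 1} they agree at every point.
def evaluate(coeffs, x, modulus=2):
    return sum(c * x**k for k, c in enumerate(coeffs)) % modulus

assert all(evaluate(p, x) == evaluate(zero, x) for x in (0, 1))
```

This is exactly the information lost by substitution: distinct coefficient sequences can induce the same function on a finite domain.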
== Formal power series == A formal power series in an indeterminate X {\displaystyle X} is an expression of the form a 0 + a 1 X + a 2 X 2 + … {\displaystyle a_{0}+a_{1}X+a_{2}X^{2}+\ldots } , where no value is assigned to the symbol X {\displaystyle X} . This is similar to the definition of a polynomial, except that an infinite number of the coefficients may be nonzero. Unlike the power series encountered in calculus, questions of convergence are irrelevant (since there is no function at play). So power series that would diverge for values of x {\displaystyle x} , such as 1 + x + 2 x 2 + 6 x 3 + … + n ! x n + … {\displaystyle 1+x+2x^{2}+6x^{3}+\ldots +n!x^{n}+\ldots \,} , are allowed. == As generators == Indeterminates are useful in abstract algebra for generating mathematical structures. For example, given a field K {\displaystyle K} , the set of polynomials with coefficients in K {\displaystyle K} is the polynomial ring with polynomial addition and multiplication as operations. In particular, if two indeterminates X {\displaystyle X} and Y {\displaystyle Y} are used, then the polynomial ring K [ X , Y ] {\displaystyle K[X,Y]} also uses these operations, and convention holds that X Y = Y X {\displaystyle XY=YX} . Indeterminates may also be used to generate a free algebra over a commutative ring A {\displaystyle A} . For instance, with two indeterminates X {\displaystyle X} and Y {\displaystyle Y} , the free algebra A ⟨ X , Y ⟩ {\displaystyle A\langle X,Y\rangle } includes sums of strings in X {\displaystyle X} and Y {\displaystyle Y} , with coefficients in A {\displaystyle A} , and with the understanding that X Y {\displaystyle XY} and Y X {\displaystyle YX} are not necessarily identical (since free algebra is by definition non-commutative). == See also == Indeterminate equation Indeterminate form Indeterminate system == Notes == == References == Herstein, I. N. (1975). Topics in Algebra. Wiley. ISBN 047102371X. McCoy, Neal H. 
(1960), Introduction To Modern Algebra, Boston: Allyn and Bacon, LCCN 68015225
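The free algebra A⟨X, Y⟩ described under "As generators" can be modeled by taking words (tuples of generator names) as basis elements, so that multiplication concatenates words and XY and YX stay distinct; a minimal Python sketch (all names invented for illustration):

```python
from collections import Counter

def mul(p, q):
    """Multiply two free-algebra elements, each a mapping word -> coefficient.

    Words are tuples of generator names; multiplication concatenates words,
    so the product is non-commutative by construction."""
    out = Counter()
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            out[w1 + w2] += c1 * c2
    return dict(out)

X = {("X",): 1}
Y = {("Y",): 1}

# In the free (non-commutative) algebra, XY and YX are distinct basis words.
assert mul(X, Y) == {("X", "Y"): 1}
assert mul(X, Y) != mul(Y, X)

# In the commutative polynomial ring K[X, Y] one would instead normalize
# each word, e.g. by sorting it, identifying XY with YX.
assert tuple(sorted(("X", "Y"))) == tuple(sorted(("Y", "X")))
```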
|
Wikipedia:Index of fractal-related articles#0
|
This is a list of fractal topics, by Wikipedia page. See also the list of dynamical systems and differential equations topics. 1/f noise Apollonian gasket Attractor Box-counting dimension Cantor distribution Cantor dust Cantor function Cantor set Cantor space Chaos theory Coastline Constructal theory Dimension Dimension theory Dragon curve Fatou set Fractal Fractal antenna Fractal art Fractal compression Fractal flame Fractal landscape Fractal transform Fractint Graftal Iterated function system Horseshoe map How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension Julia set Koch snowflake L-system Lebesgue covering dimension Lévy C curve Lévy flight List of fractals by Hausdorff dimension Lorenz attractor Lyapunov fractal Mandelbrot set Menger sponge Minkowski–Bouligand dimension Multifractal analysis Olbers' paradox Perlin noise Power law Rectifiable curve Scale-free network Self-similarity Sierpinski carpet Sierpiński curve Sierpinski triangle Space-filling curve T-square (fractal) Topological dimension
|
Wikipedia:Indian Statistical Institute#0
|
The Indian Statistical Institute (ISI) is a public research university headquartered in Kolkata, India, with centers in New Delhi, Bengaluru, Chennai and Tezpur. It was declared an Institute of National Importance by the Government of India under the Indian Statistical Institute Act, 1959. Established in 1931, it functions under the Ministry of Statistics and Programme Implementation of the Government of India. The primary activities of ISI are research and training in statistics, the development of theoretical statistics and its applications in various natural and social sciences. Key areas of research at ISI are statistics, mathematics, theoretical computer science and mathematical economics. It is one of the few research-oriented Indian institutes offering courses at both the undergraduate and graduate level. Apart from the degree courses, ISI offers a few diploma and certificate courses, special diploma courses for international students via ISEC, and special courses in collaboration with CSO for training probationary officers of the Indian Statistical Service (ISS). == History == ISI's origin can be traced back to the Statistical Laboratory in Presidency College, Kolkata, set up by Mahalanobis, who worked in the Physics Department of the college in the 1920s. During 1913–1915, he did his Tripos in Mathematics and Physics at the University of Cambridge, where he came across Biometrika, a journal of statistics founded by Karl Pearson. From 1915, he taught physics at Presidency College, but his interest in statistics grew under the guidance of polymath Brajendranath Seal. Many colleagues of Mahalanobis took an interest in statistics, and the group grew in the Statistical Laboratory.
Considering the extensive application of statistics in solving various problems in real life such as analyzing multivariate anthropometric data, applying sample surveys as a method of data collection, analyzing meteorological data, estimating crop yield etc., this group, particularly Mahalanobis and his younger colleagues S. S. Bose and H. C. Sinha, felt the necessity of forming a specialized institute to facilitate research and learning of statistics. On 17 December 1931, Mahalanobis held a meeting with Pramatha Nath Banerji (Minto Professor of Economics), Nikhil Ranjan Sen (Khaira Professor of Applied Mathematics) and Sir R. N. Mukherjee. This meeting led to the establishment of the Indian Statistical Institute (ISI), which was formally registered on 28 April 1932, as a non-profit distributing learned society under the Societies Registration Act XXI of 1860. Later, the institute was registered under the West Bengal Societies Registration Act XXVI of 1961, amended in 1964. Mukherjee accepted the role of the president of ISI and held this position until his death in 1936. In 1953, ISI was relocated to a property owned by Professor Mahalanobis, named "Amrapali", in Baranagar, which is now a municipality at the northern outskirts of Kolkata. In 1931, Mahalanobis was the only person working at ISI, and he managed it with an annual expenditure of Rs. 250. It gradually grew with the pioneering work of a group of his colleagues including S. S. Bose, Samarendra Kumar Mitra (Head of the Computing Machines and Electronics Laboratory and designer of India's first computer), J. M. Sengupta, Raj Chandra Bose, Samarendra Nath Roy, K. R. Nair, R. R. Bahadur, Gopinath Kallianpur, D. B. Lahiri, and Anil Kumar Gain. Pitamber Pant, who had received training in statistics at the institute, went on to become a secretary to the first prime minister of India, Jawaharlal Nehru, and was a great source of help and support to the institute. The institute started a training section in 1938.
In due course, many of the early workers left the ISI for careers in the United States or for positions in the public and private sectors in India. By the 1940s, the ISI was internationally known and was taken as a model when the first institute of statistics was set up in the United States by Gertrude Cox – perhaps the only time an institute in a developing country was used as a model in a developed country. At the request of the government of India, in 1950 ISI designed and planned a comprehensive socio-economic national sample survey covering rural India. The organisation named National Sample Survey (NSS) was founded in 1950 for conducting this survey. The field work was performed by the Directorate of NSS, functioning under the Ministry of Finance, whereas the other tasks such as planning of the survey, training of field workers, review, data processing and tabulation were executed by ISI. In 1961, the Directorate of NSS started functioning under the Department of Statistics of the government of India, and later, in 1971, the design and analysis wing of NSS was shifted from ISI to the Department of Statistics, forming the National Sample Survey Organisation (NSSO). J. B. S. Haldane joined the ISI as a research professor in August 1957 and stayed on until February 1961, when he had a falling out with ISI Director P. C. Mahalanobis over Haldane's going on a much-publicized hunger strike to protest the United States pressuring U.S. National Science Fair winners Gary Botting and Susan Brown into not attending an ISI banquet to which many prominent Indian scientists had been invited. Haldane helped the ISI grow in biometrics. Haldane also played a key role in developing the structure and content of the courses offered by ISI. Until 1959, ISI was associated with the University of Calcutta.
By 'The Indian Statistical Institute Act 1959' of the Parliament of India, amended in 1995, ISI was declared an institute of national importance, and was authorised to hold examinations and to grant degrees and diplomas in Statistics, Mathematics, Computer Science, Quantitative Economics, and in any other subject related to statistics as identified by the institute from time to time. ISI is a public university, as the same act also states that ISI would be funded by the Central Government of India. ISI had by the 1960s started establishing special service units in New Delhi, Chennai, Bangalore, Mumbai and Hyderabad to provide consultancy services to business, industry and governmental public service organisations in the areas of statistical process control, operations research and industrial engineering. Additionally, Bangalore had a Documentation Research and Training Centre (DRTC). In the early 1970s, the Delhi and Bangalore units were converted to teaching centres. In 2008, ISI Chennai was upgraded to a teaching centre. In 2011, ISI added a new centre in Tezpur. == Campuses == The major objectives of the ISI are to facilitate research and training in statistics, to engage in the development of statistical theory and the application of statistical techniques – both in national-level planning and in the theoretical development of the natural and social sciences – to participate in the process of data collection and analysis, and to operate related projects in planning and in improving the efficiency of management and production. The Sanskrit phrase भिन्नेष्वैक्यस्य दर्शणम् (Bhinneswaykyasya Darshanam), which literally means the philosophy of unity in diversity, is incorporated in the logo of the institute, and is the motto of ISI. ISI Kolkata is the headquarters, with centers at New Delhi, Bengaluru and Chennai. Tezpur, the fourth center of ISI, was inaugurated in 2011.
=== ISI, Kolkata === ISI Kolkata has a campus consisting of six addresses at 201 through 206 Barrackpore Trunk Road, Bonhooghly (Baranagar). These include a house, which was an erstwhile office of the National Sample Survey Organisation (NSSO) of India. The ISI Kolkata campus is eco-friendly, as conceived by Mahalanobis. Hollow bricks that protect from heat and noise were used with minimum use of reinforced concrete, to avoid radiation. No bitumen-basalt combination was used on the roads inside the ISI campuses. This helps in reducing radiation and preserving rain-water to maintain equilibrium in the ground-water level. The Kolkata campus offers bachelor's level degree courses in Statistics (B. Stat) and Statistical Data Science (BSDS), and master's degree courses in Statistics (M.Stat), Mathematics (M.Math), Computer Science (MTech), Cryptology & Security (MTech), Quality Reliability and Operations Research (MTech) and Quantitative Economics (M.S.). Major divisions and units are: Statistics and Mathematics Unit (SMU), Human Genetics Unit (HGU), Physics and Applied Mathematics Unit (PAMU), Geological Studies Unit (GSU), Advanced Computation and MicroElectronics Unit (ACMU), Computer Vision and Pattern Recognition Unit (CVPRU), Machine Intelligence Unit (MIU), Electronics and Communication Sciences Unit (ECSU), Applied Statistics Unit (ASU), Economic Research Unit (ERU), Linguistic Research Unit (LRU), Sociology Research Unit (SRU), Psychometry Research Unit (PRU) and Population Studies Unit (PSU). The Kolkata campus houses the International Statistical Education Centre (ISEC), which opened in 1950. This centre provides training in statistics to sponsored students mainly from the Middle East, South and South East Asia, the Far East and the Commonwealth Countries of Africa. The centre also offers various short-term courses in statistics and related subjects.
The Center for Soft Computing Research: A National Facility, an associate institute of the Indian Statistical Institute established in Kolkata in 2005, is unique in the country. Apart from conducting basic research, it offers a 3-month course and promotes less endowed institutes by providing fellowships and research grants. The Central Library of ISI is located at Kolkata with branches at the other facilities. The library has over 200,000 volumes of books and journals with a special emphasis on the field of statistics and related studies. The main branch also has a collection of official reports, reprints, maps, and microfilms. The library receives over a thousand new technical and scientific journals every year. The Library has databases on CD-ROM and is working on further digitization of the collection. The library has a separate collection of works on the topics of mathematics and statistics called the Eastern Regional Centre of NBHM collection, funded by grants from the National Board for Higher Mathematics. It also plans to set up a research unit in artificial intelligence. === ISI, Delhi === The ISI campus at New Delhi was established in 1974 and was shifted to the present campus in 1975. The Delhi campus offers a bachelor's level degree course in Statistical Data Science (BSDS), the master's level courses Master of Statistics (M. Stat) and Master of Science (M. S.) in Quantitative Economics, and doctoral programs. === ISI, Bangalore === The Bengaluru centre of ISI started with a Statistical Quality Control and Operations Research (SQC & OR) unit in 1954. The Documentation Research and Training Centre (DRTC) here became operational in 1962 with honorary professor S. R. Ranganathan as the head. Prof. Mahalanobis planned to start a full-fledged centre of ISI here around the mid-sixties.
In 1966, the then Government of Karnataka granted ISI 30 acres of forest land full of eucalyptus trees, next to the upcoming campus of the Bangalore University, located on the Mysore Road on the outskirts of the city. However, after the death of Prof. Mahalanobis in 1972, the project of establishing the Bengaluru centre was temporarily shelved. The project was revived during 1976–78. Concrete proposals were made to the Government of India to get grants for the development of the land already in possession of ISI, along with the construction of an academic block with a library and offices. In the meantime, a building was rented on Church Street, in Bengaluru downtown, and various activities of the Bengaluru centre started in September 1978. The Economic Analysis Unit (EAU) and the Statistics and Mathematics Unit were established. The SQC&OR Unit and the DRTC unit, which were functioning from other rented buildings at that time, joined this new Centre. Once construction of the administrative block at the new campus was completed, the various units moved to the new campus in May 1985. The Bengaluru centre was formally declared a centre of ISI in September 1996. The Systems Science and Informatics Unit (SSIU) was established in 2009. The Bengaluru centre has by now become an institution for academic activities in Mathematics, Statistics, Computer Science, SQC and Operations Research, Library and Information Science, and Quantitative Economics. The Bengaluru campus offers the bachelor-level courses Bachelor of Statistical Data Science (BSDS) and Bachelor of Mathematics (B.Math), the master-level courses Master of Mathematics (M.Math), Master of Science (M. S.) in Library and Information Science and Master of Science (M. S.) in Quality Management Science, and doctoral programs. == Academics == Traditionally, ISI offers fewer programs (and admits fewer students) than most other degree-granting academic institutions.
Following the empowerment for granting degrees in the subject of Statistics as per the ISI Act 1959, in 1960 ISI initiated the bachelor-level degree program Bachelor of Statistics and the master-level degree program Master of Statistics, and also began awarding research-level degrees such as PhD and DSc. Later, ISI started offering Master of Technology (MTech) courses in Computer Science and in Quality, Reliability & Operations Research (QR&OR); these courses received recognition from the All India Council for Technical Education (AICTE). When the ISI Act of 1959 was amended by the Parliament of India in 1995, ISI was empowered to confer degrees and diplomas in subjects such as Mathematics, Quantitative Economics, Computer Science, and other subjects related to Statistics and Operations Research as determined by ISI from time to time. Apart from the degree courses, ISI offers a few diploma and certificate courses, special diploma courses for international students via ISEC, and special courses in collaboration with CSO for training probationary officers of the Indian Statistical Service (ISS). === Research Divisions and Centers === R C Bose Centre for Cryptology and Security Center for Soft Computing Research Center for Artificial Intelligence and Machine Learning Technology Innovation Hub Centre for Research on Economics and Data Analysis === Degree courses === ISI offers three undergraduate programs, viz. Bachelor of Statistics (Honours) (B.Stat), Bachelor of Mathematics (Honours) (B. Math) and Bachelor of Statistical Data Science (Honours), and eight graduate programs, viz. Master of Statistics (M. Stat), Master of Mathematics (M.
Math), Master of Science in Quantitative Economics (MSQE), Master of Science in Library and Information Science (MSLIS), Master of Science in Quality Management Science (MSQMS), Master of Technology in Computer Science (MTech–CS), Master of Technology in Cryptology & Security (MTech-CrS) and Master of Technology in Quality, Reliability and Operations Research (MTech–QROR). ISI also offers four PG Diploma programs, viz. Postgraduate Diploma in Agricultural and Rural Management with Statistical Methods [PGDARSMA], Postgraduate Diploma in Statistical Methods & Analytics [PGDSMA], Post Graduate Diploma in Business Analytics [PGDBA] and PG Diploma in Applied Statistics [PGDAS]. The B.Stat (Hons) and B.Math (Hons) courses are of 3 years' duration, the BSDS (Hons) course is of 4 years' duration, and the master's level courses are of 2 years' duration. For all undergraduate and graduate level courses, the academic year is divided into two semesters. Except for sponsored candidates of MTech courses, ISI students are not required to pay any tuition fees. Conditional on performance beyond a threshold, all students and research fellows receive stipends, fellowships and contingency/book grants. Students demonstrating outstanding performances are rewarded at the end of the semesters. ISI campuses provide hostel accommodations with recreational facilities and limited medical facilities available free of cost. === Admissions === Applicants of all degree courses are required to go through written admission tests and/or interviews. ISI conducts the written tests at various examination centers across India. Only in a few cases may candidates be called for the interview directly, e.g. applicants to the MTech Computer Science course having a GATE score above a threshold. Candidates applying to doctoral research programmes who have been awarded (or qualified for) a Junior Research Fellowship by UGC / CSIR / NBHM etc.
are also required to clear the ISI admission test or an equivalent separate test and interview conducted by the relevant JRF selection committee of the institute if they wish to obtain a PhD from the Indian Statistical Institute. === International Statistical Education Centre === In 1950, ISI, in collaboration with the International Statistical Institute, UNESCO and the Government of India, set up the International Statistical Education Centre (ISEC) to impart knowledge of theoretical and applied statistics to participants from the Middle East, East and South-East Asia, the Far East and Commonwealth countries of Africa. The main training course offered by ISEC is meant for international students, preferably graduates with proficiency in English and Mathematics. ISEC, located on the Kolkata campus of ISI, functions with support from the Ministry of External Affairs and the Ministry of Statistics and Programme Implementation of the Government of India. === Publications === Sankhya, the statistical journal published by ISI, was founded in 1933, along the lines of Karl Pearson's Biometrika. Mahalanobis was the founding editor. Each volume of Sankhya consists of four issues; two of them are in Series A, containing articles on theoretical statistics, probability theory and stochastic processes, and the other two issues form Series B, containing articles on applied statistics, i.e. applied probability, applied stochastic processes, econometrics and statistical computing. === Rankings === According to India Education Review, as of 2012 no Indian university is among the world's top 200 universities; the ascribed ranking of ISI is 186. The web ranking of this institute, according to 4ICU (4 International Colleges and Universities), is 1693. According to the web ranking published by Webometrics Ranking of World Universities, ISI currently holds the world rank of 1352.
In the subject-wise academic world ranking of Computer Science, the Indian Statistical Institute features in the 101–150 category. The Indian Statistical Institute, Kolkata is ranked 2nd in Computer Science research by mean citation rate, p-index and h-index among all universities in India. The NIRF (National Institutional Ranking Framework) ranked it 75th overall in India in 2024. == Student life == === Student Fest === Integration is the annual techno-cultural fest of the Indian Statistical Institute, Kolkata, usually held during the first and second weekend of January each year. Chaos is the annual techno-cultural fest of the Indian Statistical Institute, Bangalore, usually held during the last weekend of March each year. === Placement === Alumni of ISI – including recipients of the PhD degree – are employed in government and semi-government departments, industrial establishments and research institutions, in India and other countries. There is a placement cell in ISI Kolkata that organizes campus interviews by prospective employers at the various campuses of ISI. In recent years, a high percentage of ISI alumni have been absorbed into jobs in analytics, banking, finance and the software industry. == Statistical Quality Control (SQC) and Operations Research (OR) units == Since the mid-forties, ISI has pioneered research and application of Statistical Quality Control (SQC) in India. Walter A. Shewhart, the statistician known as the father of SQC, and other experts of this field visited ISI over the years. The first Statistical Quality Control and Operations Research (SQC & OR) unit of ISI was set up in Mumbai in 1953, followed by the Bangalore and Kolkata units in 1954. In 1976, this unit was transformed into the SQC & OR Division, which now operates seven units, located at various industrial centres in India – Kolkata, Delhi, Bangalore, Chennai, Pune, Mumbai and Vadodara.
These units partake in technical consultancy with public and private organisations, in addition to performing research and training activities. The branch at Giridih was set up in 1931 and it has two operational units, viz. the Sociological Research Unit and the Agricultural Research Unit. == Achievements == Over the years, researchers of ISI have made fundamental contributions in various fields of Statistics such as Design of Experiments, Sample Survey, Multivariate statistics and Computer Science. Mahalanobis introduced the Mahalanobis distance, a measure used in multivariate statistics and other related fields. Raj Chandra Bose, who is known for his contributions in coding theory, worked on Design of Experiments during his tenure at ISI, and was one of the three mathematicians who disproved Euler's conjecture on orthogonal Latin squares. Anil Kumar Bhattacharya is credited with the introduction of the measures Bhattacharyya distance and Bhattacharya coefficient. Samarendra Nath Roy is known for his pioneering contributions in multivariate statistics. Among colleagues of Mahalanobis, other notable contributors were K. R. Nair in Design of experiments, Jitendra Mohan Sengupta in Sample Survey, Ajit Dasgupta in Demography and Ramkrishna Mukherjea in Quantitative Sociology. C. R. Rao's contributions during his association with ISI include two theorems of Statistical Inference, known as the Cramér–Rao inequality and the Rao–Blackwell theorem, and the introduction of orthogonal arrays in Design of Experiments. Anil Kumar Gain is known for his contributions to the Pearson product-moment correlation coefficient with his colleague Sir Ronald Fisher at the University of Cambridge. In 1953, India's first indigenous computer was designed by Samarendra Kumar Mitra, who headed the Computing Machines and Electronics Laboratory at ISI Calcutta.
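The Mahalanobis distance mentioned above is defined as sqrt((x − μ)ᵀ Σ⁻¹ (x − μ)) for a mean vector μ and covariance matrix Σ; a pure-Python sketch for the two-dimensional case (an illustration of the formula, not code associated with ISI):

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance sqrt((x - mu)^T Sigma^{-1} (x - mu)) in 2-D."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # Inverse of the 2x2 covariance matrix: (1/det) * [[d, -b], [-c, a]].
    inv = ((d / det, -b / det), (-c / det, a / det))
    q = (dx * (inv[0][0] * dx + inv[0][1] * dy)
         + dy * (inv[1][0] * dx + inv[1][1] * dy))
    return math.sqrt(q)

# With the identity covariance the measure reduces to Euclidean distance.
assert math.isclose(mahalanobis_2d((3, 4), (0, 0), ((1, 0), (0, 1))), 5.0)

# Stretching the variance along one axis shrinks distances along that axis.
assert math.isclose(mahalanobis_2d((3, 0), (0, 0), ((9, 0), (0, 1))), 1.0)
```

The second assertion shows the defining feature of the measure: distance is counted in units of standard deviation per direction, not in raw coordinates.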
The Indian Statistical Institute also hosted the first two digital computers in South Asia: the HEC-2M from England in 1956, and the URAL from the Soviet Union in 1959. These were also among the earliest digital computers in Asia (outside Japan). During 1953–1956, distinguished scientists such as Ronald Fisher, Norbert Wiener and Yuri Linnik visited ISI. Norbert Wiener collaborated with Gopinath Kallianpur on topics including ergodic theory, prediction theory and generalized harmonic analysis. In 1962, during his month-long visit to ISI, Soviet mathematician Andrey Kolmogorov wrote his notable paper on Kolmogorov complexity, which was published in Sankhya in 1963. Other distinguished scientists, including Jerzy Neyman, Walter A. Shewhart, W. Edwards Deming and Abraham Wald, visited ISI during the tenure of P. C. Mahalanobis. === Planning Commission === The second five-year plan of India was a brainchild of Mahalanobis. The plan followed the Mahalanobis model, an economic development model developed by Mahalanobis in 1953. The plan attempted to determine the optimal allocation of investment between productive sectors in order to maximise long-run economic growth. It used the prevalent state-of-the-art techniques of operations research and optimisation, as well as novel applications of statistical models developed at ISI. This second five-year plan shifted the focus from agriculture to industrialisation, with the objective of attaining self-reliance for the economy of India. Domestic production of industrial products was encouraged in this plan, particularly in the development of the public sector. The two-pronged strategy devised in this plan targeted rapid growth of heavy industry, while keeping emphasis on the growth of small and cottage industries. B. S. Minhas and K. S. Parikh, both from the Planning Unit of ISI Delhi, played key roles in the Planning Commission of the Government of India. 
Minhas, who joined the Planning Unit in 1962 and retired as a distinguished scientist in 1989, was a member of the Planning Commission during 1971–74. Parikh, who was a member of the Planning Commission during 2004–09, chaired the Integrated Energy Policy Committee of the commission, was a member of the Economic Advisory Council of India during the tenure of five prime ministers, also played a role in the establishment of the Department of Atomic Energy, and was a key advisor to the government on energy issues. === Computer science === In India, the first analog computer was designed by Samarendra Kumar Mitra and built by Ashish Kumar Maity at ISI in 1953, for use in computing numerical solutions of simultaneous linear equations using a modified version of Gauss–Seidel iteration. In 1955, the first digital computer in India was procured by ISI. This machine was of a model named HEC-2M, manufactured by the British Tabulating Machine Company (BTM). As per the agreement with BTM, ISI had to take care of its installation and maintenance before it became operational in 1956. While this HEC-2M machine and the URAL-1 machine, which was bought from Russia in 1959, remained operational until 1963, ISI began development of India's first second-generation digital computer in collaboration with Jadavpur University (JU). This joint collaboration, led by the head of the Computing Machines and Electronics Laboratory at ISI, Samarendra Kumar Mitra, produced the transistor-driven machine ISIJU-1, which became operational in 1964. The first annual convention of the Computer Society of India (CSI) was hosted by ISI in 1965. The Computer and Communication Sciences division of ISI produced many eminent scientists such as Samarendra Kumar Mitra (its original founder), Dwijesh Dutta Majumdar, Sankar Kumar Pal, Bidyut Baran Chaudhuri, Nikhil R. Pal, Bhabani P. Sinha, Bhargab B. Bhattacharya, Malay K. Kundu, Sushmita Mitra, Bhabatosh Chanda, C. A. 
Murthy, Sanghamitra Bandyopadhyay and many others. ISI is regarded as one of the foremost centres for research in computer science in India. The Knowledge-based Computer Systems project (KBCS), funded jointly by the Department of Electronics and Information Technology (DoE), Government of India, and the UNDP since 1986, has a nodal centre at ISI Kolkata. This unit is responsible for research in the areas of image processing, pattern recognition, computer vision and artificial intelligence. === Social sciences === R. L. Brahmachari, known for his work in many fields such as agricultural sciences, zoology, botany and biometrics, did much of his work at ISI. The institute has done some pioneering work and research in anthropology and palaeontology. A trove of dinosaur fossils was discovered by a team led by ISI researchers in the early 1960s. The scattered fossils were recovered and the partial skeleton was reconstructed at ISI's Baranagar campus. It turned out to be a unique species and was named Barapasaurus tagorei (Dinosauria: Sauropoda), after Rabindranath Tagore, and was mounted in the Geology Museum at the Kolkata campus of the institute. The Linguistic Research Unit (LRU) of ISI was involved in the study of speech pathology. Đorđe Kostić of this laboratory was a distinguished scientist. He invented a unique hearing aid, called SAFA (Selective Auditory Frequency Amplifier), that tailors its frequency range to the needs of the particular hearing-impaired person. == Administration == ISI functions as an autonomous institute under the Ministry of Statistics and Programme Implementation (MOSPI), which is the nodal ministry of the Government of India that ensures the functioning of ISI in accordance with The Indian Statistical Institute Act 1959. The ISI Council is the highest policy-making body of the institute. 
Members of this council include the president of ISI, the chairman of ISI, representatives of the Government of India including one representative of the RBI, scientists not employed at ISI including one representative of the Planning Commission of India and one representative of the UGC, representatives of the scientific and non-scientific workers of ISI, and representatives of the academic staff of ISI, including the director of ISI and the Dean of Studies of ISI. Bimal Kumar Roy was the director until 10 June 2015; in a move unique in the history of the institute, he was removed from his post via a notice posted on the website of the Ministry of Statistics and Planning. He was sacked over financial and administrative irregularities. == Visits by heads of state == Soviet premier Nikita Khrushchev visited ISI during his visit to India in 1955. Zhou Enlai, the Prime Minister of China, and Ho Chi Minh, the President of Vietnam, visited ISI on 9 September 1956 and 13 February 1958 respectively, during their visits to India. == Citations == == References ==
Wikipedia:Indian mathematics#0
Indian mathematics emerged in the Indian subcontinent from 1200 BCE until the end of the 18th century. In the classical period of Indian mathematics (400 CE to 1200 CE), important contributions were made by scholars like Aryabhata, Brahmagupta, Bhaskara II, Varāhamihira, and Madhava. The decimal number system in use today was first recorded in Indian mathematics. Indian mathematicians made early contributions to the study of the concept of zero as a number, negative numbers, arithmetic, and algebra. In addition, trigonometry was further advanced in India, and, in particular, the modern definitions of sine and cosine were developed there. These mathematical concepts were transmitted to the Middle East, China, and Europe and led to further developments that now form the foundations of many areas of mathematics. Ancient and medieval Indian mathematical works, all composed in Sanskrit, usually consisted of a section of sutras in which a set of rules or problems were stated with great economy in verse in order to aid memorization by a student. This was followed by a second section consisting of a prose commentary (sometimes multiple commentaries by different scholars) that explained the problem in more detail and provided justification for the solution. In the prose section, the form (and therefore its memorization) was not considered so important as the ideas involved. All mathematical works were orally transmitted until approximately 500 BCE; thereafter, they were transmitted both orally and in manuscript form. The oldest extant mathematical document produced on the Indian subcontinent is the birch bark Bakhshali Manuscript, discovered in 1881 in the village of Bakhshali, near Peshawar (modern day Pakistan) and is likely from the 7th century CE. A later landmark in Indian mathematics was the development of the series expansions for trigonometric functions (sine, cosine, and arc tangent) by mathematicians of the Kerala school in the 15th century CE. 
Their work, completed two centuries before the invention of calculus in Europe, provided what is now considered the first example of a power series (apart from geometric series). However, they did not formulate a systematic theory of differentiation and integration, nor is there any evidence of their results being transmitted outside Kerala. == Prehistory == Excavations at Harappa, Mohenjo-daro and other sites of the Indus Valley civilisation have uncovered evidence of the use of "practical mathematics". The people of the Indus Valley Civilization manufactured bricks whose dimensions were in the proportion 4:2:1, considered favourable for the stability of a brick structure. They used a standardised system of weights based on the ratios: 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, with the unit weight equaling approximately 28 grams (and approximately equal to the English ounce or Greek uncia). They mass-produced weights in regular geometrical shapes, which included hexahedra, barrels, cones, and cylinders, thereby demonstrating knowledge of basic geometry. The inhabitants of the Indus civilisation also tried to standardise measurement of length to a high degree of accuracy. They designed a ruler – the Mohenjo-daro ruler – whose unit of length (approximately 1.32 inches or 3.4 centimetres) was divided into ten equal parts. Bricks manufactured in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length. Hollow cylindrical objects made of shell and found at Lothal (2200 BCE) and Dholavira have been shown to have the ability to measure angles in a plane, as well as to determine the position of stars for navigation. == Vedic period == === Samhitas and Brahmanas === The texts of the Vedic Period provide evidence for the use of large numbers. By the time of the Yajurvedasaṃhitā (1200–900 BCE), numbers as high as 10^12 were being included in the texts. 
For example, the mantra (sacred recitation) at the end of the annahoma ("food-oblation rite") performed during the aśvamedha, and uttered just before, during, and just after sunrise, invokes powers of ten from a hundred to a trillion: Hail to śata ("hundred," 10^2), hail to sahasra ("thousand," 10^3), hail to ayuta ("ten thousand," 10^4), hail to niyuta ("hundred thousand," 10^5), hail to prayuta ("million," 10^6), hail to arbuda ("ten million," 10^7), hail to nyarbuda ("hundred million," 10^8), hail to samudra ("billion," 10^9, literally "ocean"), hail to madhya ("ten billion," 10^10, literally "middle"), hail to anta ("hundred billion," 10^11, lit., "end"), hail to parārdha ("one trillion," 10^12, lit., "beyond parts"), hail to the uṣas (dawn), hail to the vyuṣṭi (twilight), hail to udeṣyat (the one which is going to rise), hail to udyat (the one which is rising), hail to udita (the one which has just risen), hail to svarga (the heaven), hail to martya (the world), hail to all. A solution to partial fractions was known to the Rigvedic people, as stated in the Purusha Sukta (RV 10.90.4): With three-fourths Puruṣa went up: one-fourth of him again was here. The Satapatha Brahmana (c. 7th century BCE) contains rules for ritual geometric constructions that are similar to the Sulba Sutras. === Śulba Sūtras === The Śulba Sūtras (literally, "Aphorisms of the Chords" in Vedic Sanskrit) (c. 700–400 BCE) list rules for the construction of sacrificial fire altars. Most mathematical problems considered in the Śulba Sūtras spring from "a single theological requirement," that of constructing fire altars which have different shapes but occupy the same area. The altars were required to be constructed of five layers of burnt brick, with the further condition that each layer consist of 200 bricks and that no two adjacent layers have congruent arrangements of bricks. 
According to Hayashi, the Śulba Sūtras contain "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." "The diagonal rope (akṣṇayā-rajju) of an oblong (rectangle) produces both which the flank (pārśvamāni) and the horizontal (tiryaṇmānī) <ropes> produce separately." Since the statement is a sūtra, it is necessarily compressed and what the ropes produce is not elaborated on, but the context clearly implies the square areas constructed on their lengths, and would have been explained so by the teacher to the student. They contain lists of Pythagorean triples, which are particular cases of Diophantine equations. They also contain statements (that with hindsight we know to be approximate) about squaring the circle and "circling the square." Baudhayana (c. 8th century BCE) composed the Baudhayana Sulba Sutra, the best-known Sulba Sutra, which contains examples of simple Pythagorean triples, such as: (3, 4, 5), (5, 12, 13), (8, 15, 17), (7, 24, 25), and (12, 35, 37), as well as a statement of the Pythagorean theorem for the sides of a square: "The rope which is stretched across the diagonal of a square produces an area double the size of the original square." It also contains the general statement of the Pythagorean theorem (for the sides of a rectangle): "The rope stretched along the length of the diagonal of a rectangle makes an area which the vertical and horizontal sides make together." Baudhayana gives an expression for the square root of two: 2 ≈ 1 + 1 3 + 1 3 ⋅ 4 − 1 3 ⋅ 4 ⋅ 34 = 1.4142156 … {\displaystyle {\sqrt {2}}\approx 1+{\frac {1}{3}}+{\frac {1}{3\cdot 4}}-{\frac {1}{3\cdot 4\cdot 34}}=1.4142156\ldots } The expression is accurate up to five decimal places, the true value being 1.41421356... 
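Baudhayana's rule can be checked directly with exact rational arithmetic; a quick sketch:

```python
from fractions import Fraction

# Baudhayana's rule: sqrt(2) ~ 1 + 1/3 + 1/(3*4) - 1/(3*4*34)
approx = Fraction(1) + Fraction(1, 3) + Fraction(1, 3 * 4) - Fraction(1, 3 * 4 * 34)

print(approx)                          # 577/408
print(float(approx))                   # 1.4142156...
print(abs(float(approx) - 2 ** 0.5))   # error is about 2.1e-06, i.e. five correct decimals
```

The sum collapses to the single fraction 577/408, a convergent of the continued fraction of the square root of 2, which is why the rule is so accurate for its size.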
This expression is similar in structure to the expression found on a Mesopotamian tablet from the Old Babylonian period (1900–1600 BCE): 2 ≈ 1 + 24 60 + 51 60 2 + 10 60 3 = 1.41421297 … {\displaystyle {\sqrt {2}}\approx 1+{\frac {24}{60}}+{\frac {51}{60^{2}}}+{\frac {10}{60^{3}}}=1.41421297\ldots } which expresses √2 in the sexagesimal system, and which is also accurate up to 5 decimal places. According to mathematician S. G. Dani, the Babylonian cuneiform tablet Plimpton 322 written c. 1850 BCE "contains fifteen Pythagorean triples with quite large entries, including (13500, 12709, 18541) which is a primitive triple, indicating, in particular, that there was sophisticated understanding on the topic" in Mesopotamia in 1850 BCE. "Since these tablets predate the Sulbasutras period by several centuries, taking into account the contextual appearance of some of the triples, it is reasonable to expect that similar understanding would have been there in India." Dani goes on to say: As the main objective of the Sulvasutras was to describe the constructions of altars and the geometric principles involved in them, the subject of Pythagorean triples, even if it had been well understood, may still not have featured in the Sulvasutras. The occurrence of the triples in the Sulvasutras is comparable to mathematics that one may encounter in an introductory book on architecture or another similar applied area, and would not correspond directly to the overall knowledge on the topic at that time. Since, unfortunately, no other contemporaneous sources have been found, it may never be possible to settle this issue satisfactorily. In all, three Sulba Sutras were composed. The remaining two, the Manava Sulba Sutra composed by Manava (fl. 750–650 BCE) and the Apastamba Sulba Sutra, composed by Apastamba (c. 600 BCE), contained results similar to the Baudhayana Sulba Sutra. === Vyakarana === The Vedic period saw the work of Sanskrit grammarian Pāṇini (c. 520–460 BCE). 
His grammar includes a precursor of the Backus–Naur form (used in the description of programming languages). == Pingala (300 BCE – 200 BCE) == Among the scholars of the post-Vedic period who contributed to mathematics, the most notable is Pingala (piṅgalá) (fl. 300–200 BCE), a music theorist who authored the Chhandas Shastra (chandaḥ-śāstra, also Chhandas Sutra chhandaḥ-sūtra), a Sanskrit treatise on prosody. Pingala's work also contains the basic ideas of Fibonacci numbers (called maatraameru). Although the Chandah sutra hasn't survived in its entirety, a 10th-century commentary on it by Halāyudha has. Halāyudha, who refers to the Pascal triangle as Meru-prastāra (literally "the staircase to Mount Meru"), has this to say: Draw a square. Beginning at half the square, draw two other similar squares below it; below these two, three other squares, and so on. The marking should be started by putting 1 in the first square. Put 1 in each of the two squares of the second line. In the third line put 1 in the two squares at the ends and, in the middle square, the sum of the digits in the two squares lying above it. In the fourth line put 1 in the two squares at the ends. In the middle ones put the sum of the digits in the two squares above each. Proceed in this way. Of these lines, the second gives the combinations with one syllable, the third the combinations with two syllables, ... The text also indicates that Pingala was aware of the combinatorial identity: ( n 0 ) + ( n 1 ) + ( n 2 ) + ⋯ + ( n n − 1 ) + ( n n ) = 2 n {\displaystyle {n \choose 0}+{n \choose 1}+{n \choose 2}+\cdots +{n \choose n-1}+{n \choose n}=2^{n}} === Kātyāyana === Kātyāyana (c. 3rd century BCE) is notable for being the last of the Vedic mathematicians. He wrote the Katyayana Sulba Sutra, which presented much geometry, including the general Pythagorean theorem and a computation of the square root of 2 correct to five decimal places. 
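Halāyudha's Meru-prastāra construction and the combinatorial identity attributed to Pingala can be sketched in a few lines (the function name is our own):

```python
from math import comb

def meru_prastara(rows):
    """First `rows` rows of the Meru-prastara (Pascal's triangle):
    row n holds the binomial coefficients C(n, 0) ... C(n, n)."""
    return [[comb(n, k) for k in range(n + 1)] for n in range(rows)]

for row in meru_prastara(5):
    print(row)

# Row n counts metres with k long syllables out of n, and sums to 2^n --
# the identity stated in the text:
assert all(sum(row) == 2 ** n for n, row in enumerate(meru_prastara(10)))
```

Each entry is the sum of the two entries above it, exactly as Halāyudha's square-by-square instructions describe.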
== Jain mathematics (400 BCE – 200 CE) == Although Jainism as a religion and philosophy predates its most famous exponent, the great Mahaviraswami (6th century BCE), most Jain texts on mathematical topics were composed after the 6th century BCE. Jain mathematicians are important historically as crucial links between the mathematics of the Vedic period and that of the "classical period." A significant historical contribution of Jain mathematicians lay in their freeing Indian mathematics from its religious and ritualistic constraints. In particular, their fascination with the enumeration of very large numbers and infinities led them to classify numbers into three classes: enumerable, innumerable and infinite. Not content with a simple notion of infinity, their texts define five different types of infinity: the infinite in one direction, the infinite in two directions, the infinite in area, the infinite everywhere, and the infinite perpetually. In addition, Jain mathematicians devised notations for simple powers (and exponents) of numbers like squares and cubes, which enabled them to define simple algebraic equations (bījagaṇita samīkaraṇa). Jain mathematicians were apparently also the first to use the word shunya (literally void in Sanskrit) to refer to zero. This word is the ultimate etymological origin of the English word "zero", as it was calqued into Arabic as ṣifr and then subsequently borrowed into Medieval Latin as zephirum, finally arriving at English after passing through one or more Romance languages (cf. French zéro, Italian zero). In addition to Surya Prajnapti, important Jain works on mathematics included the Sthānāṅga Sūtra (c. 300 BCE – 200 CE); the Anuyogadwara Sutra (c. 200 BCE – 100 CE), which includes the earliest known description of factorials in Indian mathematics; and the Ṣaṭkhaṅḍāgama (c. 2nd century CE). Important Jain mathematicians included Bhadrabahu (d. 
298 BCE), the author of two astronomical works, the Bhadrabahavi-Samhita and a commentary on the Surya Prajinapti; Yativrisham Acharya (c. 176 BCE), who authored a mathematical text called Tiloyapannati; and Umasvati (c. 150 BCE), who, although better known for his influential writings on Jain philosophy and metaphysics, composed a mathematical work called the Tattvārtha Sūtra. == Oral tradition == Mathematicians of ancient and early medieval India were almost all Sanskrit pandits (paṇḍita "learned man"), who were trained in Sanskrit language and literature, and possessed "a common stock of knowledge in grammar (vyākaraṇa), exegesis (mīmāṃsā) and logic (nyāya)." Memorisation of "what is heard" (śruti in Sanskrit) through recitation played a major role in the transmission of sacred texts in ancient India. Memorisation and recitation were also used to transmit philosophical and literary works, as well as treatises on ritual and grammar. Modern scholars of ancient India have noted the "truly remarkable achievements of the Indian pandits who have preserved enormously bulky texts orally for millennia." === Styles of memorisation === Prodigious energy was expended by ancient Indian culture in ensuring that these texts were transmitted from generation to generation with inordinate fidelity. For example, memorisation of the sacred Vedas included up to eleven forms of recitation of the same text. The texts were subsequently "proof-read" by comparing the different recited versions. Forms of recitation included the jaṭā-pāṭha (literally "mesh recitation") in which every two adjacent words in the text were first recited in their original order, then repeated in the reverse order, and finally repeated in the original order. 
The recitation thus proceeded as: In another form of recitation, dhvaja-pāṭha (literally "flag recitation") a sequence of N words were recited (and memorised) by pairing the first two and last two words and then proceeding as: The most complex form of recitation, ghana-pāṭha (literally "dense recitation"), according to Filliozat, took the form: That these methods have been effective is testified to by the preservation of the most ancient Indian religious text, the Ṛgveda (c. 1500 BCE), as a single text, without any variant readings. Similar methods were used for memorising mathematical texts, whose transmission remained exclusively oral until the end of the Vedic period (c. 500 BCE). === The Sutra genre === Mathematical activity in ancient India began as a part of a "methodological reflexion" on the sacred Vedas, which took the form of works called Vedāṇgas, or, "Ancillaries of the Veda" (7th–4th century BCE). The need to conserve the sound of sacred text by use of śikṣā (phonetics) and chhandas (metrics); to conserve its meaning by use of vyākaraṇa (grammar) and nirukta (etymology); and to correctly perform the rites at the correct time by the use of kalpa (ritual) and jyotiṣa (astrology), gave rise to the six disciplines of the Vedāṇgas. Mathematics arose as a part of the last two disciplines, ritual and astronomy (which also included astrology). Since the Vedāṇgas immediately preceded the use of writing in ancient India, they formed the last of the exclusively oral literature. They were expressed in a highly compressed mnemonic form, the sūtra (literally, "thread"): The knowers of the sūtra know it as having few phonemes, being devoid of ambiguity, containing the essence, facing everything, being without pause and unobjectionable. 
Extreme brevity was achieved through multiple means, which included using ellipsis "beyond the tolerance of natural language," using technical names instead of longer descriptive names, abridging lists by only mentioning the first and last entries, and using markers and variables. The sūtras create the impression that communication through the text was "only a part of the whole instruction. The rest of the instruction must have been transmitted by the so-called Guru-shishya parampara, 'uninterrupted succession from teacher (guru) to the student (śisya),' and it was not open to the general public" and perhaps even kept secret. The brevity achieved in a sūtra is demonstrated in the following example from the Baudhāyana Śulba Sūtra (700 BCE). The domestic fire-altar in the Vedic period was required by ritual to have a square base and be constituted of five layers of bricks with 21 bricks in each layer. One method of constructing the altar was to divide one side of the square into three equal parts using a cord or rope, to next divide the transverse (or perpendicular) side into seven equal parts, and thereby sub-divide the square into 21 congruent rectangles. The bricks were then designed to be of the shape of the constituent rectangle and the layer was created. To form the next layer, the same formula was used, but the bricks were arranged transversely. The process was then repeated three more times (with alternating directions) in order to complete the construction. In the Baudhāyana Śulba Sūtra, this procedure is described in the following words: II.64. After dividing the quadrilateral in seven, one divides the transverse [cord] in three. II.65. In another layer one places the [bricks] North-pointing. According to Filliozat, the officiant constructing the altar has only a few tools and materials at his disposal: a cord (Sanskrit, rajju, f.), two pegs (Sanskrit, śanku, m.), and clay to make the bricks (Sanskrit, iṣṭakā, f.). 
Concision is achieved in the sūtra, by not explicitly mentioning what the adjective "transverse" qualifies; however, from the feminine form of the (Sanskrit) adjective used, it is easily inferred to qualify "cord." Similarly, in the second stanza, "bricks" are not explicitly mentioned, but inferred again by the feminine plural form of "North-pointing." Finally, the first stanza, never explicitly says that the first layer of bricks are oriented in the east–west direction, but that too is implied by the explicit mention of "North-pointing" in the second stanza; for, if the orientation was meant to be the same in the two layers, it would either not be mentioned at all or be only mentioned in the first stanza. All these inferences are made by the officiant as he recalls the formula from his memory. == The written tradition: prose commentary == With the increasing complexity of mathematics and other exact sciences, both writing and computation were required. Consequently, many mathematical works began to be written down in manuscripts that were then copied and re-copied from generation to generation. India today is estimated to have about thirty million manuscripts, the largest body of handwritten reading material anywhere in the world. The literate culture of Indian science goes back to at least the fifth century B.C. ... as is shown by the elements of Mesopotamian omen literature and astronomy that entered India at that time and (were) definitely not ... preserved orally. The earliest mathematical prose commentary was that on the work, Āryabhaṭīya (written 499 CE), a work on astronomy and mathematics. The mathematical portion of the Āryabhaṭīya was composed of 33 sūtras (in verse form) consisting of mathematical statements or rules, but without any proofs. However, according to Hayashi, "this does not necessarily mean that their authors did not prove them. It was probably a matter of style of exposition." 
From the time of Bhaskara I (600 CE onwards), prose commentaries increasingly began to include some derivations (upapatti). Bhaskara I's commentary on the Āryabhaṭīya had the following structure: a rule ('sūtra') in verse by Āryabhaṭa, followed by a commentary by Bhāskara I consisting of an elucidation of the rule (derivations were still rare then, but became more common later); an example (uddeśaka), usually in verse; the setting (nyāsa/sthāpanā) of the numerical data; the working (karana) of the solution; and the verification (pratyayakaraṇa, literally "to make conviction") of the answer. Verifications became rare by the 13th century, derivations or proofs being favoured by then. Typically, for any mathematical topic, students in ancient India first memorised the sūtras, which, as explained earlier, were "deliberately inadequate" in explanatory details (in order to pithily convey the bare-bone mathematical rules). The students then worked through the topics of the prose commentary by writing (and drawing diagrams) on chalk- and dust-boards (i.e. boards covered with dust). The latter activity, a staple of mathematical work, was to later prompt the mathematician-astronomer Brahmagupta (fl. 7th century CE) to characterise astronomical computations as "dust work" (Sanskrit: dhulikarman). == Numerals and the decimal number system == It is well known that the decimal place-value system in use today was first recorded in India, then transmitted to the Islamic world, and eventually to Europe. The Syrian bishop Severus Sebokht wrote in the mid-7th century CE about the "nine signs" of the Indians for expressing numbers. However, how, when, and where the first decimal place value system was invented is not so clear. The earliest extant script used in India was the Kharoṣṭhī script used in the Gandhara culture of the north-west. It is thought to be of Aramaic origin and it was in use from the 4th century BCE to the 4th century CE. 
Almost contemporaneously, another script, the Brāhmī script, appeared on much of the sub-continent, and would later become the foundation of many scripts of South Asia and South-east Asia. Both scripts had numeral symbols and numeral systems, which were initially not based on a place-value system. The earliest surviving evidence of decimal place value numerals in India and southeast Asia is from the middle of the first millennium CE. A copper plate from Gujarat, India mentions the date 595 CE, written in a decimal place value notation, although there is some doubt as to the authenticity of the plate. Decimal numerals recording the years 683 CE have also been found in stone inscriptions in Indonesia and Cambodia, where Indian cultural influence was substantial. There are older textual sources, although the extant manuscript copies of these texts are from much later dates. Probably the earliest such source is the work of the Buddhist philosopher Vasumitra dated likely to the 1st century CE. Discussing the counting pits of merchants, Vasumitra remarks, "When [the same] clay counting-piece is in the place of units, it is denoted as one, when in hundreds, one hundred." Although such references seem to imply that his readers had knowledge of a decimal place value representation, the "brevity of their allusions and the ambiguity of their dates, however, do not solidly establish the chronology of the development of this concept." A third decimal representation was employed in a verse composition technique, later labelled Bhuta-sankhya (literally, "object numbers") used by early Sanskrit authors of technical books. Since many early technical works were composed in verse, numbers were often represented by objects in the natural or religious world that corresponded to them; this allowed a many-to-one correspondence for each number and made verse composition easier. 
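A minimal sketch of this convention in code; the word-to-digit table is a tiny hypothetical sample, and in bhūta-saṅkhyā the words enumerate digits from the least significant place upward:

```python
# Hypothetical word-value table: "veda" = 4, "teeth" = 32, "moon" = 1 are
# standard object-number values; the dictionary and function names are ours.
OBJECT_NUMBERS = {"veda": 4, "teeth": 32, "moon": 1}

def bhuta_sankhya(words):
    """Object-number words list digits from the units place upward,
    so the word list is read in reverse to build the numeral."""
    return int("".join(str(OBJECT_NUMBERS[w]) for w in reversed(words)))

print(bhuta_sankhya(["veda", "teeth", "moon"]))  # 1324
```

Because many different words can stand for the same value, a poet could pick whichever word fit the metre, which is exactly what made the scheme attractive for verse.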
According to Plofker, the number 4, for example, could be represented by the word "Veda" (since there were four of these religious texts), the number 32 by the word "teeth" (since a full set consists of 32), and the number 1 by "moon" (since there is only one moon). So, Veda/teeth/moon would correspond to the decimal numeral 1324, as the convention for numbers was to enumerate their digits from right to left. The earliest reference employing object numbers is a c. 269 CE Sanskrit text, Yavanajātaka (literally "Greek horoscopy") of Sphujidhvaja, a versification of an earlier (c. 150 CE) Indian prose adaptation of a lost work of Hellenistic astrology. Such use seems to make the case that by the mid-3rd century CE, the decimal place value system was familiar, at least to readers of astronomical and astrological texts in India. It has been hypothesized that the Indian decimal place value system was based on the symbols used on Chinese counting boards from as early as the middle of the first millennium BCE. According to Plofker, "These counting boards, like the Indian counting pits, ..., had a decimal place value structure ... Indians may well have learned of these decimal place value "rod numerals" from Chinese Buddhist pilgrims or other travelers, or they may have developed the concept independently from their earlier non-place-value system; no documentary evidence survives to confirm either conclusion." == Bakhshali Manuscript == The oldest extant mathematical manuscript in India is the Bakhshali Manuscript, a birch bark manuscript written in "Buddhist hybrid Sanskrit" in the Śāradā script, which was used in the northwestern region of the Indian subcontinent between the 8th and 12th centuries CE. The manuscript was discovered in 1881 by a farmer while digging in a stone enclosure in the village of Bakhshali, near Peshawar (then in British India and now in Pakistan). 
Of unknown authorship and now preserved in the Bodleian Library in the University of Oxford, the manuscript has recently been dated to between 224 and 383 AD. The surviving manuscript has seventy leaves, some of which are in fragments. Its mathematical content consists of rules and examples, written in verse, together with prose commentaries, which include solutions to the examples. The topics treated include arithmetic (fractions, square roots, profit and loss, simple interest, the rule of three, and regula falsi) and algebra (simultaneous linear equations and quadratic equations), and arithmetic progressions. In addition, there is a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Many of its problems are of a category known as 'equalisation problems' that lead to systems of linear equations. One example from Fragment III-5-3v is the following: One merchant has seven asava horses, a second has nine haya horses, and a third has ten camels. They are equally well off in the value of their animals if each gives two animals, one to each of the others. Find the price of each animal and the total value for the animals possessed by each merchant. The prose commentary accompanying the example solves the problem by converting it to three (under-determined) equations in four unknowns and assuming that the prices are all integers. In 2017, three samples from the manuscript were shown by radiocarbon dating to come from three different centuries: from 224 to 383 AD, 680–779 AD, and 885–993 AD. It is not known how fragments from different centuries came to be packaged together. == Classical period (400–1300) == This period is often known as the golden age of Indian Mathematics. 
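The Bakhshali "equalisation problem" quoted above reduces, as the commentary notes, to an under-determined linear system; with the commentary's integer-price assumption it can be solved by a brute-force search. The sketch below is an illustration only, not the manuscript's method, and the variable names are ours:

```python
# Brute-force sketch of the Bakhshali equalisation problem: merchants
# hold 7 asavas, 9 hayas and 10 camels; each gives one animal to each
# of the other two, after which all three holdings are equal in value.
# The system is under-determined, so (as in the prose commentary) we
# assume integer prices and search for the smallest positive solution.
def solve_bakhshali(limit=100):
    for a in range(1, limit):          # price of an asava horse
        for h in range(1, limit):      # price of a haya horse
            for c in range(1, limit):  # price of a camel
                v1 = 5 * a + h + c     # merchant 1 after the exchange
                v2 = a + 7 * h + c     # merchant 2 after the exchange
                v3 = a + h + 8 * c     # merchant 3 after the exchange
                if v1 == v2 == v3:
                    return a, h, c, v1
    return None

print(solve_bakhshali())  # → (21, 14, 12, 131)
```

Equating the three values gives 2a = 3h and 6h = 7c, so all solutions are multiples of the price ratio 21 : 14 : 12, each merchant's holding then being worth 131 units.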
This period saw mathematicians such as Aryabhata, Varahamihira, Brahmagupta, Bhaskara I, Mahavira, Bhaskara II, Madhava of Sangamagrama and Nilakantha Somayaji give broader and clearer shape to many branches of mathematics. Their contributions would spread to Asia, the Middle East, and eventually to Europe. Unlike Vedic mathematics, their works included both astronomical and mathematical contributions. In fact, mathematics of that period was included in the 'astral science' (jyotiḥśāstra) and consisted of three sub-disciplines: mathematical sciences (gaṇita or tantra), horoscope astrology (horā or jātaka) and divination (saṃhitā). This tripartite division is seen in Varāhamihira's 6th century compilation—Pancasiddhantika (literally panca, "five," siddhānta, "conclusion of deliberation", dated 575 CE)—of five earlier works, Surya Siddhanta, Romaka Siddhanta, Paulisa Siddhanta, Vasishtha Siddhanta and Paitamaha Siddhanta, which were adaptations of still earlier works of Mesopotamian, Greek, Egyptian, Roman and Indian astronomy. As explained earlier, the main texts were composed in Sanskrit verse, and were followed by prose commentaries. === Fourth to sixth centuries === Surya Siddhanta Though its authorship is unknown, the Surya Siddhanta (c. 400) contains the roots of modern trigonometry. Because it contains many words of foreign origin, some authors consider that it was written under the influence of Mesopotamia and Greece. This ancient text uses the following as trigonometric functions for the first time: Sine (Jya). Cosine (Kojya). Inverse sine (Otkram jya). Later Indian mathematicians such as Aryabhata made references to this text, while later Arabic and Latin translations were very influential in Europe and the Middle East. Chhedi calendar This Chhedi calendar (594) contains an early use of the modern place-value Hindu–Arabic numeral system now used universally. Aryabhata I Aryabhata (476–550) wrote the Aryabhatiya. 
He described the important fundamental principles of mathematics in 332 shlokas. The treatise contained: Quadratic equations Trigonometry The value of π, correct to 4 decimal places. Aryabhata also wrote the Arya Siddhanta, which is now lost. Aryabhata's contributions include: Trigonometry: (See also : Aryabhata's sine table) Introduced the trigonometric functions. Defined the sine (jya) as the modern relationship between half an angle and half a chord. Defined the cosine (kojya). Defined the versine (utkrama-jya). Defined the inverse sine (otkram jya). Gave methods of calculating their approximate numerical values. Contains the earliest tables of sine, cosine and versine values, in 3.75° intervals from 0° to 90°, to 4 decimal places of accuracy. Contains the trigonometric formula sin(n + 1)x − sin nx = sin nx − sin(n − 1)x − (1/225)sin nx. Spherical trigonometry. Arithmetic: Simple continued fractions. Algebra: Solutions of simultaneous quadratic equations. Whole number solutions of linear equations by a method equivalent to the modern method. General solution of the indeterminate linear equation . Mathematical astronomy: Accurate calculations for astronomical constants, such as the: Solar eclipse. Lunar eclipse. The formula for the sum of the cubes, which was an important step in the development of integral calculus. Varahamihira Varahamihira (505–587) produced the Pancha Siddhanta (The Five Astronomical Canons). 
He made important contributions to trigonometry, including sine and cosine tables to 4 decimal places of accuracy and the following formulas relating sine and cosine functions: sin 2 ( x ) + cos 2 ( x ) = 1 {\displaystyle \sin ^{2}(x)+\cos ^{2}(x)=1} sin ( x ) = cos ( π 2 − x ) {\displaystyle \sin(x)=\cos \left({\frac {\pi }{2}}-x\right)} 1 − cos ( 2 x ) 2 = sin 2 ( x ) {\displaystyle {\frac {1-\cos(2x)}{2}}=\sin ^{2}(x)} === Seventh and eighth centuries === In the 7th century, two separate fields, arithmetic (which included measurement) and algebra, began to emerge in Indian mathematics. The two fields would later be called pāṭī-gaṇita (literally "mathematics of algorithms") and bīja-gaṇita (lit. "mathematics of seeds," with "seeds"—like the seeds of plants—representing unknowns with the potential to generate, in this case, the solutions of equations). Brahmagupta, in his astronomical work Brāhma Sphuṭa Siddhānta (628 CE), included two chapters (12 and 18) devoted to these fields. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral: Brahmagupta's theorem: If a cyclic quadrilateral has diagonals that are perpendicular to each other, then the perpendicular line drawn from the point of intersection of the diagonals to any side of the quadrilateral always bisects the opposite side. Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalisation of Heron's formula), as well as a complete description of rational triangles (i.e. triangles with rational sides and rational areas). 
Brahmagupta's formula: The area, A, of a cyclic quadrilateral with sides of lengths a, b, c, d, respectively, is given by A = ( s − a ) ( s − b ) ( s − c ) ( s − d ) {\displaystyle A={\sqrt {(s-a)(s-b)(s-c)(s-d)}}\,} where s, the semiperimeter, given by s = a + b + c + d 2 . {\displaystyle s={\frac {a+b+c+d}{2}}.} Brahmagupta's Theorem on rational triangles: A triangle with rational sides a , b , c {\displaystyle a,b,c} and rational area is of the form: a = u 2 v + v , b = u 2 w + w , c = u 2 v + u 2 w − ( v + w ) {\displaystyle a={\frac {u^{2}}{v}}+v,\ \ b={\frac {u^{2}}{w}}+w,\ \ c={\frac {u^{2}}{v}}+{\frac {u^{2}}{w}}-(v+w)} for some rational numbers u , v , {\displaystyle u,v,} and w {\displaystyle w} . Chapter 18 contained 103 Sanskrit verses which began with rules for arithmetical operations involving zero and negative numbers and is considered the first systematic treatment of the subject. The rules (which included a + 0 = a {\displaystyle a+0=\ a} and a × 0 = 0 {\displaystyle a\times 0=0} ) were all correct, with one exception: 0 0 = 0 {\displaystyle {\frac {0}{0}}=0} . Later in the chapter, he gave the first explicit (although still not completely general) solution of the quadratic equation: a x 2 + b x = c {\displaystyle \ ax^{2}+bx=c} To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value. This is equivalent to: x = 4 a c + b 2 − b 2 a {\displaystyle x={\frac {{\sqrt {4ac+b^{2}}}-b}{2a}}} Also in chapter 18, Brahmagupta was able to make progress in finding (integral) solutions of Pell's equation, x 2 − N y 2 = 1 , {\displaystyle \ x^{2}-Ny^{2}=1,} where N {\displaystyle N} is a nonsquare integer. 
He did this by discovering the following identity: Brahmagupta's Identity: ( x 2 − N y 2 ) ( x ′ 2 − N y ′ 2 ) = ( x x ′ + N y y ′ ) 2 − N ( x y ′ + x ′ y ) 2 {\displaystyle \ (x^{2}-Ny^{2})(x'^{2}-Ny'^{2})=(xx'+Nyy')^{2}-N(xy'+x'y)^{2}} which was a generalisation of an earlier identity of Diophantus: Brahmagupta used his identity to prove the following lemma: Lemma (Brahmagupta): If x = x 1 , y = y 1 {\displaystyle x=x_{1},\ \ y=y_{1}\ \ } is a solution of x 2 − N y 2 = k 1 , {\displaystyle \ \ x^{2}-Ny^{2}=k_{1},} and, x = x 2 , y = y 2 {\displaystyle x=x_{2},\ \ y=y_{2}\ \ } is a solution of x 2 − N y 2 = k 2 , {\displaystyle \ \ x^{2}-Ny^{2}=k_{2},} , then: x = x 1 x 2 + N y 1 y 2 , y = x 1 y 2 + x 2 y 1 {\displaystyle x=x_{1}x_{2}+Ny_{1}y_{2},\ \ y=x_{1}y_{2}+x_{2}y_{1}\ \ } is a solution of x 2 − N y 2 = k 1 k 2 {\displaystyle \ x^{2}-Ny^{2}=k_{1}k_{2}} He then used this lemma to both generate infinitely many (integral) solutions of Pell's equation, given one solution, and state the following theorem: Theorem (Brahmagupta): If the equation x 2 − N y 2 = k {\displaystyle \ x^{2}-Ny^{2}=k} has an integer solution for any one of k = ± 4 , ± 2 , − 1 {\displaystyle \ k=\pm 4,\pm 2,-1} then Pell's equation: x 2 − N y 2 = 1 {\displaystyle \ x^{2}-Ny^{2}=1} also has an integer solution. Brahmagupta did not actually prove the theorem, but rather worked out examples using his method. The first example he presented was: Example (Brahmagupta): Find integers x , y {\displaystyle \ x,\ y\ } such that: x 2 − 92 y 2 = 1 {\displaystyle \ x^{2}-92y^{2}=1} In his commentary, Brahmagupta added, "a person solving this problem within a year is a mathematician." The solution he provided was: x = 1151 , y = 120 {\displaystyle \ x=1151,\ y=120} Bhaskara I Bhaskara I (c. 600–680) expanded the work of Aryabhata in his books titled Mahabhaskariya, Aryabhatiya-bhashya and Laghu-bhaskariya. He produced: Solutions of indeterminate equations. A rational approximation of the sine function. 
A formula for calculating the sine of an acute angle without the use of a table, correct to two decimal places. === Ninth to twelfth centuries === Virasena Virasena (8th century) was a Jain mathematician in the court of Rashtrakuta King Amoghavarsha of Manyakheta, Karnataka. He wrote the Dhavala, a commentary on Jain mathematics, which: Deals with the concept of ardhaccheda, the number of times a number could be halved, and lists various rules involving this operation. This coincides with the binary logarithm when applied to powers of two, but differs on other numbers, more closely resembling the 2-adic order. Virasena also gave: The derivation of the volume of a frustum by a sort of infinite procedure. It is thought that much of the mathematical material in the Dhavala can be attributed to previous writers, especially Kundakunda, Shamakunda, Tumbulura, Samantabhadra, and Bappadeva, who wrote between 200 and 600 CE. Mahavira Mahavira Acharya (c. 800–870) from Karnataka, the last of the notable Jain mathematicians, lived in the 9th century and was patronised by the Rashtrakuta king Amoghavarsha. He wrote a book titled Ganit Saar Sangraha on numerical mathematics, and also wrote treatises about a wide range of mathematical topics. These include the mathematics of: Zero Squares Cubes square roots, cube roots, and the series extending beyond these Plane geometry Solid geometry Problems relating to the casting of shadows Formulae derived to calculate the area of an ellipse and quadrilateral inside a circle. Mahavira also: Asserted that the square root of a negative number did not exist Gave the sum of a series whose terms are squares of an arithmetical progression, and gave empirical rules for area and perimeter of an ellipse. Solved cubic equations. Solved quartic equations. Solved some quintic equations and higher-order polynomials. 
Gave the general solutions of the higher order polynomial equations: a x n = q {\displaystyle \ ax^{n}=q} a x n − 1 x − 1 = p {\displaystyle a{\frac {x^{n}-1}{x-1}}=p} Solved indeterminate quadratic equations. Solved indeterminate cubic equations. Solved indeterminate higher order equations. Shridhara Shridhara (c. 870–930), who lived in Bengal, wrote the books titled Nav Shatika, Tri Shatika and Pati Ganita. He gave: A good rule for finding the volume of a sphere. The formula for solving quadratic equations. The Pati Ganita is a work on arithmetic and measurement. It deals with various operations, including: Elementary operations Extracting square and cube roots. Fractions. Eight rules given for operations involving zero. Methods of summation of different arithmetic and geometric series, which were to become standard references in later works. Manjula Aryabhata's equations were elaborated in the 10th century by Manjula (also Munjala), who realised that the expression sin w ′ − sin w {\displaystyle \ \sin w'-\sin w} could be approximately expressed as ( w ′ − w ) cos w {\displaystyle \ (w'-w)\cos w}. This was elaborated by his later successor Bhāskara II, thereby finding the derivative of the sine. Aryabhata II Aryabhata II (c. 920–1000) wrote a commentary on Shridhara, and an astronomical treatise Maha-Siddhanta. The Maha-Siddhanta has 18 chapters, and discusses: Numerical mathematics (Ank Ganit). Algebra. Solutions of indeterminate equations (kuttaka). Shripati Shripati Mishra (1019–1066) wrote the books Siddhanta Shekhara, a major work on astronomy in 19 chapters, and Ganit Tilaka, an incomplete arithmetical treatise in 125 verses based on a work by Shridhara. He worked mainly on: Permutations and combinations. General solution of the simultaneous indeterminate linear equation. He was also the author of Dhikotidakarana, a work of twenty verses on: Solar eclipse. Lunar eclipse. The Dhruvamanasa is a work of 105 verses on: Calculating planetary longitudes. Eclipses. 
Planetary transits. Nemichandra Siddhanta Chakravati Nemichandra Siddhanta Chakravati (c. 1100) authored a mathematical treatise titled Gome-mat Saar. Bhaskara II Bhāskara II (1114–1185) was a mathematician-astronomer who wrote a number of important treatises, namely the Siddhanta Shiromani, Lilavati, Bijaganita, Gola Addhaya, Griha Ganitam and Karan Kautoohal. A number of his contributions were later transmitted to the Middle East and Europe. His contributions include: Arithmetic: Interest computation Arithmetical and geometrical progressions Plane geometry Solid geometry The shadow of the gnomon Solutions of combinations Gave a proof for division by zero being infinity. Algebra: The recognition of a positive number having two square roots. Surds. Operations with products of several unknowns. The solutions of: Quadratic equations. Cubic equations. Quartic equations. Equations with more than one unknown. Quadratic equations with more than one unknown. The general form of Pell's equation using the chakravala method. The general indeterminate quadratic equation using the chakravala method. Indeterminate cubic equations. Indeterminate quartic equations. Indeterminate higher-order polynomial equations. Geometry: Gave a proof of the Pythagorean theorem. Calculus: Preliminary concept of differentiation Discovered the differential coefficient. Stated an early form of Rolle's theorem, a special case of the mean value theorem (one of the most important theorems of calculus and analysis). Derived the differential of the sine function, although he did not perceive the notion of the derivative. Computed π, correct to five decimal places. Calculated the length of the Earth's revolution around the Sun to 9 decimal places. 
Trigonometry: Developments of spherical trigonometry The trigonometric formulas: sin ( a + b ) = sin ( a ) cos ( b ) + sin ( b ) cos ( a ) {\displaystyle \ \sin(a+b)=\sin(a)\cos(b)+\sin(b)\cos(a)} sin ( a − b ) = sin ( a ) cos ( b ) − sin ( b ) cos ( a ) {\displaystyle \ \sin(a-b)=\sin(a)\cos(b)-\sin(b)\cos(a)} == Medieval and early modern mathematics (1300–1800) == === Navya-Nyaya === The Navya-Nyāya or Neo-Logical darśana (school) of Indian philosophy was founded in the 13th century by the philosopher Gangesha Upadhyaya of Mithila. It was a development of the classical Nyāya darśana. Other influences on Navya-Nyāya were the work of earlier philosophers Vācaspati Miśra (900–980 CE) and Udayana (late 10th century). Gangeśa's book Tattvacintāmaṇi ("Thought-Jewel of Reality") was written partly in response to Śrīharśa's Khandanakhandakhādya, a defence of Advaita Vedānta, which had offered a set of thorough criticisms of Nyāya theories of thought and language. Navya-Nyāya developed a sophisticated language and conceptual scheme that allowed it to raise, analyze, and solve problems in logic and epistemology. It involves naming each object to be analyzed, identifying a distinguishing characteristic for the named object, and verifying the appropriateness of the defining characteristic using pramanas. === Kerala School === The Kerala school of astronomy and mathematics was founded by Madhava of Sangamagrama in Kerala, South India and included among its members: Parameshvara, Neelakanta Somayaji, Jyeshtadeva, Achyuta Pisharati, Melpathur Narayana Bhattathiri and Achyuta Panikkar. It flourished between the 14th and 16th centuries and the original discoveries of the school seem to have ended with Narayana Bhattathiri (1559–1632). In attempting to solve astronomical problems, the Kerala school astronomers independently created a number of important mathematical concepts. 
The most important results, series expansion for trigonometric functions, were given in Sanskrit verse in a book by Neelakanta called Tantrasangraha and a commentary on this work called Tantrasangraha-vakhya of unknown authorship. The theorems were stated without proof, but proofs for the series for sine, cosine, and inverse tangent were provided a century later in the work Yuktibhāṣā (c.1500–c.1610), written in Malayalam, by Jyesthadeva. Their discovery of these three important series expansions of calculus—several centuries before calculus was developed in Europe by Isaac Newton and Gottfried Leibniz—was an achievement. However, the Kerala School did not invent calculus, because, while they were able to develop Taylor series expansions for the important trigonometric functions, they developed neither a theory of differentiation or integration, nor the fundamental theorem of calculus. The results obtained by the Kerala school include: The (infinite) geometric series: 1 1 − x = 1 + x + x 2 + x 3 + x 4 + ⋯ for | x | < 1 {\displaystyle {\frac {1}{1-x}}=1+x+x^{2}+x^{3}+x^{4}+\cdots {\text{ for }}|x|<1} A semi-rigorous proof (see "induction" remark below) of the result: 1 p + 2 p + ⋯ + n p ≈ n p + 1 p + 1 {\displaystyle 1^{p}+2^{p}+\cdots +n^{p}\approx {\frac {n^{p+1}}{p+1}}} for large n. Intuitive use of mathematical induction, however, the inductive hypothesis was not formulated or employed in proofs. Applications of ideas from (what was to become) differential and integral calculus to obtain (Taylor–Maclaurin) infinite series for sin x, cos x, and arctan x. The Tantrasangraha-vakhya gives the series in verse, which when translated to mathematical notation, can be written as: r arctan ( y x ) = 1 1 ⋅ r y x − 1 3 ⋅ r y 3 x 3 + 1 5 ⋅ r y 5 x 5 − ⋯ , where y / x ≤ 1. 
{\displaystyle r\arctan \left({\frac {y}{x}}\right)={\frac {1}{1}}\cdot {\frac {ry}{x}}-{\frac {1}{3}}\cdot {\frac {ry^{3}}{x^{3}}}+{\frac {1}{5}}\cdot {\frac {ry^{5}}{x^{5}}}-\cdots ,{\text{ where }}y/x\leq 1.} r sin x = x − x x 2 ( 2 2 + 2 ) r 2 + x x 2 ( 2 2 + 2 ) r 2 ⋅ x 2 ( 4 2 + 4 ) r 2 − ⋯ {\displaystyle r\sin x=x-x{\frac {x^{2}}{(2^{2}+2)r^{2}}}+x{\frac {x^{2}}{(2^{2}+2)r^{2}}}\cdot {\frac {x^{2}}{(4^{2}+4)r^{2}}}-\cdots } r − cos x = r x 2 ( 2 2 − 2 ) r 2 − r x 2 ( 2 2 − 2 ) r 2 x 2 ( 4 2 − 4 ) r 2 + ⋯ , {\displaystyle r-\cos x=r{\frac {x^{2}}{(2^{2}-2)r^{2}}}-r{\frac {x^{2}}{(2^{2}-2)r^{2}}}{\frac {x^{2}}{(4^{2}-4)r^{2}}}+\cdots ,} where, for r = 1, the series reduces to the standard power series for these trigonometric functions, for example: sin x = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots } and cos x = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ {\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots } Use of rectification (computation of length) of the arc of a circle to give a proof of these results. (The later method of Leibniz, using quadrature, i.e. computation of area under the arc of the circle, was not used.) Use of the series expansion of arctan x {\displaystyle \arctan x} to obtain the Leibniz formula for π: π 4 = 1 − 1 3 + 1 5 − 1 7 + ⋯ {\displaystyle {\frac {\pi }{4}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots } A rational approximation of error for the finite sum of their series of interest. 
For example, the error, f i ( n + 1 ) {\displaystyle f_{i}(n+1)} , (for n odd, and i = 1, 2, 3) for the series: π 4 ≈ 1 − 1 3 + 1 5 − ⋯ + ( − 1 ) ( n − 1 ) / 2 1 n + ( − 1 ) ( n + 1 ) / 2 f i ( n + 1 ) {\displaystyle {\frac {\pi }{4}}\approx 1-{\frac {1}{3}}+{\frac {1}{5}}-\cdots +(-1)^{(n-1)/2}{\frac {1}{n}}+(-1)^{(n+1)/2}f_{i}(n+1)} where f 1 ( n ) = 1 2 n , f 2 ( n ) = n / 2 n 2 + 1 , f 3 ( n ) = ( n / 2 ) 2 + 1 ( n 2 + 5 ) n / 2 . {\displaystyle {\text{where }}f_{1}(n)={\frac {1}{2n}},\ f_{2}(n)={\frac {n/2}{n^{2}+1}},\ f_{3}(n)={\frac {(n/2)^{2}+1}{(n^{2}+5)n/2}}.} Manipulation of error term to derive a faster converging series for π {\displaystyle \pi } : π 4 = 3 4 + 1 3 3 − 3 − 1 5 3 − 5 + 1 7 3 − 7 − ⋯ {\displaystyle {\frac {\pi }{4}}={\frac {3}{4}}+{\frac {1}{3^{3}-3}}-{\frac {1}{5^{3}-5}}+{\frac {1}{7^{3}-7}}-\cdots } Using the improved series to derive a rational expression, 104348/33215 for π correct up to nine decimal places, i.e. 3.141592653. Use of an intuitive notion of limit to compute these results. A semi-rigorous (see remark on limits above) method of differentiation of some trigonometric functions. However, they did not formulate the notion of a function, or have knowledge of the exponential or logarithmic functions. The works of the Kerala school were first written up for the Western world by Englishman C.M. Whish in 1835. According to Whish, the Kerala mathematicians had "laid the foundation for a complete system of fluxions" and these works abounded "with fluxional forms and series to be found in no work of foreign countries." However, Whish's results were almost completely neglected, until over a century later, when the discoveries of the Kerala school were investigated again by C. Rajagopal and his associates. 
Their work includes commentaries on the proofs of the arctan series in Yuktibhāṣā given in two papers, a commentary on the Yuktibhāṣā's proof of the sine and cosine series and two papers that provide the Sanskrit verses of the Tantrasangrahavakhya for the series for arctan, sin, and cosine (with English translation and commentary). Parameshvara (c. 1370–1460) wrote commentaries on the works of Bhaskara I, Aryabhata and Bhaskara II. His Lilavati Bhasya, a commentary on Bhaskara II's Lilavati, contains one of his important discoveries: a version of the mean value theorem. Nilakantha Somayaji (1444–1544) composed the Tantra Samgraha (which 'spawned' a later anonymous commentary Tantrasangraha-vyakhya and a further commentary by the name Yuktidipaika, written in 1501). He elaborated and extended the contributions of Madhava. Citrabhanu (c. 1530) was a 16th-century mathematician from Kerala who gave integer solutions to 21 types of systems of two simultaneous algebraic equations in two unknowns. These types are all the possible pairs of equations of the following seven forms: x + y = a , x − y = b , x y = c , x 2 + y 2 = d , x 2 − y 2 = e , x 3 + y 3 = f , x 3 − y 3 = g {\displaystyle {\begin{aligned}&x+y=a,\ x-y=b,\ xy=c,x^{2}+y^{2}=d,\\[8pt]&x^{2}-y^{2}=e,\ x^{3}+y^{3}=f,\ x^{3}-y^{3}=g\end{aligned}}} For each case, Citrabhanu gave an explanation and justification of his rule as well as an example. Some of his explanations are algebraic, while others are geometric. Jyesthadeva (c. 1500–1575) was another member of the Kerala School. His key work was the Yukti-bhāṣā (written in Malayalam, a regional language of Kerala). Jyesthadeva presented proofs of most mathematical theorems and infinite series earlier discovered by Madhava and other Kerala School mathematicians. === Others === Narayana Pandit was a 14th century mathematician who composed two important mathematical works, an arithmetical treatise, Ganita Kaumudi, and an algebraic treatise, Bijganita Vatamsa. 
Ganita Kaumudi is one of the most revolutionary works in the field of combinatorics, developing a method for the systematic generation of all permutations of a given sequence. In his Ganita Kaumudi, Narayana proposed the following problem on a herd of cows and calves: A cow produces one calf every year. Beginning in its fourth year, each calf produces one calf at the beginning of each year. How many cows and calves are there altogether after 20 years? Translated into the modern mathematical language of recurrence sequences: Nn = Nn-1 + Nn-3 for n > 2, with initial values N0 = N1 = N2 = 1. The first few terms are 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88,... (sequence A000930 in the OEIS). The limit ratio between consecutive terms is the supergolden ratio. Narayana is also thought to be the author of an elaborate commentary of Bhaskara II's Lilavati, titled Karmapradipika (or Karma-Paddhati). == Charges of Eurocentrism == It has been suggested that Indian contributions to mathematics have not been given due acknowledgement in modern history and that many discoveries and inventions by Indian mathematicians are presently culturally attributed to their Western counterparts, as a result of Eurocentrism. According to G. G. Joseph's take on "Ethnomathematics": [Their work] takes on board some of the objections raised about the classical Eurocentric trajectory. The awareness [of Indian and Arabic mathematics] is all too likely to be tempered with dismissive rejections of their importance compared to Greek mathematics. The contributions from other civilisations – most notably China and India, are perceived either as borrowers from Greek sources or having made only minor contributions to mainstream mathematical development. 
An openness to more recent research findings, especially in the case of Indian and Chinese mathematics, is sadly missing." Historian of mathematics Florian Cajori wrote that he and others "suspect that Diophantus got his first glimpse of algebraic knowledge from India". He also wrote that "it is certain that portions of Hindu mathematics are of Greek origin". More recently, as discussed in the above section, the infinite series of calculus for trigonometric functions (rediscovered by Gregory, Taylor, and Maclaurin in the late 17th century) were described in India, by mathematicians of the Kerala school, some two centuries earlier. Some scholars have recently suggested that knowledge of these results might have been transmitted to Europe through the trade route from Kerala by traders and Jesuit missionaries. Kerala was in continuous contact with China and Arabia, and, from around 1500, with Europe. The fact that the communication routes existed and the chronology is suitable certainly make such transmission a possibility. However, no evidence of transmission has been found. According to David Bressoud, "there is no evidence that the Indian work of series was known beyond India, or even outside of Kerala, until the nineteenth century". Both Arab and Indian scholars made discoveries before the 17th century that are now considered a part of calculus. However, they did not (as Newton and Leibniz did) "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today". 
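The Kerala correction terms quoted in the earlier section are easy to check numerically. The sketch below is an illustration only (the function name is ours, not from the source texts); it applies the third correction term f_3(n + 1) to a short partial sum of the π/4 series:

```python
import math

# Illustrative check of the Kerala school's rational error estimate:
# pi/4 ≈ 1 - 1/3 + 1/5 - ... ± 1/n, plus the correction f_3(n + 1)
# with the appropriate sign. Even a modest truncation becomes far more
# accurate once the correction term is added.
def pi_over_4(n, correct=True):
    """Partial sum of the series up to the term 1/n (n odd), optionally
    adjusted by the third correction term f_3(n + 1)."""
    s = sum((-1) ** (k // 2) / k for k in range(1, n + 1, 2))
    if correct:
        m = n + 1
        f3 = ((m / 2) ** 2 + 1) / ((m ** 2 + 5) * m / 2)
        s += (-1) ** ((n + 1) // 2) * f3
    return s

n = 21
print(abs(4 * pi_over_4(n, correct=False) - math.pi))  # raw truncation error
print(abs(4 * pi_over_4(n, correct=True) - math.pi))   # error with f_3 applied
```

With only eleven terms, the raw truncation error in π is around 0.09, while the corrected value agrees with π to roughly seven decimal places, consistent with the nine-place rational approximation 104348/33215 cited above for the improved series.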
The intellectual careers of both Newton and Leibniz are well-documented and there is no indication of their work not being their own; however, it is not known with certainty whether the immediate predecessors of Newton and Leibniz, "including, in particular, Fermat and Roberval, [may have] learned of some of the ideas of the Islamic and Indian mathematicians through sources we are not now aware." This is an area of current research, especially in the manuscript collections of Spain and Maghreb, and is being pursued, among other places, at the CNRS. == See also == == Notes == == References == Bourbaki, Nicolas (1998), Elements of the History of Mathematics, Berlin, Heidelberg, and New York: Springer-Verlag, 301 pages, ISBN 978-3-540-64767-6. Boyer, C. B.; Merzback (fwd. by Isaac Asimov), U. C. (1991), History of Mathematics, New York: John Wiley and Sons, 736 pages, ISBN 978-0-471-54397-8. Bressoud, David (2002), "Was Calculus Invented in India?", The College Mathematics Journal, 33 (1): 2–13, doi:10.2307/1558972, JSTOR 1558972. Bronkhorst, Johannes (2001), "Panini and Euclid: Reflections on Indian Geometry", Journal of Indian Philosophy, 29 (1–2), Springer Netherlands: 43–80, doi:10.1023/A:1017506118885, S2CID 115779583. Burnett, Charles (2006), "The Semantics of Indian Numerals in Arabic, Greek and Latin", Journal of Indian Philosophy, 34 (1–2), Springer-Netherlands: 15–30, doi:10.1007/s10781-005-8153-z, S2CID 170783929. Burton, David M. (1997), The History of Mathematics: An Introduction, The McGraw-Hill Companies, Inc., pp. 193–220. Cooke, Roger (2005), The History of Mathematics: A Brief Course, New York: Wiley-Interscience, 632 pages, ISBN 978-0-471-44459-6. Dani, S. G. (25 July 2003), "On the Pythagorean triples in the Śulvasūtras" (PDF), Current Science, 85 (2): 219–224, archived from the original (PDF) on 4 August 2003. 
Datta, Bibhutibhusan (December 1931), "Early Literary Evidence of the Use of the Zero in India", The American Mathematical Monthly, 38 (10): 566–572, doi:10.2307/2301384, JSTOR 2301384. Datta, Bibhutibhusan; Singh, Avadesh Narayan (1962), History of Hindu Mathematics: A source book, Bombay: Asia Publishing House. De Young, Gregg (1995), "Euclidean Geometry in the Mathematical Tradition of Islamic India", Historia Mathematica, 22 (2): 138–153, doi:10.1006/hmat.1995.1014. Plofker, Kim (2007), "mathematics, South Asian", Encyclopaedia Britannica Online, pp. 1–12, retrieved 18 May 2007. Filliozat, Pierre-Sylvain (2004), "Ancient Sanskrit Mathematics: An Oral Tradition and a Written Literature", in Chemla, Karine; Cohen, Robert S.; Renn, Jürgen; et al. (eds.), History of Science, History of Text (Boston Series in the Philosophy of Science), Dordrecht: Springer Netherlands, 254 pages, pp. 137–157, doi:10.1007/1-4020-2321-9_7, ISBN 978-1-4020-2320-0. Fowler, David (1996), "Binomial Coefficient Function", The American Mathematical Monthly, 103 (1): 1–17, doi:10.2307/2975209, JSTOR 2975209. Hayashi, Takao (1995), The Bakhshali Manuscript, An ancient Indian mathematical treatise, Groningen: Egbert Forsten, 596 pages, ISBN 978-90-6980-087-5. Hayashi, Takao (1997), "Aryabhata's Rule and Table of Sine-Differences", Historia Mathematica, 24 (4): 396–406, doi:10.1006/hmat.1997.2160. Hayashi, Takao (2003), "Indian Mathematics", in Grattan-Guinness, Ivor (ed.), Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences, vol. 1, Baltimore, MD: The Johns Hopkins University Press, pp. 118–130, ISBN 978-0-8018-7396-6. Hayashi, Takao (2005), "Indian Mathematics", in Flood, Gavin (ed.), The Blackwell Companion to Hinduism, Oxford: Basil Blackwell, 616 pages, pp. 360–375, ISBN 978-1-4051-3251-0. Henderson, David W. (2000), "Square roots in the Sulba Sutras", in Gorini, Catherine A. 
(ed.), Geometry at Work: Papers in Applied Geometry, vol. 53, Washington DC: Mathematical Association of America Notes, pp. 39–45, ISBN 978-0-88385-164-7. Ifrah, Georges (2000). A Universal History of Numbers: From Prehistory to Computers. New York: Wiley. ISBN 0471393401. Joseph, G. G. (2000), The Crest of the Peacock: The Non-European Roots of Mathematics, Princeton, NJ: Princeton University Press, 416 pages, ISBN 978-0-691-00659-8. Katz, Victor J. (1995), "Ideas of Calculus in Islam and India", Mathematics Magazine, 68 (3): 163–174, doi:10.2307/2691411, JSTOR 2691411. Katz, Victor J., ed. (2007), The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook, Princeton, NJ: Princeton University Press, pp. 385–514, ISBN 978-0-691-11485-9. Keller, Agathe (2005), "Making diagrams speak, in Bhāskara I's commentary on the Aryabhaṭīya" (PDF), Historia Mathematica, 32 (3): 275–302, doi:10.1016/j.hm.2004.09.001. Kichenassamy, Satynad (2006), "Baudhāyana's rule for the quadrature of the circle", Historia Mathematica, 33 (2): 149–183, doi:10.1016/j.hm.2005.05.001. Neugebauer, Otto; Pingree, David, eds. (1970), The Pañcasiddhāntikā of Varāhamihira, Copenhagen. New edition with translation and commentary, (2 Vols.). Pingree, David (1971), "On the Greek Origin of the Indian Planetary Model Employing a Double Epicycle", Journal of Historical Astronomy, 2 (1): 80–85, Bibcode:1971JHA.....2...80P, doi:10.1177/002182867100200202, S2CID 118053453. Pingree, David (1973), "The Mesopotamian Origin of Early Indian Mathematical Astronomy", Journal of Historical Astronomy, 4 (1): 1–12, Bibcode:1973JHA.....4....1P, doi:10.1177/002182867300400102, S2CID 125228353. Pingree, David, ed. (1978), The Yavanajātaka of Sphujidhvaja, Harvard Oriental Series 48 (2 vols.), Edited, translated and commented by D. Pingree, Cambridge, MA. 
Pingree, David (1988), "Reviewed Work(s): The Fidelity of Oral Tradition and the Origins of Science by Frits Staal", Journal of the American Oriental Society, 108 (4): 637–638, doi:10.2307/603154, JSTOR 603154. Pingree, David (1992), "Hellenophilia versus the History of Science", Isis, 83 (4): 554–563, Bibcode:1992Isis...83..554P, doi:10.1086/356288, JSTOR 234257, S2CID 68570164 Pingree, David (2003), "The logic of non-Western science: mathematical discoveries in medieval India", Daedalus, 132 (4): 45–54, doi:10.1162/001152603771338779, S2CID 57559157. Plofker, Kim (1996), "An Example of the Secant Method of Iterative Approximation in a Fifteenth-Century Sanskrit Text", Historia Mathematica, 23 (3): 246–256, doi:10.1006/hmat.1996.0026. Plofker, Kim (2001), "The "Error" in the Indian "Taylor Series Approximation" to the Sine", Historia Mathematica, 28 (4): 283–295, doi:10.1006/hmat.2001.2331. Plofker, K. (2007), "Mathematics of India", in Katz, Victor J. (ed.), The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook, Princeton, NJ: Princeton University Press, pp. 385–514, ISBN 978-0-691-11485-9. Plofker, Kim (2009), Mathematics in India: 500 BCE–1800 CE, Princeton, NJ: Princeton University Press, ISBN 978-0-691-12067-6. Price, John F. (2000), "Applied geometry of the Sulba Sutras" (PDF), in Gorini, Catherine A. (ed.), Geometry at Work: Papers in Applied Geometry, vol. 53, Washington DC: Mathematical Association of America Notes, pp. 46–58, ISBN 978-0-88385-164-7, archived from the original (PDF) on 27 September 2007, retrieved 20 May 2007. Roy, Ranjan (1990), "Discovery of the Series Formula for π {\displaystyle \pi } by Leibniz, Gregory, and Nilakantha", Mathematics Magazine, 63 (5): 291–306, doi:10.2307/2690896, JSTOR 2690896. Singh, A. N. 
(1936), "On the Use of Series in Hindu Mathematics", Osiris, 1 (1): 606–628, doi:10.1086/368443, JSTOR 301627, S2CID 144760421 Staal, Frits (1986), "The Fidelity of Oral Tradition and the Origins of Science", Mededelingen der Koninklijke Nederlandse Akademie von Wetenschappen, Afd. Letterkunde, New Series, 49 (8), Amsterdam: North Holland Publishing Company. Staal, Frits (1995), "The Sanskrit of science", Journal of Indian Philosophy, 23 (1), Springer Netherlands: 73–127, doi:10.1007/BF01062067, S2CID 170755274. Staal, Frits (1999), "Greek and Vedic Geometry", Journal of Indian Philosophy, 27 (1–2): 105–127, doi:10.1023/A:1004364417713, S2CID 170894641. Staal, Frits (2001), "Squares and oblongs in the Veda", Journal of Indian Philosophy, 29 (1–2), Springer Netherlands: 256–272, doi:10.1023/A:1017527129520, S2CID 170403804. Staal, Frits (2006), "Artificial Languages Across Sciences and Civilisations", Journal of Indian Philosophy, 34 (1), Springer Netherlands: 89–141, doi:10.1007/s10781-005-8189-0, S2CID 170968871. Stillwell, John (2004), Mathematics and its History, Undergraduate Texts in Mathematics (2 ed.), Springer, Berlin and New York, 568 pages, doi:10.1007/978-1-4684-9281-1, ISBN 978-0-387-95336-6. Thibaut, George (1984) [1875], Mathematics in the Making in Ancient India: reprints of 'On the Sulvasutras' and 'Baudhyayana Sulva-sutra', Calcutta and Delhi: K. P. Bagchi and Company (orig. Journal of the Asiatic Society of Bengal), 133 pages. van der Waerden, B. L. (1983), Geometry and Algebra in Ancient Civilisations, Berlin and New York: Springer, 223 pages, ISBN 978-0-387-12159-8 van der Waerden, B. L. (1988), "On the Romaka-Siddhānta", Archive for History of Exact Sciences, 38 (1): 1–11, doi:10.1007/BF00329976, S2CID 189788738 van der Waerden, B. L. (1988), "Reconstruction of a Greek table of chords", Archive for History of Exact Sciences, 38 (1): 23–38, Bibcode:1988AHES...38...23V, doi:10.1007/BF00329978, S2CID 189793547 Van Nooten, B. 
(1993), "Binary numbers in Indian antiquity", Journal of Indian Philosophy, 21 (1), Springer Netherlands: 31–50, doi:10.1007/BF01092744, S2CID 171039636 Whish, Charles (1835), "On the Hindú Quadrature of the Circle, and the infinite Series of the proportion of the circumference to the diameter exhibited in the four S'ástras, the Tantra Sangraham, Yucti Bháshá, Carana Padhati, and Sadratnamála", Transactions of the Royal Asiatic Society of Great Britain and Ireland, 3 (3): 509–523, doi:10.1017/S0950473700001221, JSTOR 25581775 Yano, Michio (2006), "Oral and Written Transmission of the Exact Sciences in Sanskrit", Journal of Indian Philosophy, 34 (1–2), Springer Netherlands: 143–160, doi:10.1007/s10781-005-8175-6, S2CID 170679879 == Further reading == === Source books in Sanskrit === Keller, Agathe (2006), Expounding the Mathematical Seed. Vol. 1: The Translation: A Translation of Bhaskara I on the Mathematical Chapter of the Aryabhatiya, Basel, Boston, and Berlin: Birkhäuser Verlag, 172 pages, ISBN 978-3-7643-7291-0. Keller, Agathe (2006), Expounding the Mathematical Seed. Vol. 2: The Supplements: A Translation of Bhaskara I on the Mathematical Chapter of the Aryabhatiya, Basel, Boston, and Berlin: Birkhäuser Verlag, 206 pages, ISBN 978-3-7643-7292-7. Sarma, K. V., ed. (1976), Āryabhaṭīya of Āryabhaṭa with the commentary of Sūryadeva Yajvan, critically edited with Introduction and Appendices, New Delhi: Indian National Science Academy. Sen, S. N.; Bag, A. K., eds. (1983), The Śulbasūtras of Baudhāyana, Āpastamba, Kātyāyana and Mānava, with Text, English Translation and Commentary, New Delhi: Indian National Science Academy. Shukla, K. S., ed. (1976), Āryabhaṭīya of Āryabhaṭa with the commentary of Bhāskara I and Someśvara, critically edited with Introduction, English Translation, Notes, Comments and Indexes, New Delhi: Indian National Science Academy. Shukla, K. S., ed. 
(1988), Āryabhaṭīya of Āryabhaṭa, critically edited with Introduction, English Translation, Notes, Comments and Indexes, in collaboration with K.V. Sarma, New Delhi: Indian National Science Academy. == External links == Science and Mathematics in India An overview of Indian mathematics, MacTutor History of Mathematics Archive, St Andrews University, 2000. Indian Mathematicians Index of Ancient Indian mathematics, MacTutor History of Mathematics Archive, St Andrews University, 2004. Indian Mathematics: Redressing the balance, Student Projects in the History of Mathematics. Ian Pearce. MacTutor History of Mathematics Archive, St Andrews University, 2002. Indian Mathematics on In Our Time at the BBC InSIGHT 2009, a workshop on traditional Indian sciences for school children conducted by the Computer Science department of Anna University, Chennai, India. Mathematics in ancient India by R. Sridharan Combinatorial methods in ancient India Mathematics before S. Ramanujan
|
Wikipedia:Indian numbering system#0
|
The Indian numbering system is used in India, Pakistan, Nepal, Sri Lanka, and Bangladesh to express large numbers; it differs from the international numbering system used elsewhere. Commonly used quantities include lakh (one hundred thousand) and crore (ten million) – written as 100,000 and 10,000,000 respectively in some locales. For example: 150,000 rupees is "1.5 lakh rupees", which can be written as "1,50,000 rupees", and 30,000,000 (thirty million) rupees is referred to as "3 crore rupees", which can be written as "3,00,00,000 rupees". There are names for numbers larger than crore, but they are less commonly used. These include arab (100 crore, 109), kharab (100 arab, 1011), nil or sometimes transliterated as neel (100 kharab, 1013), padma (100 nil, 1015), shankh (100 padma, 1017), and mahashankh (100 shankh, 1019). In common parlance (though inconsistently), the lakh and crore terminology repeats for larger numbers; thus a lakh crore is 1012. In the ancient Indian system, still in use in regional languages of India, there are names for numbers as large as 1062. Starting at 1000, these names are sahasra, ayuta, laksha, niyuta, koti, arbhudha, abhja, karva, nikarva, mahapadma, shanmkhu, jaladhi, amtya, madhya, and paraardha. In the Indian system, now prevalent in the northern parts, the next powers of ten are one lakh, ten lakh, one crore, ten crore, one arab (or one hundred crore), and so on. == Multiples == The Indian system is decimal (base-10), like the international numbering system, and the first five orders of magnitude are named in a similar way: one (100), ten (101), one hundred (102), one thousand (103), and ten thousand (104). For higher powers of ten, the naming diverges. The Indian system uses names for every second power of ten: lakh (105), crore (107), arab (109), kharab (1011), etc. In the rest of the world, under the long and short scales, there are names for every third power of ten. The short scale uses million (106), billion (109), trillion (1012), etc. 
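The every-second-power naming described above can be sketched as a small lookup. This is an illustrative sketch only: `indian_name` and the `NAMES` table are hypothetical helpers written for this example, not part of any standard library.

```python
# Illustrative sketch: express a value using the Indian names given above.
# NAMES maps each named power of ten to its Indian-system name.
NAMES = [
    (10**19, "mahashankh"), (10**17, "shankh"), (10**15, "padma"),
    (10**13, "nil"), (10**11, "kharab"), (10**9, "arab"),
    (10**7, "crore"), (10**5, "lakh"), (10**3, "thousand"),
]

def indian_name(n):
    """Return n expressed with the largest applicable Indian name."""
    for value, name in NAMES:
        if n >= value:
            count = n / value
            # Render "1.5 lakh", "3 crore", etc., trimming a trailing .0
            return f"{count:g} {name}"
    return str(n)

print(indian_name(150000))     # 1.5 lakh
print(indian_name(30000000))   # 3 crore
print(indian_name(10**12))     # 10 kharab (a "lakh crore" in common parlance)
```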
== Decimal formatting == The Indian system groups the digits of a large decimal representation differently from the international numbering system. Like the international system, it groups the first three digits to the left of the decimal point; thereafter, it groups by two digits, so that the separators align with the named quantities at every second power of ten. As in English-speaking locales, the Indian system uses a period as the decimal separator and the comma for grouping, while other locales use a comma as the decimal separator and a thin space or point to group digits. == Pronunciation in English == When speakers of indigenous Indian languages are speaking English, the pronunciations may be closer to their mother tongue; e.g. "lakh" and "crore" might be pronounced /lɑkʰ/ and /kɑrɔːr/ respectively. lakh /lɑːkʰ/ crore /kɹɔːɹ/ (or /kɹoʊɹ/ in American English) arab /ʌˈɾʌb/ kharab /kʰʌˈɾʌb/ == Names of numbers == The table below includes the spelling and pronunciation of numbers in various Indian languages, along with the corresponding short scale names. == Historic numbering systems == === Numbering systems in Hindu epics === Various systems of numeration are found in the ancient epic literature of India (itihasas). The following table gives one such system, used in the Valmiki Ramayana. === Other numbering systems === The denominations by which land was measured in the Kumaon Kingdom were based on arable lands and thus followed an approximate system with local variations. The most common of these was a vigesimal (base-20) numbering system with the main denomination called a bisi (see Hindustani number bīs), which corresponded to the land required to sow 20 nalis of seed. Consequently, its actual land measure varied based on the quality of the soil. This system became the established norm in Kumaon by 1891. 
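The three-then-two grouping described under Decimal formatting can be sketched as a short helper. `indian_format` is a hypothetical name chosen for this example; real applications would more likely rely on locale data (e.g. the en_IN locale).

```python
def indian_format(n):
    """Group digits Indian-style: last three digits, then pairs (e.g. 1,50,000)."""
    s = str(abs(n))
    if len(s) <= 3:
        grouped = s
    else:
        head, tail = s[:-3], s[-3:]   # last three digits form their own group
        parts = []
        while len(head) > 2:          # peel off pairs from the right
            parts.append(head[-2:])
            head = head[:-2]
        if head:
            parts.append(head)
        grouped = ",".join(reversed(parts)) + "," + tail
    return ("-" if n < 0 else "") + grouped

print(indian_format(150000))    # 1,50,000
print(indian_format(30000000))  # 3,00,00,000
```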
== Usage in different languages == Below is a list of translations for the words lakh and crore in other languages spoken in the Indian subcontinent: In Assamese, a lakh is also called লক্ষ lokhyo or লাখ lakh, and a crore is called কৌটি or কোটি kouti In Bengali, a lakh is natively (tadbhava) known as লাখ lākh, though some use the ardha-tatsama লক্ষ lokkho. A crore is called কোটি kōṭi In Burmese, crore is called ကုဋေ [ɡədè]. Lakh is used in Burmese English. In Dhivehi, a lakh is called ލައްކަ la'kha and a crore is called ކްރޯރް kroaru In Gujarati, a lakh is called લાખ lākh and a crore is called કરોડ karoḍ. A hundred crore is called અબજ abaj In Hindi, a lakh is called लाख lākh and a crore is called करोड karoḍ. A hundred crore is called अरब arab In Kannada, a lakh is called ಲಕ್ಷ lakṣha and a crore is called ಕೋಟಿ kōṭi In Khasi, a lakh is called lak and a crore is called klur or krur. A billion is called arab and hundred billion is called kharab. In Malayalam, a lakh is called ലക്ഷം laksham and a crore is called കോടി kodi. In Marathi, a lakh is called लाख/लक्ष lākh and a crore is called कोटी koṭi or करोड karoḍ, and an arab (109) is called अब्ज abja. In Nepali, a lakh is called लाख lākh and a crore is called करोड karoḍ. In Odia, a lakh is called ଲକ୍ଷ lôkhyô and a crore is called କୋଟି koṭi. In Punjabi, a lakh is called lakkh (Shahmukhi: لکھ, Gurmukhi: ਲੱਖ) and a crore is called karoṛ (Shahmukhi: کروڑ, Gurmukhi: ਕਰੋੜ). In Rohingya, a lakh is called lák and a crore is called kurul. A thousand crore is called kuthí. In Sinhala, a lakh is called ලක්ෂ lakṣa and a crore is called කෝටි kōṭi. In Tamil, a lakh is called இலட்சம் ilaṭcam and a crore is called கோடி kōṭi. In Telugu, a lakh is called లక్ష lakṣha and a crore is called కోటి kōṭi. In Urdu, a lakh is called لاکھ lākh and a crore is called کروڑ karoṛ. A billion is called arab (ارب), and one hundred billion/arab is called a kharab (کھرب). Lakh has entered the Swahili language as "laki" and is in common use. 
Formal written publications in English in India tend to use lakh/crore for Indian currency and International numbering for foreign currencies. == Current usage == The official usage of this system is limited to India, Pakistan and Bangladesh. It is universally employed within these countries, and is preferred to the International numbering system. Sri Lanka and Nepal used this system in the past but have switched to the International numbering system in recent years. In the Maldives, the term lakh is widely used in official documents and local speech. However, the International numbering system is preferred for higher denominations (such as millions). Most institutions and citizens in India use the Indian number system. The Reserve Bank of India was noted as a rare exception in 2015, whereas by 2024 the Indian system was used for amounts in rupees and the International system for foreign currencies throughout the Reserve Bank's website. == See also == Scientific notation Japanese notation == References ==
|
Wikipedia:Indira Chatterji#0
|
Indira Lara Chatterji (born 25 January 1973) is a Swiss-Indian mathematician working in France as a professor of mathematics in the J. A. Dieudonné Laboratory of the University of Côte d'Azur. Her research involves low-dimensional geometry, cubical complexes, and geometric group theory. She has also studied sexism and institutional bias in mathematics. == Education and career == Chatterji was born in Lausanne, where her father, Indian probability theorist Srishti Dhar Chatterji, worked; her mother was a Swiss woman from Ticino, Carla Bolognini. She is a Swiss citizen, and holds Overseas Citizenship of India. After entering the University of Lausanne intending to study criminology and then sociology, she switched to mathematics, earning a license in 1995 and a diploma in 1997. She completed her doctorate in mathematics at ETH Zurich in 2001. Her dissertation, On property (RD) for certain discrete groups, studied the property of "rapid decay" in group theory, introduced by Vincent Lafforgue in connection with the Baum–Connes conjecture. It was jointly supervised by Marc Burger and Alain Valette. After working as an H. C. Wang Assistant Professor at Cornell University, she took a tenure-track faculty position at Ohio State University in 2005, and remained there until 2011, earning tenure in 2010. In 2007, she was one of 16 US mathematicians to win an NSF CAREER Award. Meanwhile, in 2005, she began working as a professor at the University of Orléans in France (on leave from Ohio State), and in 2010 was promoted to professor 1st class. In 2014 she moved to her present position in the J. A. Dieudonné Laboratory, originally associated with the University of Nice Sophia Antipolis and currently with the University of Côte d'Azur. She also serves on the scientific council of the Fondation sciences mathématiques de Paris. 
== Animation == Along with her mathematical research, Chatterji has made animated drawings that illustrate concepts in geometric group theory for general-audience talks, and exhibited her drawings in the Institute for Computational and Experimental Research in Mathematics "Illustrating Mathematics" program. == Recognition == Chatterji was a junior member of the Institut Universitaire de France from 2014 to 2019. == References == == External links == Home page
|
Wikipedia:Ineke De Moortel#0
|
Ineke De Moortel is a Belgian applied mathematician in Scotland, where she is a professor of applied mathematics at the University of St Andrews, director of research in the School of Mathematics and Statistics at St Andrews, and president of the Edinburgh Mathematical Society. Her research concerns the computational and mathematical modelling of solar physics, and particularly of the Sun's corona. She has been awarded the Philip Leverhulme Prize in Astronomy and Astrophysics, and is a Fellow of the Royal Society of Edinburgh and of the Royal Astronomical Society. == Education and career == De Moortel earned a master's degree in mathematics in 1997 at KU Leuven. She completed a Ph.D. in solar physics in 2001 at the University of St Andrews; her dissertation, Theoretical & Observational Aspects of Wave Propagation in the Solar Corona, was supervised by Alan Hood. She remained at St Andrews as a postdoctoral researcher and research fellow, becoming a reader there in 2008 and a professor of applied mathematics in 2013. De Moortel is president of the Edinburgh Mathematical Society. Since 2019 she has been a member of the editorial board at the journal Monthly Notices of the Royal Astronomical Society. De Moortel sits on the judging panel for the St Andrews Prize for the Environment. == Recognition == In 2005, De Moortel became a Fellow of the Royal Astronomical Society. In 2009 she won the Philip Leverhulme Prize in Astronomy and Astrophysics. She was elected to the Royal Society of Edinburgh in 2015, and previously co-chaired its affiliate society, the Young Academy of Scotland. She was featured in the Royal Society of Edinburgh's 2019 exhibition Women in Science in Scotland, which celebrated some of Scotland's leading female scientists. == References == == External links == Home page Archived 27 March 2019 at the Wayback Machine
|
Wikipedia:Inequality (mathematics)#0
|
In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. The main types of inequality are less than and greater than (denoted by < and >, respectively the less-than and greater-than signs). == Notation == There are several different notations used to represent different kinds of inequalities: The notation a < b means that a is less than b. The notation a > b means that a is greater than b. In either case, a is not equal to b. These relations are known as strict inequalities, meaning that a is strictly less than or strictly greater than b. Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict: The notation a ≤ b or a ⩽ b or a ≦ b means that a is less than or equal to b (or, equivalently, at most b, or not greater than b). The notation a ≥ b or a ⩾ b or a ≧ b means that a is greater than or equal to b (or, equivalently, at least b, or not less than b). In the 17th and 18th centuries, personal notations or typewriting signs were used to signal inequalities. For example, in 1670, John Wallis used a single horizontal bar above rather than below the < and >. Later, in 1734, ≦ and ≧, known as "less than (greater-than) over equal to" or "less than (greater than) or equal to with double horizontal bars", first appeared in Pierre Bouguer's work. After that, mathematicians simplified Bouguer's symbol to "less than (greater than) or equal to with one horizontal bar" (≤), or "less than (greater than) or slanted equal to" (⩽). The relation not greater than can also be represented by a ≯ b , {\displaystyle a\ngtr b,} the symbol for "greater than" bisected by a slash, "not". The same is true for not less than, a ≮ b . 
{\displaystyle a\nless b.} The notation a ≠ b means that a is not equal to b; this inequation is sometimes considered a form of strict inequality. It does not say that one is greater than the other; it does not even require a and b to be members of an ordered set. In engineering sciences, less formal use of the notation is to state that one quantity is "much greater" than another, normally by several orders of magnitude. The notation a ≪ b means that a is much less than b. The notation a ≫ b means that a is much greater than b. This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of the ultrarelativistic limit in physics). In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc. == Properties on the number line == Inequalities are governed by the following properties. All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and — in the case of applying a function — monotonic functions are limited to strictly monotonic functions. === Converse === The relations ≤ and ≥ are each other's converse, meaning that for any real numbers a and b: === Transitivity === The transitive property of inequality states that for any real numbers a, b, c: If either of the premises is a strict inequality, then the conclusion is a strict inequality: === Addition and subtraction === A common constant c may be added to or subtracted from both sides of an inequality. So, for any real numbers a, b, c: In other words, the inequality relation is preserved under addition (or subtraction) and the real numbers are an ordered group under addition. 
=== Multiplication and division === The properties that deal with multiplication and division state that for any real numbers a, b, and non-zero c: In other words, the inequality relation is preserved under multiplication and division by a positive constant, but is reversed when a negative constant is involved. More generally, this applies for an ordered field. For more information, see § Ordered fields. === Additive inverse === The property for the additive inverse states that for any real numbers a and b: === Multiplicative inverse === If both numbers are positive, then the inequality relation between the multiplicative inverses is opposite of that between the original numbers. More specifically, for any non-zero real numbers a and b that are both positive (or both negative): All of the cases for the signs of a and b can also be written in chained notation, as follows: === Applying a function to both sides === Any monotonically increasing function, by its definition, may be applied to both sides of an inequality without breaking the inequality relation (provided that both expressions are in the domain of that function). However, applying a monotonically decreasing function to both sides of an inequality means the inequality relation would be reversed. The rules for the additive inverse, and the multiplicative inverse for positive numbers, are both examples of applying a monotonically decreasing function. If the inequality is strict (a < b, a > b) and the function is strictly monotonic, then the inequality remains strict. If only one of these conditions is strict, then the resultant inequality is non-strict. In fact, the rules for additive and multiplicative inverses are both examples of applying a strictly monotonically decreasing function. 
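The preservation and reversal rules above can be spot-checked numerically on a sample pair; a minimal illustration (the chosen values are arbitrary):

```python
from math import log

a, b, c = 2.0, 5.0, -3.0
assert a < b

# Adding a constant preserves the inequality
assert a + c < b + c
# Multiplying by a negative constant reverses it
assert a * c > b * c
# Taking multiplicative inverses of two positives reverses it
assert 1 / a > 1 / b
# A strictly increasing function (ln) preserves a strict inequality
assert log(a) < log(b)
# A strictly decreasing function (x -> -x, the additive inverse) reverses it
assert -a > -b
```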
A few examples of this rule are: Raising both sides of an inequality to a power n > 0 (equiv., −n < 0), when a and b are positive real numbers: Taking the natural logarithm on both sides of an inequality, when a and b are positive real numbers: (this is true because the natural logarithm is a strictly increasing function.) == Formal definitions and generalizations == A (non-strict) partial order is a binary relation ≤ over a set P which is reflexive, antisymmetric, and transitive. That is, for all a, b, and c in P, it must satisfy the three following clauses: a ≤ a (reflexivity) if a ≤ b and b ≤ a, then a = b (antisymmetry) if a ≤ b and b ≤ c, then a ≤ c (transitivity) A set with a partial order is called a partially ordered set. Those are the very basic axioms that every kind of order has to satisfy. A strict partial order is a relation < that satisfies a ≮ a (irreflexivity), if a < b, then b ≮ a (asymmetry), if a < b and b < c, then a < c (transitivity), where ≮ means that < does not hold. Some types of partial orders are specified by adding further axioms, such as: Total order: For every a and b in P, a ≤ b or b ≤ a . Dense order: For all a and b in P for which a < b, there is a c in P such that a < c < b. Least-upper-bound property: Every non-empty subset of P with an upper bound has a least upper bound (supremum) in P. === Ordered fields === If (F, +, ×) is a field and ≤ is a total order on F, then (F, +, ×, ≤) is called an ordered field if and only if: a ≤ b implies a + c ≤ b + c; 0 ≤ a and 0 ≤ b implies 0 ≤ a × b. Both ( Q , + , × , ≤ ) {\displaystyle (\mathbb {Q} ,+,\times ,\leq )} and ( R , + , × , ≤ ) {\displaystyle (\mathbb {R} ,+,\times ,\leq )} are ordered fields, but ≤ cannot be defined in order to make ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} an ordered field, because −1 is the square of i and would therefore be positive. Besides being an ordered field, R also has the Least-upper-bound property. 
In fact, R can be defined as the only ordered field with that quality. == Chained notation == The notation a < b < c stands for "a < b and b < c", from which, by the transitivity property above, it also follows that a < c. By the above laws, one can add or subtract the same number to all three terms, or multiply or divide all three terms by the same nonzero number and reverse all inequalities if that number is negative. Hence, for example, a < b + e < c is equivalent to a − e < b < c − e. This notation can be generalized to any number of terms: for instance, a1 ≤ a2 ≤ ... ≤ an means that ai ≤ ai+1 for i = 1, 2, ..., n − 1. By transitivity, this condition is equivalent to ai ≤ aj for any 1 ≤ i ≤ j ≤ n. When solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms independently. For instance, to solve the inequality 4x < 2x + 1 ≤ 3x + 2, it is not possible to isolate x in any one part of the inequality through addition or subtraction. Instead, the inequalities must be solved independently, yielding x < 1/2 and x ≥ −1 respectively, which can be combined into the final solution −1 ≤ x < 1/2. Occasionally, chained notation is used with inequalities in different directions, in which case the meaning is the logical conjunction of the inequalities between adjacent terms. For example, the defining condition of a zigzag poset is written as a1 < a2 > a3 < a4 > a5 < a6 > ... . Mixed chained notation is used more often with compatible relations, like <, =, ≤. For instance, a < b = c ≤ d means that a < b, b = c, and c ≤ d. This notation exists in a few programming languages such as Python. In contrast, in programming languages that provide an ordering on the type of comparison results, such as C, even homogeneous chains may have a completely different meaning. == Sharp inequalities == An inequality is said to be sharp if it cannot be relaxed and still be valid in general. 
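Python's chained comparisons follow exactly the conjunction reading described above, so the worked example 4x < 2x + 1 ≤ 3x + 2 and its solution −1 ≤ x < 1/2 can be checked directly:

```python
def satisfies(x):
    # Python reads this as (4*x < 2*x + 1) and (2*x + 1 <= 3*x + 2)
    return 4*x < 2*x + 1 <= 3*x + 2

def in_solution(x):
    return -1 <= x < 0.5   # the combined solution -1 <= x < 1/2

# The two predicates agree on a sample of test points
for x in (-2, -1, -0.5, 0, 0.25, 0.5, 1):
    assert satisfies(x) == in_solution(x)
```

In C, by contrast, relational operators at the same precedence associate left to right, so `4*x < 2*x + 1 <= 3*x + 2` parses as `(4*x < 2*x + 1) <= 3*x + 2`, comparing a 0-or-1 result with `3*x + 2` — the "completely different meaning" noted above.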
Formally, a universally quantified inequality φ is called sharp if, for every valid universally quantified inequality ψ, if ψ ⇒ φ holds, then ψ ⇔ φ also holds. For instance, the inequality ∀a ∈ R. a2 ≥ 0 is sharp, whereas the inequality ∀a ∈ R. a2 ≥ −1 is not sharp. == Inequalities between means == There are many inequalities between means. For example, for any positive numbers a1, a2, ..., an we have H ≤ G ≤ A ≤ Q , {\displaystyle H\leq G\leq A\leq Q,} where they represent the following means of the sequence: Harmonic mean : H = n 1 a 1 + 1 a 2 + ⋯ + 1 a n {\displaystyle H={\frac {n}{{\frac {1}{a_{1}}}+{\frac {1}{a_{2}}}+\cdots +{\frac {1}{a_{n}}}}}} Geometric mean : G = a 1 ⋅ a 2 ⋯ a n n {\displaystyle G={\sqrt[{n}]{a_{1}\cdot a_{2}\cdots a_{n}}}} Arithmetic mean : A = a 1 + a 2 + ⋯ + a n n {\displaystyle A={\frac {a_{1}+a_{2}+\cdots +a_{n}}{n}}} Quadratic mean : Q = a 1 2 + a 2 2 + ⋯ + a n 2 n {\displaystyle Q={\sqrt {\frac {a_{1}^{2}+a_{2}^{2}+\cdots +a_{n}^{2}}{n}}}} == Cauchy–Schwarz inequality == The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space it is true that | ⟨ u , v ⟩ | 2 ≤ ⟨ u , u ⟩ ⋅ ⟨ v , v ⟩ , {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}\leq \langle \mathbf {u} ,\mathbf {u} \rangle \cdot \langle \mathbf {v} ,\mathbf {v} \rangle ,} where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product. Examples of inner products include the real and complex dot product. In Euclidean space Rn with the standard inner product, the Cauchy–Schwarz inequality is ( ∑ i = 1 n u i v i ) 2 ≤ ( ∑ i = 1 n u i 2 ) ( ∑ i = 1 n v i 2 ) . {\displaystyle {\biggl (}\sum _{i=1}^{n}u_{i}v_{i}{\biggr )}^{2}\leq {\biggl (}\sum _{i=1}^{n}u_{i}^{2}{\biggr )}{\biggl (}\sum _{i=1}^{n}v_{i}^{2}{\biggr )}.} == Power inequalities == A power inequality is an inequality containing terms of the form ab, where a and b are real positive numbers or variable expressions. 
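The chain H ≤ G ≤ A ≤ Q and the Cauchy–Schwarz inequality can be spot-checked numerically; an illustrative sketch on arbitrary sample data:

```python
from math import prod, sqrt

xs = [1.0, 2.0, 4.0, 8.0]
n = len(xs)

H = n / sum(1 / x for x in xs)          # harmonic mean
G = prod(xs) ** (1 / n)                 # geometric mean
A = sum(xs) / n                         # arithmetic mean
Q = sqrt(sum(x * x for x in xs) / n)    # quadratic mean
assert H <= G <= A <= Q

# Cauchy-Schwarz in R^n with the standard inner product
u, v = [1.0, 2.0, 3.0], [-1.0, 0.5, 4.0]
lhs = sum(ui * vi for ui, vi in zip(u, v)) ** 2
rhs = sum(ui * ui for ui in u) * sum(vi * vi for vi in v)
assert lhs <= rhs
```

(`math.prod` requires Python 3.8 or later.)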
They often appear in mathematical olympiad exercises. Examples: For any real x, e x ≥ 1 + x . {\displaystyle e^{x}\geq 1+x.} If x > 0 and p > 0, then x p − 1 p ≥ ln ( x ) ≥ 1 − 1 x p p . {\displaystyle {\frac {x^{p}-1}{p}}\geq \ln(x)\geq {\frac {1-{\frac {1}{x^{p}}}}{p}}.} In the limit of p → 0, the upper and lower bounds converge to ln(x). If x > 0, then x x ≥ ( 1 e ) 1 e . {\displaystyle x^{x}\geq \left({\frac {1}{e}}\right)^{\frac {1}{e}}.} If x > 0, then x x x ≥ x . {\displaystyle x^{x^{x}}\geq x.} If x, y, z > 0, then ( x + y ) z + ( x + z ) y + ( y + z ) x > 2. {\displaystyle \left(x+y\right)^{z}+\left(x+z\right)^{y}+\left(y+z\right)^{x}>2.} For any real distinct numbers a and b, e b − e a b − a > e ( a + b ) / 2 . {\displaystyle {\frac {e^{b}-e^{a}}{b-a}}>e^{(a+b)/2}.} If x, y > 0 and 0 < p < 1, then x p + y p > ( x + y ) p . {\displaystyle x^{p}+y^{p}>\left(x+y\right)^{p}.} If x, y, z > 0, then x x y y z z ≥ ( x y z ) ( x + y + z ) / 3 . {\displaystyle x^{x}y^{y}z^{z}\geq \left(xyz\right)^{(x+y+z)/3}.} If a, b > 0, then a a + b b ≥ a b + b a . {\displaystyle a^{a}+b^{b}\geq a^{b}+b^{a}.} If a, b > 0, then a e a + b e b ≥ a e b + b e a . {\displaystyle a^{ea}+b^{eb}\geq a^{eb}+b^{ea}.} If a, b, c > 0, then a 2 a + b 2 b + c 2 c ≥ a 2 b + b 2 c + c 2 a . {\displaystyle a^{2a}+b^{2b}+c^{2c}\geq a^{2b}+b^{2c}+c^{2a}.} If a, b > 0, then a b + b a > 1. {\displaystyle a^{b}+b^{a}>1.} == Well-known inequalities == Mathematicians often use inequalities to bound quantities for which exact formulas cannot be computed easily. Some inequalities are used so often that they have names: == Complex numbers and inequalities == The set of complex numbers C {\displaystyle \mathbb {C} } with its operations of addition and multiplication is a field, but it is impossible to define any relation ≤ so that ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} becomes an ordered field. 
To make ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} an ordered field, it would have to satisfy the following two properties: if a ≤ b, then a + c ≤ b + c; if 0 ≤ a and 0 ≤ b, then 0 ≤ ab. Because ≤ is a total order, for any number a, either 0 ≤ a or a ≤ 0 (in which case the first property above implies that 0 ≤ −a). In either case 0 ≤ a²; this means that i² > 0 and 1² > 0; so −1 > 0 and 1 > 0, which means (−1 + 1) > 0; contradiction. However, an operation ≤ can be defined so as to satisfy only the first property (namely, "if a ≤ b, then a + c ≤ b + c"). Sometimes the lexicographical order definition is used: a ≤ b, if Re(a) < Re(b), or Re(a) = Re(b) and Im(a) ≤ Im(b). It can easily be proven that for this definition a ≤ b implies a + c ≤ b + c. == Systems of inequalities == Systems of linear inequalities can be simplified by Fourier–Motzkin elimination. The cylindrical algebraic decomposition is an algorithm that allows testing whether a system of polynomial equations and inequalities has solutions, and, if solutions exist, describing them. The complexity of this algorithm is doubly exponential in the number of variables. It is an active research domain to design algorithms that are more efficient in specific cases. == See also == Binary relation Bracket (mathematics), for the use of similar ‹ and › signs as brackets Inclusion (set theory) Inequation Interval (mathematics) List of inequalities List of triangle inequalities Partially ordered set Relational operators, used in programming languages to denote inequality == References == == Sources == Hardy, G., Littlewood J. E., Pólya, G. (1999). Inequalities. Cambridge Mathematical Library, Cambridge University Press. ISBN 0-521-05206-8. Beckenbach, E. F., Bellman, R. (1975). An Introduction to Inequalities. Random House Inc. 
ISBN 0-394-01559-2. Drachman, Byron C., Cloud, Michael J. (1998). Inequalities: With Applications to Engineering. Springer-Verlag. ISBN 0-387-98404-6. Grinshpan, A. Z. (2005), "General inequalities, consequences, and applications", Advances in Applied Mathematics, 34 (1): 71–100, doi:10.1016/j.aam.2004.05.001 Murray S. Klamkin. "'Quickie' inequalities" (PDF). Math Strategies. Archived (PDF) from the original on 2022-10-09. Arthur Lohwater (1982). "Introduction to Inequalities". Online e-book in PDF format. Harold Shapiro (2005). "Mathematical Problem Solving". The Old Problem Seminar. Kungliga Tekniska högskolan. "3rd USAMO". Archived from the original on 2008-02-03. Pachpatte, B. G. (2005). Mathematical Inequalities. North-Holland Mathematical Library. Vol. 67 (first ed.). Amsterdam, the Netherlands: Elsevier. ISBN 0-444-51795-2. ISSN 0924-6509. MR 2147066. Zbl 1091.26008. Ehrgott, Matthias (2005). Multicriteria Optimization. Springer-Berlin. ISBN 3-540-21398-8. Steele, J. Michael (2004). The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. Cambridge University Press. ISBN 978-0-521-54677-5. == External links == "Inequality", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Graph of Inequalities by Ed Pegg, Jr. AoPS Wiki entry about Inequalities