Wikipedia:Inequation#0
In mathematics, an inequation is a statement that either an inequality (the relations "greater than" and "less than", < and >) or the relation "not equal to" (≠) holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between the two sides indicating the specific inequality relation. Some examples of inequations are {\displaystyle a<b}, {\displaystyle x+y+z\leq 1}, {\displaystyle n>1}, and {\displaystyle x\neq 0}. In some cases, the term "inequation" has a more restricted definition, reserved only for statements whose inequality relation is "not equal to" (or "distinct").

== Chains of inequations ==

A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain {\displaystyle 0\leq a<b\leq 1} is shorthand for {\displaystyle 0\leq a~~\mathrm {and} ~~a<b~~\mathrm {and} ~~b\leq 1}, which also implies that {\displaystyle 0<b} and {\displaystyle a<1}. In rare cases, chains without such implications about distant terms are used. For example, {\displaystyle i\neq 0\neq j} is shorthand for {\displaystyle i\neq 0~~\mathrm {and} ~~0\neq j}, which does not imply {\displaystyle i\neq j}. Similarly, {\displaystyle a<b>c} is shorthand for {\displaystyle a<b~~\mathrm {and} ~~b>c}, which does not imply any order of {\displaystyle a} and {\displaystyle c}.

== Solving inequations ==

Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled.
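The chained shorthand described above maps directly onto comparison chaining in some programming languages. A minimal sketch in Python (the helper name `in_chain` is ours), where `0 <= a < b <= 1` evaluates exactly as the conjunction of the adjacent pairs:

```python
# Python evaluates a chained comparison as the conjunction of its
# adjacent pairs, matching the mathematical shorthand 0 <= a < b <= 1.
def in_chain(a, b):
    """True iff 0 <= a and a < b and b <= 1."""
    return 0 <= a < b <= 1

# The chain also constrains distant terms: it implies 0 < b and a < 1.
assert in_chain(0.2, 0.7)
assert not in_chain(0.7, 0.2)   # a < b fails
assert not in_chain(-0.1, 0.5)  # 0 <= a fails
```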
To be precise, what is sought are often not necessarily actual values but, more generally, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions such that, when they are substituted for the unknowns, they make the inequations true propositions. Often, an additional objective expression (i.e., an optimization equation) is given that is to be minimized or maximized by an optimal solution. For example, {\displaystyle 0\leq x_{1}\leq 690-1.5\cdot x_{2}\;\land \;0\leq x_{2}\leq 530-x_{1}\;\land \;x_{1}\leq 640-0.75\cdot x_{2}} is a conjunction of inequations, partly written as chains (where {\displaystyle \land } can be read as "and"); the set of its solutions is shown in blue in the picture (with the red, green, and orange lines corresponding to the 1st, 2nd, and 3rd conjunct, respectively). For a larger example, see Linear programming#Example. Computer support in solving inequations is described in constraint programming; in particular, the simplex algorithm finds optimal solutions of linear inequations. The programming language Prolog III also supports solving algorithms for particular classes of inequalities (and other relations) as a basic language feature. For more, see constraint logic programming.

== Combinations of meanings ==

Usually because of the properties of certain functions (like square roots), some inequations are equivalent to a combination of multiple others.
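The conjunction of inequations from the example can be checked pointwise. A small sketch (the function name `feasible` is ours), encoding each chain as a Python chained comparison:

```python
def feasible(x1, x2):
    """Check the conjunction from the text:
    0 <= x1 <= 690 - 1.5*x2  and  0 <= x2 <= 530 - x1  and  x1 <= 640 - 0.75*x2
    """
    return (0 <= x1 <= 690 - 1.5 * x2
            and 0 <= x2 <= 530 - x1
            and x1 <= 640 - 0.75 * x2)

assert feasible(0, 0)           # the origin satisfies all three conjuncts
assert feasible(300, 200)       # an interior point of the solution set
assert not feasible(700, 0)     # violates x1 <= 690 - 1.5*x2
```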
For example, the inequation {\displaystyle \textstyle {\sqrt {f(x)}}<g(x)} is logically equivalent to the following three inequations combined: {\displaystyle f(x)\geq 0}, {\displaystyle g(x)>0}, and {\displaystyle f(x)<\left(g(x)\right)^{2}}.

== See also ==

Apartness relation – a form of inequality in constructive mathematics
Equation
Equals sign
Inequality (mathematics)
Relational operator

== References ==
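This equivalence can be spot-checked numerically. A sketch (helper names are ours) that interprets the left side as false whenever the square root is undefined (f < 0), and compares it against the three combined inequations over a grid of integers:

```python
# Spot-check that sqrt(f) < g is equivalent to
# f >= 0  and  g > 0  and  f < g**2.
import math

def sqrt_ineq(f, g):
    # Left-hand inequation; false when sqrt(f) is undefined over the reals.
    return f >= 0 and math.sqrt(f) < g

def conjunction(f, g):
    # The three combined inequations from the text.
    return f >= 0 and g > 0 and f < g ** 2

for f in range(-10, 11):
    for g in range(-10, 11):
        assert sqrt_ineq(f, g) == conjunction(f, g)
```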
Wikipedia:Infinite Dimensional Analysis, Quantum Probability and Related Topics#0
Infinite Dimensional Analysis, Quantum Probability and Related Topics is a quarterly peer-reviewed scientific journal published since 1998 by World Scientific. It covers the development of infinite dimensional analysis, quantum probability, and their applications to classical probability and other areas of physics. == Abstracting and indexing == The journal is abstracted and indexed in CompuMath Citation Index, Current Contents/Physical, Chemical & Earth Sciences, Mathematical Reviews, Science Citation Index, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.793. == References == == External links == Official website
Wikipedia:Infinite conjugacy class property#0
In mathematics, a group is said to have the infinite conjugacy class property, or to be an ICC group, if the conjugacy class of every group element but the identity is infinite. The von Neumann group algebra of a group is a factor if and only if the group has the infinite conjugacy class property. It will then be, provided the group is nontrivial, of type II1, i.e. it will possess a unique, faithful, tracial state. Examples of ICC groups are the group of permutations of an infinite set that leave all but a finite subset of elements fixed, and free groups on two generators. In abelian groups, every conjugacy class consists of only one element, so ICC groups are, in a way, as far from being abelian as possible.

== References ==
Wikipedia:Infinite difference method#0
In mathematics, infinite difference methods are numerical methods for solving differential equations by approximating them with difference equations, in which infinite differences approximate the derivatives. Calculus comprises two complementary operations, differentiation and integration, with integration being the reverse process of differentiation.

== See also ==

Infinite element method
Finite difference
Finite difference time domain

== References ==

Simulation of ion transfer under conditions of natural convection by the finite difference method
Han, Houde; Wu, Xiaonan (2013). Artificial Boundary Method. Springer. Chapter 6: Discrete Artificial Boundary Conditions. ISBN 978-3-642-35464-9.
Genetic Algorithm and Numerical Solution
Wikipedia:Infinite expression#0
In mathematics, an infinite expression is an expression in which some operators take an infinite number of arguments, or in which the nesting of the operators continues to an infinite depth. A generic concept for infinite expression can lead to ill-defined or self-inconsistent constructions (much like a set of all sets), but there are several instances of infinite expressions that are well-defined. == Examples == Examples of well-defined infinite expressions are infinite sums, such as ∑ n = 0 ∞ a n = a 0 + a 1 + a 2 + ⋯ {\displaystyle \sum _{n=0}^{\infty }a_{n}=a_{0}+a_{1}+a_{2}+\cdots \,} infinite products, such as ∏ n = 0 ∞ b n = b 0 × b 1 × b 2 × ⋯ {\displaystyle \prod _{n=0}^{\infty }b_{n}=b_{0}\times b_{1}\times b_{2}\times \cdots } infinite nested radicals, such as 1 + 2 1 + 3 1 + ⋯ {\displaystyle {\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}}} infinite power towers, such as 2 2 2 ⋅ ⋅ ⋅ {\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdot ^{\cdot ^{\cdot }}}}}} infinite continued fractions, such as c 0 + K ∞ n = 1 1 c n = c 0 + 1 c 1 + 1 c 2 + 1 c 3 + 1 c 4 + ⋱ , {\displaystyle c_{0}+{\underset {n=1}{\overset {\infty }{\mathrm {K} }}}{\frac {1}{c_{n}}}=c_{0}+{\cfrac {1}{c_{1}+{\cfrac {1}{c_{2}+{\cfrac {1}{c_{3}+{\cfrac {1}{c_{4}+\ddots }}}}}}}},} where the left hand side uses Gauss's Kettenbruch notation. In infinitary logic, one can use infinite conjunctions and infinite disjunctions. Even for well-defined infinite expressions, the value of the infinite expression may be ambiguous or not well-defined; for instance, there are multiple summation rules available for assigning values to series, and the same series may have different values according to different summation rules if the series is not absolutely convergent. == See also == Iterated binary operation Infinite word Decimal expansion Power series Infinite compositions of analytic functions Omega language == References ==
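Two of the well-defined infinite expressions above can be approximated by truncation. A numeric sketch (function names are ours): iterating the power tower of √2 converges to 2, and the truncated Ramanujan-style nested radical √(1 + 2√(1 + 3√(1 + ⋯))) converges to 3:

```python
import math

def tower(depth):
    # Truncated power tower sqrt(2)^sqrt(2)^...^sqrt(2), built iteratively;
    # the fixed point of x -> sqrt(2)**x starting from sqrt(2) is 2.
    x = math.sqrt(2)
    for _ in range(depth):
        x = math.sqrt(2) ** x
    return x

def radical(depth, k=2):
    # Truncated nested radical sqrt(1 + k*sqrt(1 + (k+1)*sqrt(...))),
    # evaluated from the innermost term outward.
    if depth == 0:
        return 1.0
    return math.sqrt(1 + k * radical(depth - 1, k + 1))

assert abs(tower(200) - 2.0) < 1e-9
assert abs(radical(40) - 3.0) < 1e-8
```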
Wikipedia:Infinite product#0
In mathematics, for a sequence of complex numbers a1, a2, a3, ... the infinite product ∏ n = 1 ∞ a n = a 1 a 2 a 3 ⋯ {\displaystyle \prod _{n=1}^{\infty }a_{n}=a_{1}a_{2}a_{3}\cdots } is defined to be the limit of the partial products a1a2...an as n increases without bound. The product is said to converge when the limit exists and is not zero. Otherwise the product is said to diverge. A limit of zero is treated specially in order to obtain results analogous to those for infinite sums. Some sources allow convergence to 0 if there are only a finite number of zero factors and the product of the non-zero factors is non-zero, but for simplicity we will not allow that here. If the product converges, then the limit of the sequence an as n increases without bound must be 1, while the converse is in general not true. The best known examples of infinite products are probably some of the formulae for π, such as the following two products, respectively by Viète (Viète's formula, the first published infinite product in mathematics) and John Wallis (Wallis product): 2 π = 2 2 ⋅ 2 + 2 2 ⋅ 2 + 2 + 2 2 ⋅ ⋯ = ∏ n = 1 ∞ cos ⁡ π 2 n + 1 {\displaystyle {\frac {2}{\pi }}={\frac {\sqrt {2}}{2}}\cdot {\frac {\sqrt {2+{\sqrt {2}}}}{2}}\cdot {\frac {\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}{2}}\cdot \;\cdots =\prod _{n=1}^{\infty }\cos {\frac {\pi }{2^{n+1}}}} π 2 = ( 2 1 ⋅ 2 3 ) ⋅ ( 4 3 ⋅ 4 5 ) ⋅ ( 6 5 ⋅ 6 7 ) ⋅ ( 8 7 ⋅ 8 9 ) ⋅ ⋯ = ∏ n = 1 ∞ ( 4 n 2 4 n 2 − 1 ) . 
{\displaystyle {\frac {\pi }{2}}=\left({\frac {2}{1}}\cdot {\frac {2}{3}}\right)\cdot \left({\frac {4}{3}}\cdot {\frac {4}{5}}\right)\cdot \left({\frac {6}{5}}\cdot {\frac {6}{7}}\right)\cdot \left({\frac {8}{7}}\cdot {\frac {8}{9}}\right)\cdot \;\cdots =\prod _{n=1}^{\infty }\left({\frac {4n^{2}}{4n^{2}-1}}\right).} == Convergence criteria == The product of positive real numbers ∏ n = 1 ∞ a n {\displaystyle \prod _{n=1}^{\infty }a_{n}} converges to a nonzero real number if and only if the sum ∑ n = 1 ∞ log ⁡ ( a n ) {\displaystyle \sum _{n=1}^{\infty }\log(a_{n})} converges. This allows the translation of convergence criteria for infinite sums into convergence criteria for infinite products. The same criterion applies to products of arbitrary complex numbers (including negative reals) if the logarithm is understood as a fixed branch of logarithm which satisfies ln ⁡ ( 1 ) = 0 {\displaystyle \ln(1)=0} , with the provision that the infinite product diverges when infinitely many an fall outside the domain of ln {\displaystyle \ln } , whereas finitely many such an can be ignored in the sum. If we define a n = 1 + p n {\displaystyle a_{n}=1+p_{n}} , the bounds 1 + ∑ n = 1 N p n ≤ ∏ n = 1 N ( 1 + p n ) ≤ exp ⁡ ( ∑ n = 1 N p n ) {\displaystyle 1+\sum _{n=1}^{N}p_{n}\leq \prod _{n=1}^{N}\left(1+p_{n}\right)\leq \exp \left(\sum _{n=1}^{N}p_{n}\right)} show that the infinite product of an converges if the infinite sum of the pn converges. This relies on the Monotone convergence theorem. 
We can show the converse by observing that, if p n → 0 {\displaystyle p_{n}\to 0} , then lim n → ∞ log ⁡ ( 1 + p n ) p n = lim x → 0 log ⁡ ( 1 + x ) x = 1 , {\displaystyle \lim _{n\to \infty }{\frac {\log(1+p_{n})}{p_{n}}}=\lim _{x\to 0}{\frac {\log(1+x)}{x}}=1,} and by the limit comparison test it follows that the two series ∑ n = 1 ∞ log ⁡ ( 1 + p n ) and ∑ n = 1 ∞ p n , {\displaystyle \sum _{n=1}^{\infty }\log(1+p_{n})\quad {\text{and}}\quad \sum _{n=1}^{\infty }p_{n},} are equivalent meaning that either they both converge or they both diverge. If the series ∑ n = 1 ∞ log ⁡ ( a n ) {\textstyle \sum _{n=1}^{\infty }\log(a_{n})} diverges to − ∞ {\displaystyle -\infty } , then the sequence of partial products of the an converges to zero. The infinite product is said to diverge to zero. For the case where the p n {\displaystyle p_{n}} have arbitrary signs, the convergence of the sum ∑ n = 1 ∞ p n {\textstyle \sum _{n=1}^{\infty }p_{n}} does not guarantee the convergence of the product ∏ n = 1 ∞ ( 1 + p n ) {\textstyle \prod _{n=1}^{\infty }(1+p_{n})} . For example, if p n = ( − 1 ) n + 1 n {\displaystyle p_{n}={\frac {(-1)^{n+1}}{\sqrt {n}}}} , then ∑ n = 1 ∞ p n {\textstyle \sum _{n=1}^{\infty }p_{n}} converges, but ∏ n = 1 ∞ ( 1 + p n ) {\textstyle \prod _{n=1}^{\infty }(1+p_{n})} diverges to zero. However, if ∑ n = 1 ∞ | p n | {\textstyle \sum _{n=1}^{\infty }|p_{n}|} is convergent, then the product ∏ n = 1 ∞ ( 1 + p n ) {\textstyle \prod _{n=1}^{\infty }(1+p_{n})} converges absolutely–that is, the factors may be rearranged in any order without altering either the convergence, or the limiting value, of the infinite product. Also, if ∑ n = 1 ∞ | p n | 2 {\textstyle \sum _{n=1}^{\infty }|p_{n}|^{2}} is convergent, then the sum ∑ n = 1 ∞ p n {\textstyle \sum _{n=1}^{\infty }p_{n}} and the product ∏ n = 1 ∞ ( 1 + p n ) {\textstyle \prod _{n=1}^{\infty }(1+p_{n})} are either both convergent, or both divergent. 
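The correspondence between the infinite product and the sum of logarithms can be illustrated numerically with the Wallis product: the partial products approach π/2, and at every stage they equal the exponential of the partial log-sums (a sketch; variable names are ours):

```python
import math

# Partial Wallis products prod 4n^2/(4n^2 - 1) approach pi/2, and
# exp of the partial sums of log(a_n) tracks the partial products.
N = 100_000
prod, logsum = 1.0, 0.0
for n in range(1, N + 1):
    a = 4 * n * n / (4 * n * n - 1)
    prod *= a
    logsum += math.log(a)

assert abs(prod - math.pi / 2) < 1e-4        # slow O(1/N) convergence
assert abs(math.exp(logsum) - prod) < 1e-6   # sum-of-logs criterion in action
```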
== Product representations of functions == One important result concerning infinite products is that every entire function f(z) (that is, every function that is holomorphic over the entire complex plane) can be factored into an infinite product of entire functions, each with at most a single root. In general, if f has a root of order m at the origin and has other complex roots at u1, u2, u3, ... (listed with multiplicities equal to their orders), then f ( z ) = z m e ϕ ( z ) ∏ n = 1 ∞ ( 1 − z u n ) exp ⁡ { z u n + 1 2 ( z u n ) 2 + ⋯ + 1 λ n ( z u n ) λ n } {\displaystyle f(z)=z^{m}e^{\phi (z)}\prod _{n=1}^{\infty }\left(1-{\frac {z}{u_{n}}}\right)\exp \left\lbrace {\frac {z}{u_{n}}}+{\frac {1}{2}}\left({\frac {z}{u_{n}}}\right)^{2}+\cdots +{\frac {1}{\lambda _{n}}}\left({\frac {z}{u_{n}}}\right)^{\lambda _{n}}\right\rbrace } where λn are non-negative integers that can be chosen to make the product converge, and ϕ ( z ) {\displaystyle \phi (z)} is some entire function (which means the term before the product will have no roots in the complex plane). The above factorization is not unique, since it depends on the choice of values for λn. However, for most functions, there will be some minimum non-negative integer p such that λn = p gives a convergent product, called the canonical product representation. This p is called the rank of the canonical product. In the event that p = 0, this takes the form f ( z ) = z m e ϕ ( z ) ∏ n = 1 ∞ ( 1 − z u n ) . {\displaystyle f(z)=z^{m}e^{\phi (z)}\prod _{n=1}^{\infty }\left(1-{\frac {z}{u_{n}}}\right).} This can be regarded as a generalization of the fundamental theorem of algebra, since for polynomials, the product becomes finite and ϕ ( z ) {\displaystyle \phi (z)} is constant. In addition to these examples, the following representations are of special note: The last of these is not a product representation of the same sort discussed above, as ζ is not entire. 
Rather, the above product representation of ζ(z) converges precisely for Re(z) > 1, where it is an analytic function. By techniques of analytic continuation, this function can be extended uniquely to an analytic function (still denoted ζ(z)) on the whole complex plane except at the point z = 1, where it has a simple pole. == See also == Infinite products in trigonometry Iterated binary operation Infinite expression Infinite series Pentagonal number theorem == References == Knopp, Konrad (1990). Theory and Application of Infinite Series. Dover Publications. ISBN 978-0-486-66165-0. Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). Boston: McGraw Hill. ISBN 0-07-054234-1. Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover Publications. ISBN 978-0-486-61272-0. == External links == Infinite products from Wolfram Math World A Collection of Infinite Products – I A Collection of Infinite Products – II
Wikipedia:Infinite-dimensional Lebesgue measure#0
In mathematics, an infinite-dimensional Lebesgue measure is a measure defined on infinite-dimensional normed vector spaces, such as Banach spaces, which resembles the Lebesgue measure used in finite-dimensional spaces. However, the traditional Lebesgue measure cannot be straightforwardly extended to all infinite-dimensional spaces due to a key limitation: any translation-invariant Borel measure on an infinite-dimensional separable Banach space must be either infinite for all sets or zero for all sets. Despite this, certain forms of infinite-dimensional Lebesgue-like measures can exist in specific contexts. These include non-separable spaces like the Hilbert cube, or scenarios where some typical properties of finite-dimensional Lebesgue measures are modified or omitted. == Motivation == The Lebesgue measure λ {\displaystyle \lambda } on the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} is locally finite, strictly positive, and translation-invariant. That is: every point x {\displaystyle x} in R n {\displaystyle \mathbb {R} ^{n}} has an open neighborhood N x {\displaystyle N_{x}} with finite measure: λ ( N x ) < + ∞ ; {\displaystyle \lambda (N_{x})<+\infty ;} every non-empty open subset U {\displaystyle U} of R n {\displaystyle \mathbb {R} ^{n}} has positive measure: λ ( U ) > 0 ; {\displaystyle \lambda (U)>0;} and if A {\displaystyle A} is any Lebesgue-measurable subset of R n , {\displaystyle \mathbb {R} ^{n},} and h {\displaystyle h} is a vector in R n , {\displaystyle \mathbb {R} ^{n},} then all translates of A {\displaystyle A} have the same measure: λ ( A + h ) = λ ( A ) . {\displaystyle \lambda (A+h)=\lambda (A).} Motivated by their geometrical significance, constructing measures satisfying the above set properties for infinite-dimensional spaces such as the L p {\displaystyle L^{p}} spaces or path spaces is still an open and active area of research. 
== Non-existence theorem in separable Banach spaces ==

Let {\displaystyle X} be an infinite-dimensional, separable Banach space. Then the only locally finite and translation-invariant Borel measure {\displaystyle \mu } on {\displaystyle X} is the trivial measure. Equivalently, there is no locally finite, strictly positive, and translation-invariant measure on {\displaystyle X}.

=== Statement for non-locally-compact Polish groups ===

More generally: on a non-locally-compact Polish group {\displaystyle G}, there cannot exist a σ-finite and left-invariant Borel measure. This theorem implies that on an infinite-dimensional separable Banach space (which cannot be locally compact) a measure that perfectly matches the properties of a finite-dimensional Lebesgue measure does not exist.

=== Proof ===

Let {\displaystyle X} be an infinite-dimensional, separable Banach space equipped with a locally finite translation-invariant measure {\displaystyle \mu }. To prove that {\displaystyle \mu } is the trivial measure, it suffices to show that {\displaystyle \mu (X)=0}. Like every separable metric space, {\displaystyle X} is a Lindelöf space, which means that every open cover of {\displaystyle X} has a countable subcover. It is, therefore, enough to show that there exists some open cover of {\displaystyle X} by null sets, because by choosing a countable subcover, the σ-subadditivity of {\displaystyle \mu } will imply that {\displaystyle \mu (X)=0}. Using local finiteness of the measure {\displaystyle \mu }, suppose that for some {\displaystyle r>0}, the open ball {\displaystyle B(r)} of radius {\displaystyle r} has finite {\displaystyle \mu }-measure.
Since {\displaystyle X} is infinite-dimensional, by Riesz's lemma there is an infinite sequence of pairwise disjoint open balls {\displaystyle B_{n}(r/4)}, {\displaystyle n\in \mathbb {N} }, of radius {\displaystyle r/4}, all contained within {\displaystyle B(r)}. By translation invariance, all of these balls have the same {\displaystyle \mu }-measure, and since the infinite sum of their measures is bounded by the finite measure of {\displaystyle B(r)}, each ball must have {\displaystyle \mu }-measure zero. Since {\displaystyle r} was arbitrary, every open ball in {\displaystyle X} has zero {\displaystyle \mu }-measure, and taking the cover of {\displaystyle X} consisting of all open balls completes the proof that {\displaystyle \mu (X)=0}.

== Nontrivial measures ==

Here are some examples of infinite-dimensional Lebesgue-like measures that can exist if the conditions of the above theorem are relaxed. One example, for an entirely separable Banach space, is the abstract Wiener space construction, similar to a product of Gaussian measures (which are not translation-invariant). Another approach is to consider the Lebesgue measure of finite-dimensional subspaces within the larger space and look at prevalent and shy sets. The Hilbert cube carries the product Lebesgue measure, and the compact topological group given by the Tychonoff product of an infinite number of copies of the circle group is infinite-dimensional and carries a Haar measure that is translation-invariant. These two spaces can be mapped onto each other in a measure-preserving way by unwrapping the circles into intervals. The infinite product of the additive real numbers has the analogous product Haar measure, which is precisely the infinite-dimensional analogue of the Lebesgue measure.
== See also ==

Cylinder set measure – a way to generate a measure over product spaces
Cameron–Martin theorem – theorem describing translation of Gaussian measures (Wiener measures) on Hilbert spaces
Feldman–Hájek theorem
Gaussian measure#Infinite-dimensional spaces
Structure theorem for Gaussian measures
Projection-valued measure – operator-valued measure of interest in quantum mechanics and functional analysis
Set function – function from sets to numbers

== References ==
Wikipedia:Inflation-restriction exact sequence#0
In algebraic topology, a transgression map is a way to transfer cohomology classes. It occurs, for example in the inflation-restriction exact sequence in group cohomology, and in integration in fibers. It also naturally arises in many spectral sequences; see spectral sequence#Edge maps and transgressions. == Inflation-restriction exact sequence == The transgression map appears in the inflation-restriction exact sequence, an exact sequence occurring in group cohomology. Let G be a group, N a normal subgroup, and A an abelian group which is equipped with an action of G, i.e., a homomorphism from G to the automorphism group of A. The quotient group G / N {\displaystyle G/N} acts on A N = { a ∈ A : n a = a for all n ∈ N } . {\displaystyle A^{N}=\{a\in A:na=a{\text{ for all }}n\in N\}.} Then the inflation-restriction exact sequence is: 0 → H 1 ( G / N , A N ) → H 1 ( G , A ) → H 1 ( N , A ) G / N → H 2 ( G / N , A N ) → H 2 ( G , A ) . {\displaystyle 0\to H^{1}(G/N,A^{N})\to H^{1}(G,A)\to H^{1}(N,A)^{G/N}\to H^{2}(G/N,A^{N})\to H^{2}(G,A).} The transgression map is the map H 1 ( N , A ) G / N → H 2 ( G / N , A N ) {\displaystyle H^{1}(N,A)^{G/N}\to H^{2}(G/N,A^{N})} . Transgression is defined for general n ∈ N {\displaystyle n\in \mathbb {N} } , H n ( N , A ) G / N → H n + 1 ( G / N , A N ) {\displaystyle H^{n}(N,A)^{G/N}\to H^{n+1}(G/N,A^{N})} , only if H i ( N , A ) G / N = 0 {\displaystyle H^{i}(N,A)^{G/N}=0} for i ≤ n − 1 {\displaystyle i\leq n-1} . == Notes == == References == Gille, Philippe; Szamuely, Tamás (2006). Central simple algebras and Galois cohomology. Cambridge Studies in Advanced Mathematics. Vol. 101. Cambridge: Cambridge University Press. ISBN 0-521-86103-9. Zbl 1137.12001. Hazewinkel, Michiel (1995). Handbook of Algebra, Volume 1. Elsevier. p. 282. ISBN 0444822127. Koch, Helmut (1997). Algebraic Number Theory. Encycl. Math. Sci. Vol. 62 (2nd printing of 1st ed.). Springer-Verlag. ISBN 3-540-63003-1. Zbl 0819.11044. 
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2008). Cohomology of Number Fields. Grundlehren der Mathematischen Wissenschaften. Vol. 323 (2nd ed.). Springer-Verlag. pp. 112–113. ISBN 978-3-540-37888-4. Zbl 1136.11001. Schmid, Peter (2007). The Solution of The K(GV) Problem. Advanced Texts in Mathematics. Vol. 4. Imperial College Press. p. 214. ISBN 978-1860949708. Serre, Jean-Pierre (1979). Local Fields. Graduate Texts in Mathematics. Vol. 67. Translated by Greenberg, Marvin Jay. Springer-Verlag. pp. 117–118. ISBN 0-387-90424-7. Zbl 0423.12016. == External links == transgression at the nLab
Wikipedia:Information algebra#0
The term "information algebra" refers to mathematical techniques of information processing. Classical information theory goes back to Claude Shannon. It is a theory of information transmission, looking at communication and storage. However, classical information theory does not account for the fact that information usually comes from different sources and must therefore be combined, nor for the need to extract from a piece of information those parts that are relevant to specific questions. A mathematical phrasing of these operations leads to an algebra of information, describing basic modes of information processing. Such an algebra covers several formalisms of computer science that seem different on the surface: relational databases, multiple systems of formal logic, and numerical problems of linear algebra. It allows the development of generic procedures of information processing and thus a unification of basic methods of computer science, in particular of distributed information processing.

Information relates to precise questions, comes from different sources, must be aggregated, and can be focused on questions of interest. Starting from these considerations, information algebras (Kohlas 2003) are two-sorted algebras {\displaystyle (\Phi ,D)}, where {\displaystyle \Phi } is a semigroup, representing combination or aggregation of information; {\displaystyle D} is a lattice of domains (related to questions) whose partial order reflects the granularity of the domain or the question; and a mixed operation represents focusing or extraction of information.

== Information and its operations ==

More precisely, in the two-sorted algebra {\displaystyle (\Phi ,D)}, the following operations are defined. Additionally, in {\displaystyle D} the usual lattice operations (meet and join) are defined.
== Axioms and definition == The axioms of the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} , in addition to the axioms of the lattice D {\displaystyle D} : A two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} satisfying these axioms is called an Information Algebra. == Order of information == A partial order of information can be introduced by defining ϕ ≤ ψ {\displaystyle \phi \leq \psi } if ϕ ⊗ ψ = ψ {\displaystyle \phi \otimes \psi =\psi } . This means that ϕ {\displaystyle \phi } is less informative than ψ {\displaystyle \psi } if it adds no new information to ψ {\displaystyle \psi } . The semigroup Φ {\displaystyle \Phi } is a semilattice relative to this order, i.e. ϕ ⊗ ψ = ϕ ∨ ψ {\displaystyle \phi \otimes \psi =\phi \vee \psi } . Relative to any domain (question) x ∈ D {\displaystyle x\in D} a partial order can be introduced by defining ϕ ≤ x ψ {\displaystyle \phi \leq _{x}\psi } if ϕ ⇒ x ≤ ψ ⇒ x {\displaystyle \phi ^{\Rightarrow x}\leq \psi ^{\Rightarrow x}} . It represents the order of information content of ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } relative to the domain (question) x {\displaystyle x} . == Labeled information algebra == The pairs ( ϕ , x ) {\displaystyle (\phi ,x)\ } , where ϕ ∈ Φ {\displaystyle \phi \in \Phi } and x ∈ D {\displaystyle x\in D} such that ϕ ⇒ x = ϕ {\displaystyle \phi ^{\Rightarrow x}=\phi } form a labeled Information Algebra. More precisely, in the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)\ } , the following operations are defined == Models of information algebras == Here follows an incomplete list of instances of information algebras: Relational algebra: The reduct of a relational algebra with natural join as combination and the usual projection is a labeled information algebra, see Example. Constraint systems: Constraints form an information algebra (Jaffar & Maher 1994). 
Semiring valued algebras: C-Semirings induce information algebras (Bistarelli, Montanari & Rossi1997);(Bistarelli et al. 1999);(Kohlas & Wilson 2006). Logic: Many logic systems induce information algebras (Wilson & Mengin 1999). Reducts of cylindric algebras (Henkin, Monk & Tarski 1971) or polyadic algebras are information algebras related to predicate logic (Halmos 2000). Module algebras: (Bergstra, Heering & Klint 1990);(de Lavalette 1992). Linear systems: Systems of linear equations or linear inequalities induce information algebras (Kohlas 2003). === Worked-out example: relational algebra === Let A {\displaystyle {\mathcal {A}}} be a set of symbols, called attributes (or column names). For each α ∈ A {\displaystyle \alpha \in {\mathcal {A}}} let U α {\displaystyle U_{\alpha }} be a non-empty set, the set of all possible values of the attribute α {\displaystyle \alpha } . For example, if A = { name , age , income } {\displaystyle {\mathcal {A}}=\{{\texttt {name}},{\texttt {age}},{\texttt {income}}\}} , then U name {\displaystyle U_{\texttt {name}}} could be the set of strings, whereas U age {\displaystyle U_{\texttt {age}}} and U income {\displaystyle U_{\texttt {income}}} are both the set of non-negative integers. Let x ⊆ A {\displaystyle x\subseteq {\mathcal {A}}} . An x {\displaystyle x} -tuple is a function f {\displaystyle f} so that dom ( f ) = x {\displaystyle {\hbox{dom}}(f)=x} and f ( α ) ∈ U α {\displaystyle f(\alpha )\in U_{\alpha }} for each α ∈ x {\displaystyle \alpha \in x} The set of all x {\displaystyle x} -tuples is denoted by E x {\displaystyle E_{x}} . For an x {\displaystyle x} -tuple f {\displaystyle f} and a subset y ⊆ x {\displaystyle y\subseteq x} the restriction f [ y ] {\displaystyle f[y]} is defined to be the y {\displaystyle y} -tuple g {\displaystyle g} so that g ( α ) = f ( α ) {\displaystyle g(\alpha )=f(\alpha )} for all α ∈ y {\displaystyle \alpha \in y} . 
A relation R {\displaystyle R} over x {\displaystyle x} is a set of x {\displaystyle x} -tuples, i.e. a subset of E x {\displaystyle E_{x}} . The set of attributes x {\displaystyle x} is called the domain of R {\displaystyle R} and denoted by d ( R ) {\displaystyle d(R)} . For y ⊆ d ( R ) {\displaystyle y\subseteq d(R)} the projection of R {\displaystyle R} onto y {\displaystyle y} is defined as follows: π y ( R ) := { f [ y ] ∣ f ∈ R } . {\displaystyle \pi _{y}(R):=\{f[y]\mid f\in R\}.} The join of a relation R {\displaystyle R} over x {\displaystyle x} and a relation S {\displaystyle S} over y {\displaystyle y} is defined as follows: R ⋈ S := { f ∣ f ( x ∪ y ) -tuple , f [ x ] ∈ R , f [ y ] ∈ S } . {\displaystyle R\bowtie S:=\{f\mid f\quad (x\cup y){\hbox{-tuple}},\quad f[x]\in R,\;f[y]\in S\}.} As an example, let R {\displaystyle R} and S {\displaystyle S} be the following relations: R = name age A 34 B 47 S = name income A 20'000 B 32'000 {\displaystyle R={\begin{matrix}{\texttt {name}}&{\texttt {age}}\\{\texttt {A}}&{\texttt {34}}\\{\texttt {B}}&{\texttt {47}}\\\end{matrix}}\qquad S={\begin{matrix}{\texttt {name}}&{\texttt {income}}\\{\texttt {A}}&{\texttt {20'000}}\\{\texttt {B}}&{\texttt {32'000}}\\\end{matrix}}} Then the join of R {\displaystyle R} and S {\displaystyle S} is: R ⋈ S = name age income A 34 20'000 B 47 32'000 {\displaystyle R\bowtie S={\begin{matrix}{\texttt {name}}&{\texttt {age}}&{\texttt {income}}\\{\texttt {A}}&{\texttt {34}}&{\texttt {20'000}}\\{\texttt {B}}&{\texttt {47}}&{\texttt {32'000}}\\\end{matrix}}} A relational database with natural join ⋈ {\displaystyle \bowtie } as combination and the usual projection π {\displaystyle \pi } is an information algebra. The operations are well defined since d ( R ⋈ S ) = d ( R ) ∪ d ( S ) {\displaystyle d(R\bowtie S)=d(R)\cup d(S)} If x ⊆ d ( R ) {\displaystyle x\subseteq d(R)} , then d ( π x ( R ) ) = x {\displaystyle d(\pi _{x}(R))=x} . 
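The projection and join operations of the worked example can be sketched directly. A minimal Python sketch (encodings and names are ours), representing each tuple as a frozenset of (attribute, value) pairs so that relations can be stored as Python sets:

```python
def project(R, y):
    """pi_y(R) = { f[y] : f in R }: restrict every tuple to the attributes in y."""
    return {frozenset((a, v) for a, v in f if a in y) for f in R}

def join(R, S):
    """Natural join: all (x union y)-tuples whose restrictions lie in R and S."""
    out = set()
    for f in R:
        for g in S:
            df, dg = dict(f), dict(g)
            # Tuples are compatible if they agree on all shared attributes.
            if all(df[a] == dg[a] for a in df.keys() & dg.keys()):
                merged = dict(df)
                merged.update(dg)
                out.add(frozenset(merged.items()))
    return out

# The two relations R and S from the text.
R = {frozenset({"name": "A", "age": 34}.items()),
     frozenset({"name": "B", "age": 47}.items())}
S = {frozenset({"name": "A", "income": 20000}.items()),
     frozenset({"name": "B", "income": 32000}.items())}

RS = join(R, S)                              # the joined table from the text
assert project(RS, {"name", "age"}) == R     # projecting the join recovers R
assert join(R, project(R, {"name"})) == R    # idempotency axiom: R join pi_x(R) = R
```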
It is easy to see that relational databases satisfy the axioms of a labeled information algebra:
semigroup: ( R 1 ⋈ R 2 ) ⋈ R 3 = R 1 ⋈ ( R 2 ⋈ R 3 ) {\displaystyle (R_{1}\bowtie R_{2})\bowtie R_{3}=R_{1}\bowtie (R_{2}\bowtie R_{3})} and R ⋈ S = S ⋈ R {\displaystyle R\bowtie S=S\bowtie R}
transitivity: If x ⊆ y ⊆ d ( R ) {\displaystyle x\subseteq y\subseteq d(R)} , then π x ( π y ( R ) ) = π x ( R ) {\displaystyle \pi _{x}(\pi _{y}(R))=\pi _{x}(R)} .
combination: If d ( R ) = x {\displaystyle d(R)=x} and d ( S ) = y {\displaystyle d(S)=y} , then π x ( R ⋈ S ) = R ⋈ π x ∩ y ( S ) {\displaystyle \pi _{x}(R\bowtie S)=R\bowtie \pi _{x\cap y}(S)} .
idempotency: If x ⊆ d ( R ) {\displaystyle x\subseteq d(R)} , then R ⋈ π x ( R ) = R {\displaystyle R\bowtie \pi _{x}(R)=R} .
support: If x = d ( R ) {\displaystyle x=d(R)} , then π x ( R ) = R {\displaystyle \pi _{x}(R)=R} .
== Connections ==
Valuation algebras: Dropping the idempotency axiom leads to valuation algebras. These axioms were introduced by (Shenoy & Shafer 1990) to generalize local computation schemes (Lauritzen & Spiegelhalter 1988) from Bayesian networks to more general formalisms, including belief functions, possibility potentials, etc. (Kohlas & Shenoy 2000). For a book-length exposition on the topic see Pouly & Kohlas (2011).
Domains and information systems: Compact information algebras (Kohlas 2003) are related to Scott domains and Scott information systems (Scott 1970); (Scott 1982); (Larsen & Winskel 1984).
Uncertain information: Random variables with values in information algebras represent probabilistic argumentation systems (Haenni, Kohlas & Lehmann 2000).
Semantic information: Information algebras introduce semantics by relating information to questions through focusing and combination (Groenendijk & Stokhof 1984); (Floridi 2004).
Information flow: Information algebras are related to information flow, in particular classifications (Barwise & Seligman 1997).
Tree decomposition: Information algebras can be organized into a hierarchical tree structure, decomposing the underlying problem into smaller subproblems.
Semigroup theory: ...
Compositional models: Such models may be defined within the framework of information algebras: https://arxiv.org/abs/1612.02587
Extended axiomatic foundations of information and valuation algebras: The concept of conditional independence is basic for information algebras. A new axiomatic foundation of information algebras, based on conditional independence and extending the one above, is available: https://arxiv.org/abs/1701.02658
== Historical roots ==
The axioms for information algebras are derived from the axiom system proposed in (Shenoy and Shafer, 1990); see also (Shafer, 1991).
== References ==
Barwise, J.; Seligman, J. (1997), Information Flow: The Logic of Distributed Systems, Number 44 in Cambridge Tracts in Theoretical Computer Science, Cambridge, U.K.: Cambridge University Press
Bergstra, J.A.; Heering, J.; Klint, P. (1990), "Module algebra", Journal of the ACM, 37 (2): 335–372, doi:10.1145/77600.77621, S2CID 7910431
Bistarelli, S.; Fargier, H.; Montanari, U.; Rossi, F.; Schiex, T.; Verfaillie, G. (1999), "Semiring-based CSPs and valued CSPs: Frameworks, properties, and comparison", Constraints, 4 (3): 199–240, doi:10.1023/A:1026441215081, S2CID 17232456, archived from the original on March 10, 2022
Bistarelli, Stefano; Montanari, Ugo; Rossi, Francesca (1997), "Semiring-based constraint satisfaction and optimization", Journal of the ACM, 44 (2): 201–236, CiteSeerX 10.1.1.45.5110, doi:10.1145/256303.256306, S2CID 4003767
de Lavalette, Gerard R. Renardel (1992), "Logical semantics of modularisation", in Egon Börger; Gerhard Jäger; Hans Kleine Büning; Michael M. Richter (eds.), CSL: 5th Workshop on Computer Science Logic, Volume 626 of Lecture Notes in Computer Science, Springer, pp. 306–315, ISBN 978-3-540-55789-0
Floridi, Luciano (2004), "Outline of a theory of strongly semantic information" (PDF), Minds and Machines, 14 (2): 197–221, doi:10.1023/b:mind.0000021684.50925.c9, S2CID 3058065
Groenendijk, J.; Stokhof, M. (1984), Studies on the Semantics of Questions and the Pragmatics of Answers, PhD thesis, Universiteit van Amsterdam
Haenni, R.; Kohlas, J.; Lehmann, N. (2000), "Probabilistic argumentation systems" (PDF), in J. Kohlas; S. Moral (eds.), Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning, Dordrecht: Kluwer, pp. 221–287, archived from the original on January 25, 2005
Halmos, Paul R. (2000), "An autobiography of polyadic algebras", Logic Journal of the IGPL, 8 (4): 383–392, doi:10.1093/jigpal/8.4.383, S2CID 36156234
Henkin, L.; Monk, J. D.; Tarski, A. (1971), Cylindric Algebras, Amsterdam: North-Holland, ISBN 978-0-7204-2043-2
Jaffar, J.; Maher, M. J. (1994), "Constraint logic programming: A survey", Journal of Logic Programming, 19/20: 503–581, doi:10.1016/0743-1066(94)90033-7
Kohlas, J. (2003), Information Algebras: Generic Structures for Inference, Springer-Verlag, ISBN 978-1-85233-689-9
Kohlas, J.; Shenoy, P.P. (2000), "Computation in valuation algebras", in J. Kohlas; S. Moral (eds.), Handbook of Defeasible Reasoning and Uncertainty Management Systems, Volume 5: Algorithms for Uncertainty and Defeasible Reasoning, Dordrecht: Kluwer, pp. 5–39
Kohlas, J.; Wilson, N. (2006), Exact and approximate local computation in semiring-induced valuation algebras (PDF), Technical Report 06-06, Department of Informatics, University of Fribourg, archived from the original on September 24, 2006
Larsen, K. G.; Winskel, G. (1984), "Using information systems to solve recursive domain equations effectively", in Gilles Kahn; David B. MacQueen; Gordon D. Plotkin (eds.), Semantics of Data Types, International Symposium, Sophia-Antipolis, France, June 27–29, 1984, Proceedings, vol. 173 of Lecture Notes in Computer Science, Berlin: Springer, pp. 109–129
Lauritzen, S. L.; Spiegelhalter, D. J. (1988), "Local computations with probabilities on graphical structures and their application to expert systems", Journal of the Royal Statistical Society, Series B, 50 (2): 157–224, doi:10.1111/j.2517-6161.1988.tb01721.x
Pouly, Marc; Kohlas, Jürg (2011), Generic Inference: A Unifying Theory for Automated Reasoning, John Wiley & Sons, ISBN 978-1-118-01086-0
Scott, Dana S. (1970), Outline of a mathematical theory of computation, Technical Monograph PRG–2, Oxford University Computing Laboratory, Programming Research Group
Scott, D.S. (1982), "Domains for denotational semantics", in M. Nielsen; E.M. Schmitt (eds.), Automata, Languages and Programming, Springer, pp. 577–613
Shafer, G. (1991), An axiomatic study of computation in hypertrees, Working Paper 232, School of Business, University of Kansas
Shenoy, P. P.; Shafer, G. (1990), "Axioms for probability and belief-function propagation", in Ross D. Shachter; Tod S. Levitt; Laveen N. Kanal; John F. Lemmer (eds.), Uncertainty in Artificial Intelligence 4, vol. 9, Amsterdam: Elsevier, pp. 169–198, doi:10.1016/B978-0-444-88650-7.50019-6, hdl:1808/144, ISBN 978-0-444-88650-7
Wilson, Nic; Mengin, Jérôme (1999), "Logical deduction using the local computation framework", in Anthony Hunter; Simon Parsons (eds.), Symbolic and Quantitative Approaches to Reasoning and Uncertainty, European Conference, ECSQARU'99, London, UK, July 5–9, 1999, Proceedings, volume 1638 of Lecture Notes in Computer Science, Springer, pp. 386–396, ISBN 978-3-540-66131-3
Wikipedia:Infrared fixed point#0
In physics, an infrared fixed point is a set of coupling constants, or other parameters, that evolve from arbitrary initial values at very high energies (short distance) to fixed, stable values, usually predictable, at low energies (large distance). This usually involves the use of the renormalization group, which specifically details the way parameters in a physical system (a quantum field theory) depend on the energy scale being probed. Conversely, if the length-scale decreases and the physical parameters approach fixed values, then we have ultraviolet fixed points. The fixed points are generally independent of the initial values of the parameters over a large range of the initial values. This is known as universality.
== Statistical physics ==
In the statistical physics of second order phase transitions, the physical system approaches an infrared fixed point that is independent of the initial short distance dynamics that defines the material. This determines the properties of the phase transition at the critical temperature, or critical point. Observables, such as critical exponents, usually depend only upon the dimension of space, and are independent of the atomic or molecular constituents.
== Top quark ==
In the Standard Model, quarks and leptons have "Yukawa couplings" to the Higgs boson which determine the masses of the particles. Most of the quarks' and leptons' Yukawa couplings are small compared to the top quark's Yukawa coupling. Yukawa couplings are not constants; their properties change depending on the energy scale at which they are measured, which is known as the running of the constants.
The dynamics of Yukawa couplings are determined by the renormalization group equation: μ ∂ ∂ μ y q ≈ y q 16 π 2 ( 9 2 y q 2 − 8 g 3 2 ) , {\displaystyle \ \mu \ {\frac {\partial }{\partial \mu }}\ y_{q}\approx {\frac {y_{q}}{\ 16\pi ^{2}\ }}\left({\frac {\ 9\ }{2}}y_{q}^{2}-8g_{3}^{2}\right)\ ,} where g 3 {\displaystyle \ g_{3}\ } is the color gauge coupling (which is a function of μ {\displaystyle \ \mu \ } and associated with asymptotic freedom ) and y q {\displaystyle \ y_{q}\ } is the Yukawa coupling for the quark q ∈ { u , b , t } . {\displaystyle \ q\in \{\mathrm {u,b,t} \}~.} This equation describes how the Yukawa coupling changes with energy scale μ . {\displaystyle \ \mu ~.} A more complete version of the same formula is more appropriate for the top quark: μ ∂ ∂ μ y t ≈ y t 16 π 2 ( 9 2 y t 2 − 8 g 3 2 − 9 4 g 2 2 − 17 20 g 1 2 ) , {\displaystyle \ \mu \ {\frac {\ \partial }{\partial \mu }}\ y_{\mathrm {t} }\approx {\frac {\ y_{\text{t}}\ }{16\ \pi ^{2}}}\left({\frac {\ 9\ }{2}}y_{\mathrm {t} }^{2}-8g_{3}^{2}-{\frac {\ 9\ }{4}}g_{2}^{2}-{\frac {\ 17\ }{20}}g_{1}^{2}\right)\ ,} where g2 is the weak isospin gauge coupling and g1 is the weak hypercharge gauge coupling. For small or near constant values of g1 and g2 the qualitative behavior is the same. The Yukawa couplings of the up, down, charm, strange and bottom quarks, are small at the extremely high energy scale of grand unification, μ ≈ 10 15 G e V . {\displaystyle \ \mu \approx 10^{15}\mathrm {GeV} ~.} Therefore, the y q 2 {\displaystyle \ y_{q}^{2}\ } term can be neglected in the above equation for all but the top quark. Solving, we then find that y q {\displaystyle \ y_{q}\ } is increased slightly at the low energy scales at which the quark masses are generated by the Higgs, μ ≈ 125 G e V . 
{\displaystyle \ \mu \approx 125\ \mathrm {GeV} ~.} On the other hand, solutions to this equation for large initial values typical for the top quark y t {\displaystyle \ y_{\mathrm {t} }\ } cause the expression on the right side to quickly approach zero as we descend in energy scale, which stops y t {\displaystyle \ y_{\mathrm {t} }\ } from changing and locks it to the QCD coupling g 3 . {\displaystyle \ g_{3}~.} This is known as an (infrared) quasi-fixed point of the renormalization group equation for the Yukawa coupling. No matter what the initial value of the coupling is, if it is sufficiently large at high energies, it will reach this quasi-fixed point value, and the corresponding quark mass is predicted to be about m ≈ 220 G e V . {\displaystyle \ m\approx 220\ \mathrm {GeV} ~.} The renormalization group equation for large values of the top Yukawa coupling was first considered in 1981 by Pendleton & Ross, and the "infrared quasi-fixed point" was proposed by Hill. The prevailing view at the time was that the top quark mass would lie in a range of 15 to 26 GeV. The quasi-infrared fixed point emerged in top quark condensation theories of electroweak symmetry breaking, in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks. While the value of the quasi-fixed point determined in the Standard Model is about m ≈ 220 G e V , {\displaystyle \ m\approx 220\ \mathrm {GeV} ~,} if there is more than one Higgs doublet, the value will be reduced by an increase in the 9/2 factor in the equation and by any Higgs mixing angle effects. Since the observed top quark mass of 174 GeV is about 20% lower than the Standard Model prediction, this suggests there may be more Higgs doublets beyond the single Standard Model Higgs boson. If there are many additional Higgs doublets in nature, the predicted value of the quasi-fixed point comes into agreement with experiment.
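The funneling toward the quasi-fixed point can be illustrated numerically by integrating the one-loop equation above downward in scale with a simple Euler step. The sketch below holds g3 fixed (a simplification: in the Standard Model g3 runs as well), and the chosen value g3 = 1.2 and the scales are purely illustrative:

```python
import math

def run_down(y, g3=1.2, mu_hi=1e15, mu_lo=100.0, h=1e-3):
    """Euler-integrate  mu dy/dmu = y/(16 pi^2) (9/2 y^2 - 8 g3^2)
    downward in t = ln(mu), with g3 held constant for simplicity."""
    t, t_lo = math.log(mu_hi), math.log(mu_lo)
    while t > t_lo:
        beta = y / (16 * math.pi ** 2) * (4.5 * y ** 2 - 8 * g3 ** 2)
        y -= h * beta  # descending in t, so subtract
        t -= h
    return y

# Very different high-scale starting values are funneled toward the
# value y* = (4/3) g3 at which the bracket vanishes:
y_from_small, y_from_large = run_down(1.0), run_down(3.0)
```

Even though the two starting values differ by a factor of three, both trajectories end up close to y* = (4/3) g3 = 1.6 for the assumed constant g3, which is the fixed-point behaviour described above.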
Even with two Higgs doublets, the fixed-point prediction for the top quark mass is reduced to roughly 170–200 GeV. Some theorists believed this was supporting evidence for the Supersymmetric Standard Model; however, no other signs of supersymmetry have emerged at the Large Hadron Collider.
== Banks–Zaks fixed point ==
Another example of an infrared fixed point is the Banks–Zaks fixed point, in which the coupling constant of a Yang–Mills theory evolves to a fixed value. The beta-function vanishes, and the theory possesses a symmetry known as conformal symmetry.
== Footnotes ==
== See also ==
Top quark
Cutoff (physics)
== References ==
Wikipedia:Infrastructure (number theory)#0
In mathematics, an infrastructure is a group-like structure appearing in global fields.
== Historic development ==
In 1972, D. Shanks first discovered the infrastructure of a real quadratic number field and applied his baby-step giant-step algorithm to compute the regulator of such a field in O ( D 1 / 4 + ε ) {\displaystyle {\mathcal {O}}(D^{1/4+\varepsilon })} binary operations (for every ε > 0 {\displaystyle \varepsilon >0} ), where D {\displaystyle D} is the discriminant of the quadratic field; previous methods required O ( D 1 / 2 + ε ) {\displaystyle {\mathcal {O}}(D^{1/2+\varepsilon })} binary operations. Ten years later, H. W. Lenstra published a mathematical framework describing the infrastructure of a real quadratic number field in terms of "circular groups". It was also described by R. Schoof and H. C. Williams, and later extended by H. C. Williams, G. W. Dueck and B. K. Schmid to certain cubic number fields of unit rank one and by J. Buchmann and H. C. Williams to all number fields of unit rank one. In his habilitation thesis, J. Buchmann presented a baby-step giant-step algorithm to compute the regulator of a number field of arbitrary unit rank. The first description of infrastructures in number fields of arbitrary unit rank was given by R. Schoof using Arakelov divisors in 2008. The infrastructure was also described for other global fields, namely for algebraic function fields over finite fields. This was done first by A. Stein and H. G. Zimmer in the case of real hyperelliptic function fields. It was extended to certain cubic function fields of unit rank one by Renate Scheidler and A. Stein. In 1999, S. Paulus and H.-G. Rück related the infrastructure of a real quadratic function field to the divisor class group. This connection can be generalized to arbitrary function fields and, combined with R. Schoof's results, to all global fields.
== One-dimensional case == === Abstract definition === A one-dimensional (abstract) infrastructure ( X , d ) {\displaystyle (X,d)} consists of a real number R > 0 {\displaystyle R>0} , a finite set X ≠ ∅ {\displaystyle X\neq \emptyset } together with an injective map d : X → R / R Z {\displaystyle d:X\to \mathbb {R} /R\mathbb {Z} } . The map d {\displaystyle d} is often called the distance map. By interpreting R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } as a circle of circumference R {\displaystyle R} and by identifying X {\displaystyle X} with d ( X ) {\displaystyle d(X)} , one can see a one-dimensional infrastructure as a circle with a finite set of points on it. === Baby steps === A baby step is a unary operation b s : X → X {\displaystyle bs:X\to X} on a one-dimensional infrastructure ( X , d ) {\displaystyle (X,d)} . Visualizing the infrastructure as a circle, a baby step assigns each point of d ( X ) {\displaystyle d(X)} the next one. Formally, one can define this by assigning to x ∈ X {\displaystyle x\in X} the real number f x := inf { f ′ > 0 ∣ d ( x ) + f ′ ∈ d ( X ) } {\displaystyle f_{x}:=\inf\{f'>0\mid d(x)+f'\in d(X)\}} ; then, one can define b s ( x ) := d − 1 ( d ( x ) + f x ) {\displaystyle bs(x):=d^{-1}(d(x)+f_{x})} . === Giant steps and reduction maps === Observing that R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } is naturally an abelian group, one can consider the sum d ( x ) + d ( y ) ∈ R / R Z {\displaystyle d(x)+d(y)\in \mathbb {R} /R\mathbb {Z} } for x , y ∈ X {\displaystyle x,y\in X} . In general, this is not an element of d ( X ) {\displaystyle d(X)} . But instead, one can take an element of d ( X ) {\displaystyle d(X)} which lies nearby. 
To formalize this concept, assume that there is a map r e d : R / R Z → X {\displaystyle red:\mathbb {R} /R\mathbb {Z} \to X} ; then, one can define g s ( x , y ) := r e d ( d ( x ) + d ( y ) ) {\displaystyle gs(x,y):=red(d(x)+d(y))} to obtain a binary operation g s : X × X → X {\displaystyle gs:X\times X\to X} , called the giant step operation. Note that this operation is in general not associative. The main difficulty is how to choose the map r e d {\displaystyle red} . Assuming that one wants to have the condition r e d ∘ d = i d X {\displaystyle red\circ d=\mathrm {id} _{X}} , a range of possibilities remains. One possible choice is given as follows: for v ∈ R / R Z {\displaystyle v\in \mathbb {R} /R\mathbb {Z} } , define f v := inf { f ≥ 0 ∣ v − f ∈ d ( X ) } {\displaystyle f_{v}:=\inf\{f\geq 0\mid v-f\in d(X)\}} ; then one can define r e d ( v ) := d − 1 ( v − f v ) {\displaystyle red(v):=d^{-1}(v-f_{v})} . This choice, seeming somewhat arbitrary, appears in a natural way when one tries to obtain infrastructures from global fields. Other choices are possible as well, for example choosing an element x ∈ X {\displaystyle x\in X} such that | d ( x ) − v | {\displaystyle |d(x)-v|} is minimal (here, | d ( x ) − v | {\displaystyle |d(x)-v|} stands for inf { | f − v | ∣ f ∈ d ( x ) } {\displaystyle \inf\{|f-v|\mid f\in d(x)\}} , as d ( x ) {\displaystyle d(x)} is of the form v + R Z {\displaystyle v+R\mathbb {Z} } ); one possible construction in the case of real quadratic hyperelliptic function fields is given by S. D. Galbraith, M. Harrison and D. J. Mireles Morales.
=== Relation to real quadratic fields ===
D. Shanks observed the infrastructure in real quadratic number fields when he was looking at cycles of reduced binary quadratic forms.
Note that there is a close relation between reducing binary quadratic forms and continued fraction expansion; one step in the continued fraction expansion of a certain quadratic irrationality gives a unary operation on the set of reduced forms, which cycles through all reduced forms in one equivalence class. Arranging all these reduced forms in a cycle, Shanks noticed that one can quickly jump to reduced forms further away from the beginning of the circle by composing two such forms and reducing the result. He called this binary operation on the set of reduced forms a giant step, and the operation to go to the next reduced form in the cycle a baby step.
=== Relation to R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } ===
The set R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } has a natural group operation and the giant step operation is defined in terms of it. Hence, it makes sense to compare the arithmetic in the infrastructure to the arithmetic in R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } . It turns out that the group operation of R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } can be described using giant steps and baby steps, by representing elements of R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } by elements of X {\displaystyle X} together with a relatively small real number; this was first described by D. Hühnlein and S. Paulus and by M. J. Jacobson, Jr., R. Scheidler and H. C. Williams in the case of infrastructures obtained from real quadratic number fields. They used floating point numbers to represent the real numbers, and called these representations CRIAD-representations and ( f , p ) {\displaystyle (f,p)} -representations, respectively. More generally, one can define a similar concept for all one-dimensional infrastructures; these are sometimes called f {\displaystyle f} -representations.
A set of f {\displaystyle f} -representations is a subset f R e p {\displaystyle fRep} of X × R / R Z {\displaystyle X\times \mathbb {R} /R\mathbb {Z} } such that the map Ψ f R e p : f R e p → R / R Z , ( x , f ) ↦ d ( x ) + f {\displaystyle \Psi _{fRep}:fRep\to \mathbb {R} /R\mathbb {Z} ,\;(x,f)\mapsto d(x)+f} is a bijection and that ( x , 0 ) ∈ f R e p {\displaystyle (x,0)\in fRep} for every x ∈ X {\displaystyle x\in X} . If r e d : R / R Z → X {\displaystyle red:\mathbb {R} /R\mathbb {Z} \to X} is a reduction map, f R e p r e d := { ( x , f ) ∈ X × R / R Z ∣ r e d ( d ( x ) + f ) = x } {\displaystyle fRep_{red}:=\{(x,f)\in X\times \mathbb {R} /R\mathbb {Z} \mid red(d(x)+f)=x\}} is a set of f {\displaystyle f} -representations; conversely, if f R e p {\displaystyle fRep} is a set of f {\displaystyle f} -representations, one can obtain a reduction map by setting r e d ( f ) = π 1 ( Ψ f R e p − 1 ( f ) ) {\displaystyle red(f)=\pi _{1}(\Psi _{fRep}^{-1}(f))} , where π 1 : X × R / R Z → X , ( x , f ) ↦ x {\displaystyle \pi _{1}:X\times \mathbb {R} /R\mathbb {Z} \to X,\;(x,f)\mapsto x} is the projection onto X {\displaystyle X} . Hence, sets of f {\displaystyle f} -representations and reduction maps are in a one-to-one correspondence. Using the bijection Ψ f R e p : f R e p → R / R Z {\displaystyle \Psi _{fRep}:fRep\to \mathbb {R} /R\mathbb {Z} } , one can pull over the group operation on R / R Z {\displaystyle \mathbb {R} /R\mathbb {Z} } to f R e p {\displaystyle fRep} , hence turning f R e p {\displaystyle fRep} into an abelian group ( f R e p , + ) {\displaystyle (fRep,+)} by x + y := Ψ f R e p − 1 ( Ψ f R e p ( x ) + Ψ f R e p ( y ) ) {\displaystyle x+y:=\Psi _{fRep}^{-1}(\Psi _{fRep}(x)+\Psi _{fRep}(y))} , x , y ∈ f R e p {\displaystyle x,y\in fRep} . In certain cases, this group operation can be explicitly described without using Ψ f R e p {\displaystyle \Psi _{fRep}} and d {\displaystyle d} .
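As a toy illustration (with made-up values: a circle of circumference R = 10 carrying four points, and the function names bs, red, gs, add chosen only for readability), the distance map, baby step, reduction map, and the induced group operation on f-representations can be realized in a few lines; the wrap-around on the circle is just arithmetic modulo R:

```python
R = 10.0
d = {0: 0.0, 1: 2.0, 2: 3.0, 3: 7.0}   # an injective d : X -> R/RZ (toy values)
points = sorted(d.values())

def red(v):
    """red(v) = d^{-1}(v - f_v): nearest element of d(X) at or below v."""
    v %= R
    fv = min((v - p) % R for p in points)
    return next(x for x, dx in d.items() if dx == (v - fv) % R)

def bs(x):
    """Baby step: the element of X whose distance comes next on the circle."""
    fx = min((dy - d[x]) % R for y, dy in d.items() if y != x)
    return next(y for y, dy in d.items() if dy == (d[x] + fx) % R)

def gs(x, y):
    """Giant step: gs(x, y) = red(d(x) + d(y))."""
    return red(d[x] + d[y])

def add(xf, yg):
    """Group operation on f-representations: one giant step, then baby-step
    reductions; Psi((x, f)) = d(x) + f is preserved at every step."""
    (x, f), (y, g) = xf, yg
    v = (d[x] + d[y]) % R
    z = red(v)                        # giant step
    h = f + g + (v - d[z]) % R        # distance still to absorb
    while True:
        gap = (d[bs(z)] - d[z]) % R   # distance to the next point
        if h < gap:
            return (z, h)
        z, h = bs(z), h - gap
```

By construction, Ψ(add(a, b)) equals Ψ(a) + Ψ(b) modulo R, so add realizes the group operation pulled over from R/RZ using only giant steps and baby steps.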
In case one uses the reduction map r e d : R / R Z → X , v ↦ d − 1 ( v − inf { f ≥ 0 ∣ v − f ∈ d ( X ) } ) {\displaystyle red:\mathbb {R} /R\mathbb {Z} \to X,\;v\mapsto d^{-1}(v-\inf\{f\geq 0\mid v-f\in d(X)\})} , one obtains f R e p r e d = { ( x , f ) ∣ f ≥ 0 , ∀ f ′ ∈ ( 0 , f ] : d ( x ) + f ′ ∉ d ( X ) } {\displaystyle fRep_{red}=\{(x,f)\mid f\geq 0,\;\forall f'\in (0,f]:d(x)+f'\not \in d(X)\}} . Given ( x , f ) , ( x ′ , f ′ ) ∈ f R e p r e d {\displaystyle (x,f),(x',f')\in fRep_{red}} , one can consider ( x ″ , f ″ ) {\displaystyle (x'',f'')} with x ″ = g s ( x , x ′ ) {\displaystyle x''=gs(x,x')} and f ″ = f + f ′ + ( d ( x ) + d ( x ′ ) − d ( g s ( x , x ′ ) ) ) ≥ 0 {\displaystyle f''=f+f'+(d(x)+d(x')-d(gs(x,x')))\geq 0} ; this is in general not an element of f R e p r e d {\displaystyle fRep_{red}} , but one can reduce it as follows: one computes b s ( x ″ ) {\displaystyle bs(x'')} and f ″ − ( d ( b s ( x ″ ) ) − d ( x ″ ) ) {\displaystyle f''-(d(bs(x''))-d(x''))} ; in case the latter is not negative, one replaces ( x ″ , f ″ ) {\displaystyle (x'',f'')} with ( b s ( x ″ ) , f ″ − ( d ( b s ( x ″ ) ) − d ( x ″ ) ) ) {\displaystyle (bs(x''),f''-(d(bs(x''))-d(x'')))} and continues. If the value was negative, one has that ( x ″ , f ″ ) ∈ f R e p r e d {\displaystyle (x'',f'')\in fRep_{red}} and that Ψ f R e p r e d ( x , f ) + Ψ f R e p r e d ( x ′ , f ′ ) = Ψ f R e p r e d ( x ″ , f ″ ) {\displaystyle \Psi _{fRep_{red}}(x,f)+\Psi _{fRep_{red}}(x',f')=\Psi _{fRep_{red}}(x'',f'')} , i.e. ( x , f ) + ( x ′ , f ′ ) = ( x ″ , f ″ ) {\displaystyle (x,f)+(x',f')=(x'',f'')} .
== References ==
Wikipedia:Inga Berre#0
Inga Berre (born 31 July 1978) is a Norwegian applied mathematician who studies numerical methods for the partial differential equations used to model fractured geothermal systems and porous media more generally. She is a professor in the department of mathematics at the University of Bergen, a scientific advisor to the Chr. Michelsen Institute in Bergen, and a leading researcher on geothermal energy in Norway.
== Education and career ==
Berre earned a candidate degree in mathematics from the University of Bergen in 2001, and completed a doctorate (Dr. Sci.) in 2005. Her dissertation, Fast simulation of transport and adaptive permeability estimation in porous media, was jointly supervised by Helge Dahle, Knut-Andreas Lie, Trond Mannseth, and Kenneth Hvistendahl Karlsen. She joined the University of Bergen faculty as an associate professor in 2006, and was promoted to full professor in 2013. In 2018 she became chair of the Joint Programme Geothermal of the European Energy Research Alliance.
== Recognition ==
Berre is a member of the Norwegian Academy of Technological Sciences, elected in 2017. In 2021 she was elected a Council Member-at-Large of SIAM for a term running from January 1, 2022, to December 31, 2024.
== References ==
== External links ==
Inga Berre publications indexed by Google Scholar
Making Research Matter - interview with Inga Berre, University of Bergen, 9 September 2020
Interview with Inga Berre, Carina Bringedal, Univ. of Stuttgart, 11 November 2019
Wikipedia:Inge Henningsen#0
Inge Biehl Henningsen (14 April 1941 – 5 August 2024) was a Danish statistician, academic and writer. A researcher and lecturer at the universities of Copenhagen and Aarhus, she was also active in politics and women's rights, most recently in connection with the PISA approach to student assessment. As editor of the socialist journal Naturkampen in the 1980s, she covered subjects as varied as the management of cancer research and the European Union's approach to agriculture in the third world. == Biography == Born on 14 April 1941 in the Frederiksberg district of Copenhagen, Inge Biehl Henningsen was the daughter of the haberdasher Sven Aage Henningsen (1990–1991) and the correspondent Elisabeth Braunstein (1911–1996). After matriculating from Holte Gymnasium in 1959, she read statistics at Copenhagen University, graduating in 1966. Henningsen died on 5 August 2024, at the age of 83. == Career == On graduating, Henningsen joined the university's Institut for Matematisk Statistik where she taught and carried out research until 2007, becoming an associate professor in 1974. Her interest in politics started when as an undergraduate she joined the socialist Studentersamfundet (student society). From its establishment in 1967, she was active in the Left Socialists party ( Venstresocialisterne), together with her partner Steen Folke, becoming a member of the board and, from the late 1960s, editor of the party's journal Politisk Revy. While on a study trip to the United States (1969–1970), she became involved in the emerging New Women's Movement. On her return to Denmark, she promoted the new women's activities for the Left Socialists in the 1970s. From 1980 to 1991, she was editor of Naturkampen, the socialist organ for women who were critically involved in topics such as science and technology, participating in articles on cancer research, technology risk assessment, third world agriculture and AIDS. 
Both in her professional life and in her extramural activities, Henningsen became an effective communicator, interpreting statistical information for non-specialists and frequently criticizing how statistics were being misused. She became particularly active in the educational sphere, demonstrating, for example, how statistics revealed more limited choices for girls than for boys in regard to the applied sciences. She also collaborated with women researchers in sociology, politics and psychology, revealing how women did not enjoy the same opportunities as men in higher education and research. In this connection, in 1998 she became a member of the Gender Equality Research Foundation under the Ministry of Research. Latterly, on the basis of statistics, she published articles on the extent to which girls are disadvantaged in Denmark's educational environment, but she also pointed out that a fair proportion of boys are increasingly considered to be "losers" in the absence of effective vocational programmes and internships. Other topics she examined from a statistical viewpoint include bullying in the classroom (2009 & 2013), gender and educational choices (2008), and a critical examination of the results of the PISA reports (2008 & 2017).
Wikipedia:Ingeborg Seynsche#0
Martha Mechthild Ingeborg Seynsche (21 October 1905 in Barmen – 27 June 1994 in Göttingen) was a German mathematician. She was one of the first women to be allowed to earn a doctorate on a mathematical topic in Göttingen.
== Life and work ==
Her father Johannes Seynsche (1857–1925) was a professor and senior teacher at the Unterbarmer Higher Girls' School. Her mother was Anna Seynsche (1882–1943), née Limbach. Ingeborg passed her Abitur in Unterbarmen in 1924. She then studied in Marburg and Göttingen, and in 1929 passed the state examination for teachers in pure and applied mathematics and physics. She went on to become an assistant at the Mathematical Institute in Göttingen. In 1930, Seynsche received her doctorate in philosophy from the Georg-August University, now University of Göttingen. The topic of her dissertation with Richard Courant was: On the theory of almost periodic sequences of numbers (Zur Theorie der fastperiodischen Zahlfolgen). It was a topic from the theory of almost periodic functions suggested by her advisors Harald Bohr and Alwin Walther. Later she dealt, among other things, with the calculation of function tables (with Alwin Walther) and with two-sided surface ornaments. She also solved the queen problem for arbitrary n.
== Personal life ==
She married physicist Friedrich Hund (1896–1997) in Barmen on 17 March 1931. The family had six children: Gerhard Hund (1932–2024), Dietrich (1933–1939), Irmgard (b. 1934), Martin (1937–2018), Andreas (b. 1940) and Erwin (1941–2022). The chess grandmaster Barbara Hund (b. 1959) and chess player Isabel Hund (b. 1962) are her granddaughters. Ingeborg wrote many letters to her eldest son, Gerhard; those from the years before her death are of particular interest. Ingeborg Seynsche's final resting place is in the Munich Waldfriedhof, where her husband, her sister Gertrud and her son-in-law Dieter Pfirsch are also buried.
Wikipedia:Ingrid Daubechies#0
Baroness Ingrid Daubechies ( doh-bə-SHEE; French: [dobʃi]; born 17 August 1954) is a Belgian-American physicist and mathematician. She is best known for her work with wavelets in image compression. Daubechies is recognized for her study of the mathematical methods that enhance image-compression technology. She is a member of the National Academy of Engineering, the National Academy of Sciences and the American Academy of Arts and Sciences. She is a 1992 MacArthur Fellow. She also served on the Mathematical Sciences jury for the Infosys Prize from 2011 to 2013. The name Daubechies is widely associated with the orthogonal Daubechies wavelet and the biorthogonal CDF wavelet. A wavelet from this family of wavelets is now used in the JPEG 2000 standard. Her research involves the use of automatic methods from mathematics, technology, and biology to extract information from samples such as bones and teeth. She also developed sophisticated image processing techniques used to help establish the authenticity and age of some of the world's most famous works of art, including paintings by Vincent van Gogh and Rembrandt. Daubechies is on the board of directors of Enhancing Diversity in Graduate Education (EDGE), a program that helps women entering graduate studies in the mathematical sciences. She was the first woman to be president of the International Mathematical Union (2011–2014). She became a member of the Academia Europaea in 2015.
== Early life and education ==
Daubechies was born in Houthalen, Belgium, as the daughter of Simonne Duran (a criminologist) and Marcel Daubechies (a civil mining engineer). She remembers that when she was a little girl and could not sleep, she did not count numbers, as one would expect from a child, but started to multiply numbers by two from memory. Thus, as a child, she already familiarized herself with the properties of exponential growth.
Her parents found out that mathematical concepts, such as the cone and the tetrahedron, were familiar to her before she reached the age of six. She excelled at primary school and was moved up a grade after only three months. After completing the Lyceum in Hasselt, she entered the Vrije Universiteit Brussel at age 17. Daubechies completed her undergraduate studies in physics at the Vrije Universiteit Brussel in 1975. During the next few years, she visited the CNRS Center for Theoretical Physics in Marseille several times, where she collaborated with Alex Grossmann; this work was the basis for her doctorate in quantum mechanics. She obtained her PhD in theoretical physics in 1980 at the Vrije Universiteit Brussel. == Career == After completing her doctorate, Daubechies continued her research career at the Vrije Universiteit Brussel until 1987, rising through the ranks to positions roughly equivalent to research assistant professor in 1981 and research associate professor in 1985, funded by a fellowship from the NFWO (Nationaal Fonds voor Wetenschappelijk Onderzoek). Daubechies spent most of 1986 as a guest researcher at the Courant Institute of Mathematical Sciences in New York. At Courant she made her best-known discovery: based on quadrature mirror filter technology, she constructed compactly supported continuous wavelets that would require only a finite amount of processing, in this way enabling wavelet theory to enter the realm of digital signal processing. In July 1987, Daubechies joined Bell Laboratories in Murray Hill, New Jersey. In 1988, she published the result of her research on orthonormal bases of compactly supported wavelets in Communications on Pure and Applied Mathematics. In 1991, Daubechies was appointed as a professor at Rutgers University in New Brunswick, where she taught in their mathematics department. She remained there through 1994.
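Her construction can be illustrated with the simplest compactly supported member of the family, usually called D4 (or db2), whose four low-pass filter coefficients have a well-known closed form. The following check is only an illustrative sketch of the defining orthonormality conditions, not her original derivation:

```python
import math

# Closed-form scaling (low-pass) coefficients of the Daubechies D4 wavelet.
s3 = math.sqrt(3)
h = [(1 + s3) / (4 * math.sqrt(2)),
     (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)),
     (1 - s3) / (4 * math.sqrt(2))]

# Conditions that make the filter orthonormal with compact support:
assert abs(sum(h) - math.sqrt(2)) < 1e-12        # low-pass normalization
assert abs(sum(c * c for c in h) - 1.0) < 1e-12  # unit energy
assert abs(h[0] * h[2] + h[1] * h[3]) < 1e-12    # orthogonal to its own double shift
```

Because the filter has only four taps, a discrete wavelet transform built from it needs only a finite amount of processing per sample, which is what made the construction practical for digital signal processing.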
Daubechies moved to Princeton University in 1994, where she was active within the program in applied and computational mathematics. In 2004, she was named as the William R. Kenan, Jr. Professor there. She was the first woman to become a full professor of mathematics at Princeton. In January 2011, Daubechies moved to Duke University to serve as the James B. Duke Professor in the department of mathematics and electrical and computer engineering. In 2016, she and Heekyoung Hahn founded the Duke Summer Workshop in Mathematics (SWIM) for female rising high school seniors. In 2020 and 2021, Daubechies, along with fiber artist Dominique Ehrmann, led a team of mathematicians and artists who collectively built the touring art and math installation known as Mathemalchemy. == Mathematical skills applied to fine art == Daubechies has used mathematical techniques on multiple art restoration projects. Her team worked on restoring the Ghent Altarpiece, a massive fifteenth-century work of art consisting of 12 panels that are attributed to the brothers Hubert and Jan van Eyck. Daubechies and several colleagues developed new mathematical techniques to both reverse the effects of aging upon the artworks and untangle and remove the effects of past ill-fated conservation efforts. Using highly precise photographs and X-rays of the panels as well as various filtering methods, the team of mathematicians found an automatic way to detect the cracks caused by aging. They also were able to decipher the apparent text of the polyptych, which was attributed to Thomas Aquinas. Daubechies and her collaborators also contributed to the restoration of the fourteenth-century Saint John Altarpiece by Francescuccio Ghissi in the North Carolina Museum of Art, applying some of the techniques they discovered working on the Ghent Altarpiece restoration. With this project the mathematicians used machine-learning algorithms to separate features.
== Awards and honors == Daubechies received the Louis Empain Prize for Physics in 1984. It is awarded once every five years to a Belgian scientist on the basis of work done before the age of 29. In 1992, she was awarded a MacArthur Fellowship and in 1993, she was elected to the American Academy of Arts and Sciences. In 1994, she received the American Mathematical Society Steele Prize for Exposition for her book, Ten Lectures on Wavelets, and was invited to give a plenary lecture at the International Congress of Mathematicians in Zurich. In 1997, she was awarded the AMS Ruth Lyttle Satter prize. In 1998, she was elected to the United States National Academy of Sciences and won the Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society. She became a foreign member of the Royal Netherlands Academy of Arts and Sciences in 1999. In 2000, Daubechies became the first woman to receive the National Academy of Sciences Award in Mathematics, presented every four years for excellence in published mathematical research. The award honored her "for fundamental discoveries on wavelets and wavelet expansions and for her role in making wavelets methods a practical basic tool of applied mathematics". That year she was also awarded the Basic Research Award of the German Eduard Rhein Foundation. In 2003, Daubechies was elected to the American Philosophical Society. In January 2005, Daubechies became the third woman since 1924 to give the Josiah Willard Gibbs Lecture sponsored by the American Mathematical Society. Her talk was on "The Interplay Between Analysis and Algorithm". Daubechies was the 2006 Emmy Noether Lecturer at the San Antonio Joint Mathematics Meetings. In September 2006, the Pioneer Prize from the International Council for Industrial and Applied Mathematics was awarded jointly to Daubechies and Heinz Engl.
In 2010, she was awarded an honorary doctorate by The Norwegian University of Science and Technology (NTNU). In 2011, Daubechies was the SIAM John von Neumann Lecturer, and was awarded the IEEE Jack S. Kilby Signal Processing Medal, the Leroy P. Steele Prize for Seminal Contribution to Research from the American Mathematical Society, and the Benjamin Franklin Medal in Electrical Engineering from the Franklin Institute. In 2012, King Albert II of Belgium granted Daubechies the title of Baroness. She also won the 2012 Nemmers Prize in Mathematics awarded by Northwestern University, and the 2012 BBVA Foundation Frontiers of Knowledge Award in the Basic Sciences category (jointly with David Mumford). Daubechies gave the Gauss Lecture of the German Mathematical Society in 2015. The Simons Foundation, a private foundation based in New York City that funds research in mathematics and the basic sciences, gave Daubechies the Math + X Investigator award, which provides money to professors at American and Canadian universities to encourage new partnerships between mathematicians and researchers in other fields of science. She was the one to suggest to Simons that the foundation should fund better mechanisms for interpreting existing data, rather than new research. Also in 2015, Daubechies was elected a member of the National Academy of Engineering for "contributions to the mathematics and applications of wavelets". In 2018, Daubechies won the William Benter Prize in Applied Mathematics from City University of Hong Kong (CityU). She is the first woman to be the recipient of the award. Prize officials cited the pioneering work of Daubechies in wavelet theory and her "exceptional contributions to a wide spectrum of scientific and mathematical subjects" and noted that "her work in enabling the mobile smartphone revolution is truly symbolic of the era". Also in 2018, Daubechies was awarded the Fudan-Zhongzhi Science Award ($440,000) for her work on wavelets. 
She is part of the 2019 class of fellows of the Association for Women in Mathematics. Daubechies was named the North American Laureate of the 2019 L'Oréal-UNESCO International Award For Women in Science. Awarded annually since 1998, the worldwide award recognizes five outstanding women in chemistry, physics, materials science, mathematics, and computer science. Also in 2019, she became a member of the German Academy of Sciences Leopoldina. Daubechies received the Princess of Asturias Award for Technical and Scientific Research in 2020. In 2023, she was awarded the Wolf Prize in Mathematics "for work in wavelet theory and applied harmonic analysis". She was the first woman to receive this award. In 2024, Daubechies received an honorary Doctor of Sciences from the University of Pennsylvania and an honorary degree from Amherst College. Daubechies was awarded the Bakerian Medal and Lecture 2025 for her work on wavelets and image compression and her exceptional contributions to a wide spectrum of physical, technological, and mathematical applications. In January 2025, Daubechies was a recipient of the National Medal of Science. == Personal life == In 1985, Daubechies met mathematician Robert Calderbank when he was on a three-month exchange visit from Bell Laboratories in Murray Hill, New Jersey, to the Brussels-based mathematics division of Philips Research. They married in 1987. They have two children, Michael Calderbank and Carolyn Calderbank. == Publications == Ten Lectures on Wavelets. Philadelphia: SIAM. 1992. ISBN 0-89871-274-2. Orthonormal bases of compactly supported wavelets, 1988, Wiley Periodicals, Inc. Journal: Communications on Pure and Applied Mathematics, Volume 41, Issue 7. D. Aerts and I. Daubechies, A connection between propositional systems in Hilbert spaces and von Neumann algebras, Helv. Phys. Acta, 52, pp. 184–199, 1979. D. Aerts and I. Daubechies, A characterization of subsystems in physics, Lett. Math. Phys., 3 (1), pp. 11–17, 1979.
Iteratively reweighted least squares minimization for sparse recovery, 2009, Wiley Periodicals, Inc. Journal: Communications on Pure and Applied Mathematics, Volume 63, Issue 1. A. Cohen, I. Daubechies, and A. Ron, How smooth is the smoothest function in a given refinable space?, Appl. Comp. Harm. Anal., 3 (1), pp. 87–89, 1996. I. Daubechies, S. Jaffard, and J.L. Journe, A simple Wilson orthonormal basis with exponential decay, SIAM J. Math. Anal., 22 (2), pp. 554–572, 1991. == Applications == Image compression Digital cinema Digital art restoration Biological morphology == References == === Citations === === Attribution === This article incorporates material from Ingrid Daubechies on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. == External links == Ingrid Daubechies at the Mathematics Genealogy Project I. Daubechies, A Different Way to Look at Subband Coding, NJIT Symposium on Multi-Resolution Signal Decomposition Techniques: Wavelets, Subbands and Transforms, April 1990. An Interview with Ingrid Daubechies in the Girls' Angle Bulletin, volume 1, number 6 and volume 2, numbers 1 through 4. "Ingrid Daubechies", Biographies of Women Mathematicians, Agnes Scott College O'Connor, John J.; Robertson, Edmund F., "Ingrid Daubechies", MacTutor History of Mathematics Archive, University of St Andrews Ingrid Daubechies' homepage at Duke University
Wikipedia:Initial value theorem#0
In mathematical analysis, the initial value theorem is a theorem used to relate frequency domain expressions to the time domain behavior as time approaches zero. Let F ( s ) = ∫ 0 ∞ f ( t ) e − s t d t {\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt} be the (one-sided) Laplace transform of f(t). If f {\displaystyle f} is bounded on ( 0 , ∞ ) {\displaystyle (0,\infty )} (or if just f ( t ) = O ( e c t ) {\displaystyle f(t)=O(e^{ct})} ) and lim t → 0 + f ( t ) {\displaystyle \lim _{t\to 0^{+}}f(t)} exists, then the initial value theorem says lim t → 0 f ( t ) = lim s → ∞ s F ( s ) . {\displaystyle \lim _{t\,\to \,0}f(t)=\lim _{s\to \infty }{sF(s)}.} == Proofs == === Proof using dominated convergence theorem and assuming that function is bounded === Suppose first that f {\displaystyle f} is bounded and that lim t → 0 + f ( t ) = α {\displaystyle \lim _{t\to 0^{+}}f(t)=\alpha } . A change of variable in the integral ∫ 0 ∞ f ( t ) e − s t d t {\displaystyle \int _{0}^{\infty }f(t)e^{-st}\,dt} shows that s F ( s ) = ∫ 0 ∞ f ( t s ) e − t d t {\displaystyle sF(s)=\int _{0}^{\infty }f\left({\frac {t}{s}}\right)e^{-t}\,dt} . Since f {\displaystyle f} is bounded, the Dominated Convergence Theorem implies that lim s → ∞ s F ( s ) = ∫ 0 ∞ α e − t d t = α . {\displaystyle \lim _{s\to \infty }sF(s)=\int _{0}^{\infty }\alpha e^{-t}\,dt=\alpha .} === Proof using elementary calculus and assuming that function is bounded === The dominated convergence theorem is not strictly needed here; one can give a very simple proof using only elementary calculus: Start by choosing A {\displaystyle A} so that ∫ A ∞ e − t d t < ϵ {\displaystyle \int _{A}^{\infty }e^{-t}\,dt<\epsilon } , and then note that lim s → ∞ f ( t s ) = α {\displaystyle \lim _{s\to \infty }f\left({\frac {t}{s}}\right)=\alpha } uniformly for t ∈ ( 0 , A ] {\displaystyle t\in (0,A]} .
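As a numerical sanity check of the theorem, one can use a known transform pair from standard Laplace tables, f(t) = cos t with F(s) = s/(s² + 1); the sketch below merely evaluates sF(s) for growing s:

```python
import math

f = math.cos                      # f(t) = cos(t), so f(0+) = 1
F = lambda s: s / (s ** 2 + 1)    # its one-sided Laplace transform

# The theorem predicts s*F(s) = s^2/(s^2 + 1) -> f(0+) = 1 as s -> infinity.
for s in (1e1, 1e3, 1e6):
    print(s, s * F(s))

assert abs(1e6 * F(1e6) - f(0.0)) < 1e-9
```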
=== Generalizing to non-bounded functions that have exponential order === The theorem assuming just that f ( t ) = O ( e c t ) {\displaystyle f(t)=O(e^{ct})} follows from the theorem for bounded f {\displaystyle f} : Define g ( t ) = e − c t f ( t ) {\displaystyle g(t)=e^{-ct}f(t)} . Then g {\displaystyle g} is bounded, so by the bounded case g ( 0 + ) = lim s → ∞ s G ( s ) {\displaystyle g(0^{+})=\lim _{s\to \infty }sG(s)} . But f ( 0 + ) = g ( 0 + ) {\displaystyle f(0^{+})=g(0^{+})} and G ( s ) = F ( s + c ) {\displaystyle G(s)=F(s+c)} , so lim s → ∞ s F ( s ) = lim s → ∞ ( s − c ) F ( s ) = lim s → ∞ s F ( s + c ) = lim s → ∞ s G ( s ) , {\displaystyle \lim _{s\to \infty }sF(s)=\lim _{s\to \infty }(s-c)F(s)=\lim _{s\to \infty }sF(s+c)=\lim _{s\to \infty }sG(s),} since lim s → ∞ F ( s ) = 0 {\displaystyle \lim _{s\to \infty }F(s)=0} . == See also == Final value theorem == Notes ==
Wikipedia:Injective function#0
In mathematics, an injective function (also known as an injection, or a one-to-one function) is a function f that maps distinct elements of its domain to distinct elements of its codomain; that is, x1 ≠ x2 implies f(x1) ≠ f(x2) (equivalently by contraposition, f(x1) = f(x2) implies x1 = x2). In other words, every element of the function's codomain is the image of at most one element of its domain. (There may be some elements in the codomain that are not mapped from elements in the domain.) The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions, which are functions such that each element in the codomain is an image of exactly one element in the domain. A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and in particular for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem that the two notions are equivalent for algebraic structures; see Homomorphism § Monomorphism for more details. A function f {\displaystyle f} that is not injective is sometimes called many-to-one. == Definition == Let f {\displaystyle f} be a function whose domain is a set X . {\displaystyle X.} The function f {\displaystyle f} is said to be injective provided that for all a {\displaystyle a} and b {\displaystyle b} in X , {\displaystyle X,} if f ( a ) = f ( b ) , {\displaystyle f(a)=f(b),} then a = b {\displaystyle a=b} ; that is, f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} implies a = b . {\displaystyle a=b.} Equivalently, if a ≠ b , {\displaystyle a\neq b,} then f ( a ) ≠ f ( b ) {\displaystyle f(a)\neq f(b)} in the contrapositive statement.
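For a function on a finite domain, this definition can be checked directly by scanning for a repeated image. A minimal sketch (the helper name `is_injective` is illustrative, not standard notation):

```python
def is_injective(f, domain):
    """Return True iff f(a) = f(b) implies a = b on the given finite domain."""
    seen = {}
    for a in domain:
        y = f(a)
        if y in seen and seen[y] != a:
            return False   # two distinct elements share an image
        seen[y] = a
    return True

assert is_injective(lambda x: 2 * x + 1, range(-10, 11))   # injective
assert not is_injective(lambda x: x * x, range(-10, 11))   # g(1) = g(-1)
```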
Symbolically, ∀ a , b ∈ X , f ( a ) = f ( b ) ⇒ a = b , {\displaystyle \forall a,b\in X,\;\;f(a)=f(b)\Rightarrow a=b,} which is logically equivalent to the contrapositive, ∀ a , b ∈ X , a ≠ b ⇒ f ( a ) ≠ f ( b ) . {\displaystyle \forall a,b\in X,\;\;a\neq b\Rightarrow f(a)\neq f(b).} An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example, f : A ↣ B {\displaystyle f:A\rightarrowtail B} or f : A ↪ B {\displaystyle f:A\hookrightarrow B} ), although some authors specifically reserve ↪ for an inclusion map. == Examples == For visual examples, readers are directed to the gallery section. For any set X {\displaystyle X} and any subset S ⊆ X , {\displaystyle S\subseteq X,} the inclusion map S → X {\displaystyle S\to X} (which sends any element s ∈ S {\displaystyle s\in S} to itself) is injective. In particular, the identity function X → X {\displaystyle X\to X} is always injective (and in fact bijective). If the domain of a function is the empty set, then the function is the empty function, which is injective. If the domain of a function has one element (that is, it is a singleton set), then the function is always injective. The function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by f ( x ) = 2 x + 1 {\displaystyle f(x)=2x+1} is injective. The function g : R → R {\displaystyle g:\mathbb {R} \to \mathbb {R} } defined by g ( x ) = x 2 {\displaystyle g(x)=x^{2}} is not injective, because (for example) g ( 1 ) = 1 = g ( − 1 ) . {\displaystyle g(1)=1=g(-1).} However, if g {\displaystyle g} is redefined so that its domain is the non-negative real numbers [0,+∞), then g {\displaystyle g} is injective. The exponential function exp : R → R {\displaystyle \exp :\mathbb {R} \to \mathbb {R} } defined by exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} is injective (but not surjective, as no real value maps to a negative number). 
The natural logarithm function ln : ( 0 , ∞ ) → R {\displaystyle \ln :(0,\infty )\to \mathbb {R} } defined by x ↦ ln ⁡ x {\displaystyle x\mapsto \ln x} is injective. The function g : R → R {\displaystyle g:\mathbb {R} \to \mathbb {R} } defined by g ( x ) = x n − x {\displaystyle g(x)=x^{n}-x} is not injective, since, for example, g ( 0 ) = g ( 1 ) = 0. {\displaystyle g(0)=g(1)=0.} More generally, when X {\displaystyle X} and Y {\displaystyle Y} are both the real line R , {\displaystyle \mathbb {R} ,} then an injective function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test. == Injections can be undone == Functions with left inverses are always injections. That is, given f : X → Y , {\displaystyle f:X\to Y,} if there is a function g : Y → X {\displaystyle g:Y\to X} such that for every x ∈ X {\displaystyle x\in X} , g ( f ( x ) ) = x {\displaystyle g(f(x))=x} , then f {\displaystyle f} is injective. In this case, g {\displaystyle g} is called a retraction of f . {\displaystyle f.} Conversely, f {\displaystyle f} is called a section of g . {\displaystyle g.} Conversely, every injection f {\displaystyle f} with a non-empty domain has a left inverse g {\displaystyle g} . It can be defined by choosing an element a {\displaystyle a} in the domain of f {\displaystyle f} and setting g ( y ) {\displaystyle g(y)} to the unique element of the pre-image f − 1 [ y ] {\displaystyle f^{-1}[y]} (if it is non-empty) or to a {\displaystyle a} (otherwise). The left inverse g {\displaystyle g} is not necessarily an inverse of f , {\displaystyle f,} because the composition in the other order, f ∘ g , {\displaystyle f\circ g,} may differ from the identity on Y . {\displaystyle Y.} In other words, an injective function can be "reversed" by a left inverse, but is not necessarily invertible, which requires that the function is bijective. 
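The construction of a left inverse described above can be sketched for finite sets (a hypothetical helper, assuming f is injective and its domain is non-empty):

```python
def left_inverse(f, domain):
    """Return g with g(f(x)) == x for every x in the (finite, non-empty) domain."""
    a = next(iter(domain))                 # fallback element for points outside the image
    preimage = {f(x): x for x in domain}   # well defined precisely because f is injective
    return lambda y: preimage.get(y, a)

f = lambda x: 2 * x + 1                    # injective
g = left_inverse(f, [0, 1, 2, 3])
assert all(g(f(x)) == x for x in [0, 1, 2, 3])
assert g(100) == 0                         # outside the image: g falls back to 'a'
# Note f(g(2)) == 1 != 2, so g is only a left inverse, not a two-sided inverse.
```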
== Injections may be made invertible == In fact, to turn an injective function f : X → Y {\displaystyle f:X\to Y} into a bijective (hence invertible) function, it suffices to replace its codomain Y {\displaystyle Y} by its actual image J = f ( X ) . {\displaystyle J=f(X).} That is, let g : X → J {\displaystyle g:X\to J} such that g ( x ) = f ( x ) {\displaystyle g(x)=f(x)} for all x ∈ X {\displaystyle x\in X} ; then g {\displaystyle g} is bijective. Indeed, f {\displaystyle f} can be factored as In J , Y ∘ g , {\displaystyle \operatorname {In} _{J,Y}\circ g,} where In J , Y {\displaystyle \operatorname {In} _{J,Y}} is the inclusion function from J {\displaystyle J} into Y . {\displaystyle Y.} More generally, injective partial functions are called partial bijections. == Other properties == If f {\displaystyle f} and g {\displaystyle g} are both injective then f ∘ g {\displaystyle f\circ g} is injective. If g ∘ f {\displaystyle g\circ f} is injective, then f {\displaystyle f} is injective (but g {\displaystyle g} need not be). f : X → Y {\displaystyle f:X\to Y} is injective if and only if, given any functions g , {\displaystyle g,} h : W → X {\displaystyle h:W\to X} whenever f ∘ g = f ∘ h , {\displaystyle f\circ g=f\circ h,} then g = h . {\displaystyle g=h.} In other words, injective functions are precisely the monomorphisms in the category Set of sets. If f : X → Y {\displaystyle f:X\to Y} is injective and A {\displaystyle A} is a subset of X , {\displaystyle X,} then f − 1 ( f ( A ) ) = A . {\displaystyle f^{-1}(f(A))=A.} Thus, A {\displaystyle A} can be recovered from its image f ( A ) . {\displaystyle f(A).} If f : X → Y {\displaystyle f:X\to Y} is injective and A {\displaystyle A} and B {\displaystyle B} are both subsets of X , {\displaystyle X,} then f ( A ∩ B ) = f ( A ) ∩ f ( B ) . 
{\displaystyle f(A\cap B)=f(A)\cap f(B).} Every function h : W → Y {\displaystyle h:W\to Y} can be decomposed as h = f ∘ g {\displaystyle h=f\circ g} for a suitable injection f {\displaystyle f} and surjection g . {\displaystyle g.} This decomposition is unique up to isomorphism, and f {\displaystyle f} may be thought of as the inclusion function of the range h ( W ) {\displaystyle h(W)} of h {\displaystyle h} as a subset of the codomain Y {\displaystyle Y} of h . {\displaystyle h.} If f : X → Y {\displaystyle f:X\to Y} is an injective function, then Y {\displaystyle Y} has at least as many elements as X , {\displaystyle X,} in the sense of cardinal numbers. In particular, if, in addition, there is an injection from Y {\displaystyle Y} to X , {\displaystyle X,} then X {\displaystyle X} and Y {\displaystyle Y} have the same cardinal number. (This is known as the Cantor–Bernstein–Schroeder theorem.) If both X {\displaystyle X} and Y {\displaystyle Y} are finite with the same number of elements, then f : X → Y {\displaystyle f:X\to Y} is injective if and only if f {\displaystyle f} is surjective (in which case f {\displaystyle f} is bijective). An injective function which is a homomorphism between two algebraic structures is an embedding. Unlike surjectivity, which is a relation between the graph of a function and its codomain, injectivity is a property of the graph of the function alone; that is, whether a function f {\displaystyle f} is injective can be decided by only considering the graph (and not the codomain) of f . {\displaystyle f.} == Proving that functions are injective == A proof that a function f {\displaystyle f} is injective depends on how the function is presented and what properties the function holds. For functions that are given by some formula there is a basic idea. We use the definition of injectivity, namely that if f ( x ) = f ( y ) , {\displaystyle f(x)=f(y),} then x = y . 
{\displaystyle x=y.} Here is an example: f ( x ) = 2 x + 3 {\displaystyle f(x)=2x+3} Proof: Let f : X → Y . {\displaystyle f:X\to Y.} Suppose f ( x ) = f ( y ) . {\displaystyle f(x)=f(y).} So 2 x + 3 = 2 y + 3 {\displaystyle 2x+3=2y+3} implies 2 x = 2 y , {\displaystyle 2x=2y,} which implies x = y . {\displaystyle x=y.} Therefore, it follows from the definition that f {\displaystyle f} is injective. There are multiple other methods of proving that a function is injective. For example, in calculus if f {\displaystyle f} is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, if f {\displaystyle f} is a linear transformation it is sufficient to show that the kernel of f {\displaystyle f} contains only the zero vector. If f {\displaystyle f} is a function with finite domain it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list. A graphical approach for a real-valued function f {\displaystyle f} of a real variable x {\displaystyle x} is the horizontal line test. If every horizontal line intersects the curve of f ( x ) {\displaystyle f(x)} in at most one point, then f {\displaystyle f} is injective or one-to-one. == Gallery == == See also == Bijection, injection and surjection – Properties of mathematical functions Injective metric space – Type of metric space Monotonic function – Order-preserving mathematical function Univalent function – Mathematical concept == Notes == == References == Bartle, Robert G. (1976), The Elements of Real Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-05464-1, p. 17 ff. Halmos, Paul R. (1974), Naive Set Theory, New York: Springer, ISBN 978-0-387-90092-6, p. 38 ff. == External links == Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms. 
Khan Academy – Surjective (onto) and Injective (one-to-one) functions: Introduction to surjective and injective functions
Wikipedia:Institute of Mathematical Logic and Fundamental Research#0
Heinrich Scholz (German: [ʃɔlts]; 17 December 1884 – 30 December 1956) was a German logician, philosopher, and Protestant theologian. He was a contemporary of Alan Turing, who alluded to Scholz when writing with regard to the reception of "On Computable Numbers, with an Application to the Entscheidungsproblem": "I have had two letters asking for reprints, one from Braithwaite at King's and one from a proffessor [sic] in Germany... They seemed very much interested in the paper. [...] I was disappointed by its reception here." Scholz had an extraordinary career (he was considered an outstanding scientist of national importance) but was not considered a brilliant logician on the level of, for example, Gottlob Frege or Rudolf Carnap. He provided a suitable academic environment for his students to thrive. He founded the Institute of Mathematical Logic and Fundamental Research at the University of Münster in 1936, which arguably enabled the study of logic there at the highest international level from after World War II up until the present day. == Personal life == Scholz's father was a Protestant minister at St. Mary's Church, Berlin. From 1903 to 1907 he studied philosophy and theology at Erlangen University and Berlin University, achieving a Licentiate in theology (Lic. theol.). He was a student of Adolf von Harnack in theology, and of Alois Riehl and Friedrich Paulsen in philosophy. On 28 July 1910, Scholz habilitated in the subjects of religious philosophy and systematic theology in Berlin, where he worked as a lecturer and was given the title of professor. In 1913, at Erlangen, Scholz took his examination for the degree of Dr. phil. under Richard Falckenberg, studying the work of Schleiermacher and Goethe with a thesis titled Schleiermacher und Goethe. Ein Beitrag zur Geschichte des deutschen Geistes (Schleiermacher and Goethe: A Contribution to the History of the German Spirit). In 1917 he was appointed to the chair of Philosophy of Religion at Breslau University, succeeding Rudolf Otto, to teach religious philosophy and systematic theology.
In the same year he married his fiancée, Elisabeth Orth. Owing to eight years of continuous gastric trouble, he was exempted from military service. In 1919, he underwent an operation in which, he believed, a large part of his stomach was removed. That year he took the call to Kiel University as the chair of philosophy. It was while at Kiel, in 1924, that Scholz's first wife, Elisabeth Orth, died. From October 1928 onwards, he taught at Münster University, first as Professor of Philosophy. In 1938, this was changed to Professor of Philosophy of Mathematics and Science, and again in 1943 to Chair of Mathematical Logic and Fundamental Questions in Mathematics, working as head of the Institute for Mathematical Logic and Fundamental Research at Münster until he retired in 1952 as professor emeritus. Scholz was survived by his second wife, Erna. Scholz's grave is located in the Park Cemetery Eichhof near Kiel. == Career == By his own account, in 1921, having by accident come across Principia Mathematica by Bertrand Russell and Alfred North Whitehead, he began studying logic, which he had abandoned in his youth to study theology; this led later to a study of mathematics and theoretical physics through an undergraduate degree at Kiel. Another factor in his change of focus, however, was the mathematician Otto Toeplitz. Toeplitz's broad research interests, including Hilbert spaces and spectral theory, encouraged Scholz's interest in mathematics. Indeed, Segal suggests that Scholz's love of structure was also an important factor in his move into mathematical logic, describing it thus: Scholz's feeling for structure was no small thing. He apparently felt that when having guests for dinner: (1) no more than six people should be invited; (2) there must be an excellent menu; (3) a discussion theme must be planned; and (4) the guests should have prepared themselves as much as possible beforehand on this theme.
In 1925, he became a colleague of Karl Barth at Münster University, where Barth taught Protestant theology. Under the influence of conversations with Scholz, Barth later wrote, in 1930/31, his book on Anselm of Canterbury's proof of God, Fides quaerens intellectum. In the 1930s, he maintained contact with Alan Turing, who later – in a letter home dated 22 February 1937 – wrote with regard to the reception of his article "On Computable Numbers, with an Application to the Entscheidungsproblem": I have had two letters asking for reprints, one from Braithwaite at King's and one from a proffessor [sic] in Germany... They seemed very much interested in the paper. I think possibly it is making a certain amount of impression. I was disappointed by its reception here. I expected Weyl who had done some work connected quite closely with it some years ago at least to have made a few remarks about it. At the University of Münster, his study of mathematical logic and foundational research provided many of the critical insights that contributed to the foundations of theoretical computer science. Right from the time he arrived at Münster, Scholz worked towards building a school of mathematical logic. By 1935, his research team at Münster was being referred to as the Münster school of mathematical logic. Scholz himself named 1936 as the year the Münster School was born. His professorship was rededicated in 1936 to a lectureship for mathematical logic and fundamental research, and in 1943 to the first chair in Germany for mathematical logic and fundamental research. The Münster chair is still regarded as one of the best in Germany. Scholz was considered a Platonist, and in that sense he regarded mathematical logic as the foundation of knowledge. In 1936 he was awarded a grant from the DFG for the production of three volumes of research in logic and for the editing of the Gottlob Frege papers. He is considered the discoverer of the estate of Gottlob Frege.
Gisbert Hasenjaeger, whose thesis had been supervised by Scholz, produced the book Grundzüge der mathematischen Logik (Fundamentals of Mathematical Logic) in 1961, jointly authored with Scholz despite being published five years after Scholz's death. === Work during World War II === Initially Scholz was pleased with the rise of Nazi power in Germany. A self-described conservative nationalist ("We felt like Prussians right to the bone"), he was described by his friend Heinrich Behnke as a "small-minded Prussian nationalist"; Behnke found discussing political issues with him difficult. In the beginning, the Nazi laws helped establish Münster as an important centre for logic, as the staff at Göttingen and Berlin Universities were being purged. On 14 March 1940, Scholz sent a letter to the education department of occupied Poland, seeking the release of Jan Salamucha, who had been professor of theology at Kraków University and had been sent to Sachsenhausen concentration camp in 1940. In October 1940, Scholz received a reply from the education minister which stated he had "injured the national honour" and was forbidden to send further petitions. Salamucha was later released but was killed by the Nazis in 1944. However, Scholz persisted, first helping Alfred Tarski, who had fled Poland to the United States, to correspond with his wife who remained in Poland, and later helping the Polish logician Jan Łukasiewicz, with whom he had corresponded since 1928, to leave Poland with his wife and hide in Germany. Although Scholz recognized the true nature of the Nazis and abhorred them from mid-1942 onwards, he remained on good terms with Nazi academics like Ludwig Bieberbach. During the period of National Socialism, Max Steck, who championed the Deutsche Mathematik movement, which rejected the formalist approach to mathematics, deeply opposed Hilbert's approach, which he described as Jewish – the worst possible insult in Germany at this time.
Max Steck acknowledged the "per se outstanding achievement of formalism" ("an sich betrachtet einmaligen Leistung des Formalismus"), but criticized the "missing epistemological component" ("Jede eigentliche Erkenntnistheorie fehlt im Formalismus"), and on the only page of his main work where he connects formalism and Jews he mentions that "Jews were the actual trendsetters of formalism" ("die eigentlichen Schrittmacher des Formalismus"). In response, Bieberbach asked Scholz to write an article for Deutsche Mathematik answering Steck's attacks on mathematical formalism – a surprising request, since Bieberbach led the Nazi mathematicians' attack on "Jewish mathematics". Taking care that Hilbert should not be considered "Jewish", Scholz wrote "What does formalised study of the foundations of mathematics aim at?" Scholz had received funding from Bieberbach as early as 1937, which prompted an annoyed Steck to write in his 1942 book: What Scholz has understood is doubtless this, to obtain from the German State huge amounts of publication money for this logic production. We fundamentally reject this logic which praises the English empiricists and sensory philosophers such as the Englishmen Locke, Berkeley, Hume, and by now find it really time to speak for once about the "Great Germans". There were three other articles by Heinrich Scholz in the journal Deutsche Mathematik: Ein neuer Vollständigkeitsbeweis für das reduzierte Fregesche Axiomensystem des Aussagenkalküls (1936), a review of the Nazi philosopher Wolfgang Cramer's book Das Problem der reinen Anschauung (1938) and a review of Andreas Speiser's Ein Parmenideskommentar (1938). 
== World's first computer science seminar == In the late 2000s, Achim Clausing was tasked with going through the remaining estate of Scholz at Münster University. While sorting the archive papers in the basement of the Institute of Computer Science, Clausing discovered two original prints of the most important publications of Alan Turing, which had been missing since 1945: the work "On Computable Numbers, with an Application to the Entscheidungsproblem" from 1936, which Scholz had requested, together with a postcard from Turing. On the basis of Turing's paper, Scholz held what Clausing described as "the world's first seminar on computer science." The second work, an article in the journal Mind, dates from 1950 and is a treatise on the development of artificial intelligence; Turing had provided it with a handwritten comment: "This is probably my last copy." At Sotheby's, comparable prints of Turing's, with no attached dedication, recently sold for 180,000 euros. == Bibliography == Christianity and Science in Schleiermacher's Doctrine of the Faith, 1909 Belief and unbelief in world history. A Response to Augustine's De Civitate Dei, 1911 Schleiermacher und Goethe. Ein Beitrag zur Geschichte des deutschen Geistes [Schleiermacher and Goethe. A Contribution to the History of the German Spirit] (Dissertation) (in German), Leipzig: J. C. Hinrichs, 1913 Idealism as a carrier of the war thought. Friedrich Andreas Perthes, Gotha, 1915. Perthes' writings on the World War, Volume 3 Politics and morality. An investigation of the moral character of modern realpolitik. Friedrich Andreas Perthes, Gotha, 1915. Perthes' writings on the World War, Volume 6 The war and Christianity. Friedrich Andreas Perthes, Gotha, 1915. Perthes' writings on the World War, Volume 7 The essence of the German spirit. Grote'sche Verlagsbuchhandlung, Berlin, 1917. The idea of immortality as a philosophical problem, 1920 Philosophy of religion. 
Reuther & Reichard, Berlin, 1921, 2nd revised edition, 1922. On The 'Decline' of the West. A dispute with Oswald Spengler. Reuther & Reichard, Berlin; 2nd revised and supplemented edition, 1921. The religious philosophy of the as-if. A review of Kant and the idealistic positivism, 1921 The importance of Hegel's philosophy for philosophers of the present day. Reuther & Reichard, Berlin, 1921 The legacy of Kant's doctrine of space and time, 1924 The Basics of Greek Mathematics, 1928, with Helmut Hasse Eros and Caritas. The platonic love and the love within the meaning of Christianity, 1929 History of logic. Junker and Dünnhaupt, Berlin 1931 (reissued 1959 as Outline of the History of Logic, Alber, Freiburg im Breisgau) Goethe's attitude to the question of immortality, 1934 The new logistic logic and science teaching. In: Research and progress, Volume 11, 1935. The classical and modern logic. In: Sheets for German Philosophy, Volume 10, 1937, pp. 254–281. Fragments of a Platonist. Staufen, Cologne, undated (1940). Metaphysics as a rigorous science. Staufen, Cologne 1941. A new form of basic research. In: Research and progress, No. 35/36, 1941, pp. 382ff. Logic, grammar, metaphysics. In: Archives of philosophy, Volume 1, 1947, pp. 39–80. Encounter with Nietzsche. Furche, Tübingen 1948. Principles of mathematical logic (with Gisbert Hasenjaeger). Berlin, Göttingen 1961 Mathesis universalis. Essays on philosophy as rigorous science, edited by Hans Hermes, Friedrich Kambartel and Joachim Ritter, University Press, Darmstadt 1961. Leibniz and mathematical foundational research. In: Annual report of the German Mathematical Society, 1943 === Papers === Fichte und Napoleon. In: Preußische Jahrbücher (in German), Volume 152, 1913, pp. 1–12. The religious philosophy of the as-if. In: Annals of Philosophy, Vol. 1, 1919, pp. 27–113 The religious philosophy of the as-if. In: Annals of Philosophy, Vol. 3, No. 1, 1923, pp. 1–73 Why did the Greeks not construct the irrational numbers? 
In: Kant Studies Vol.3, 1928, pp. 35–72 Augustine and Descartes. In: Sheets for German Philosophy, Volume 5, 1932, Issue 4, pp. 405–423. The idea of God in mathematics. In: Sheets for German Philosophy, Volume 8, 1934/35, pp. 318–338. Logic, grammar, metaphysics. In: Archives for Law and Social Philosophy, Volume 36, 1943/44, pp. 393–433 == References == == Sources == Hermes, Hans (1955), "Heinrich Scholz zum 70. Geburtstag" [Heinrich Scholz on the occasion of his 70th birthday], Mathematisch-Physikalische Semesterberichte (in German), 4: 165–170, ISSN 0340-4897 Linneweber-Lammerskitten, Helmut (1995). "Scholz, Heinrich". Biographisch-Bibliographisches Kirchenlexikon (BBKL) (in German). Vol. 9: "Scharling, Carl Henrik – Sheldon, Charles Monroe". Herzberg: Traugott Bautz. cols. 683–687. ISBN 978-3-88309-058-0. Meschkowski, Herbert (1984), "Heinrich Scholz. Zum 100. Geburtstag des Grundlagenforschers" [Heinrich Scholz. On the occasion of the 100th Birthday of the fundamental researcher], Humanismus und Technik. Jahrbuch 1984 (in German), vol. 27, Berlin: Gesellschaft von Freunden der Technischen Universität Berlin e. V., pp. 28–52, ISSN 0439-884X Molendijk, Arie L. (1991), Aus dem Dunklen ins Helle. Wissenschaft und Theologie im Denken von Heinrich Scholz. Mit unveröffentlichten Thesenreihen von Heinrich Scholz und Karl Barth [Out of the Darkness to the Light. Science and Theology in the Thoughts of Heinrich Scholz. With unpublished sets of theses of Heinrich Scholz and Karl Barth], Amsterdam Studies in Theology (in German), vol. 8, Amsterdam / Atlanta GA: Editions Rodopi, ISBN 978-9051832471 Peckhaus, Volker (1998–1999), "Moral integrity during a difficult period: Beth and Scholz", Philosophia Scientiae, 3 (4): 151–173, retrieved 18 January 2019 Peckhaus, Volker (2018), "Heinrich Scholz", in Zalta, Edward N. 
(ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 ed.), Stanford, CA: Metaphysics Research Lab, Stanford University, retrieved 18 January 2019 Schmidt am Busch, Hans-Christoph; Wehmeier, Kai F., eds. (2005). Heinrich Scholz – Logiker, Philosoph, Theologe [Heinrich Scholz – Logician, philosopher, theologian] (in German and English). Paderborn, Germany: Mentis. ISBN 978-3897852792. Schmidt am Busch, Hans-Christoph (2007), "Scholz, Heinrich", Neue Deutsche Biographie (in German), vol. 23, pp. 454–455, retrieved 18 January 2019 == External links == John J. O'Connor, Edmund F. Robertson: Heinrich Scholz (logician). In: MacTutor History of Mathematics archive (English) Publications by and on Heinrich Scholz in the catalog of the German National Library Stanford Encyclopedia of Philosophy
Wikipedia:Institute of Mathematical Sciences, Chennai#0
The Institute of Mathematical Sciences (IMSc) (sometimes also referred to as Matscience) is a research centre located in Chennai, India. It is a constituent institute of the Homi Bhabha National Institute. IMSc is a national institute for fundamental research in frontier disciplines of the mathematical and physical sciences: theoretical computer science, mathematics, theoretical physics, and computational biology. It is funded mainly by the Department of Atomic Energy. The institute operates the Kabru supercomputer. == History == The institute was founded by Alladi Ramakrishnan in 1962 in Chennai. It is modelled after the Institute for Advanced Study in Princeton, New Jersey, United States. It went through a phase of expansion when E. C. G. Sudarshan in the 1980s and R. Ramachandran in the 1990s were its directors. The current director of the institute is V. Ravindran. == Academics == The institute has a graduate research program to which a group of students are admitted each year to work towards a Ph.D. degree. IMSc hosts scientists at the post-doctoral level and supports a visiting scientist program in the institute's areas of research. == Campus == Located in South Chennai, in the Adyar-Taramani area, the institute is on the Central Institutes of Technology (CIT) campus. The institute maintains a student hostel, flatlets for long-term visitors, married students and post-doctoral fellows, and the institute guest house. IMSc has its own faculty housing in Tiruvanmiyur near the seashore. == Notable people == Ramachandran Balasubramanian, mathematician Ganapathy Baskaran, physicist Indumathi D., physicist Rajiah Simon, physicist Radha Balakrishnan, physicist C. S. Yogananda, mathematician == References == == External links == Official website
Wikipedia:Institute of Mathematics and its Applications#0
The Institute of Mathematics and its Applications (IMA) is the UK's chartered professional body for mathematicians and one of the UK's learned societies for mathematics (another being the London Mathematical Society). The IMA aims to advance mathematics and its applications; to promote and foster research and other enquiries directed towards the advancement, teaching and application of mathematics; to establish and maintain high standards of professional conduct for its members; and to promote, encourage and guide the development of education and training in mathematics. == History == In 1959, the need for a professional and learned body for mathematics and its applications was recognised independently by both Sir James Lighthill and a committee of the heads of the mathematics departments of several colleges of technology, together with some interested mathematicians from universities, industry and government research establishments. After much discussion, the name and constitution of the institute were confirmed in 1963, and the IMA was approved as a company limited by guarantee on 23 April 1964. In 1990, the institute was incorporated as a royal charter company, and it was registered as a charity in 1993. == Governance == The institute is governed via a Council, made up of between 25 and 31 individuals including a president, three past presidents, elected and co-opted members, and honorary officers. === IMA president === The president normally serves a two-year term. 
This is a list of the presidents of the IMA: 1964–1966: Sir James Lighthill FRS 1966–1967: Professor Sir Bryan Thwaites 1968–1969: Dr Peter Wakely FRS 1970–1971: Professor George Barnard 1972–1973: Professor Charles Coulson FRS 1974–1975: Sir Hermann Bondi FRS 1976–1977: HRH The Duke of Edinburgh 1978–1979: Dame Kathleen Ollerenshaw 1980–1981: Sir Samuel Edwards FRS 1982–1983: Dr Peter Trier 1984–1985: Sir Harry Pitt FRS 1986–1987: Professor Bob Churchhouse FRS 1988–1989: Professor Douglas Jones FRS 1990–1991: Sir Roy Harding 1992-1993: J H McDonnell 1993–1995: Professor Lord Julian Hunt FRS 1996–1997: Professor David Crighton FRS 1998–1999: Professor Henry Beker 2000–2001: Professor Stephen Reid 2002–2003: Professor John McWhirter FREng, FRS 2004–2005: Professor Tim Pedley FRS 2006–2007: Professor Peter Grindrod CBE 2008–2009: Professor David Abrahams 2010–2011: Professor Michael Walker OBE, FRS 2012–2013: Professor Robert MacKay FRS 2014–2015: Professor Dame Celia Hoyles 2016–2017: Professor Chris Linton 2018–2019: Professor Alistair Fitt 2020–2021: Professor Nira Chamberlain OBE 2022–2023: Professor Paul Glendinning 2024–Present: Professor Hannah Fry HonFREng === Honorary officers === In addition to the president, the six honorary officer roles are listed below with their incumbents: == Membership == The IMA has 5,000 members, ten percent of whom live outside the United Kingdom. Forty percent of members are employed in education (schools through to universities) and sixty percent work in commercial and governmental organisations. The institute awards five grades of membership within three groups. === Corporate membership === Fellow (FIMA) Fellows are peer-reviewed by external reference and selected internally through election by the membership committee. Qualifications include a minimum of seven years experience and hold a senior managerial or technical position involving the use of, or training in, mathematics. 
A Fellow has made outstanding contributions to the development or application of mathematics. Member (MIMA) Members have an appropriate degree, a minimum period of three years' training and experience after graduation, and a position of responsibility involving the application of mathematical knowledge or training in mathematics. === Leading to corporate membership === Associate Member (AMIMA) Associate Members hold a degree in mathematics, a joint degree in mathematics with another subject, or a degree with a sufficient mathematical component such as would be expected in physics or engineering. Students Student Members are undertaking a course of study which will lead to a qualification that meets Associate Member requirements. === Non-professional membership === Affiliate No requirements are necessary for entry into this grade. == Professional status == In 1990 the institute was incorporated by royal charter and was subsequently granted the right to award Chartered Mathematician (CMath) status. The institute may also nominate individuals for the award of Chartered Scientist (CSci) under licence from the Science Council. The institute can also award individuals Chartered Mathematics Teacher (CMathTeach) status. == Publications == === Mathematics Today === Mathematics Today is a general-interest mathematics publication aimed primarily at Institute members, published six times a year and containing articles, reviews, reports and other news on developments in mathematics and its applications. === Research journals === Eight research journals are published by Oxford University Press on behalf of the IMA. 
IMA Journal of Applied Mathematics IMA Journal of Numerical Analysis Mathematical Medicine and Biology IMA Journal of Mathematical Control and Information IMA Journal of Management Mathematics Teaching Mathematics and its Applications Information and Inference: A Journal of the IMA Transactions of Mathematics and its Applications === Other publications === The IMA began publishing a podcast, Travels in a Mathematical World, on 4 October 2008. The IMA also publishes conference proceedings, monographs and special interest group newsletters. == Conferences == The institute runs 8–10 conferences most years. These are specialist meetings where new research is presented and discussed. == Education activities == The IMA runs a wide range of mathematical activities through the Higher Education Services Area and the Schools and Further Education Group committees. The IMA operates a Programme Approval Scheme, which provides an 'approval in principle' for degree courses that meet the educational requirements for Chartered Mathematician status. For programmes to be approved, the IMA requires the programme to be an honours degree of at least three years' length which meets the required mathematical content threshold of two-thirds. The programmes also need to meet the QAA benchmark for Mathematics and the Framework for Higher Education Qualifications. The IMA provides education grants of up to £600 to allow individuals from the UK working in schools or further/higher education to help with attendance at or the organisation of a mathematics educational activity, such as attendance at a conference, expenses to cover a speaker coming into a school, or organising a session for a conference. The IMA also employs a university liaison officer to promote mathematics and the IMA to university students undertaking mathematics and to help act as a means of support. 
As part of this support the IMA runs the University Liaison Grants Scheme to provide university mathematical societies with grants of up to £400 to organise more activities and work more closely with the IMA. == Prizes == The councils of the IMA and the London Mathematical Society jointly award the Christopher Zeeman Medal, dedicated to recognising excellence in the communication of mathematics and the David Crighton Award dedicated to the recognition of service to mathematics and the wider mathematics community. The IMA in cooperation with the British Applied Mathematics Colloquium (BAMC) award the biennial IMA Lighthill-Thwaites Prize for early career applied mathematicians. The IMA awards the Leslie Fox Prize for Numerical Analysis, the Catherine Richards Prize for the best articles in Mathematics Today, the John Blake University Teaching Medal and the IMA Gold Medal for outstanding contribution to mathematics and its applications over the years. The IMA awards student-level prizes at most universities which offer mathematics around the UK. Each student prize is a year's membership of the IMA. == Branches == The IMA has Branches in the regions London, East Midlands, Lancashire and the North West, West Midlands, West of England, Ireland and Scotland, which run local activities (like talks by well known mathematicians). Its headquarters are in Southend-on-Sea, Essex. == Early Career Mathematicians Group == The Early Career Mathematicians Group of the IMA hold a series of conferences for mathematicians in the first 15 years of their career among other activities. == Social networking == As well as all the conferences, meetings and group activities that are held across the country the IMA operates groups on Facebook and LinkedIn, and has a Twitter feed. 
== Interaction with other bodies == Along with the London Mathematical Society, the Royal Statistical Society, the Edinburgh Mathematical Society and the Operational Research Society, the IMA forms the Council for the Mathematical Sciences. The IMA is a member of the Joint Mathematical Council (JMC) and informs the deliberations of the Advisory Committee on Mathematics Education (ACME). The IMA has representatives on Bath University Court, Bradford University Court, Cranfield University Court, Engineering Technology Board and Engineering Council, Engineering and Physical Sciences Research Council, EPSRC Public Understanding of Science Committee, Heads of Departments of Mathematical Sciences, International Council for Industrial and Applied Mathematics, Joint Mathematical Council, LMS Computer Science Committee, LMS International Affairs Committee, LMS Women in Maths Committee, Maths, Stats & OR Network (part of the HEA), Parliamentary and Scientific Committee, Qualifications and Curriculum Authority, Science Council, Science Council Registration Authority, The Association of Management Sciences (TAMS) and University of Wales, Swansea Court. == See also == List of Mathematical Societies Council for the Mathematical Sciences Leslie Fox Prize for Numerical Analysis == Notes == == External links == The Institute of Mathematics and its Applications The origins of the Institute Travels in a Mathematical World Podcast
Wikipedia:Integer points in convex polyhedra#0
The study of integer points in convex polyhedra is motivated by questions such as "how many nonnegative integer-valued solutions does a system of linear equations with nonnegative coefficients have" or "how many solutions does an integer linear program have". Counting integer points in polyhedra and other questions about them arise in representation theory, commutative algebra, algebraic geometry, statistics, and computer science. The set of integer points, or, more generally, the set of points of an affine lattice, in a polyhedron is called a Z-polyhedron, from the mathematical notation Z {\displaystyle \mathbb {Z} } or Z for the set of integers. == Properties == For a lattice Λ, Minkowski's theorem relates the number d(Λ) (the volume of a fundamental parallelepiped of the lattice) and the volume of a given symmetric convex set S to the number of lattice points contained in S. The number of lattice points contained in a polytope all of whose vertices are elements of the lattice is described by the polytope's Ehrhart polynomial. Formulas for some of the coefficients of this polynomial involve d(Λ) as well. == Applications == === Loop optimization === In certain approaches to loop optimization, the set of the executions of the loop body is viewed as the set of integer points in a polyhedron defined by loop constraints. == See also == Convex lattice polytope Pick's theorem == References and notes == == Further reading == Barvinok, Alexander; Beck, Matthias; Haase, Christian; Reznick, Bruce; Welker, Volkmar (2005), Integer Points In Polyhedra: Proceedings of the AMS-IMS-SIAM Joint Summer Research Conference held in Snowbird, UT, July 13–17, 2003, Contemporary Mathematics, vol. 374, Providence, RI: American Mathematical Society, doi:10.1090/conm/374, ISBN 0-8218-3459-2, MR 2134757 Barvinok, Alexander (2008), Integer Points In Polyhedra, Zurich Lectures in Advanced Mathematics, vol. 
11, Zürich: European Mathematical Society, doi:10.4171/052, ISBN 978-3-03719-052-4, MR 2455889 Beck, Matthias; Haase, Christian; Reznick, Bruce; Vergne, Michèle; Welker, Volkmar; Yoshida, Ruriko (2008), Integer Points In Polyhedra: Geometry, Number Theory, Representation Theory, Algebra, Optimization, Statistics (PDF), Contemporary Mathematics, vol. 452, Providence, RI: American Mathematical Society, doi:10.1090/conm/452, ISBN 978-0-8218-4173-0, MR 2416261
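The Ehrhart polynomial mentioned above can be illustrated with a brute-force count. The sketch below (plain Python; the function name and the choice of polytope are illustrative) dilates the standard triangle with vertices (0,0), (1,0), (0,1) by an integer factor t and checks the count of lattice points against L(t) = (t+1)(t+2)/2, whose leading coefficient 1/2 is the triangle's area, as the general theory predicts:

```python
def lattice_points_in_dilated_triangle(t):
    """Count the integer points (x, y) with x >= 0, y >= 0, x + y <= t,
    i.e. the points of Z^2 in the t-fold dilation of the standard triangle."""
    return sum(1 for x in range(t + 1) for y in range(t + 1 - x))

# Ehrhart polynomial of the standard triangle: L(t) = (t + 1)(t + 2) / 2
for t in range(10):
    assert lattice_points_in_dilated_triangle(t) == (t + 1) * (t + 2) // 2
```

Brute-force counting like this is only feasible in small dimension and for small dilations; Barvinok's algorithm (covered in the references above) counts lattice points in polynomial time for fixed dimension.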
Wikipedia:Integrable module#0
In algebra, an integrable module (or integrable representation) of a Kac–Moody algebra g {\displaystyle {\mathfrak {g}}} (a certain infinite-dimensional Lie algebra) is a representation of g {\displaystyle {\mathfrak {g}}} such that (1) it is a sum of weight spaces and (2) the Chevalley generators e i , f i {\displaystyle e_{i},f_{i}} of g {\displaystyle {\mathfrak {g}}} are locally nilpotent. For example, the adjoint representation of a Kac–Moody algebra is integrable. == Notes == == References == Kac, Victor (1990). Infinite dimensional Lie algebras (3rd ed.). Cambridge University Press. ISBN 0-521-46693-8.
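In the finite-dimensional prototype sl2, the local nilpotency of the Chevalley generators on the adjoint representation can be checked concretely. The sketch below (the basis ordering (e, h, f) and the helper `ad` are choices made here for illustration, not standard notation) builds the matrices of ad(e) and ad(f) and verifies that they are nilpotent:

```python
import numpy as np

# Chevalley generators of sl2 in the defining 2x2 representation
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

def ad(x):
    """Matrix of ad(x) = [x, -] on the ordered basis (e, h, f) of sl2.
    A traceless 2x2 matrix X = a*e + b*h + c*f has a = X[0,1], b = X[0,0], c = X[1,0]."""
    cols = []
    for y in (e, h, f):
        b = x @ y - y @ x                     # the bracket [x, y]
        cols.append([b[0, 1], b[0, 0], b[1, 0]])
    return np.array(cols).T

# ad(e) and ad(f) are nilpotent, as integrability requires
assert np.all(np.linalg.matrix_power(ad(e), 3) == 0)
assert np.all(np.linalg.matrix_power(ad(f), 3) == 0)
```

For a general Kac–Moody algebra the adjoint representation is infinite-dimensional, so local nilpotency means that for each fixed vector some power of e_i (or f_i) annihilates it, rather than a single nilpotent matrix.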
Wikipedia:Integral Equations and Operator Theory#0
Integral Equations and Operator Theory is a journal dedicated to operator theory and its applications to engineering and other mathematical sciences. As some approaches to the study of integral equations (theoretically and numerically) constitute a subfield of operator theory, the journal also deals with the theory of integral equations and hence of differential equations. The journal consists of two sections: a main section consisting of refereed papers and a second consisting of short announcements of important results, open problems, information, etc. It has been published monthly by Springer-Verlag since 1978. The journal is also available online by subscription. The founding editor-in-chief of the journal, in 1978, was Israel Gohberg. Its current editor-in-chief is Christiane Tretter. == References == == External links == Journal homepage
Wikipedia:Integral Transforms and Special Functions#0
Integral Transforms and Special Functions is a monthly peer-reviewed scientific journal specialising in topics of mathematical analysis, the theory of differential and integral equations, and approximation theory, but it also publishes papers in other areas of mathematics. It is published by Taylor & Francis and the editor-in-chief is S.B. Yakubovich (University of Porto). == External links == Official website
Wikipedia:Integral graph#0
In the mathematical field of graph theory, an integral graph is a graph whose adjacency matrix's spectrum consists entirely of integers. In other words, a graph is an integral graph if all of the roots of the characteristic polynomial of its adjacency matrix are integers. The notion was introduced in 1974 by Frank Harary and Allen Schwenk. == Examples == The complete graph Kn is integral for all n. The only cycle graphs that are integral are C 3 {\displaystyle C_{3}} , C 4 {\displaystyle C_{4}} , and C 6 {\displaystyle C_{6}} . If a graph is integral, then so is its complement graph; for instance, the complements of complete graphs, edgeless graphs, are integral. If two graphs are integral, then so is their Cartesian product and strong product; for instance, the Cartesian products of two complete graphs, the rook's graphs, are integral. Similarly, the hypercube graphs, as Cartesian products of any number of complete graphs K 2 {\displaystyle K_{2}} , are integral. The line graph of a regular integral graph is again integral. For instance, as the line graph of K 4 {\displaystyle K_{4}} , the octahedral graph is integral, and as the complement of the line graph of K 5 {\displaystyle K_{5}} , the Petersen graph is integral. Among the cubic symmetric graphs the utility graph, the Petersen graph, the Nauru graph and the Desargues graph are integral. The Higman–Sims graph, the Hall–Janko graph, the Clebsch graph, the Hoffman–Singleton graph, the Shrikhande graph and the Hoffman graph are integral. A regular graph is periodic if and only if it is an integral graph. A walk-regular graph that admits perfect state transfer is an integral graph. The Sudoku graphs, graphs whose vertices represent cells of a Sudoku board and whose edges represent cells that should not be equal, are integral. == References ==
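The definition can be checked numerically. A minimal sketch (the helper names are illustrative) builds adjacency matrices for complete graphs and cycle graphs and tests whether the spectrum is integral, reproducing the claim above that among cycles only C3, C4 and C6 qualify:

```python
import numpy as np

def is_integral(adj, tol=1e-8):
    """True if every eigenvalue of the (symmetric) adjacency matrix is numerically an integer."""
    eig = np.linalg.eigvalsh(adj)
    return bool(np.all(np.abs(eig - np.round(eig)) < tol))

def cycle(n):
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
    return adj

def complete(n):
    return np.ones((n, n)) - np.eye(n)

assert is_integral(complete(5))   # K_5: spectrum {4, -1, -1, -1, -1}
assert is_integral(cycle(6))      # C_6: spectrum {2, 1, 1, -1, -1, -2}
assert not is_integral(cycle(5))  # C_5: eigenvalues 2cos(2*pi*k/5) are irrational
```

A floating-point tolerance is unavoidable here; an exact check would compute the characteristic polynomial over the integers and test its roots symbolically.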
Wikipedia:Integral of inverse functions#0
In mathematics, integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse f − 1 {\displaystyle f^{-1}} of a continuous and invertible function f {\displaystyle f} , in terms of f − 1 {\displaystyle f^{-1}} and an antiderivative of f {\displaystyle f} . This formula was published in 1905 by Charles-Ange Laisant. == Statement of the theorem == Let I 1 {\displaystyle I_{1}} and I 2 {\displaystyle I_{2}} be two intervals of R {\displaystyle \mathbb {R} } . Assume that f : I 1 → I 2 {\displaystyle f:I_{1}\to I_{2}} is a continuous and invertible function. It follows from the intermediate value theorem that f {\displaystyle f} is strictly monotone. Consequently, f {\displaystyle f} maps intervals to intervals, so is an open map and thus a homeomorphism. Since f {\displaystyle f} and the inverse function f − 1 : I 2 → I 1 {\displaystyle f^{-1}:I_{2}\to I_{1}} are continuous, they have antiderivatives by the fundamental theorem of calculus. Laisant proved that if F {\displaystyle F} is an antiderivative of f {\displaystyle f} , then the antiderivatives of f − 1 {\displaystyle f^{-1}} are: ∫ f − 1 ( y ) d y = y f − 1 ( y ) − F ∘ f − 1 ( y ) + C , {\displaystyle \int f^{-1}(y)\,dy=yf^{-1}(y)-F\circ f^{-1}(y)+C,} where C {\displaystyle C} is an arbitrary real number. Note that it is not assumed that f − 1 {\displaystyle f^{-1}} is differentiable. In his 1905 article, Laisant gave three proofs. === First proof === First, under the additional hypothesis that f − 1 {\displaystyle f^{-1}} is differentiable, one may differentiate the above formula, which completes the proof immediately. === Second proof === His second proof was geometric. If f ( a ) = c {\displaystyle f(a)=c} and f ( b ) = d {\displaystyle f(b)=d} , the theorem can be written: ∫ c d f − 1 ( y ) d y + ∫ a b f ( x ) d x = b d − a c . 
{\displaystyle \int _{c}^{d}f^{-1}(y)\,dy+\int _{a}^{b}f(x)\,dx=bd-ac.} The figure on the right is a proof without words of this formula. Laisant does not discuss the hypotheses necessary to make this proof rigorous, but this can be proved if f {\displaystyle f} is just assumed to be strictly monotone (but not necessarily continuous, let alone differentiable). In this case, both f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} are Riemann integrable and the identity follows from a bijection between lower/upper Darboux sums of f {\displaystyle f} and upper/lower Darboux sums of f − 1 {\displaystyle f^{-1}} . The antiderivative version of the theorem then follows from the fundamental theorem of calculus in the case when f {\displaystyle f} is also assumed to be continuous. === Third proof === Laisant's third proof uses the additional hypothesis that f {\displaystyle f} is differentiable. Beginning with f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}(f(x))=x} , one multiplies by f ′ ( x ) {\displaystyle f'(x)} and integrates both sides. The right-hand side is calculated using integration by parts to be x f ( x ) − ∫ f ( x ) d x {\textstyle xf(x)-\int f(x)\,dx} , and the formula follows. === Details === One may also think as follows when f {\displaystyle f} is differentiable. As f {\displaystyle f} is continuous at any x {\displaystyle x} , F := ∫ 0 x f {\displaystyle F:=\int _{0}^{x}f} is differentiable at all x {\displaystyle x} by the fundamental theorem of calculus. Since f {\displaystyle f} is invertible, its derivative would vanish in at most countably many points. Sort these points by . . . < t − 1 < t 0 < t 1 < . . . {\displaystyle ...<t_{-1}<t_{0}<t_{1}<...} . 
Since g ( y ) := y f − 1 ( y ) − F ∘ f − 1 ( y ) + C {\displaystyle g(y):=yf^{-1}(y)-F\circ f^{-1}(y)+C} is a composition of differentiable functions on each interval ( t i , t i + 1 ) {\displaystyle (t_{i},t_{i+1})} , the chain rule can be applied, g ′ ( y ) = f − 1 ( y ) + y ⋅ 1 / f ′ ( f − 1 ( y ) ) − f ∘ f − 1 ( y ) ⋅ 1 / f ′ ( f − 1 ( y ) ) + 0 = f − 1 ( y ) {\displaystyle g'(y)=f^{-1}(y)+y\cdot 1/f'(f^{-1}(y))-f\circ f^{-1}(y)\cdot 1/f'(f^{-1}(y))+0=f^{-1}(y)} to see g | ( t i , t i + 1 ) {\displaystyle \left.g\right|_{(t_{i},t_{i+1})}} is an antiderivative for f | ( t i , t i + 1 ) {\displaystyle \left.f\right|_{(t_{i},t_{i+1})}} . We claim g {\displaystyle g} is also differentiable at each t i {\displaystyle t_{i}} and does not become unbounded if I 2 {\displaystyle I_{2}} is compact. In such a case f − 1 {\displaystyle f^{-1}} is continuous and bounded. By continuity and the fundamental theorem of calculus, G ( y ) := C + ∫ 0 y f − 1 {\displaystyle G(y):=C+\int _{0}^{y}f^{-1}} , where C {\displaystyle C} is a constant, is a differentiable extension of g {\displaystyle g} . But g {\displaystyle g} is continuous, as it is a composition of continuous functions, and so is G {\displaystyle G} , by differentiability. Therefore, G = g {\displaystyle G=g} . One can now use the fundamental theorem of calculus to compute ∫ I 2 f − 1 {\displaystyle \int _{I_{2}}f^{-1}} . Nevertheless, it can be shown that this theorem holds even if f {\displaystyle f} or f − 1 {\displaystyle f^{-1}} is not differentiable: it suffices, for example, to use the Stieltjes integral in the previous argument. On the other hand, even though general monotonic functions are differentiable almost everywhere, the proof of the general formula does not follow, unless f − 1 {\displaystyle f^{-1}} is absolutely continuous. 
It is also possible to check that for every y {\displaystyle y} in I 2 {\displaystyle I_{2}} , the derivative of the function y ↦ y f − 1 ( y ) − F ( f − 1 ( y ) ) {\displaystyle y\mapsto yf^{-1}(y)-F(f^{-1}(y))} is equal to f − 1 ( y ) {\displaystyle f^{-1}(y)} . In other words: ∀ x ∈ I 1 lim h → 0 ( x + h ) f ( x + h ) − x f ( x ) − ( F ( x + h ) − F ( x ) ) f ( x + h ) − f ( x ) = x . {\displaystyle \forall x\in I_{1}\quad \lim _{h\to 0}{\frac {(x+h)f(x+h)-xf(x)-\left(F(x+h)-F(x)\right)}{f(x+h)-f(x)}}=x.} To this end, it suffices to apply the mean value theorem to F {\displaystyle F} between x {\displaystyle x} and x + h {\displaystyle x+h} , taking into account that f {\displaystyle f} is monotonic. == Examples == Assume that f ( x ) = exp ⁡ ( x ) {\displaystyle f(x)=\exp(x)} , hence f − 1 ( y ) = ln ⁡ ( y ) {\displaystyle f^{-1}(y)=\ln(y)} . The formula above gives immediately ∫ ln ⁡ ( y ) d y = y ln ⁡ ( y ) − exp ⁡ ( ln ⁡ ( y ) ) + C = y ln ⁡ ( y ) − y + C . {\displaystyle \int \ln(y)\,dy=y\ln(y)-\exp(\ln(y))+C=y\ln(y)-y+C.} Similarly, with f ( x ) = cos ⁡ ( x ) {\displaystyle f(x)=\cos(x)} and f − 1 ( y ) = arccos ⁡ ( y ) {\displaystyle f^{-1}(y)=\arccos(y)} , ∫ arccos ⁡ ( y ) d y = y arccos ⁡ ( y ) − sin ⁡ ( arccos ⁡ ( y ) ) + C . {\displaystyle \int \arccos(y)\,dy=y\arccos(y)-\sin(\arccos(y))+C.} With f ( x ) = tan ⁡ ( x ) {\displaystyle f(x)=\tan(x)} and f − 1 ( y ) = arctan ⁡ ( y ) {\displaystyle f^{-1}(y)=\arctan(y)} , ∫ arctan ⁡ ( y ) d y = y arctan ⁡ ( y ) + ln ⁡ | cos ⁡ ( arctan ⁡ ( y ) ) | + C . {\displaystyle \int \arctan(y)\,dy=y\arctan(y)+\ln \left|\cos(\arctan(y))\right|+C.} == History == Apparently, this theorem of integration was discovered for the first time in 1905 by Charles-Ange Laisant, who "could hardly believe that this theorem is new", and hoped its use would henceforth spread out among students and teachers. 
This result was published independently in 1912 by an Italian engineer, Alberto Caprilli, in an opuscule entitled "Nuove formole d'integrazione". It was rediscovered in 1955 by Parker, and by a number of mathematicians following him. Nevertheless, they all assume that f or f−1 is differentiable. The general version of the theorem, free from this additional assumption, was proposed by Michael Spivak in 1965, as an exercise in his textbook Calculus, and a fairly complete proof following the same lines was published by Eric Key in 1994. This proof relies on the very definition of the Darboux integral, and consists in showing that the upper Darboux sums of the function f are in one-to-one correspondence with the lower Darboux sums of f−1. In 2013, Michael Bensimhoun, judging that the general theorem was still insufficiently known, gave two other proofs; the second, based on the Stieltjes integral and on its formulae of integration by parts and of homeomorphic change of variables, is the most suitable for establishing more complex formulae. == Generalization to holomorphic functions == The above theorem generalizes in the obvious way to holomorphic functions: Let U {\displaystyle U} and V {\displaystyle V} be two open and simply connected sets of C {\displaystyle \mathbb {C} } , and assume that f : U → V {\displaystyle f:U\to V} is a biholomorphism. Then f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} have antiderivatives, and if F {\displaystyle F} is an antiderivative of f {\displaystyle f} , the general antiderivative of f − 1 {\displaystyle f^{-1}} is G ( z ) = z f − 1 ( z ) − F ∘ f − 1 ( z ) + C . {\displaystyle G(z)=zf^{-1}(z)-F\circ f^{-1}(z)+C.} Because all holomorphic functions are differentiable, the proof is immediate by complex differentiation. == See also == Integration by parts Legendre transformation Young's inequality for products == References ==
Wikipedia:Integration by parts#0
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule. The integration by parts formula states: ∫ a b u ( x ) v ′ ( x ) d x = [ u ( x ) v ( x ) ] a b − ∫ a b u ′ ( x ) v ( x ) d x = u ( b ) v ( b ) − u ( a ) v ( a ) − ∫ a b u ′ ( x ) v ( x ) d x . {\displaystyle {\begin{aligned}\int _{a}^{b}u(x)v'(x)\,dx&={\Big [}u(x)v(x){\Big ]}_{a}^{b}-\int _{a}^{b}u'(x)v(x)\,dx\\&=u(b)v(b)-u(a)v(a)-\int _{a}^{b}u'(x)v(x)\,dx.\end{aligned}}} Or, letting u = u ( x ) {\displaystyle u=u(x)} and d u = u ′ ( x ) d x {\displaystyle du=u'(x)\,dx} while v = v ( x ) {\displaystyle v=v(x)} and d v = v ′ ( x ) d x , {\displaystyle dv=v'(x)\,dx,} the formula can be written more compactly: ∫ u d v = u v − ∫ v d u . {\displaystyle \int u\,dv\ =\ uv-\int v\,du.} The former expression is written as a definite integral and the latter is written as an indefinite integral. Applying the appropriate limits to the latter expression should yield the former, but the latter is not necessarily equivalent to the former. Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715. More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals. The discrete analogue for sequences is called summation by parts. == Theorem == === Product of two functions === The theorem can be derived as follows. 
For two continuously differentiable functions u ( x ) {\displaystyle u(x)} and v ( x ) {\displaystyle v(x)} , the product rule states: ( u ( x ) v ( x ) ) ′ = u ′ ( x ) v ( x ) + u ( x ) v ′ ( x ) . {\displaystyle {\Big (}u(x)v(x){\Big )}'=u'(x)v(x)+u(x)v'(x).} Integrating both sides with respect to x {\displaystyle x} , ∫ ( u ( x ) v ( x ) ) ′ d x = ∫ u ′ ( x ) v ( x ) d x + ∫ u ( x ) v ′ ( x ) d x , {\displaystyle \int {\Big (}u(x)v(x){\Big )}'\,dx=\int u'(x)v(x)\,dx+\int u(x)v'(x)\,dx,} and noting that an indefinite integral is an antiderivative gives u ( x ) v ( x ) = ∫ u ′ ( x ) v ( x ) d x + ∫ u ( x ) v ′ ( x ) d x , {\displaystyle u(x)v(x)=\int u'(x)v(x)\,dx+\int u(x)v'(x)\,dx,} where we neglect writing the constant of integration. This yields the formula for integration by parts: ∫ u ( x ) v ′ ( x ) d x = u ( x ) v ( x ) − ∫ u ′ ( x ) v ( x ) d x , {\displaystyle \int u(x)v'(x)\,dx=u(x)v(x)-\int u'(x)v(x)\,dx,} or in terms of the differentials d u = u ′ ( x ) d x {\displaystyle du=u'(x)\,dx} , d v = v ′ ( x ) d x , {\displaystyle dv=v'(x)\,dx,\quad } ∫ u ( x ) d v = u ( x ) v ( x ) − ∫ v ( x ) d u . {\displaystyle \int u(x)\,dv=u(x)v(x)-\int v(x)\,du.} This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values x = a {\displaystyle x=a} and x = b {\displaystyle x=b} and applying the fundamental theorem of calculus gives the definite integral version: ∫ a b u ( x ) v ′ ( x ) d x = u ( b ) v ( b ) − u ( a ) v ( a ) − ∫ a b u ′ ( x ) v ( x ) d x . {\displaystyle \int _{a}^{b}u(x)v'(x)\,dx=u(b)v(b)-u(a)v(a)-\int _{a}^{b}u'(x)v(x)\,dx.} The original integral ∫ u v ′ d x {\displaystyle \int uv'\,dx} contains the derivative v'; to apply the theorem, one must find v, the antiderivative of v', then evaluate the resulting integral ∫ v u ′ d x . 
{\displaystyle \int vu'\,dx.} === Validity for less smooth functions === It is not necessary for u {\displaystyle u} and v {\displaystyle v} to be continuously differentiable. Integration by parts works if u {\displaystyle u} is absolutely continuous and the function designated v ′ {\displaystyle v'} is Lebesgue integrable (but not necessarily continuous). (If v ′ {\displaystyle v'} has a point of discontinuity then its antiderivative v {\displaystyle v} may not have a derivative at that point.) If the interval of integration is not compact, then it is not necessary for u {\displaystyle u} to be absolutely continuous in the whole interval or for v ′ {\displaystyle v'} to be Lebesgue integrable in the interval, as a couple of examples (in which u {\displaystyle u} and v {\displaystyle v} are continuous and continuously differentiable) will show. For instance, if u ( x ) = e x / x 2 , v ′ ( x ) = e − x {\displaystyle u(x)=e^{x}/x^{2},\,v'(x)=e^{-x}} u {\displaystyle u} is not absolutely continuous on the interval [1, ∞), but nevertheless: ∫ 1 ∞ u ( x ) v ′ ( x ) d x = [ u ( x ) v ( x ) ] 1 ∞ − ∫ 1 ∞ u ′ ( x ) v ( x ) d x {\displaystyle \int _{1}^{\infty }u(x)v'(x)\,dx={\Big [}u(x)v(x){\Big ]}_{1}^{\infty }-\int _{1}^{\infty }u'(x)v(x)\,dx} so long as [ u ( x ) v ( x ) ] 1 ∞ {\displaystyle \left[u(x)v(x)\right]_{1}^{\infty }} is taken to mean the limit of u ( L ) v ( L ) − u ( 1 ) v ( 1 ) {\displaystyle u(L)v(L)-u(1)v(1)} as L → ∞ {\displaystyle L\to \infty } and so long as the two terms on the right-hand side are finite. This is only true if we choose v ( x ) = − e − x . 
{\displaystyle v(x)=-e^{-x}.} Similarly, if u ( x ) = e − x , v ′ ( x ) = x − 1 sin ⁡ ( x ) {\displaystyle u(x)=e^{-x},\,v'(x)=x^{-1}\sin(x)} v ′ {\displaystyle v'} is not Lebesgue integrable on the interval [1, ∞), but nevertheless ∫ 1 ∞ u ( x ) v ′ ( x ) d x = [ u ( x ) v ( x ) ] 1 ∞ − ∫ 1 ∞ u ′ ( x ) v ( x ) d x {\displaystyle \int _{1}^{\infty }u(x)v'(x)\,dx={\Big [}u(x)v(x){\Big ]}_{1}^{\infty }-\int _{1}^{\infty }u'(x)v(x)\,dx} with the same interpretation. One can also easily come up with similar examples in which u {\displaystyle u} and v {\displaystyle v} are not continuously differentiable. Further, if f ( x ) {\displaystyle f(x)} is a function of bounded variation on the segment [ a , b ] , {\displaystyle [a,b],} and φ ( x ) {\displaystyle \varphi (x)} is differentiable on [ a , b ] , {\displaystyle [a,b],} then ∫ a b f ( x ) φ ′ ( x ) d x = − ∫ − ∞ ∞ φ ~ ( x ) d ( χ ~ [ a , b ] ( x ) f ~ ( x ) ) , {\displaystyle \int _{a}^{b}f(x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }{\widetilde {\varphi }}(x)\,d({\widetilde {\chi }}_{[a,b]}(x){\widetilde {f}}(x)),} where d ( χ [ a , b ] ( x ) f ~ ( x ) ) {\displaystyle d(\chi _{[a,b]}(x){\widetilde {f}}(x))} denotes the signed measure corresponding to the function of bounded variation χ [ a , b ] ( x ) f ( x ) {\displaystyle \chi _{[a,b]}(x)f(x)} , and functions f ~ , φ ~ {\displaystyle {\widetilde {f}},{\widetilde {\varphi }}} are extensions of f , φ {\displaystyle f,\varphi } to R , {\displaystyle \mathbb {R} ,} which are respectively of bounded variation and differentiable. === Product of many functions === Integrating the product rule for three multiplied functions, u ( x ) {\displaystyle u(x)} , v ( x ) {\displaystyle v(x)} , w ( x ) {\displaystyle w(x)} , gives a similar result: ∫ a b u v d w = [ u v w ] a b − ∫ a b u w d v − ∫ a b v w d u . 
{\displaystyle \int _{a}^{b}uv\,dw\ =\ {\Big [}uvw{\Big ]}_{a}^{b}-\int _{a}^{b}uw\,dv-\int _{a}^{b}vw\,du.} In general, for n {\displaystyle n} factors ( ∏ i = 1 n u i ( x ) ) ′ = ∑ j = 1 n u j ′ ( x ) ∏ i ≠ j n u i ( x ) , {\displaystyle \left(\prod _{i=1}^{n}u_{i}(x)\right)'\ =\ \sum _{j=1}^{n}u_{j}'(x)\prod _{i\neq j}^{n}u_{i}(x),} which leads to [ ∏ i = 1 n u i ( x ) ] a b = ∑ j = 1 n ∫ a b u j ′ ( x ) ∏ i ≠ j n u i ( x ) . {\displaystyle \left[\prod _{i=1}^{n}u_{i}(x)\right]_{a}^{b}\ =\ \sum _{j=1}^{n}\int _{a}^{b}u_{j}'(x)\prod _{i\neq j}^{n}u_{i}(x).} == Visualization == Consider a parametric curve ( x , y ) = ( f ( t ) , g ( t ) ) {\displaystyle (x,y)=(f(t),g(t))} . Assuming that the curve is locally one-to-one and integrable, we can define x ( y ) = f ( g − 1 ( y ) ) y ( x ) = g ( f − 1 ( x ) ) {\displaystyle {\begin{aligned}x(y)&=f(g^{-1}(y))\\y(x)&=g(f^{-1}(x))\end{aligned}}} The area of the blue region is A 1 = ∫ y 1 y 2 x ( y ) d y {\displaystyle A_{1}=\int _{y_{1}}^{y_{2}}x(y)\,dy} Similarly, the area of the red region is A 2 = ∫ x 1 x 2 y ( x ) d x {\displaystyle A_{2}=\int _{x_{1}}^{x_{2}}y(x)\,dx} The total area A1 + A2 is equal to the area of the bigger rectangle, x2y2, minus the area of the smaller one, x1y1: ∫ y 1 y 2 x ( y ) d y ⏞ A 1 + ∫ x 1 x 2 y ( x ) d x ⏞ A 2 = x ⋅ y ( x ) | x 1 x 2 = y ⋅ x ( y ) | y 1 y 2 {\displaystyle \overbrace {\int _{y_{1}}^{y_{2}}x(y)\,dy} ^{A_{1}}+\overbrace {\int _{x_{1}}^{x_{2}}y(x)\,dx} ^{A_{2}}\ =\ {\biggl .}x\cdot y(x){\biggl |}_{x_{1}}^{x_{2}}\ =\ {\biggl .}y\cdot x(y){\biggl |}_{y_{1}}^{y_{2}}} Or, in terms of t, ∫ t 1 t 2 x ( t ) d y ( t ) + ∫ t 1 t 2 y ( t ) d x ( t ) = x ( t ) y ( t ) | t 1 t 2 {\displaystyle \int _{t_{1}}^{t_{2}}x(t)\,dy(t)+\int _{t_{1}}^{t_{2}}y(t)\,dx(t)\ =\ {\biggl .}x(t)y(t){\biggl |}_{t_{1}}^{t_{2}}} Or, in terms of indefinite integrals, this can be written as ∫ x d y + ∫ y d x = x y {\displaystyle \int x\,dy+\int y\,dx\ =\ xy} Rearranging: ∫ x d y = x y − ∫ y d x {\displaystyle 
\int x\,dy\ =\ xy-\int y\,dx} Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region. This visualization also explains why integration by parts may help find the integral of an inverse function f−1(x) when the integral of the function f(x) is known. Indeed, the functions x(y) and y(x) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx. In particular, this explains use of integration by parts to integrate logarithm and inverse trigonometric functions. In fact, if f {\displaystyle f} is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of f − 1 {\displaystyle f^{-1}} in terms of the integral of f {\displaystyle f} . This is demonstrated in the article, Integral of inverse functions. == Applications == === Finding antiderivatives === Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions u(x)v(x) such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take: ∫ u v d x = u ∫ v d x − ∫ ( u ′ ∫ v d x ) d x . {\displaystyle \int uv\,dx=u\int v\,dx-\int \left(u'\int v\,dx\right)\,dx.} On the right-hand side, u is differentiated and v is integrated; consequently it is useful to choose u as a function that simplifies when differentiated, or to choose v as a function that simplifies when integrated. As a simple example, consider: ∫ ln ⁡ ( x ) x 2 d x . {\displaystyle \int {\frac {\ln(x)}{x^{2}}}\,dx\,.} Since the derivative of ln(x) is ⁠1/x⁠, one makes (ln(x)) part u; since the antiderivative of ⁠1/x2⁠ is −⁠1/x⁠, one makes ⁠1/x2⁠ part v. 
The formula now yields: ∫ ln ⁡ ( x ) x 2 d x = − ln ⁡ ( x ) x − ∫ ( 1 x ) ( − 1 x ) d x . {\displaystyle \int {\frac {\ln(x)}{x^{2}}}\,dx=-{\frac {\ln(x)}{x}}-\int {\biggl (}{\frac {1}{x}}{\biggr )}{\biggl (}-{\frac {1}{x}}{\biggr )}\,dx\,.} The antiderivative of −⁠1/x2⁠ can be found with the power rule and is ⁠1/x⁠. Alternatively, one may choose u and v such that the product u′ (∫v dx) simplifies due to cancellation. For example, suppose one wishes to integrate: ∫ sec 2 ⁡ ( x ) ⋅ ln ⁡ ( | sin ⁡ ( x ) | ) d x . {\displaystyle \int \sec ^{2}(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}\,dx.} If we choose u(x) = ln(|sin(x)|) and v(x) = sec2x, then u differentiates to 1 tan ⁡ x {\displaystyle {\frac {1}{\tan x}}} using the chain rule and v integrates to tan x; so the formula gives: ∫ sec 2 ⁡ ( x ) ⋅ ln ⁡ ( | sin ⁡ ( x ) | ) d x = tan ⁡ ( x ) ⋅ ln ⁡ ( | sin ⁡ ( x ) | ) − ∫ tan ⁡ ( x ) ⋅ 1 tan ⁡ ( x ) d x . {\displaystyle \int \sec ^{2}(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}\,dx=\tan(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}-\int \tan(x)\cdot {\frac {1}{\tan(x)}}\,dx\ .} The integrand simplifies to 1, so the antiderivative is x. Finding a simplifying combination frequently involves experimentation. In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below. 
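The two antiderivatives obtained above, −ln(x)/x − 1/x for the first integral and tan(x)·ln|sin(x)| − x for the second, can be checked by differentiating them numerically and comparing against the original integrands. A minimal sketch in plain Python (function names G1, g1, G2, g2, num_deriv are illustrative):

```python
import math

def num_deriv(func, x, h=1e-6):
    # symmetric finite-difference derivative
    return (func(x + h) - func(x - h)) / (2 * h)

# antiderivative obtained for ln(x)/x**2
G1 = lambda x: -math.log(x) / x - 1 / x
g1 = lambda x: math.log(x) / x**2

# antiderivative obtained for sec(x)**2 * ln(|sin(x)|)
G2 = lambda x: math.tan(x) * math.log(abs(math.sin(x))) - x
g2 = lambda x: math.log(abs(math.sin(x))) / math.cos(x)**2

# the numerical derivative of each antiderivative matches its integrand
for x in (0.5, 1.0, 1.3):
    assert abs(num_deriv(G1, x) - g1(x)) < 1e-5
    assert abs(num_deriv(G2, x) - g2(x)) < 1e-5
```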
==== Polynomials and trigonometric functions ==== In order to calculate I = ∫ x cos ⁡ ( x ) d x , {\displaystyle I=\int x\cos(x)\,dx\,,} let: u = x ⇒ d u = d x d v = cos ⁡ ( x ) d x ⇒ v = ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) {\displaystyle {\begin{alignedat}{3}u&=x\ &\Rightarrow \ &&du&=dx\\dv&=\cos(x)\,dx\ &\Rightarrow \ &&v&=\int \cos(x)\,dx=\sin(x)\end{alignedat}}} then: ∫ x cos ⁡ ( x ) d x = ∫ u d v = u ⋅ v − ∫ v d u = x sin ⁡ ( x ) − ∫ sin ⁡ ( x ) d x = x sin ⁡ ( x ) + cos ⁡ ( x ) + C , {\displaystyle {\begin{aligned}\int x\cos(x)\,dx&=\int u\ dv\\&=u\cdot v-\int v\,du\\&=x\sin(x)-\int \sin(x)\,dx\\&=x\sin(x)+\cos(x)+C,\end{aligned}}} where C is a constant of integration. For higher powers of x {\displaystyle x} in the form ∫ x n e x d x , ∫ x n sin ⁡ ( x ) d x , ∫ x n cos ⁡ ( x ) d x , {\displaystyle \int x^{n}e^{x}\,dx,\ \int x^{n}\sin(x)\,dx,\ \int x^{n}\cos(x)\,dx\,,} repeatedly using integration by parts can evaluate integrals such as these; each application of the theorem lowers the power of x {\displaystyle x} by one. ==== Exponentials and trigonometric functions ==== An example commonly used to examine the workings of integration by parts is I = ∫ e x cos ⁡ ( x ) d x . {\displaystyle I=\int e^{x}\cos(x)\,dx.} Here, integration by parts is performed twice. First let u = cos ⁡ ( x ) ⇒ d u = − sin ⁡ ( x ) d x d v = e x d x ⇒ v = ∫ e x d x = e x {\displaystyle {\begin{alignedat}{3}u&=\cos(x)\ &\Rightarrow \ &&du&=-\sin(x)\,dx\\dv&=e^{x}\,dx\ &\Rightarrow \ &&v&=\int e^{x}\,dx=e^{x}\end{alignedat}}} then: ∫ e x cos ⁡ ( x ) d x = e x cos ⁡ ( x ) + ∫ e x sin ⁡ ( x ) d x . {\displaystyle \int e^{x}\cos(x)\,dx=e^{x}\cos(x)+\int e^{x}\sin(x)\,dx.} Now, to evaluate the remaining integral, we use integration by parts again, with: u = sin ⁡ ( x ) ⇒ d u = cos ⁡ ( x ) d x d v = e x d x ⇒ v = ∫ e x d x = e x . 
{\displaystyle {\begin{alignedat}{3}u&=\sin(x)\ &\Rightarrow \ &&du&=\cos(x)\,dx\\dv&=e^{x}\,dx\,&\Rightarrow \ &&v&=\int e^{x}\,dx=e^{x}.\end{alignedat}}} Then: ∫ e x sin ⁡ ( x ) d x = e x sin ⁡ ( x ) − ∫ e x cos ⁡ ( x ) d x . {\displaystyle \int e^{x}\sin(x)\,dx=e^{x}\sin(x)-\int e^{x}\cos(x)\,dx.} Putting these together, ∫ e x cos ⁡ ( x ) d x = e x cos ⁡ ( x ) + e x sin ⁡ ( x ) − ∫ e x cos ⁡ ( x ) d x . {\displaystyle \int e^{x}\cos(x)\,dx=e^{x}\cos(x)+e^{x}\sin(x)-\int e^{x}\cos(x)\,dx.} The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get 2 ∫ e x cos ⁡ ( x ) d x = e x [ sin ⁡ ( x ) + cos ⁡ ( x ) ] + C , {\displaystyle 2\int e^{x}\cos(x)\,dx=e^{x}{\bigl [}\sin(x)+\cos(x){\bigr ]}+C,} which rearranges to ∫ e x cos ⁡ ( x ) d x = 1 2 e x [ sin ⁡ ( x ) + cos ⁡ ( x ) ] + C ′ {\displaystyle \int e^{x}\cos(x)\,dx={\frac {1}{2}}e^{x}{\bigl [}\sin(x)+\cos(x){\bigr ]}+C'} where again C {\displaystyle C} (and C ′ = C 2 {\displaystyle C'={\frac {C}{2}}} ) is a constant of integration. A similar method is used to find the integral of secant cubed. ==== Functions multiplied by unity ==== Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times x {\displaystyle x} is also known. The first example is ∫ ln ⁡ ( x ) d x {\displaystyle \int \ln(x)dx} . We write this as: I = ∫ ln ⁡ ( x ) ⋅ 1 d x . 
{\displaystyle I=\int \ln(x)\cdot 1\,dx\,.} Let: u = ln ⁡ ( x ) ⇒ d u = d x x {\displaystyle u=\ln(x)\ \Rightarrow \ du={\frac {dx}{x}}} d v = d x ⇒ v = x {\displaystyle dv=dx\ \Rightarrow \ v=x} then: ∫ ln ⁡ ( x ) d x = x ln ⁡ ( x ) − ∫ x x d x = x ln ⁡ ( x ) − ∫ 1 d x = x ln ⁡ ( x ) − x + C {\displaystyle {\begin{aligned}\int \ln(x)\,dx&=x\ln(x)-\int {\frac {x}{x}}\,dx\\&=x\ln(x)-\int 1\,dx\\&=x\ln(x)-x+C\end{aligned}}} where C {\displaystyle C} is the constant of integration. The second example is the inverse tangent function arctan ⁡ ( x ) {\displaystyle \arctan(x)} : I = ∫ arctan ⁡ ( x ) d x . {\displaystyle I=\int \arctan(x)\,dx.} Rewrite this as ∫ arctan ⁡ ( x ) ⋅ 1 d x . {\displaystyle \int \arctan(x)\cdot 1\,dx.} Now let: u = arctan ⁡ ( x ) ⇒ d u = d x 1 + x 2 {\displaystyle u=\arctan(x)\ \Rightarrow \ du={\frac {dx}{1+x^{2}}}} d v = d x ⇒ v = x {\displaystyle dv=dx\ \Rightarrow \ v=x} then ∫ arctan ⁡ ( x ) d x = x arctan ⁡ ( x ) − ∫ x 1 + x 2 d x = x arctan ⁡ ( x ) − ln ⁡ ( 1 + x 2 ) 2 + C {\displaystyle {\begin{aligned}\int \arctan(x)\,dx&=x\arctan(x)-\int {\frac {x}{1+x^{2}}}\,dx\\[8pt]&=x\arctan(x)-{\frac {\ln(1+x^{2})}{2}}+C\end{aligned}}} using a combination of the inverse chain rule method and the natural logarithm integral condition. ==== LIATE rule ==== The LIATE rule is a rule of thumb for integration by parts. It involves choosing as u the function that comes first in the following list: L – logarithmic functions: ln ⁡ ( x ) , log b ⁡ ( x ) , {\displaystyle \ln(x),\ \log _{b}(x),} etc. I – inverse trigonometric functions (including hyperbolic analogues): arctan ⁡ ( x ) , arcsec ⁡ ( x ) , arsinh ⁡ ( x ) , {\displaystyle \arctan(x),\ \operatorname {arcsec}(x),\ \operatorname {arsinh} (x),} etc. A – algebraic functions (such as polynomials): x 2 , 3 x 50 , {\displaystyle x^{2},\ 3x^{50},} etc. 
T – trigonometric functions (including hyperbolic analogues): sin ⁡ ( x ) , tan ⁡ ( x ) , sech ⁡ ( x ) , {\displaystyle \sin(x),\ \tan(x),\ \operatorname {sech} (x),} etc. E – exponential functions: e x , 19 x , {\displaystyle e^{x},\ 19^{x},} etc. The function which is to be dv is whichever comes last in the list. The reason is that functions lower on the list generally have simpler antiderivatives than the functions above them. The rule is sometimes written as "DETAIL", where D stands for dv and the top of the list is the function chosen to be dv. An alternative to this rule is the ILATE rule, where inverse trigonometric functions come before logarithmic functions. To demonstrate the LIATE rule, consider the integral ∫ x ⋅ cos ⁡ ( x ) d x . {\displaystyle \int x\cdot \cos(x)\,dx.} Following the LIATE rule, u = x, and dv = cos(x) dx, hence du = dx, and v = sin(x), which makes the integral become x ⋅ sin ⁡ ( x ) − ∫ 1 sin ⁡ ( x ) d x , {\displaystyle x\cdot \sin(x)-\int 1\sin(x)\,dx,} which equals x ⋅ sin ⁡ ( x ) + cos ⁡ ( x ) + C . {\displaystyle x\cdot \sin(x)+\cos(x)+C.} In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos(x) was chosen as u, and x dx as dv, we would have the integral x 2 2 cos ⁡ ( x ) + ∫ x 2 2 sin ⁡ ( x ) d x , {\displaystyle {\frac {x^{2}}{2}}\cos(x)+\int {\frac {x^{2}}{2}}\sin(x)\,dx,} which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere. Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate ∫ x 3 e x 2 d x , {\displaystyle \int x^{3}e^{x^{2}}\,dx,} one would set u = x 2 , d v = x ⋅ e x 2 d x , {\displaystyle u=x^{2},\quad dv=x\cdot e^{x^{2}}\,dx,} so that d u = 2 x d x , v = e x 2 2 . 
{\displaystyle du=2x\,dx,\quad v={\frac {e^{x^{2}}}{2}}.} Then ∫ x 3 e x 2 d x = ∫ ( x 2 ) ( x e x 2 ) d x = ∫ u d v = u v − ∫ v d u = x 2 e x 2 2 − ∫ x e x 2 d x . {\displaystyle \int x^{3}e^{x^{2}}\,dx=\int \left(x^{2}\right)\left(xe^{x^{2}}\right)\,dx=\int u\,dv=uv-\int v\,du={\frac {x^{2}e^{x^{2}}}{2}}-\int xe^{x^{2}}\,dx.} Finally, this results in ∫ x 3 e x 2 d x = e x 2 ( x 2 − 1 ) 2 + C . {\displaystyle \int x^{3}e^{x^{2}}\,dx={\frac {e^{x^{2}}\left(x^{2}-1\right)}{2}}+C.} Integration by parts is often used as a tool to prove theorems in mathematical analysis. === Wallis product === The Wallis infinite product for π {\displaystyle \pi } π 2 = ∏ n = 1 ∞ 4 n 2 4 n 2 − 1 = ∏ n = 1 ∞ ( 2 n 2 n − 1 ⋅ 2 n 2 n + 1 ) = ( 2 1 ⋅ 2 3 ) ⋅ ( 4 3 ⋅ 4 5 ) ⋅ ( 6 5 ⋅ 6 7 ) ⋅ ( 8 7 ⋅ 8 9 ) ⋅ ⋯ {\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\prod _{n=1}^{\infty }{\frac {4n^{2}}{4n^{2}-1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)\\[6pt]&={\Big (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\Big )}\cdot {\Big (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\Big )}\cdot {\Big (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\Big )}\cdot {\Big (}{\frac {8}{7}}\cdot {\frac {8}{9}}{\Big )}\cdot \;\cdots \end{aligned}}} may be derived using integration by parts. === Gamma function identity === The gamma function is an example of a special function, defined as an improper integral for z > 0 {\displaystyle z>0} . Integration by parts illustrates it to be an extension of the factorial function: Γ ( z ) = ∫ 0 ∞ e − x x z − 1 d x = − ∫ 0 ∞ x z − 1 d ( e − x ) = − [ e − x x z − 1 ] 0 ∞ + ∫ 0 ∞ e − x d ( x z − 1 ) = 0 + ∫ 0 ∞ ( z − 1 ) x z − 2 e − x d x = ( z − 1 ) Γ ( z − 1 ) . 
{\displaystyle {\begin{aligned}\Gamma (z)&=\int _{0}^{\infty }e^{-x}x^{z-1}dx\\[6pt]&=-\int _{0}^{\infty }x^{z-1}\,d\left(e^{-x}\right)\\[6pt]&=-{\Biggl [}e^{-x}x^{z-1}{\Biggr ]}_{0}^{\infty }+\int _{0}^{\infty }e^{-x}d\left(x^{z-1}\right)\\[6pt]&=0+\int _{0}^{\infty }\left(z-1\right)x^{z-2}e^{-x}dx\\[6pt]&=(z-1)\Gamma (z-1).\end{aligned}}} Since Γ ( 1 ) = ∫ 0 ∞ e − x d x = 1 , {\displaystyle \Gamma (1)=\int _{0}^{\infty }e^{-x}\,dx=1,} when z {\displaystyle z} is a natural number, that is, z = n ∈ N {\displaystyle z=n\in \mathbb {N} } , applying this formula repeatedly gives the factorial: Γ ( n + 1 ) = n ! {\displaystyle \Gamma (n+1)=n!} === Use in harmonic analysis === Integration by parts is often used in harmonic analysis, particularly Fourier analysis, to show that quickly oscillating integrals with sufficiently smooth integrands decay quickly. The most common example of this is its use in showing that the decay of a function's Fourier transform depends on the smoothness of that function, as described below. ==== Fourier transform of derivative ==== If f {\displaystyle f} is a k {\displaystyle k} -times continuously differentiable function and all derivatives up to the k {\displaystyle k} th one decay to zero at infinity, then its Fourier transform satisfies ( F f ( k ) ) ( ξ ) = ( 2 π i ξ ) k F f ( ξ ) , {\displaystyle ({\mathcal {F}}f^{(k)})(\xi )=(2\pi i\xi )^{k}{\mathcal {F}}f(\xi ),} where f ( k ) {\displaystyle f^{(k)}} is the k {\displaystyle k} th derivative of f {\displaystyle f} . (The exact constant on the right depends on the convention of the Fourier transform used.)
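Before turning to the proof, the identity can be sanity-checked numerically for k = 1 with a rapidly decaying test function. The sketch below is a hedged illustration in plain Python, using the same e^(−2πiyξ) convention as the statement; the Gaussian test function, the truncation radius R, and all names are illustrative choices, not part of the theorem.

```python
import cmath
import math

def simpson_c(fn, a, b, n=20000):
    # composite Simpson rule for complex-valued integrands (n even)
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

f  = lambda y: math.exp(-y * y)            # rapidly decaying test function
df = lambda y: -2 * y * math.exp(-y * y)   # its derivative

def fourier(fn, xi, R=8.0):
    # ∫ fn(y) e^{-2πi y ξ} dy, truncated to [-R, R] (the tail is negligible here)
    return simpson_c(lambda y: fn(y) * cmath.exp(-2j * math.pi * y * xi), -R, R)

xi = 0.7
lhs = fourier(df, xi)                      # (F f')(ξ)
rhs = 2j * math.pi * xi * fourier(f, xi)   # 2πiξ (F f)(ξ)
assert abs(lhs - rhs) < 1e-8
```

The Gaussian is convenient because its transform is known in closed form (√π · e^(−π²ξ²) in this convention), giving an independent cross-check.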
This is proved by noting that d d y e − 2 π i y ξ = − 2 π i ξ e − 2 π i y ξ , {\displaystyle {\frac {d}{dy}}e^{-2\pi iy\xi }=-2\pi i\xi e^{-2\pi iy\xi },} so using integration by parts on the Fourier transform of the derivative we get ( F f ′ ) ( ξ ) = ∫ − ∞ ∞ e − 2 π i y ξ f ′ ( y ) d y = [ e − 2 π i y ξ f ( y ) ] − ∞ ∞ − ∫ − ∞ ∞ ( − 2 π i ξ e − 2 π i y ξ ) f ( y ) d y = 2 π i ξ ∫ − ∞ ∞ e − 2 π i y ξ f ( y ) d y = 2 π i ξ F f ( ξ ) . {\displaystyle {\begin{aligned}({\mathcal {F}}f')(\xi )&=\int _{-\infty }^{\infty }e^{-2\pi iy\xi }f'(y)\,dy\\&=\left[e^{-2\pi iy\xi }f(y)\right]_{-\infty }^{\infty }-\int _{-\infty }^{\infty }(-2\pi i\xi e^{-2\pi iy\xi })f(y)\,dy\\[5pt]&=2\pi i\xi \int _{-\infty }^{\infty }e^{-2\pi iy\xi }f(y)\,dy\\[5pt]&=2\pi i\xi {\mathcal {F}}f(\xi ).\end{aligned}}} Applying this inductively gives the result for general k {\displaystyle k} . A similar method can be used to find the Laplace transform of a derivative of a function. ==== Decay of Fourier transform ==== The above result tells us about the decay of the Fourier transform, since it follows that if f {\displaystyle f} and f ( k ) {\displaystyle f^{(k)}} are integrable then | F f ( ξ ) | ≤ I ( f ) 1 + | 2 π ξ | k , where I ( f ) = ∫ − ∞ ∞ ( | f ( y ) | + | f ( k ) ( y ) | ) d y . {\displaystyle \vert {\mathcal {F}}f(\xi )\vert \leq {\frac {I(f)}{1+\vert 2\pi \xi \vert ^{k}}},{\text{ where }}I(f)=\int _{-\infty }^{\infty }{\Bigl (}\vert f(y)\vert +\vert f^{(k)}(y)\vert {\Bigr )}\,dy.} In other words, if f {\displaystyle f} satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/|ξ|k. In particular, if k ≥ 2 {\displaystyle k\geq 2} then the Fourier transform is integrable. The proof uses the fact, which is immediate from the definition of the Fourier transform, that | F f ( ξ ) | ≤ ∫ − ∞ ∞ | f ( y ) | d y . 
{\displaystyle \vert {\mathcal {F}}f(\xi )\vert \leq \int _{-\infty }^{\infty }\vert f(y)\vert \,dy.} Using the same idea on the equality stated at the start of this subsection gives | ( 2 π i ξ ) k F f ( ξ ) | ≤ ∫ − ∞ ∞ | f ( k ) ( y ) | d y . {\displaystyle \vert (2\pi i\xi )^{k}{\mathcal {F}}f(\xi )\vert \leq \int _{-\infty }^{\infty }\vert f^{(k)}(y)\vert \,dy.} Summing these two inequalities and then dividing by 1 + |2πξ|k gives the stated inequality. === Use in operator theory === One use of integration by parts in operator theory is that it shows that −Δ (where Δ is the Laplace operator) is a positive operator on L 2 {\displaystyle L^{2}} (see Lp space). If f {\displaystyle f} is smooth and compactly supported then, using integration by parts, we have ⟨ − Δ f , f ⟩ L 2 = − ∫ − ∞ ∞ f ″ ( x ) f ( x ) ¯ d x = − [ f ′ ( x ) f ( x ) ¯ ] − ∞ ∞ + ∫ − ∞ ∞ f ′ ( x ) f ′ ( x ) ¯ d x = ∫ − ∞ ∞ | f ′ ( x ) | 2 d x ≥ 0. {\displaystyle {\begin{aligned}\langle -\Delta f,f\rangle _{L^{2}}&=-\int _{-\infty }^{\infty }f''(x){\overline {f(x)}}\,dx\\[5pt]&=-\left[f'(x){\overline {f(x)}}\right]_{-\infty }^{\infty }+\int _{-\infty }^{\infty }f'(x){\overline {f'(x)}}\,dx\\[5pt]&=\int _{-\infty }^{\infty }\vert f'(x)\vert ^{2}\,dx\geq 0.\end{aligned}}} === Other applications === Determining boundary conditions in Sturm–Liouville theory Deriving the Euler–Lagrange equation in the calculus of variations == Repeated integration by parts == Considering a second derivative of v {\displaystyle v} in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS: ∫ u v ″ d x = u v ′ − ∫ u ′ v ′ d x = u v ′ − ( u ′ v − ∫ u ″ v d x ) .
{\displaystyle \int uv''\,dx=uv'-\int u'v'\,dx=uv'-\left(u'v-\int u''v\,dx\right).} Extending this concept of repeated partial integration to derivatives of degree n leads to ∫ u ( 0 ) v ( n ) d x = u ( 0 ) v ( n − 1 ) − u ( 1 ) v ( n − 2 ) + u ( 2 ) v ( n − 3 ) − ⋯ + ( − 1 ) n − 1 u ( n − 1 ) v ( 0 ) + ( − 1 ) n ∫ u ( n ) v ( 0 ) d x . = ∑ k = 0 n − 1 ( − 1 ) k u ( k ) v ( n − 1 − k ) + ( − 1 ) n ∫ u ( n ) v ( 0 ) d x . {\displaystyle {\begin{aligned}\int u^{(0)}v^{(n)}\,dx&=u^{(0)}v^{(n-1)}-u^{(1)}v^{(n-2)}+u^{(2)}v^{(n-3)}-\cdots +(-1)^{n-1}u^{(n-1)}v^{(0)}+(-1)^{n}\int u^{(n)}v^{(0)}\,dx.\\[5pt]&=\sum _{k=0}^{n-1}(-1)^{k}u^{(k)}v^{(n-1-k)}+(-1)^{n}\int u^{(n)}v^{(0)}\,dx.\end{aligned}}} This concept may be useful when the successive integrals of v ( n ) {\displaystyle v^{(n)}} are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms), and when the nth derivative of u {\displaystyle u} vanishes (e.g., as a polynomial function with degree ( n − 1 ) {\displaystyle (n-1)} ). The latter condition stops the repeating of partial integration, because the RHS-integral vanishes. In the course of the above repetition of partial integrations the integrals ∫ u ( 0 ) v ( n ) d x {\displaystyle \int u^{(0)}v^{(n)}\,dx\quad } and ∫ u ( ℓ ) v ( n − ℓ ) d x {\displaystyle \quad \int u^{(\ell )}v^{(n-\ell )}\,dx\quad } and ∫ u ( m ) v ( n − m ) d x for 1 ≤ m , ℓ ≤ n {\displaystyle \quad \int u^{(m)}v^{(n-m)}\,dx\quad {\text{ for }}1\leq m,\ell \leq n} get related. This may be interpreted as arbitrarily "shifting" derivatives between v {\displaystyle v} and u {\displaystyle u} within the integrand, and proves useful, too (see Rodrigues' formula). === Tabular integration by parts === The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration" and was featured in the film Stand and Deliver (1988). 
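The twice-repeated formula above can be checked numerically in its definite form, ∫ₐᵇ u v″ dx = [uv′ − u′v]ₐᵇ + ∫ₐᵇ u″ v dx. A minimal sketch in plain Python, taking u = x³ and v = eˣ (so that v″ = eˣ); the names and the Simpson-rule helper are illustrative:

```python
import math

def simpson(fn, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = fn(a) + fn(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * fn(a + i * h)
    return s * h / 3

u   = lambda x: x**3       # u
du  = lambda x: 3 * x**2   # u'
d2u = lambda x: 6 * x      # u''
v = dv = d2v = math.exp    # e^x is its own derivative and antiderivative

a, b = 0.0, 1.0
lhs = simpson(lambda x: u(x) * d2v(x), a, b)
boundary = (u(b) * dv(b) - du(b) * v(b)) - (u(a) * dv(a) - du(a) * v(a))
rhs = boundary + simpson(lambda x: d2u(x) * v(x), a, b)
assert abs(lhs - rhs) < 1e-10
```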
For example, consider the integral ∫ x 3 cos ⁡ x d x {\displaystyle \int x^{3}\cos x\,dx\quad } and take u ( 0 ) = x 3 , v ( n ) = cos ⁡ x . {\displaystyle \quad u^{(0)}=x^{3},\quad v^{(n)}=\cos x.} Begin to list in column A the function u ( 0 ) = x 3 {\displaystyle u^{(0)}=x^{3}} and its subsequent derivatives u ( i ) {\displaystyle u^{(i)}} until zero is reached. Then list in column B the function v ( n ) = cos ⁡ x {\displaystyle v^{(n)}=\cos x} and its subsequent integrals v ( n − i ) {\displaystyle v^{(n-i)}} until the size of column B is the same as that of column A. The result is as follows:

i = 0: sign +, column A: x^3, column B: cos x
i = 1: sign −, column A: 3x^2, column B: sin x
i = 2: sign +, column A: 6x, column B: −cos x
i = 3: sign −, column A: 6, column B: −sin x
i = 4: sign +, column A: 0, column B: cos x

The product of the entries in row i of columns A and B together with the respective sign give the relevant integrals in step i in the course of repeated integration by parts. Step i = 0 yields the original integral. For the complete result in step i > 0 the ith integral must be added to all the previous products (0 ≤ j < i) of the jth entry of column A and the (j + 1)st entry of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc. ...) with the given jth sign. This process comes to a natural halt, when the product, which yields the integral, is zero (i = 4 in the example). The complete result is the following (with the alternating signs in each term): ( + 1 ) ( x 3 ) ( sin ⁡ x ) ⏟ j = 0 + ( − 1 ) ( 3 x 2 ) ( − cos ⁡ x ) ⏟ j = 1 + ( + 1 ) ( 6 x ) ( − sin ⁡ x ) ⏟ j = 2 + ( − 1 ) ( 6 ) ( cos ⁡ x ) ⏟ j = 3 + ∫ ( + 1 ) ( 0 ) ( cos ⁡ x ) d x ⏟ i = 4 : → C . {\displaystyle \underbrace {(+1)(x^{3})(\sin x)} _{j=0}+\underbrace {(-1)(3x^{2})(-\cos x)} _{j=1}+\underbrace {(+1)(6x)(-\sin x)} _{j=2}+\underbrace {(-1)(6)(\cos x)} _{j=3}+\underbrace {\int (+1)(0)(\cos x)\,dx} _{i=4:\;\to \;C}.} This yields ∫ x 3 cos ⁡ x d x ⏟ step 0 = x 3 sin ⁡ x + 3 x 2 cos ⁡ x − 6 x sin ⁡ x − 6 cos ⁡ x + C .
{\displaystyle \underbrace {\int x^{3}\cos x\,dx} _{\text{step 0}}=x^{3}\sin x+3x^{2}\cos x-6x\sin x-6\cos x+C.} The repeated partial integration also turns out to be useful when, in the course of respectively differentiating and integrating the functions u ( i ) {\displaystyle u^{(i)}} and v ( n − i ) {\displaystyle v^{(n-i)}} , their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index i. This can happen, as is to be expected, with exponentials and trigonometric functions. As an example consider ∫ e x cos ⁡ x d x . {\displaystyle \int e^{x}\cos x\,dx.} In this case the product of the terms in columns A and B with the appropriate sign for index i = 2 yields the negative of the original integrand (compare rows i = 0 and i = 2). ∫ e x cos ⁡ x d x ⏟ step 0 = ( + 1 ) ( e x ) ( sin ⁡ x ) ⏟ j = 0 + ( − 1 ) ( e x ) ( − cos ⁡ x ) ⏟ j = 1 + ∫ ( + 1 ) ( e x ) ( − cos ⁡ x ) d x ⏟ i = 2 . {\displaystyle \underbrace {\int e^{x}\cos x\,dx} _{\text{step 0}}=\underbrace {(+1)(e^{x})(\sin x)} _{j=0}+\underbrace {(-1)(e^{x})(-\cos x)} _{j=1}+\underbrace {\int (+1)(e^{x})(-\cos x)\,dx} _{i=2}.} Observing that the integral on the RHS can have its own constant of integration C ′ {\displaystyle C'} , and bringing the abstract integral to the other side, gives: 2 ∫ e x cos ⁡ x d x = e x sin ⁡ x + e x cos ⁡ x + C ′ , {\displaystyle 2\int e^{x}\cos x\,dx=e^{x}\sin x+e^{x}\cos x+C',} and finally: ∫ e x cos ⁡ x d x = 1 2 ( e x ( sin ⁡ x + cos ⁡ x ) ) + C , {\displaystyle \int e^{x}\cos x\,dx={\frac {1}{2}}\left(e^{x}(\sin x+\cos x)\right)+C,} where C = C ′ 2 {\displaystyle C={\frac {C'}{2}}} . == Higher dimensions == Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function u and a vector-valued function (vector field) V.
The product rule for divergence states: ∇ ⋅ ( u V ) = u ∇ ⋅ V + ∇ u ⋅ V . {\displaystyle \nabla \cdot (u\mathbf {V} )\ =\ u\,\nabla \cdot \mathbf {V} \ +\ \nabla u\cdot \mathbf {V} .} Suppose Ω {\displaystyle \Omega } is an open bounded subset of R n {\displaystyle \mathbb {R} ^{n}} with a piecewise smooth boundary Γ = ∂ Ω {\displaystyle \Gamma =\partial \Omega } . Integrating over Ω {\displaystyle \Omega } with respect to the standard volume form d Ω {\displaystyle d\Omega } , and applying the divergence theorem, gives: ∫ Γ u V ⋅ n ^ d Γ = ∫ Ω ∇ ⋅ ( u V ) d Ω = ∫ Ω u ∇ ⋅ V d Ω + ∫ Ω ∇ u ⋅ V d Ω , {\displaystyle \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma \ =\ \int _{\Omega }\nabla \cdot (u\mathbf {V} )\,d\Omega \ =\ \int _{\Omega }u\,\nabla \cdot \mathbf {V} \,d\Omega \ +\ \int _{\Omega }\nabla u\cdot \mathbf {V} \,d\Omega ,} where n ^ {\displaystyle {\hat {\mathbf {n} }}} is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form d Γ {\displaystyle d\Gamma } . Rearranging gives: ∫ Ω u ∇ ⋅ V d Ω = ∫ Γ u V ⋅ n ^ d Γ − ∫ Ω ∇ u ⋅ V d Ω , {\displaystyle \int _{\Omega }u\,\nabla \cdot \mathbf {V} \,d\Omega \ =\ \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }\nabla u\cdot \mathbf {V} \,d\Omega ,} or in other words ∫ Ω u div ⁡ ( V ) d Ω = ∫ Γ u V ⋅ n ^ d Γ − ∫ Ω grad ⁡ ( u ) ⋅ V d Ω . {\displaystyle \int _{\Omega }u\,\operatorname {div} (\mathbf {V} )\,d\Omega \ =\ \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }\operatorname {grad} (u)\cdot \mathbf {V} \,d\Omega .} The regularity requirements of the theorem can be relaxed. For instance, the boundary Γ = ∂ Ω {\displaystyle \Gamma =\partial \Omega } need only be Lipschitz continuous, and the functions u, v need only lie in the Sobolev space H 1 ( Ω ) {\displaystyle H^{1}(\Omega )} . 
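As a sanity check, the product rule for the divergence used at the start of this derivation can be verified symbolically. The sketch below works in two dimensions with an arbitrarily chosen scalar field u and vector field V (both purely illustrative), using the sympy library:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 * sp.sin(y)                 # arbitrary scalar field
V = sp.Matrix([sp.exp(y), x * y])    # arbitrary vector field

def div(F):
    """Divergence of a 2-D vector field F = (F0, F1)."""
    return sp.diff(F[0], x) + sp.diff(F[1], y)

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])

lhs = div(u * V)                     # div(u V)
rhs = u * div(V) + grad_u.dot(V)     # u div(V) + grad(u) . V
assert sp.simplify(lhs - rhs) == 0   # the product rule holds
```

Integrating both sides of this identity over Ω and applying the divergence theorem to the left-hand side is exactly the derivation given above.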
=== Green's first identity === Consider the continuously differentiable vector fields U = u 1 e 1 + ⋯ + u n e n {\displaystyle \mathbf {U} =u_{1}\mathbf {e} _{1}+\cdots +u_{n}\mathbf {e} _{n}} and v e 1 , … , v e n {\displaystyle v\mathbf {e} _{1},\ldots ,v\mathbf {e} _{n}} , where e i {\displaystyle \mathbf {e} _{i}} is the i-th standard basis vector for i = 1 , … , n {\displaystyle i=1,\ldots ,n} . Now apply the above integration by parts to each u i {\displaystyle u_{i}} times the vector field v e i {\displaystyle v\mathbf {e} _{i}} : ∫ Ω u i ∂ v ∂ x i d Ω = ∫ Γ u i v e i ⋅ n ^ d Γ − ∫ Ω ∂ u i ∂ x i v d Ω . {\displaystyle \int _{\Omega }u_{i}{\frac {\partial v}{\partial x_{i}}}\,d\Omega \ =\ \int _{\Gamma }u_{i}v\,\mathbf {e} _{i}\cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }{\frac {\partial u_{i}}{\partial x_{i}}}v\,d\Omega .} Summing over i gives a new integration by parts formula: ∫ Ω U ⋅ ∇ v d Ω = ∫ Γ v U ⋅ n ^ d Γ − ∫ Ω v ∇ ⋅ U d Ω . {\displaystyle \int _{\Omega }\mathbf {U} \cdot \nabla v\,d\Omega \ =\ \int _{\Gamma }v\mathbf {U} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }v\,\nabla \cdot \mathbf {U} \,d\Omega .} The case U = ∇ u {\displaystyle \mathbf {U} =\nabla u} , where u ∈ C 2 ( Ω ¯ ) {\displaystyle u\in C^{2}({\bar {\Omega }})} , is known as the first of Green's identities: ∫ Ω ∇ u ⋅ ∇ v d Ω = ∫ Γ v ∇ u ⋅ n ^ d Γ − ∫ Ω v ∇ 2 u d Ω . {\displaystyle \int _{\Omega }\nabla u\cdot \nabla v\,d\Omega \ =\ \int _{\Gamma }v\,\nabla u\cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }v\,\nabla ^{2}u\,d\Omega .} == See also == Integration by parts for the Lebesgue–Stieltjes integral Integration by parts for semimartingales, involving their quadratic covariation. Integration by substitution Legendre transformation == Notes == == Further reading == Louis Brand (10 October 2013). Advanced Calculus: An Introduction to Classical Analysis. Courier Corporation. pp. 267–. ISBN 978-0-486-15799-3. Hoffmann, Laurence D.; Bradley, Gerald L. (2004). 
Calculus for Business, Economics, and the Social and Life Sciences (8th ed.). McGraw Hill Higher Education. pp. 450–464. ISBN 0-07-242432-X. Willard, Stephen (1976). Calculus and its Applications. Boston: Prindle, Weber & Schmidt. pp. 193–214. ISBN 0-87150-203-8. Washington, Allyn J. (1966). Technical Calculus with Analytic Geometry. Reading: Addison-Wesley. pp. 218–245. ISBN 0-8465-8603-7. == External links == "Integration by parts", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Integration by parts—from MathWorld
Wikipedia:Integration using Euler's formula#0
In integral calculus, Euler's formula for complex numbers may be used to evaluate integrals involving trigonometric functions. Using Euler's formula, any trigonometric function may be written in terms of complex exponential functions, namely e i x {\displaystyle e^{ix}} and e − i x {\displaystyle e^{-ix}} and then integrated. This technique is often simpler and faster than using trigonometric identities or integration by parts, and is sufficiently powerful to integrate any rational expression involving trigonometric functions. == Euler's formula == Euler's formula states that e i x = cos ⁡ x + i sin ⁡ x . {\displaystyle e^{ix}=\cos x+i\,\sin x.} Substituting − x {\displaystyle -x} for x {\displaystyle x} gives the equation e − i x = cos ⁡ x − i sin ⁡ x {\displaystyle e^{-ix}=\cos x-i\,\sin x} because cosine is an even function and sine is odd. These two equations can be solved for the sine and cosine to give cos ⁡ x = e i x + e − i x 2 and sin ⁡ x = e i x − e − i x 2 i . {\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}}\quad {\text{and}}\quad \sin x={\frac {e^{ix}-e^{-ix}}{2i}}.} == Examples == === First example === Consider the integral ∫ cos 2 ⁡ x d x . {\displaystyle \int \cos ^{2}x\,dx.} The standard approach to this integral is to use a half-angle formula to simplify the integrand. We can use Euler's identity instead: ∫ cos 2 ⁡ x d x = ∫ ( e i x + e − i x 2 ) 2 d x = 1 4 ∫ ( e 2 i x + 2 + e − 2 i x ) d x {\displaystyle {\begin{aligned}\int \cos ^{2}x\,dx\,&=\,\int \left({\frac {e^{ix}+e^{-ix}}{2}}\right)^{2}dx\\[6pt]&=\,{\frac {1}{4}}\int \left(e^{2ix}+2+e^{-2ix}\right)dx\end{aligned}}} At this point, it would be possible to change back to real numbers using the formula e2ix + e−2ix = 2 cos 2x. Alternatively, we can integrate the complex exponentials and not change back to trigonometric functions until the end: 1 4 ∫ ( e 2 i x + 2 + e − 2 i x ) d x = 1 4 ( e 2 i x 2 i + 2 x − e − 2 i x 2 i ) + C = 1 4 ( 2 x + sin ⁡ 2 x ) + C . 
{\displaystyle {\begin{aligned}{\frac {1}{4}}\int \left(e^{2ix}+2+e^{-2ix}\right)dx&={\frac {1}{4}}\left({\frac {e^{2ix}}{2i}}+2x-{\frac {e^{-2ix}}{2i}}\right)+C\\[6pt]&={\frac {1}{4}}\left(2x+\sin 2x\right)+C.\end{aligned}}} === Second example === Consider the integral ∫ sin 2 ⁡ x cos ⁡ 4 x d x . {\displaystyle \int \sin ^{2}x\cos 4x\,dx.} This integral would be extremely tedious to solve using trigonometric identities, but using Euler's identity makes it relatively painless: ∫ sin 2 ⁡ x cos ⁡ 4 x d x = ∫ ( e i x − e − i x 2 i ) 2 ( e 4 i x + e − 4 i x 2 ) d x = − 1 8 ∫ ( e 2 i x − 2 + e − 2 i x ) ( e 4 i x + e − 4 i x ) d x = − 1 8 ∫ ( e 6 i x − 2 e 4 i x + e 2 i x + e − 2 i x − 2 e − 4 i x + e − 6 i x ) d x . {\displaystyle {\begin{aligned}\int \sin ^{2}x\cos 4x\,dx&=\int \left({\frac {e^{ix}-e^{-ix}}{2i}}\right)^{2}\left({\frac {e^{4ix}+e^{-4ix}}{2}}\right)dx\\[6pt]&=-{\frac {1}{8}}\int \left(e^{2ix}-2+e^{-2ix}\right)\left(e^{4ix}+e^{-4ix}\right)dx\\[6pt]&=-{\frac {1}{8}}\int \left(e^{6ix}-2e^{4ix}+e^{2ix}+e^{-2ix}-2e^{-4ix}+e^{-6ix}\right)dx.\end{aligned}}} At this point we can either integrate directly, or we can first change the integrand to 2 cos 6x − 4 cos 4x + 2 cos 2x and continue from there. Either method gives ∫ sin 2 ⁡ x cos ⁡ 4 x d x = − 1 24 sin ⁡ 6 x + 1 8 sin ⁡ 4 x − 1 8 sin ⁡ 2 x + C . {\displaystyle \int \sin ^{2}x\cos 4x\,dx=-{\frac {1}{24}}\sin 6x+{\frac {1}{8}}\sin 4x-{\frac {1}{8}}\sin 2x+C.} == Using real parts == In addition to Euler's identity, it can be helpful to make judicious use of the real parts of complex expressions. For example, consider the integral ∫ e x cos ⁡ x d x . {\displaystyle \int e^{x}\cos x\,dx.} Since cos x is the real part of eix, we know that ∫ e x cos ⁡ x d x = Re ⁡ ∫ e x e i x d x . {\displaystyle \int e^{x}\cos x\,dx=\operatorname {Re} \int e^{x}e^{ix}\,dx.} The integral on the right is easy to evaluate: ∫ e x e i x d x = ∫ e ( 1 + i ) x d x = e ( 1 + i ) x 1 + i + C . 
{\displaystyle \int e^{x}e^{ix}\,dx=\int e^{(1+i)x}\,dx={\frac {e^{(1+i)x}}{1+i}}+C.} Thus: ∫ e x cos ⁡ x d x = Re ⁡ ( e ( 1 + i ) x 1 + i ) + C = e x Re ⁡ ( e i x 1 + i ) + C = e x Re ⁡ ( e i x ( 1 − i ) 2 ) + C = e x cos ⁡ x + sin ⁡ x 2 + C . {\displaystyle {\begin{aligned}\int e^{x}\cos x\,dx&=\operatorname {Re} \left({\frac {e^{(1+i)x}}{1+i}}\right)+C\\[6pt]&=e^{x}\operatorname {Re} \left({\frac {e^{ix}}{1+i}}\right)+C\\[6pt]&=e^{x}\operatorname {Re} \left({\frac {e^{ix}(1-i)}{2}}\right)+C\\[6pt]&=e^{x}{\frac {\cos x+\sin x}{2}}+C.\end{aligned}}} == Fractions == In general, this technique may be used to evaluate any fractions involving trigonometric functions. For example, consider the integral ∫ 1 + cos 2 ⁡ x cos ⁡ x + cos ⁡ 3 x d x . {\displaystyle \int {\frac {1+\cos ^{2}x}{\cos x+\cos 3x}}\,dx.} Using Euler's identity, this integral becomes 1 2 ∫ 6 + e 2 i x + e − 2 i x e i x + e − i x + e 3 i x + e − 3 i x d x . {\displaystyle {\frac {1}{2}}\int {\frac {6+e^{2ix}+e^{-2ix}}{e^{ix}+e^{-ix}+e^{3ix}+e^{-3ix}}}\,dx.} If we now make the substitution u = e i x {\displaystyle u=e^{ix}} , the result is the integral of a rational function: − i 2 ∫ 1 + 6 u 2 + u 4 1 + u 2 + u 4 + u 6 d u . {\displaystyle -{\frac {i}{2}}\int {\frac {1+6u^{2}+u^{4}}{1+u^{2}+u^{4}+u^{6}}}\,du.} One may proceed using partial fraction decomposition. == See also == Trigonometric substitution Weierstrass substitution Euler substitution == References ==
Wikipedia:Intensity-duration-frequency curve#0
An intensity-duration-frequency curve (IDF curve) is a mathematical function that relates the intensity of an event (e.g. rainfall) to its duration and frequency of occurrence. Frequency is the inverse of the probability of occurrence. These curves are commonly used in hydrology for flood forecasting and in civil engineering for urban drainage design. However, IDF curves are also analysed in hydrometeorology because of the interest in the time concentration or time-structure of rainfall, and it is also possible to define IDF curves for drought events. Additionally, applications of IDF curves to risk-based design are emerging outside of hydrometeorology; for example, some authors have developed IDF curves for food supply chain inflow shocks to US cities. == Mathematical approaches == The IDF curves can take different mathematical expressions, theoretical or empirically fitted to observed event data. For each duration (e.g. 5, 10, 60, 120, 180 ... minutes), the empirical cumulative distribution function (ECDF) is estimated, and a determined frequency or return period is set. The empirical IDF curve is therefore given by the union of the points of equal frequency of occurrence across the different durations and intensities. Likewise, a theoretical or semi-empirical IDF curve is one whose mathematical expression is physically justified, but which presents parameters that must be estimated by empirical fits.
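As an illustration of the empirical route, annual-maximum intensities observed at one duration can be ranked to estimate return periods. The sketch below uses the Weibull plotting position T = (n + 1)/rank, one common convention; the data are invented for illustration:

```python
import numpy as np

def ecdf_return_periods(intensities):
    """Estimate return periods (years) for annual-maximum intensities
    at a single duration, using the Weibull plotting position
    T = (n + 1) / rank, with rank 1 for the largest value."""
    x = np.sort(np.asarray(intensities, dtype=float))[::-1]  # descending
    ranks = np.arange(1, len(x) + 1)
    T = (len(x) + 1) / ranks
    return x, T

# Hypothetical annual-maximum 60-minute intensities (mm/h)
intens, T = ecdf_return_periods([22.0, 35.5, 18.2, 41.0, 27.3])
assert intens[0] == 41.0   # the largest intensity ...
assert T[0] == 6.0         # ... has the longest estimated return period
```

Repeating this for each duration and joining the points of equal return period across durations traces out the empirical IDF curve.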
=== Empirical approaches === There is a large number of empirical approaches that relate the intensity (I), the duration (t) and the return period (p), from fits to power laws such as: Sherman's formula, with three parameters (a, c and n), which are a function of the return period, p: I ( t ) = a ( t + c ) n {\displaystyle I(t)={\frac {a}{(t+c)^{n}}}} Chow's formula, also with three parameters (a, c and n), for a particular return period p: I ( t ) = a t n + c {\displaystyle I(t)={\frac {a}{t^{n}+c}}} Power law according to Aparicio (1997), with four parameters (a, c, m and n), already adjusted for all return periods of interest: I ( t , p ) = a ⋅ p m ( t + c ) n {\displaystyle I(t,p)=a\cdot {\frac {p^{m}}{(t+c)^{n}}}} In hydrometeorology, the simple power law (taking c = 0 {\displaystyle \ c=0} ) is used as a measure of the time-structure of the rainfall: I ( t ) = a t n = I o ( t o t ) n {\displaystyle I(t)={\frac {a}{t^{n}}}=I_{o}\left({\frac {t_{o}}{t}}\right)^{n}} where I o {\displaystyle \ I_{o}} is defined as an intensity of reference for a fixed time t o {\displaystyle \ t_{o}} , i.e. a = I o t o n {\displaystyle \ a=I_{o}t_{o}^{n}} , and n {\displaystyle \ n} is a non-dimensional parameter known as n-index. In a rainfall event, the equivalent to the IDF curve is called Maximum Averaged Intensity (MAI) curve. 
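As a numerical illustration of the empirical formulas above, Sherman's formula can be evaluated directly; the parameter values below are purely hypothetical, chosen only to show the characteristic decay of intensity with duration:

```python
import numpy as np

def sherman_intensity(t, a, c, n):
    """Sherman's IDF formula I(t) = a / (t + c)**n for one return period."""
    return a / (t + c) ** n

# Hypothetical parameters for a single return period
a, c, n = 1200.0, 10.0, 0.8
durations = np.array([5.0, 10.0, 30.0, 60.0, 120.0])   # minutes
intensities = sherman_intensity(durations, a, c, n)
# An IDF curve must be decreasing in duration
assert np.all(np.diff(intensities) < 0)
```

Chow's formula or the simple power law (the case c = 0) can be handled the same way, with the parameters obtained by least-squares fitting to observed data, e.g. via scipy.optimize.curve_fit.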
=== Theoretical approaches === To get an IDF curves from a probability distribution, F ( x ) {\displaystyle \ F(x)} it is necessary to mathematically isolate the total amount or depth of the event x {\displaystyle \ x} , which is directly related to the average intensity I {\displaystyle \ I} and the duration t {\displaystyle \ t} , by the equation x = I t {\displaystyle \ x=It} , and since the return period p {\displaystyle p} is defined as the inverse of 1 − F ( x ) {\displaystyle \ 1-F(x)} , the function f ( p ) {\displaystyle \ f(p)} is found as the inverse of F ( x ) {\displaystyle \ F(x)} , according to: I t = f ( p ) ⇐ p = 1 1 − F ( I t ) {\displaystyle It=f(p)\quad \Leftarrow \quad p={\frac {1}{1-F(It)}}} Power law with the return period, derived from the Pareto distribution, for a fixed duration t {\displaystyle \ t} : I ( p ) = k p m ⇐ F ( I t ) = 1 − ( k t I t ) 1 / m = 1 − 1 p {\displaystyle \ I(p)=kp^{m}\quad \Leftarrow \quad F(It)=1-\left({\frac {kt}{It}}\right)^{1/m}=1-{\frac {1}{p}}} where the Pareto distribution constant has been redefined as k ′ = k t {\displaystyle \ k'=kt} , since it is a valid distribution for a specific duration of the event, it has been taken as x = I t {\displaystyle \ x=It} . Function derived from the generalized Pareto distribution, for a given duration t {\displaystyle \ t} : I ( p ) = { μ + σ m ⋅ ( p m − 1 ) ⇐ F ( I ) = 1 − ( 1 + m ( I − μ ) σ ) − 1 / m = 1 − 1 p if m > 0 , μ + σ ln ⁡ ( p ) ⇐ F ( I ) = 1 − exp ⁡ ( − I − μ σ ) = 1 − 1 p if m = 0. 
{\displaystyle I(p)={\begin{cases}\mu +{\frac {\sigma }{m}}\cdot (p^{m}-1)\quad \Leftarrow \quad F(I)=1-\left(1+{\frac {m(I-\mu )}{\sigma }}\right)^{-1/m}=1-{\frac {1}{p}}&{\text{if }}m>0,\\\quad \mu +\sigma \ln(p)\quad \quad \Leftarrow \quad F(I)=1-\exp \left(-{\frac {I-\mu }{\sigma }}\right)=1-{\frac {1}{p}}&{\text{if }}m=0.\end{cases}}} Note that for m > 0 {\displaystyle \ m>0} and μ = σ m {\displaystyle \ \mu ={\frac {\sigma }{m}}} , the generalized Pareto distribution retrieves the simple form of the Pareto distribution, with k ′ = σ m {\displaystyle \ k'={\frac {\sigma }{m}}} . However, with m = 0 {\displaystyle \ m=0} the exponential distribution is retrieved. Function deduced from the Gumbel distribution and the opposite Gumbel distribution, for a given duration t {\displaystyle \ t} : I ( p ) = μ − σ ln ⁡ ( − ln ⁡ ( 1 − 1 p ) ) ⇐ F ( I ) = exp ⁡ ( − exp ⁡ ( − I − μ σ ) ) = 1 − 1 p {\displaystyle I(p)=\mu -\sigma \ln \left(-\ln \left(1-{\frac {1}{p}}\right)\right)\quad \Leftarrow \quad \quad F(I)=\exp \left(-\exp \left(-{\frac {I-\mu }{\sigma }}\right)\right)=1-{\frac {1}{p}}} I ( p ) = μ + σ ln ⁡ ( ln ⁡ p ) ⇐ F ( I ) = 1 − exp ⁡ ( − exp ⁡ ( I − μ σ ) ) = 1 − 1 p {\displaystyle I(p)=\mu +\sigma \ln(\ln p)\quad \quad \quad \quad \quad \Leftarrow \quad \quad F(I)=1-\exp \left(-\exp \left({\frac {I-\mu }{\sigma }}\right)\right)=1-{\frac {1}{p}}} == References ==
Wikipedia:Interchange of limiting operations#0
In mathematics, the study of interchange of limiting operations is one of the major concerns of mathematical analysis, in that two given limiting operations, say L and M, cannot be assumed to give the same result when applied in either order. One of the historical sources for this theory is the study of trigonometric series. == Formulation == In symbols, the assumption LM = ML, where the left-hand side means that M is applied first, then L, and vice versa on the right-hand side, is not a valid equation between mathematical operators, under all circumstances and for all operands. An algebraist would say that the operations do not commute. The approach taken in analysis is somewhat different. Conclusions that assume limiting operations do 'commute' are called formal. The analyst tries to delineate conditions under which such conclusions are valid; in other words mathematical rigour is established by the specification of some set of sufficient conditions for the formal analysis to hold. This approach justifies, for example, the notion of uniform convergence. It is relatively rare for such sufficient conditions to be also necessary, so that a sharper piece of analysis may extend the domain of validity of formal results. Professionally speaking, therefore, analysts push the envelope of techniques, and expand the meaning of well-behaved for a given context. G. H. Hardy wrote that "The problem of deciding whether two given limit operations are commutative is one of the most important in mathematics". An opinion apparently not in favour of the piece-wise approach, but of leaving analysis at the level of heuristic, was that of Richard Courant. == Examples == Examples abound, one of the simplest being that for a double sequence a_{m,n}: it is not necessarily the case that the operations of taking the limits as m → ∞ and as n → ∞ can be freely interchanged. For example take a_{m,n} = 2^{m−n}, in which taking the limit first with respect to n gives 0, and with respect to m gives ∞.
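The failure of interchange in this example is easy to observe numerically; a minimal sketch:

```python
def a(m, n):
    """The double sequence a_{m,n} = 2**(m - n)."""
    return 2.0 ** (m - n)

# Fix m and let n grow: a(m, n) -> 0, so lim_m lim_n a(m, n) = 0
tail_in_n = a(5, 200)
assert tail_in_n < 1e-12

# Fix n and let m grow: a(m, n) -> infinity, so lim_n lim_m a(m, n) = +inf
tail_in_m = a(200, 5)
assert tail_in_m > 1e12
```

The two iterated limits disagree (0 versus ∞), so the limiting operations cannot be interchanged here.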
Many of the fundamental results of infinitesimal calculus also fall into this category: the symmetry of partial derivatives, differentiation under the integral sign, and Fubini's theorem deal with the interchange of differentiation and integration operators. One of the major reasons why the Lebesgue integral is used is that theorems exist, such as the dominated convergence theorem, that give sufficient conditions under which integration and limit operation can be interchanged. Necessary and sufficient conditions for this interchange were discovered by Federico Cafiero. == List of related theorems == Interchange of limits: Moore-Osgood theorem Interchange of limit and infinite summation: Tannery's theorem Interchange of limit and derivatives: If a sequence of functions ( f n ) {\displaystyle (f_{n})} converges at at least one point and the derivatives converge uniformly, then ( f n ) {\displaystyle (f_{n})} converges uniformly as well, say to some function f {\displaystyle f} and the limiting function of the derivatives is f ′ {\displaystyle f'} . While this is often shown using the mean value theorem for real-valued functions, the same method can be applied for higher-dimensional functions by using the mean value inequality instead. Interchange of partial derivatives: Schwarz's theorem Interchange of integrals: Fubini's theorem Interchange of limit and integral: Dominated convergence theorem Vitali convergence theorem Fichera convergence theorem Cafiero convergence theorem Fatou's lemma Monotone convergence theorem for integrals (Beppo Levi's lemma) Interchange of derivative and integral: Leibniz integral rule == See also == Iterated limit Uniform convergence == Notes ==
Wikipedia:Intermediate Mathematical Challenge#0
The United Kingdom Mathematics Trust (UKMT) is a charity founded in 1996 to help with the education of children in mathematics within the UK. == History == The national mathematics competitions had existed prior to the formation of the trust, but the foundation of the UKMT in the summer of 1996 enabled them to be run collectively. The Senior Mathematical Challenge was formerly called the National Mathematics Contest. Founded in 1961, it was run by the Mathematical Association from 1975 until its adoption by the UKMT in 1996. The Junior and Intermediate Mathematical Challenges were the initiative of Tony Gardiner in 1987, and were run by him under the name of the United Kingdom Mathematics Foundation until 1996. In 1995, Gardiner advertised for the formation of a committee and for a host institution that would lead to the establishment of the UKMT, enabling the challenges to be run effectively together under one organization. == Mathematical Challenges == The UKMT runs a series of mathematics challenges to encourage children's interest in mathematics and to develop their skills. The three main challenges are: Junior Mathematical Challenge (UK year 8/S2 and below) Intermediate Mathematical Challenge (UK year 11/S4 and below) Senior Mathematical Challenge (UK year 13/S6 and below) == Certificates == In the Junior and Intermediate Challenges the top-scoring 50% of the entrants receive bronze, silver or gold certificates based on their mark in the paper. In the Senior Mathematical Challenge these certificates are awarded to the top-scoring 66% of the entries. In each case bronze, silver and gold certificates are awarded in the ratio 3 : 2 : 1. So in the Junior and Intermediate Challenges, the Gold award is achieved by the top 8-9% of the entrants, the Silver award by the next 16-17%, and the Bronze award by the next 25%.
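The arithmetic behind these percentages follows directly from splitting the awarded fraction of entrants in the ratio 3 : 2 : 1; a small sketch (the helper name is ours):

```python
def certificate_bands(awarded_fraction, ratio=(3, 2, 1)):
    """Split the awarded fraction of entrants into bronze, silver and
    gold bands in the given ratio (bronze : silver : gold = 3 : 2 : 1)."""
    total = sum(ratio)
    bronze, silver, gold = (awarded_fraction * r / total for r in ratio)
    return bronze, silver, gold

# Junior/Intermediate Challenges: the top 50% of entrants are awarded
bronze, silver, gold = certificate_bands(0.50)
assert bronze == 0.25                     # Bronze: 25% of entrants
assert round(silver * 100, 1) == 16.7     # Silver: about 16-17%
assert round(gold * 100, 1) == 8.3        # Gold: about 8-9%
```

For the Senior Challenge the same split of the top 66% gives roughly 33%, 22% and 11% for bronze, silver and gold respectively.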
In the past, only the top 40% of participants received a certificate in the Junior and Intermediate Challenges, and only the top 60% of participants received a certificate in the Senior Challenge. The ratio of bronze, silver, and gold has not changed, still being 3 : 2 : 1. == Junior Mathematical Challenge == The Junior Mathematical Challenge (JMC) is an introductory challenge for pupils in Year 8 or below (aged 13 or below), taking place in spring each year. This takes the form of twenty-five multiple choice questions to be sat in exam conditions, to be completed within one hour. The first fifteen questions are designed to be easier, and a pupil will gain 5 marks for getting a question in this section correct. Questions 16-20 are more difficult and are worth 6 marks. The last five questions are intended to be the most challenging and so are also worth 6 marks. Questions to which no answer is entered gain (and lose) 0 marks; in recent years there has been no negative marking, so wrong answers also score 0 marks. The top 40% of students (50% since the 2022 JMC) receive a certificate of varying level (Gold, Silver or Bronze) based on their score. === Junior Kangaroo === Over 10,000 participants from the JMC are invited to participate in the Junior Kangaroo. Most of the Junior Kangaroo participants are those who performed well in the JMC; however, the Junior Kangaroo is also open to discretionary entries for a fee. Similar to the JMC, the Junior Kangaroo is a 60-minute challenge consisting of 25 multiple-choice problems. Correct answers for Questions 1-15 earn 5 marks, and for Questions 16-25 earn 6 marks. Blank or incorrect answers are marked 0; there is no penalty for wrong answers. The top 25% of participants in the Junior Kangaroo receive a Certificate of Merit. === Junior Mathematical Olympiad === The highest 1200 scorers are also invited to take part in the Junior Mathematical Olympiad (JMO). Like the JMC, the JMO is sat in schools.
Students are given 120 minutes to complete the JMO. This is also divided into two sections. Part A is composed of 10 questions in which the candidate gives just the answer (not multiple choice), worth 10 marks (1 mark each). Part B consists of 6 questions and encourages students to write out full solutions. Each question in section B is worth 10 marks, and students are encouraged to write complete answers to 2-4 questions rather than hurry through incomplete answers to all 6. If a solution is judged to be incomplete, it is marked on a 0+ basis, with a maximum of 3 marks. If it has an evident logical strategy, it is marked on a 10- basis. The total mark for the whole paper is 70. Everyone who participates in this challenge will gain a certificate (Participation 75%, Distinction 25%), with the top 200 or so gaining medals (Gold, Silver, Bronze) and the top fifty winning a book prize. From 2025, Part A has been omitted. Section B has stayed the same, though it is no longer called Section B (it is now the only section). This changes the total number of questions to 10 and the total mark to 60; however, the time given for the JMO has stayed at 120 minutes. == Intermediate Mathematical Challenge == The Intermediate Mathematical Challenge (IMC) is aimed at school years equivalent to English Years 9-11, taking place in winter each year. Following the same structure as the JMC, this paper presents the student with twenty-five multiple choice questions to be done under exam conditions in one hour. The first fifteen questions are designed to be easier, and a pupil will gain 5 marks for getting a question in this section correct. Questions 16-20 are more difficult and are worth 6 marks, with a penalty of 1 mark for a wrong answer, which is intended to stop pupils guessing. The last five questions are intended to be the most challenging and so are also worth 6 marks, but with a 2-mark penalty for an incorrectly answered question.
Questions to which no answer is entered will gain (and lose) 0 marks. Again, the top 40% of students taking this challenge get a certificate. There are two follow-on rounds to this competition: The European Kangaroo and the Intermediate Mathematical Olympiad. Additionally, top performers can be selected for the National Mathematics Summer Schools. === Intermediate Mathematical Olympiad === To prevent this getting confused with the International Mathematical Olympiad, this is often abbreviated to the IMOK Olympiad (IMOK = Intermediate Mathematical Olympiad and Kangaroo). The IMOK is sat by the top 500 scorers from each school year in the Intermediate Maths Challenge and consists of three papers, 'Cayley', 'Hamilton' and 'Maclaurin' named after famous mathematicians. The paper the student will undertake depends on the year group that student is in (Cayley for those in year 9 and below, Hamilton for year 10 and Maclaurin for year 11). Each paper contains six questions. Each solution is marked out of 10 on a 0+ and 10- scale; that is to say, if an answer is judged incomplete or unfinished, it is awarded a few marks for progress and relevant observations, whereas if it is presented as complete and correct, marks are deducted for faults, poor reasoning, or unproven assumptions. As a result, it is quite uncommon for an answer to score a middling mark (e.g. 4–6). This makes the maximum mark out of 60. For a student to get two questions fully correct is considered "very good". All people taking part in this challenge will get a certificate (participation for the bottom 50%, merit for the next 25% and distinction for the top 25%). The mark boundaries for these certificates change every year, but normally around 30 marks will gain a Distinction. Those scoring highly (the top 50) will gain a book prize; again, this changes every year, with 44 marks required in the Maclaurin paper in 2006. 
Also, the top 100 candidates will receive a medal; bronze for Cayley, silver for Hamilton and gold for Maclaurin. === European Kangaroo === The European Kangaroo is a competition which follows the same structure as the AMC (Australian Mathematics Competition). There are twenty-five multiple-choice questions and no penalty marking. This paper is taken throughout Europe by over 3 million pupils from more than 37 countries. Two different Kangaroo papers follow on from the Intermediate Maths Challenge, and the next 5500 highest scorers below the Olympiad threshold are invited to take part (both papers are by invitation only). The Grey Kangaroo is sat by students in year 9 and below and the Pink Kangaroo is sat by those in years 10 and 11. The top 25% of scorers in each paper receive a certificate of merit and the rest receive a certificate of participation. All those who sit either Kangaroo also receive a keyfob containing a different mathematical puzzle each year. === National Mathematics Summer Schools === Selected by lottery, 48 of the top 1.5% of scorers in the IMC are invited to participate in one of three week-long National Mathematics Summer Schools in July. Each from a different school across the UK, the 24 boys and 24 girls take part in a range of activities, including daily lectures, designed to go beyond the GCSE syllabus and explore wider and more challenging areas of mathematics. The UKMT aims to "promote mathematical thinking" and "provide an opportunity for participants to meet other students and adults who enjoy mathematics". They were delivered virtually during the COVID-19 pandemic but had reverted to in-person events by 2022. == Senior Mathematical Challenge == The Senior Mathematical Challenge (SMC) takes place in late-autumn each year, and is open to students who are aged 19 or below and are not registered to attend a university.
The SMC consists of twenty-five multiple choice questions to be answered in 90 minutes. All candidates start with 25 marks; each correct answer is awarded 4 marks and 1 mark is deducted for each incorrect answer. This gives a score between 0 and 125 marks. Unlike in the JMC and IMC, the top 66% get one of the three certificates. Further, the 1000 highest scorers who are eligible to represent the UK at the International Mathematical Olympiad, together with any discretionary and international candidates, are invited to compete in the British Mathematical Olympiad, and the next around 6000 highest scorers are invited to sit the Senior Kangaroo. Discretionary candidates are those students who are entered by their mathematics teachers, on payment of a fee, who did not score quite well enough in the SMC, but who might cope well in the next round. === British Mathematical Olympiad === Round 1 of the Olympiad is a three-and-a-half-hour examination including six more difficult, long-answer questions, which serve to test entrants' problem-solving skills. As of 2005, a more accessible first question was added to the paper; before this, it only consisted of 5 questions. Approximately the 100 highest-scoring candidates from BMO1 are invited to sit BMO2, the follow-up round, which has the same time limit as BMO1 but in which 4 harder questions are posed. The top 24 scoring students from the second round are subsequently invited to a training camp at Trinity College, Cambridge or Oundle School for the first stage of the International Mathematical Olympiad UK team selection. === Senior Kangaroo === The Senior Kangaroo is a one-hour examination to which the next around 6000 highest scorers below the Olympiad threshold are invited. The paper consists of twenty questions, each of which requires a three-digit answer (leading zeros are used if the answer is less than 100, since the paper is marked by machine).
The top 25% of candidates receive a certificate of merit and the rest receive a certificate of participation. == Team Challenge == The UKMT Team Maths Challenge is an annual event. One team from each participating school, comprising four pupils selected from years 8 and 9 (ages 12–14), competes in a regional round. No more than 2 pupils on a team may be from Year 9. There are over 60 regional competitions in the UK, held between February and May. The winning team in each regional round, as well as a few high-scoring runners-up from throughout the country, are then invited to the National Final in London, usually in late June. There are four rounds: Group Questions, Cross-Numbers, Shuttle and Relay. (The previous Head-to-Head round has been replaced with another round, similar to the Mini-Relay used in the 2007 and 2008 National Finals.) In the National Final, however, an additional 'Poster Round' is added at the beginning. The Poster Round is a separate competition; since 2018, however, it has been worth up to six marks towards the main event. Four schools have won the Junior Maths Team competition at least twice: Queen Mary's Grammar School in Walsall, City of London School, St Olave's Grammar School, and Westminster Under School. == Senior Team Challenge == A pilot event for a competition similar to the Team Challenge, aimed at 16- to 18-year-olds, was launched in the autumn of 2007 and has been running ever since. The format is much the same, with a limit of two year 13 (Upper Sixth-Form) pupils per team. Regional finals take place between October and December, with the National Final in early February the following year. Previous winners are below: == British Mathematical Olympiad Subtrust == For more information see British Mathematical Olympiad Subtrust.
The British Mathematical Olympiad Subtrust is run by the UKMT. It organises the British Mathematical Olympiad as well as the UK Mathematical Olympiad for Girls, runs several training camps throughout the year (such as a winter camp in Hungary and an Easter camp at Trinity College, Cambridge), and handles other training and the selection of the IMO team. == See also == European Kangaroo British Mathematical Olympiad International Mathematical Olympiad International Mathematics Competition for University Students == References == == External links == United Kingdom Mathematics Trust website British Mathematical Olympiad Committee site International Mathematics Competition for University Students (IMC) site Junior Mathematical Challenge Sample Paper Intermediate Mathematical Challenge Sample Paper Senior Mathematical Challenge Sample Paper
Wikipedia:International Centre for Mathematical Sciences#0
The International Centre for Mathematical Sciences (ICMS) is a mathematical research centre based in Edinburgh. According to its website, the centre is "designed to bring together mathematicians and practitioners in science, industry and commerce for research workshops and other meetings." The centre was jointly established in 1990 by the University of Edinburgh and Heriot-Watt University, under the supervision of Professor Elmer Rees, with initial support from Edinburgh District Council, the Scottish Development Agency and the International Centre for Theoretical Physics. Its current operations are primarily funded by grants from the Engineering and Physical Sciences Research Council of the UK. In April 1994 the Centre moved to 14 India Street, Edinburgh, the birthplace of James Clerk Maxwell and home of the James Clerk Maxwell Foundation. In 2010 it was relocated to 15 South College Street to accommodate larger events. As of 2020, the ICMS is located within the newly established Bayes centre. The current scientific director (appointed in 2021) is Professor Minhyong Kim. The ICMS is a member of the European Mathematical Society. == Premises == From April 1994, the Centre rented from the James Clerk Maxwell Foundation accommodation at 14, India Street, the birthplace of James Clerk Maxwell. Increased activity necessitated removal in 2010 to a converted church in South College Street, and then in 2018 to its present location in the nearby Bayes Centre of the University of Edinburgh. == See also == Edinburgh Mathematical Society Isaac Newton Institute, Cambridge == References == == External links == ICMS Web Site
Wikipedia:International Journal of Algebra and Computation#0
The International Journal of Algebra and Computation is published by World Scientific, and contains articles on general mathematics as well as: combinatorial group theory and semigroup theory; universal algebra; algorithmic and computational problems in algebra; the theory of automata; formal language theory; the theory of computation; and theoretical computer science. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.719. == Abstracting and indexing == The journal is indexed in: ISI Alerting Services; CompuMath Citation Index; Science Citation Index; Current Contents/Physical, Chemical and Earth Sciences; Mathematical Reviews; INSPEC; Zentralblatt MATH; and Computer Abstracts.
Wikipedia:International Linear Algebra Society#0
The International Linear Algebra Society (ILAS) is a professional mathematical society organized to promote research and education in linear algebra, matrix theory and matrix computation. It serves the international community through conferences, publications, prizes and lectures. Membership in ILAS is open to all mathematicians and scientists interested in furthering its aims and participating in its activities. == History == ILAS was founded in 1989. Its genesis occurred at the Combinatorial Matrix Analysis Conference held at the University of Victoria in British Columbia, Canada, May 20–23, 1987, hosted by Dale Olesky and Pauline van den Driessche. ILAS was initially known as the International Matrix Group, founded in 1987. The founding officers of ILAS were Hans Schneider, President; Robert C. Thompson, Vice President; Daniel Hershkowitz, Secretary; and James R. Weaver, Treasurer. == ILAS Conferences == The inaugural meeting of ILAS took place at Brigham Young University (including one day at the Sundance Mountain Resort) in Provo, Utah, USA, from August 12–15, 1989. The organizing committee consisted of Wayne Barrett, Daniel Hershkowitz, Charles Johnson, Hans Schneider, and Robert C. Thompson. Much additional support came from Don Robinson, Chair of the BYU Mathematics Department, and James R. Weaver, ILAS Treasurer. The conference received support from Brigham Young University, the National Security Agency, and the National Science Foundation. There were 85 in attendance at the conference from 15 countries including Olga Taussky-Todd, a renowned mathematician in Matrix Theory. The proceedings of the Conference appeared in volume 150 of the journal Linear Algebra and Its Applications. The 2nd ILAS conference was held in Lisbon, Portugal, August 3–7, 1992. The chair of the organizing committee was José Dias da Silva. There were 150 participants from 27 countries and the conference was supported by 11 different organizations. 
The proceedings of the conference can be found in volumes 197–198 of Linear Algebra and Its Applications. ILAS conferences were held the next 4 years, alternating between the United States and Europe, before beginning the standard pattern of holding the Conference two of every three years (with a few exceptions). The number of participants at each ILAS conference has grown steadily through the years. The first ILAS conference outside of the United States and Europe was held in Haifa, Israel in 2001. The first in the Far East was in Shanghai in 2007 and the first in Latin America was in Cancun, Mexico in 2008. The complete list of locations hosting ILAS conferences follows: == Prizes and Special Lectures == ILAS has three prizes named after giants in Linear Algebra. The Hans Schneider Prize. A distinctive feature of the 3rd ILAS meeting held at the University of West Florida in Pensacola, Florida, March 17–20, 1993, was the institution of the Hans Schneider Prize. This prize was initiated thanks to a donation to ILAS from Hans Schneider, the first president of ILAS and a founding editor of the journal Linear Algebra and Its Applications. Typically, the prize is awarded every 3 years and has evolved as a prize to recognize a person's career. The ILAS Taussky–Todd Prize. Olga Taussky-Todd and John Todd have had a decisive impact on the development of theoretical and numerical linear algebra for over half a century. The ILAS Taussky–Todd Prize honors them for their many and varied mathematical achievements and for their efforts in promoting linear algebra and matrix theory. The prize is awarded once every three to four years recognizing a linear algebra researcher in their mid career. The ILAS Taussky–Todd Prize was originally referred to as the Taussky–Todd lecture, and was instituted at the 3rd ILAS meeting held at the University of West Florida in Pensacola, Florida, March 17–20, 1993. The ILAS Richard A. Brualdi Early Career Prize. The prize is named for Richard A. 
Brualdi, who has had a major impact on the field, especially in combinatorial matrix theory. In addition, he has been instrumental to the success of ILAS since its inception. The ILAS Richard A. Brualdi Early Career Prize was instituted in 2021 and is awarded every three years to an outstanding early career researcher in the field of linear algebra, for distinguished contributions to the field. In addition ILAS awards Special Lectures at ILAS conferences as well as conferences of collaborating mathematics organizations. == Publications == ILAS publishes an electronic journal – the Electronic Journal of Linear Algebra (ELA), founded in 1996. The first Editors-in-Chief were Volker Mehrmann and Daniel Hershkowitz. ELA is a platinum open access journal, meaning that it is free to all: no subscription and no article processing fee or page charges. ELA is an all-electronic journal that welcomes high quality mathematical articles that contribute new insights to matrix analysis and the various aspects of linear algebra and its applications. ELA sets high standards for refereeing while using conventional refereeing of articles that is carried out electronically. ILAS also produces and distributes IMAGE, a semiannual electronic bulletin founded in 1988 with Robert C. Thompson as its first Editor. IMAGE contains: essays related to linear algebra activities; feature articles; interviews of linear algebra experts; book reviews; brief reports on conferences; ILAS business notices; announcements of upcoming workshops and conferences; problems and solutions; and news about individual members. == Presidents == Hans Schneider, 1987–1996 Richard A. Brualdi, 1996–2002 Daniel Hershkowitz, 2002–2008 Stephen Kirkland, 2008–2014 Peter Šemrl, 2014–2020 Daniel B. 
Szyld, 2020–present == Collaborations with other mathematics organizations == ILAS collaborates with the Society for Industrial and Applied Mathematics (SIAM), the American Mathematical Society (AMS) and the International Workshop on Operator Theory and its Applications (IWOTA). The collaboration with SIAM started in 1999. The SIAM Activity Group on Linear Algebra (SIAG/LA) holds a conference every three years (when the year minus 2000 is divisible by 3). As part of the agreement, and to encourage interaction between ILAS and SIAG/LA members, the two societies do not hold conferences in the same year. As a result, ILAS holds conferences two out of every three years. In addition, the two societies exchange speakers with ILAS sponsoring two ILAS speakers at every triennial SIAM Applied Linear Algebra (SIAM ALA) meeting (organized by SIAG/LA) and with SIAM sponsoring a SIAM speaker at every ILAS conference. The first ILAS speakers at a SIAM ALA meeting were Hans Schneider and Hugo Woerdeman in 2000, and the first SIAM speakers at an ILAS conference were Michele Benzi and Misha Kilmer in 2002. The collaboration with AMS started in late 2020 with the establishment of ILAS as a partner in the Joint Mathematics Meetings (JMM). In this capacity ILAS will support a speaker for the "ILAS Lecture" at the JMM to be selected by ILAS. In addition, at least four special sessions at the JMM will be identified as ILAS special sessions, the contents of which will be determined by ILAS. The partnership took effect starting with the JMM 2022 held virtually. The collaboration with IWOTA started in 2017 with the establishment of the Israel Gohberg ILAS-IWOTA Lecture, which is funded by donations. This lecture series consists of biennial lectures either at an ILAS conference or at an IWOTA meeting. Israel Gohberg was the founding president of IWOTA and an active member of ILAS. The first Israel Gohberg ILAS-IWOTA Lecturer was Vern Paulsen at the 2021 IWOTA Lancaster UK meeting. 
== References == == External links == International Linear Algebra Society (ILAS) home page Electronic Journal of Linear Algebra (ELA) home page
Wikipedia:International Symposium on Symbolic and Algebraic Computation#0
ISSAC, the International Symposium on Symbolic and Algebraic Computation, is an academic conference in the field of computer algebra. ISSAC has been organized annually since 1988, typically in July. The conference is regularly sponsored by the Association for Computing Machinery special interest group SIGSAM, and the proceedings since 1989 have been published by ACM. ISSAC is considered one of the most influential conferences for the publication of scientific computing research. == History == The first ISSAC took place in Rome on 4–8 July 1988. It succeeded a series of meetings held between 1966 and 1987 under the names SYMSAM, SYMSAC, EUROCAL, EUROSAM and EUROCAM. == ISSAC Awards == The Richard D. Jenks Memorial Prize for excellence in software engineering applied to computer algebra has been awarded at ISSAC every other year since 2004. The ISSAC Distinguished Paper Award has been awarded since 2002 to authors who display excellence in areas that include, but are not limited to, algebraic computation, symbolic-numeric computation, and system design and implementation. The ISSAC Distinguished Student Author Award has been awarded since 2004 to authors who were students at the time their paper was submitted. == Conference topics == Typical topics include: exact linear algebra; polynomial system solving; symbolic summation; symbolic integration and computational differential algebra; computational group theory; symbolic-numeric algorithms; the design and implementation of computer algebra systems; applications of computer algebra. == See also == Journal of Symbolic Computation == References == == External links == ISSAC web page Bibliographic information about ISSAC at DBLP
Wikipedia:International Workshop on Operator Theory and its Applications#0
International Workshop on Operator Theory and its Applications (IWOTA) was started in 1981 to bring together mathematicians and engineers working on the operator-theoretic side of functional analysis and its applications to related fields. These include: differential equations and integral equations; complex analysis and harmonic analysis; linear systems and control theory; mathematical physics; signal processing; and numerical analysis. The other major branch of operator theory, operator algebras (C*- and von Neumann algebras), is not heavily represented at IWOTA and has its own conferences. IWOTA gathers leading experts from all over the world for an intense exchange of new results, information and opinions, and for tracing the future developments in the field. The IWOTA meetings provide opportunities for participants (including young researchers) to present their own work in invited and contributed talks, to interact with other researchers from around the globe, and to broaden their knowledge of the field. In addition, IWOTA emphasizes cross-disciplinary interaction among mathematicians, electrical engineers and mathematical physicists. In even years, the IWOTA workshop is a satellite meeting of the biennial International Symposium on the Mathematical Theory of Networks and Systems (MTNS). From its humble beginnings in the early 1980s, the IWOTA workshops grew to become one of the largest continuing conferences attended by the community of researchers in operator theory. == History of IWOTA == === First IWOTA Meeting === The International Workshop on Operator Theory and its Applications was started on August 1, 1981, adjacent to the International Symposium on Mathematical Theory of Networks and Systems (MTNS), with the goal of exposing operator theorists, even pure theorists, to recent developments in engineering (especially H-infinity methods in control theory) which had a significant intersection with operator theory.
Israel Gohberg was the visionary and driving force of IWOTA and president of the IWOTA Steering Committee. From the beginning, J. W. Helton and M. A. Kaashoek served as vice presidents of the steering committee. === West Meets East === Besides the excitement of mathematical discovery over the decades at IWOTA, there was great excitement when the curtain between Soviet bloc and Western operator theorists fell. Until 1990, these two collections of extremely strong mathematicians seldom met due to the tight restrictions on travel from and in the communist countries. When the curtain dropped, the western mathematicians knew the classic Soviet papers but had a spotty knowledge of much of what else their counterparts were doing. Gohberg was one of the operator theorists who knew both sides and he guided IWOTA, a western institution, in bringing (and funding) many prominent FSU bloc operator theorists to speak at the meetings. As the IWOTA programs demonstrate, this significantly accelerated the cultures' mutual assimilation. === Previous IWOTA Meetings === === IWOTA Proceedings === Proceedings of the IWOTA workshops appear in the Springer / Birkhäuser Verlag book series Operator Theory: Advances and Applications (OTAA) (founder: Israel Gohberg). While engineering conference proceedings often are handed to participants as they arrive and contain short papers on each conference talk, the IWOTA proceedings follow mathematics conference tradition and contain a modest number of papers and are published several years after the conference. 
=== Funding Sources === IWOTA has received support from many sources, including the National Science Foundation, the London Mathematical Society, the Engineering and Physical Sciences Research Council, Deutsche Forschungsgemeinschaft, Secretaría de Estado de Investigación, Desarrollo e Innovación (Spain), Australian Mathematical Sciences Institute, National Board for Higher Mathematics, International Centre for Theoretical Physics, Indian Statistical Institute, Korea Research Foundation, United States-India Science & Technology Endowment Fund, Nederlandse Organisatie voor Wetenschappelijk Onderzoek, the Commission for Developing Countries of the International Mathematical Union, Stichting Advancement of Mathematics (Netherlands), the National Research Foundation of South Africa, and Birkhäuser Publishing Ltd. == The IWOTA Steering Committee == IWOTA is directed by a steering committee which chooses the site for the next meeting, elects the chief local organizer(s) and ensures the appearance of the enduring themes of IWOTA. The sub-themes of an IWOTA workshop and the lecturers are chosen by the local organizing committee after consulting the steering committee's board. The board consists of its vice presidents: Joseph A. Ball, J. William Helton (Chair), Sanne ter Horst, Igor Klep, Christiane Tretter, Irene Sabadini, Victor Vinnikov and Hugo J. Woerdeman. In addition, past chief organizers who remain active in IWOTA are members of the steering committee. The board governs IWOTA with the consultation and consent of the full steering committee. Honorary members of the steering committee, elected in 2016, are: Israel Gohberg (deceased in 2009), Leiba Rodman (deceased in 2015), Tsuyoshi Ando, Harry Dym (deceased in 2024), Ciprian Foiaş (deceased in 2020), Heinz Langer (deceased in 2024), Nikolai Nikolski. The honorary member of the steering committee elected in 2024 is Rien Kaashoek.
== Future IWOTA Meetings == IWOTA 2025 will be held at University of Twente in Enschede, The Netherlands. Main organizer is Felix Schwenninger. Dates are July 14-18, 2025 IWOTA 2026 will be held at Université Laval in Quebec City, Canada. Main organizers are Javad Mashreghi and Frédéric Morneau-Guérin. Dates are August 3-7, 2026 == Israel Gohberg ILAS-IWOTA Lecture == The Israel Gohberg ILAS-IWOTA Lecture was introduced in August 2016 and honors the legacy of Israel Gohberg, whose research crossed borders between operator theory, linear algebra, and related fields. This lecture is in collaboration with the International Linear Algebra Society (ILAS). This series of lectures will be delivered at IWOTA and ILAS Conferences, in different years, in the approximate ratio two-thirds at IWOTA and one-third at ILAS. The first three lectures will take place at IWOTA Lancaster UK 2021, ILAS 2022, and IWOTA 2024. Donations for the Israel Gohberg ILAS-IWOTA Lecture Fund are most welcome and can be submitted via the ILAS donation form. Donations are tax deductible in the United States. == References == == External links == Operator Theory: Advances and Applications Series on Springer website IWOTA's YouTube Channel IWOTA 2000 - Bordeaux, France IWOTA 2006 - Seoul, Korea IWOTA 2007 - Potchefstroom, South Africa IWOTA 2008 - Williamsburg, Virginia, U.S.A IWOTA 2010 - Berlin, Germany IWOTA 2011 - Seville, Spain IWOTA 2012 - Sydney, Australia IWOTA 2013 - Bangalore, India IWOTA 2014 - Amsterdam, Netherlands IWOTA 2015 - Tbilisi, Georgia IWOTA 2016 - St. Louis, Missouri, USA IWOTA 2017 - Chemnitz, Germany IWOTA 2019 - Lisbon, Portugal IWOTA Chapman USA 2021 - Orange, California, USA IWOTA Lancaster UK 2021 - Lancaster, United Kingdom IWOTA 2022 - Kraków, Poland IWOTA 2023 - Helsinki, Finland IWOTA 2024 - Canterbury, United Kingdom IWOTA 2025 - Enschede, The Netherlands
Wikipedia:Introductio in analysin infinitorum#0
Introductio in analysin infinitorum (Latin: Introduction to the Analysis of the Infinite) is a two-volume work by Leonhard Euler which lays the foundations of mathematical analysis. Written in Latin and published in 1748, the Introductio contains 18 chapters in the first part and 22 chapters in the second. It has Eneström numbers E101 and E102. It is considered the first precalculus book. == Contents == Chapter 1 is on the concepts of variables and functions. Chapters 2 and 3 are concerned with the transformation of functions. Chapter 4 introduces infinite series through rational functions. According to Henk Bos, The Introduction is meant as a survey of concepts and methods in analysis and analytic geometry preliminary to the study of the differential and integral calculus. [Euler] made of this survey a masterly exercise in introducing as much as possible of analysis without using differentiation or integration. In particular, he introduced the elementary transcendental functions, the logarithm, the exponential function, the trigonometric functions and their inverses without recourse to integral calculus — which was no mean feat, as the logarithm was traditionally linked to the quadrature of the hyperbola and the trigonometric functions to the arc-length of the circle. Euler accomplished this feat by introducing exponentiation a^x for an arbitrary positive real constant a. He noted that the map x ↦ a^x is not an algebraic function, but rather a transcendental function. For a > 1 these functions are monotonically increasing and form bijections of the real line with positive real numbers. Then each base a corresponds to an inverse function called the logarithm to base a, in chapter 6. In chapter 7, Euler introduces e as the number whose hyperbolic logarithm is 1. The reference here is to Gregoire de Saint-Vincent who performed a quadrature of the hyperbola y = 1/x through description of the hyperbolic logarithm.
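Euler's characterization of e as the number whose hyperbolic logarithm is 1 (equivalently, the number for which the area under y = 1/x between 1 and e equals 1) can be checked symbolically. A small sketch in Python with SymPy; this is an illustrative verification, not part of Euler's text:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Saint-Vincent's quadrature: the hyperbolic (natural) logarithm of e,
# i.e. the area under y = 1/x between 1 and e, equals 1
area = sp.integrate(1 / x, (x, 1, sp.E))
assert area == 1

# a truncation of Euler's exponential series at z = 1 approximates e
partial_sum = sum(1 / sp.factorial(k) for k in range(12))
assert abs(float(partial_sum) - float(sp.E)) < 1e-7
```

The second check uses the exponential series discussed in chapter 7; twelve terms already agree with e to seven decimal places.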
Section 122 labels the logarithm to base e the "natural or hyperbolic logarithm...since the quadrature of the hyperbola can be expressed through these logarithms". Here he also gives the exponential series: exp ⁡ ( z ) = ∑ k = 0 ∞ z k k ! = 1 + z + z 2 2 + z 3 6 + z 4 24 + ⋯ {\displaystyle \exp(z)=\sum _{k=0}^{\infty }{z^{k} \over k!}=1+z+{z^{2} \over 2}+{z^{3} \over 6}+{z^{4} \over 24}+\cdots } Then in chapter 8 Euler is prepared to address the classical trigonometric functions as "transcendental quantities that arise from the circle." He uses the unit circle and presents Euler's formula. Chapter 9 considers trinomial factors in polynomials. Chapter 16 is concerned with partitions, a topic in number theory. Continued fractions are the topic of chapter 18. == Impact == Carl Benjamin Boyer's lectures at the 1950 International Congress of Mathematicians compared the influence of Euler's Introductio to that of Euclid's Elements, calling the Elements the foremost textbook of ancient times, and the Introductio "the foremost textbook of modern times". Boyer also wrote: The analysis of Euler comes close to the modern orthodox discipline, the study of functions by means of infinite processes, especially through infinite series. It is doubtful that any other essentially didactic work includes as large a portion of original material that survives in the college courses today...Can be read with comparative ease by the modern student...The prototype of modern textbooks. == English translations == The first translation into English was that by John D. Blanton, published in 1988. The second, by Ian Bruce, is available online. A list of the editions of Introductio has been assembled by V. Frederick Rickey. == Early mentions == J.C. 
Scriba (2007) review of 1983 reprint of 1885 German edition MR715928 == Reviews of Blanton translation 1988 == Doru Stefanescu MR1025504 Marco Panza (2007) MR2384380 Ricardo Quintero Zazueta (1999) MR1823258 Ernst Hairer & Gerhard Wanner (1996) Analysis by its History, chapter 1, pp 1 to 79, Undergraduate Texts in Mathematics #70, ISBN 978-0-387-77036-9 MR1410751 == References ==
Wikipedia:Invariant differential operator#0
In mathematics and theoretical physics, an invariant differential operator is a kind of mathematical map from some objects to an object of similar type. These objects are typically functions on R n {\displaystyle \mathbb {R} ^{n}} , functions on a manifold, vector valued functions, vector fields, or, more generally, sections of a vector bundle. In an invariant differential operator D {\displaystyle D} , the term differential operator indicates that the value D f {\displaystyle Df} of the map depends only on f ( x ) {\displaystyle f(x)} and the derivatives of f {\displaystyle f} in x {\displaystyle x} . The word invariant indicates that the operator contains some symmetry. This means that there is a group G {\displaystyle G} with a group action on the functions (or other objects in question) and this action is preserved by the operator: D ( g ⋅ f ) = g ⋅ ( D f ) . {\displaystyle D(g\cdot f)=g\cdot (Df).} Usually, the action of the group has the meaning of a change of coordinates (change of observer) and the invariance means that the operator has the same expression in all admissible coordinates. == Invariance on homogeneous spaces == Let M = G/H be a homogeneous space for a Lie group G and a Lie subgroup H. Every representation ρ : H → A u t ( V ) {\displaystyle \rho :H\rightarrow \mathrm {Aut} (\mathbb {V} )} gives rise to a vector bundle V = G × H V where ( g h , v ) ∼ ( g , ρ ( h ) v ) ∀ g ∈ G , h ∈ H and v ∈ V . {\displaystyle V=G\times _{H}\mathbb {V} \;{\text{where}}\;(gh,v)\sim (g,\rho (h)v)\;\forall \;g\in G,\;h\in H\;{\text{and}}\;v\in \mathbb {V} .} Sections φ ∈ Γ ( V ) {\displaystyle \varphi \in \Gamma (V)} can be identified with Γ ( V ) = { φ : G → V : φ ( g h ) = ρ ( h − 1 ) φ ( g ) ∀ g ∈ G , h ∈ H } . {\displaystyle \Gamma (V)=\{\varphi :G\rightarrow \mathbb {V} \;:\;\varphi (gh)=\rho (h^{-1})\varphi (g)\;\forall \;g\in G,\;h\in H\}.} In this form the group G acts on sections via ( ℓ g φ ) ( g ′ ) = φ ( g − 1 g ′ ) . 
{\displaystyle (\ell _{g}\varphi )(g')=\varphi (g^{-1}g').} Now let V and W be two vector bundles over M. Then a differential operator d : Γ ( V ) → Γ ( W ) {\displaystyle d:\Gamma (V)\rightarrow \Gamma (W)} that maps sections of V to sections of W is called invariant if d ( ℓ g φ ) = ℓ g ( d φ ) . {\displaystyle d(\ell _{g}\varphi )=\ell _{g}(d\varphi ).} for all sections φ {\displaystyle \varphi } in Γ ( V ) {\displaystyle \Gamma (V)} and elements g in G. All linear invariant differential operators on homogeneous parabolic geometries, i.e. when G is semi-simple and H is a parabolic subgroup, are given dually by homomorphisms of generalized Verma modules. == Invariance in terms of abstract indices == Given two connections ∇ {\displaystyle \nabla } and ∇ ^ {\displaystyle {\hat {\nabla }}} and a one form ω {\displaystyle \omega } , we have ∇ a ω b = ∇ ^ a ω b − Q a b c ω c {\displaystyle \nabla _{a}\omega _{b}={\hat {\nabla }}_{a}\omega _{b}-Q_{ab}{}^{c}\omega _{c}} for some tensor Q a b c {\displaystyle Q_{ab}{}^{c}} . Given an equivalence class of connections [ ∇ ] {\displaystyle [\nabla ]} , we say that an operator is invariant if the form of the operator does not change when we change from one connection in the equivalence class to another. For example, if we consider the equivalence class of all torsion free connections, then the tensor Q is symmetric in its lower indices, i.e. Q a b c = Q ( a b ) c {\displaystyle Q_{ab}{}^{c}=Q_{(ab)}{}^{c}} . Therefore we can compute ∇ [ a ω b ] = ∇ ^ [ a ω b ] , {\displaystyle \nabla _{[a}\omega _{b]}={\hat {\nabla }}_{[a}\omega _{b]},} where brackets denote skew symmetrization. This shows the invariance of the exterior derivative when acting on one forms. 
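The computation above can be verified symbolically in coordinates: two torsion-free connections differ by a tensor Q_{ab}{}^{c} symmetric in a and b, and the skew part of the covariant derivative of a one-form is unchanged. A minimal SymPy sketch in two dimensions (the symbol names are illustrative, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
n = 2

# components omega_b of a one-form, as arbitrary functions
w = [sp.Function(f'w{b}')(x, y) for b in range(n)]

# difference tensor Q_ab^c between two torsion-free connections,
# symmetric in the lower indices a, b by construction
Q = [[[sp.Function(f'Q{min(a, b)}{max(a, b)}{c}')(x, y)
       for c in range(n)] for b in range(n)] for a in range(n)]

def nabla(a, b, shifted):
    """nabla_a omega_b; the Christoffel terms common to both connections
    cancel in the comparison below, so only the partial derivative and
    the Q-shift of the second connection are kept."""
    expr = sp.diff(w[b], coords[a])
    if shifted:
        expr -= sum(Q[a][b][c] * w[c] for c in range(n))
    return expr

# nabla_[a omega_b] agrees for the two connections
for a in range(n):
    for b in range(n):
        skew_hat = nabla(a, b, True) - nabla(b, a, True)
        skew = nabla(a, b, False) - nabla(b, a, False)
        assert sp.simplify(skew_hat - skew) == 0
```

The Q-terms drop out of the skew-symmetrization precisely because Q is symmetric in its lower indices, mirroring the abstract-index argument.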
Equivalence classes of connections arise naturally in differential geometry, for example: in conformal geometry an equivalence class of connections is given by the Levi Civita connections of all metrics in the conformal class; in projective geometry an equivalence class of connection is given by all connections that have the same geodesics; in CR geometry an equivalence class of connections is given by the Tanaka-Webster connections for each choice of pseudohermitian structure == Examples == The usual gradient operator ∇ {\displaystyle \nabla } acting on real valued functions on Euclidean space is invariant with respect to all Euclidean transformations. The differential acting on functions on a manifold with values in 1-forms (its expression is d = ∑ j ∂ j d x j {\displaystyle d=\sum _{j}\partial _{j}\,dx_{j}} in any local coordinates) is invariant with respect to all smooth transformations of the manifold (the action of the transformation on differential forms is just the pullback). More generally, the exterior derivative d : Ω n ( M ) → Ω n + 1 ( M ) {\displaystyle d:\Omega ^{n}(M)\rightarrow \Omega ^{n+1}(M)} that acts on n-forms of any smooth manifold M is invariant with respect to all smooth transformations. It can be shown that the exterior derivative is the only linear invariant differential operator between those bundles. The Dirac operator in physics is invariant with respect to the Poincaré group (if we choose the proper action of the Poincaré group on spinor valued functions. This is, however, a subtle question and if we want to make this mathematically rigorous, we should say that it is invariant with respect to a group which is a double cover of the Poincaré group) The conformal Killing equation X a ↦ ∇ ( a X b ) − 1 n ∇ c X c g a b {\displaystyle X^{a}\mapsto \nabla _{(a}X_{b)}-{\frac {1}{n}}\nabla _{c}X^{c}g_{ab}} is a conformally invariant linear differential operator between vector fields and symmetric trace-free tensors. 
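The invariance of the differential d under smooth transformations can likewise be checked in coordinates for a 0-form: applying d and then pulling back along a map gives the same 1-form as pulling back first and then applying d. A SymPy sketch with an illustrative map and function (neither comes from the text):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# an illustrative smooth map phi: (x, y) -> (u, v)
phi = {u: x + y**2, v: x * y}

# an illustrative function (0-form) on the target chart
f = sp.sin(u) * v + u**2

# pull back df = f_u du + f_v dv: substitute u(x,y), v(x,y) and expand
# du, dv via the chain rule; collect the dx and dy components
fu, fv = sp.diff(f, u), sp.diff(f, v)
pb_df_dx = (fu * sp.diff(phi[u], x) + fv * sp.diff(phi[v], x)).subs(phi)
pb_df_dy = (fu * sp.diff(phi[u], y) + fv * sp.diff(phi[v], y)).subs(phi)

# apply d to the pulled-back function f(u(x,y), v(x,y))
g = f.subs(phi)
d_pb_f_dx, d_pb_f_dy = sp.diff(g, x), sp.diff(g, y)

assert sp.simplify(pb_df_dx - d_pb_f_dx) == 0
assert sp.simplify(pb_df_dy - d_pb_f_dy) == 0
```

The agreement is just the chain rule, which is exactly why d has the same expression in all coordinate systems.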
== Conformal invariance == Given a metric g ( x , y ) = x 1 y n + 2 + x n + 2 y 1 + ∑ i = 2 n + 1 x i y i {\displaystyle g(x,y)=x_{1}y_{n+2}+x_{n+2}y_{1}+\sum _{i=2}^{n+1}x_{i}y_{i}} on R n + 2 {\displaystyle \mathbb {R} ^{n+2}} , we can write the sphere S n {\displaystyle S^{n}} as the space of generators of the nil cone S n = { [ x ] ∈ R P n + 1 : g ( x , x ) = 0 } . {\displaystyle S^{n}=\{[x]\in \mathbb {RP} _{n+1}\;:\;g(x,x)=0\}.} In this way, the flat model of conformal geometry is the sphere S n = G / P {\displaystyle S^{n}=G/P} with G = S O 0 ( n + 1 , 1 ) {\displaystyle G=SO_{0}(n+1,1)} and P the stabilizer of a point in R n + 2 {\displaystyle \mathbb {R} ^{n+2}} . A classification of all linear conformally invariant differential operators on the sphere is known (Eastwood and Rice, 1987). == See also == Differential operators Laplace invariant Invariant factorization of LPDOs == Notes == == References == Slovák, Jan (1993). Invariant Operators on Conformal Manifolds. Research Lecture Notes, University of Vienna (Dissertation). Kolář, Ivan; Michor, Peter; Slovák, Jan (1993). Natural operators in differential geometry (PDF). Springer-Verlag, Berlin, Heidelberg, New York. Archived from the original (PDF) on 2017-03-30. Retrieved 2011-01-05. Eastwood, M. G.; Rice, J. W. (1987). "Conformally invariant differential operators on Minkowski space and their curved analogues". Commun. Math. Phys. 109 (2): 207–228. Bibcode:1987CMaPh.109..207E. doi:10.1007/BF01215221. S2CID 121161256. Kroeske, Jens (2008). "Invariant bilinear differential pairings on parabolic geometries". PhD Thesis from the University of Adelaide. arXiv:0904.3311. Bibcode:2009PhDT.......274K.
Invariant factorization of LPDOs
The factorization of a linear partial differential operator (LPDO) is an important issue in the theory of integrability, due to the Laplace-Darboux transformations, which allow the construction of integrable LPDEs. Laplace solved the factorization problem for a bivariate hyperbolic operator of the second order (see Hyperbolic partial differential equation), constructing two Laplace invariants. Each Laplace invariant is an explicit polynomial condition of factorization; the coefficients of this polynomial are explicit functions of the coefficients of the initial LPDO. The polynomial conditions of factorization are called invariants because they have the same form for equivalent (i.e. gauge-related) operators. Beals-Kartashova-factorization (also called BK-factorization) is a constructive procedure to factorize a bivariate operator of arbitrary order and arbitrary form. Correspondingly, the factorization conditions in this case also have polynomial form, are invariants, and coincide with the Laplace invariants for bivariate hyperbolic operators of the second order. The factorization procedure is purely algebraic, the number of possible factorizations depending on the number of simple roots of the characteristic polynomial (also called the symbol) of the initial LPDO and of the reduced LPDOs appearing at each factorization step. Below, the factorization procedure is described for a bivariate operator of arbitrary form, of orders 2 and 3. Explicit factorization formulas for an operator of order n {\displaystyle n} can be found in Beals & Kartashova (2005); general invariants are defined in Kartashova (2006), and an invariant formulation of the Beals-Kartashova factorization is given in Kartashova & Rudenko (2006) (see References below). == Beals-Kartashova Factorization == === Operator of order 2 === Consider an operator A 2 = a 20 ∂ x 2 + a 11 ∂ x ∂ y + a 02 ∂ y 2 + a 10 ∂ x + a 01 ∂ y + a 00 .
{\displaystyle {\mathcal {A}}_{2}=a_{20}\partial _{x}^{2}+a_{11}\partial _{x}\partial _{y}+a_{02}\partial _{y}^{2}+a_{10}\partial _{x}+a_{01}\partial _{y}+a_{00}.} with smooth coefficients and look for a factorization A 2 = ( p 1 ∂ x + p 2 ∂ y + p 3 ) ( p 4 ∂ x + p 5 ∂ y + p 6 ) . {\displaystyle {\mathcal {A}}_{2}=(p_{1}\partial _{x}+p_{2}\partial _{y}+p_{3})(p_{4}\partial _{x}+p_{5}\partial _{y}+p_{6}).} Let us write down the equations for p i {\displaystyle p_{i}} explicitly, keeping in mind the rule of left composition, i.e. that ∂ x ( α ∂ y ) = ∂ x ( α ) ∂ y + α ∂ x y . {\displaystyle \partial _{x}(\alpha \partial _{y})=\partial _{x}(\alpha )\partial _{y}+\alpha \partial _{xy}.} Then in all cases a 20 = p 1 p 4 , {\displaystyle a_{20}=p_{1}p_{4},} a 11 = p 2 p 4 + p 1 p 5 , {\displaystyle a_{11}=p_{2}p_{4}+p_{1}p_{5},} a 02 = p 2 p 5 , {\displaystyle a_{02}=p_{2}p_{5},} a 10 = L ( p 4 ) + p 3 p 4 + p 1 p 6 , {\displaystyle a_{10}={\mathcal {L}}(p_{4})+p_{3}p_{4}+p_{1}p_{6},} a 01 = L ( p 5 ) + p 3 p 5 + p 2 p 6 , {\displaystyle a_{01}={\mathcal {L}}(p_{5})+p_{3}p_{5}+p_{2}p_{6},} a 00 = L ( p 6 ) + p 3 p 6 , {\displaystyle a_{00}={\mathcal {L}}(p_{6})+p_{3}p_{6},} where the notation L = p 1 ∂ x + p 2 ∂ y {\displaystyle {\mathcal {L}}=p_{1}\partial _{x}+p_{2}\partial _{y}} is used. Without loss of generality, a 20 ≠ 0 , {\displaystyle a_{20}\neq 0,} i.e. p 1 ≠ 0 , {\displaystyle p_{1}\neq 0,} and it can be taken as 1, p 1 = 1. {\displaystyle p_{1}=1.} Now the solution of the system of six equations in the variables p 2 , {\displaystyle p_{2},} . . . {\displaystyle ...} p 6 {\displaystyle p_{6}} can be found in three steps. At the first step, the roots of a quadratic polynomial have to be found. At the second step, a linear system of two algebraic equations has to be solved. At the third step, one algebraic condition has to be checked. Step 1.
Variables p 2 , {\displaystyle p_{2},} p 4 , {\displaystyle p_{4},} p 5 {\displaystyle p_{5}} can be found from the first three equations, a 20 = p 1 p 4 , {\displaystyle a_{20}=p_{1}p_{4},} a 11 = p 2 p 4 + p 1 p 5 , {\displaystyle a_{11}=p_{2}p_{4}+p_{1}p_{5},} a 02 = p 2 p 5 . {\displaystyle a_{02}=p_{2}p_{5}.} The possible solutions are then functions of the roots of a quadratic polynomial: P 2 ( − p 2 ) = a 20 ( − p 2 ) 2 + a 11 ( − p 2 ) + a 02 = 0 {\displaystyle {\mathcal {P}}_{2}(-p_{2})=a_{20}(-p_{2})^{2}+a_{11}(-p_{2})+a_{02}=0} Let ω {\displaystyle \omega } be a root of the polynomial P 2 , {\displaystyle {\mathcal {P}}_{2},} then p 1 = 1 , {\displaystyle p_{1}=1,} p 2 = − ω , {\displaystyle p_{2}=-\omega ,} p 4 = a 20 , {\displaystyle p_{4}=a_{20},} p 5 = a 20 ω + a 11 , {\displaystyle p_{5}=a_{20}\omega +a_{11},} Step 2. Substitution of the results obtained at the first step into the next two equations a 10 = L ( p 4 ) + p 3 p 4 + p 1 p 6 , {\displaystyle a_{10}={\mathcal {L}}(p_{4})+p_{3}p_{4}+p_{1}p_{6},} a 01 = L ( p 5 ) + p 3 p 5 + p 2 p 6 , {\displaystyle a_{01}={\mathcal {L}}(p_{5})+p_{3}p_{5}+p_{2}p_{6},} yields a linear system of two algebraic equations: a 10 = L a 20 + p 3 a 20 + p 6 , {\displaystyle a_{10}={\mathcal {L}}a_{20}+p_{3}a_{20}+p_{6},} a 01 = L ( a 11 + a 20 ω ) + p 3 ( a 11 + a 20 ω ) − ω p 6 , {\displaystyle a_{01}={\mathcal {L}}(a_{11}+a_{20}\omega )+p_{3}(a_{11}+a_{20}\omega )-\omega p_{6},} In particular, if the root ω {\displaystyle \omega } is simple, i.e. P 2 ′ ( ω ) = 2 a 20 ω + a 11 ≠ 0 , {\displaystyle {\mathcal {P}}_{2}'(\omega )=2a_{20}\omega +a_{11}\neq 0,} then these equations have a unique solution: p 3 = ω a 10 + a 01 − ω L a 20 − L ( a 20 ω + a 11 ) 2 a 20 ω + a 11 , {\displaystyle p_{3}={\frac {\omega a_{10}+a_{01}-\omega {\mathcal {L}}a_{20}-{\mathcal {L}}(a_{20}\omega +a_{11})}{2a_{20}\omega +a_{11}}},} p 6 = ( a 20 ω + a 11 ) ( a 10 − L a 20 ) − a 20 ( a 01 − L ( a 20 ω + a 11 ) ) 2 a 20 ω + a 11 .
{\displaystyle p_{6}={\frac {(a_{20}\omega +a_{11})(a_{10}-{\mathcal {L}}a_{20})-a_{20}(a_{01}-{\mathcal {L}}(a_{20}\omega +a_{11}))}{2a_{20}\omega +a_{11}}}.} At this step, for each root of the polynomial P 2 {\displaystyle {\mathcal {P}}_{2}} a corresponding set of coefficients p j {\displaystyle p_{j}} is computed. Step 3. Check the factorization condition (which is the last of the initial 6 equations) a 00 = L ( p 6 ) + p 3 p 6 , {\displaystyle a_{00}={\mathcal {L}}(p_{6})+p_{3}p_{6},} written in the known variables p j {\displaystyle p_{j}} and ω {\displaystyle \omega } : a 00 = L { ω a 10 + a 01 − L ( 2 a 20 ω + a 11 ) 2 a 20 ω + a 11 } + ω a 10 + a 01 − L ( 2 a 20 ω + a 11 ) 2 a 20 ω + a 11 × a 20 ( a 01 − L ( a 20 ω + a 11 ) ) + ( a 20 ω + a 11 ) ( a 10 − L a 20 ) 2 a 20 ω + a 11 {\displaystyle a_{00}={\mathcal {L}}\left\{{\frac {\omega a_{10}+a_{01}-{\mathcal {L}}(2a_{20}\omega +a_{11})}{2a_{20}\omega +a_{11}}}\right\}+{\frac {\omega a_{10}+a_{01}-{\mathcal {L}}(2a_{20}\omega +a_{11})}{2a_{20}\omega +a_{11}}}\times {\frac {a_{20}(a_{01}-{\mathcal {L}}(a_{20}\omega +a_{11}))+(a_{20}\omega +a_{11})(a_{10}-{\mathcal {L}}a_{20})}{2a_{20}\omega +a_{11}}}} If l 2 = a 00 − L { ω a 10 + a 01 − L ( 2 a 20 ω + a 11 ) 2 a 20 ω + a 11 } − ω a 10 + a 01 − L ( 2 a 20 ω + a 11 ) 2 a 20 ω + a 11 × a 20 ( a 01 − L ( a 20 ω + a 11 ) ) + ( a 20 ω + a 11 ) ( a 10 − L a 20 ) 2 a 20 ω + a 11 = 0 , {\displaystyle l_{2}=a_{00}-{\mathcal {L}}\left\{{\frac {\omega a_{10}+a_{01}-{\mathcal {L}}(2a_{20}\omega +a_{11})}{2a_{20}\omega +a_{11}}}\right\}-{\frac {\omega a_{10}+a_{01}-{\mathcal {L}}(2a_{20}\omega +a_{11})}{2a_{20}\omega +a_{11}}}\times {\frac {a_{20}(a_{01}-{\mathcal {L}}(a_{20}\omega +a_{11}))+(a_{20}\omega +a_{11})(a_{10}-{\mathcal {L}}a_{20})}{2a_{20}\omega +a_{11}}}=0,} the operator A 2 {\displaystyle {\mathcal {A}}_{2}} is factorizable, and the explicit form of the factorization coefficients p j {\displaystyle p_{j}} is given above.
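The three steps above can be traced symbolically on a concrete operator. In the sketch below (the sample operator and all variable names are our own illustration, not taken from the article), Steps 1-3 are run for A = Dxx − Dyy + y·Dx + x·Dy + (y² − x²)/4 + 1, and the resulting factorization is verified by applying both sides to a generic function u(x, y):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Coefficients of the sample second-order operator (hypothetical example)
a20, a11, a02 = sp.Integer(1), sp.Integer(0), sp.Integer(-1)
a10, a01 = y, x
a00 = sp.Rational(1, 4)*(y**2 - x**2) + 1

# Step 1: omega is a root of the characteristic polynomial in t = -p2
omega = sp.solve(a20*t**2 + a11*t + a02, t)[0]
p1, p2 = sp.Integer(1), -omega
p4, p5 = a20, a20*omega + a11
L = lambda f: p1*sp.diff(f, x) + p2*sp.diff(f, y)

# Step 2: closed-form solution for p3, p6 (valid for a simple root omega)
denom = 2*a20*omega + a11
p3 = (omega*a10 + a01 - omega*L(a20) - L(a20*omega + a11)) / denom
p6 = ((a20*omega + a11)*(a10 - L(a20)) - a20*(a01 - L(a20*omega + a11))) / denom

# Step 3: the last equation of the system, a00 = L(p6) + p3*p6, must hold
assert sp.simplify(a00 - L(p6) - p3*p6) == 0

# Verify A = (p1*Dx + p2*Dy + p3)(p4*Dx + p5*Dy + p6) on a generic u(x, y)
u = sp.Function('u')(x, y)
inner = p4*sp.diff(u, x) + p5*sp.diff(u, y) + p6*u
lhs = p1*sp.diff(inner, x) + p2*sp.diff(inner, y) + p3*inner
rhs = (a20*sp.diff(u, x, 2) + a11*sp.diff(u, x, y) + a02*sp.diff(u, y, 2)
       + a10*sp.diff(u, x) + a01*sp.diff(u, y) + a00*u)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

For this operator the procedure returns the first-order factor ∂x + ∂y + (y − x)/2; picking the other root of the quadratic gives a second, inequivalent factorization attempt, which may or may not satisfy the Step 3 condition.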
=== Operator of order 3 === Consider an operator A 3 = ∑ j + k ≤ 3 a j k ∂ x j ∂ y k = a 30 ∂ x 3 + a 21 ∂ x 2 ∂ y + a 12 ∂ x ∂ y 2 + a 03 ∂ y 3 + a 20 ∂ x 2 + a 11 ∂ x ∂ y + a 02 ∂ y 2 + a 10 ∂ x + a 01 ∂ y + a 00 . {\displaystyle {\mathcal {A}}_{3}=\sum _{j+k\leq 3}a_{jk}\partial _{x}^{j}\partial _{y}^{k}=a_{30}\partial _{x}^{3}+a_{21}\partial _{x}^{2}\partial _{y}+a_{12}\partial _{x}\partial _{y}^{2}+a_{03}\partial _{y}^{3}+a_{20}\partial _{x}^{2}+a_{11}\partial _{x}\partial _{y}+a_{02}\partial _{y}^{2}+a_{10}\partial _{x}+a_{01}\partial _{y}+a_{00}.} with smooth coefficients and look for a factorization A 3 = ( p 1 ∂ x + p 2 ∂ y + p 3 ) ( p 4 ∂ x 2 + p 5 ∂ x ∂ y + p 6 ∂ y 2 + p 7 ∂ x + p 8 ∂ y + p 9 ) . {\displaystyle {\mathcal {A}}_{3}=(p_{1}\partial _{x}+p_{2}\partial _{y}+p_{3})(p_{4}\partial _{x}^{2}+p_{5}\partial _{x}\partial _{y}+p_{6}\partial _{y}^{2}+p_{7}\partial _{x}+p_{8}\partial _{y}+p_{9}).} Similar to the case of the operator A 2 , {\displaystyle {\mathcal {A}}_{2},} the conditions of factorization are described by the following system: a 30 = p 1 p 4 , {\displaystyle a_{30}=p_{1}p_{4},} a 21 = p 2 p 4 + p 1 p 5 , {\displaystyle a_{21}=p_{2}p_{4}+p_{1}p_{5},} a 12 = p 2 p 5 + p 1 p 6 , {\displaystyle a_{12}=p_{2}p_{5}+p_{1}p_{6},} a 03 = p 2 p 6 , {\displaystyle a_{03}=p_{2}p_{6},} a 20 = L ( p 4 ) + p 3 p 4 + p 1 p 7 , {\displaystyle a_{20}={\mathcal {L}}(p_{4})+p_{3}p_{4}+p_{1}p_{7},} a 11 = L ( p 5 ) + p 3 p 5 + p 2 p 7 + p 1 p 8 , {\displaystyle a_{11}={\mathcal {L}}(p_{5})+p_{3}p_{5}+p_{2}p_{7}+p_{1}p_{8},} a 02 = L ( p 6 ) + p 3 p 6 + p 2 p 8 , {\displaystyle a_{02}={\mathcal {L}}(p_{6})+p_{3}p_{6}+p_{2}p_{8},} a 10 = L ( p 7 ) + p 3 p 7 + p 1 p 9 , {\displaystyle a_{10}={\mathcal {L}}(p_{7})+p_{3}p_{7}+p_{1}p_{9},} a 01 = L ( p 8 ) + p 3 p 8 + p 2 p 9 , {\displaystyle a_{01}={\mathcal {L}}(p_{8})+p_{3}p_{8}+p_{2}p_{9},} a 00 = L ( p 9 ) + p 3 p 9 , {\displaystyle a_{00}={\mathcal {L}}(p_{9})+p_{3}p_{9},} with L = p 1 ∂ x + p 2 ∂ y , 
{\displaystyle {\mathcal {L}}=p_{1}\partial _{x}+p_{2}\partial _{y},} and again a 30 ≠ 0 , {\displaystyle a_{30}\neq 0,} i.e. p 1 = 1 , {\displaystyle p_{1}=1,} and a three-step procedure yields: At the first step, the roots of a cubic polynomial P 3 ( − p 2 ) := a 30 ( − p 2 ) 3 + a 21 ( − p 2 ) 2 + a 12 ( − p 2 ) + a 03 = 0. {\displaystyle {\mathcal {P}}_{3}(-p_{2}):=a_{30}(-p_{2})^{3}+a_{21}(-p_{2})^{2}+a_{12}(-p_{2})+a_{03}=0.} have to be found. Again ω {\displaystyle \omega } denotes a root, and the first coefficients are p 1 = 1 , {\displaystyle p_{1}=1,} p 2 = − ω , {\displaystyle p_{2}=-\omega ,} p 4 = a 30 , {\displaystyle p_{4}=a_{30},} p 5 = a 30 ω + a 21 , {\displaystyle p_{5}=a_{30}\omega +a_{21},} p 6 = a 30 ω 2 + a 21 ω + a 12 . {\displaystyle p_{6}=a_{30}\omega ^{2}+a_{21}\omega +a_{12}.} At the second step, a linear system of three algebraic equations has to be solved: a 20 − L a 30 = p 3 a 30 + p 7 , {\displaystyle a_{20}-{\mathcal {L}}a_{30}=p_{3}a_{30}+p_{7},} a 11 − L ( a 30 ω + a 21 ) = p 3 ( a 30 ω + a 21 ) − ω p 7 + p 8 , {\displaystyle a_{11}-{\mathcal {L}}(a_{30}\omega +a_{21})=p_{3}(a_{30}\omega +a_{21})-\omega p_{7}+p_{8},} a 02 − L ( a 30 ω 2 + a 21 ω + a 12 ) = p 3 ( a 30 ω 2 + a 21 ω + a 12 ) − ω p 8 . {\displaystyle a_{02}-{\mathcal {L}}(a_{30}\omega ^{2}+a_{21}\omega +a_{12})=p_{3}(a_{30}\omega ^{2}+a_{21}\omega +a_{12})-\omega p_{8}.} At the third step, two algebraic conditions have to be checked. == Invariant Formulation == Definition The operators A {\displaystyle {\mathcal {A}}} , A ~ {\displaystyle {\tilde {\mathcal {A}}}} are called equivalent if there is a gauge transformation that takes one to the other: A ~ g = e − φ A ( e φ g ) .
{\displaystyle {\tilde {\mathcal {A}}}g=e^{-\varphi }{\mathcal {A}}(e^{\varphi }g).} BK-factorization is then a purely algebraic procedure which allows one to construct explicitly a factorization of an arbitrary-order LPDO A ~ {\displaystyle {\tilde {\mathcal {A}}}} in the form A = ∑ j + k ≤ n a j k ∂ x j ∂ y k = L ∘ ∑ j + k ≤ ( n − 1 ) p j k ∂ x j ∂ y k {\displaystyle {\mathcal {A}}=\sum _{j+k\leq n}a_{jk}\partial _{x}^{j}\partial _{y}^{k}={\mathcal {L}}\circ \sum _{j+k\leq (n-1)}p_{jk}\partial _{x}^{j}\partial _{y}^{k}} with first-order operator L = ∂ x − ω ∂ y + p {\displaystyle {\mathcal {L}}=\partial _{x}-\omega \partial _{y}+p} where ω {\displaystyle \omega } is an arbitrary simple root of the characteristic polynomial P ( t ) = ∑ k = 0 n a n − k , k t n − k , P ( ω ) = 0. {\displaystyle {\mathcal {P}}(t)=\sum _{k=0}^{n}a_{n-k,k}t^{n-k},\quad {\mathcal {P}}(\omega )=0.} Factorization is then possible for each simple root ω ~ {\displaystyle {\tilde {\omega }}} iff for n = 2 → l 2 = 0 , {\displaystyle n=2\ \ \rightarrow l_{2}=0,} for n = 3 → l 3 = 0 , l 31 = 0 , {\displaystyle n=3\ \ \rightarrow l_{3}=0,l_{31}=0,} for n = 4 → l 4 = 0 , l 41 = 0 , l 42 = 0 , {\displaystyle n=4\ \ \rightarrow l_{4}=0,l_{41}=0,l_{42}=0,} and so on. All functions l 2 , l 3 , l 31 , l 4 , l 41 , l 42 , . . . {\displaystyle l_{2},l_{3},l_{31},l_{4},l_{41},\ \ l_{42},...} are known functions, for instance, l 2 = a 00 − L ( p 6 ) − p 3 p 6 , {\displaystyle l_{2}=a_{00}-{\mathcal {L}}(p_{6})-p_{3}p_{6},} l 3 = a 00 − L ( p 9 ) − p 3 p 9 , {\displaystyle l_{3}=a_{00}-{\mathcal {L}}(p_{9})-p_{3}p_{9},} l 31 = a 01 − L ( p 8 ) − p 3 p 8 − p 2 p 9 , {\displaystyle l_{31}=a_{01}-{\mathcal {L}}(p_{8})-p_{3}p_{8}-p_{2}p_{9},} and so on. Theorem All functions l 2 = a 00 − L ( p 6 ) − p 3 p 6 , l 3 = a 00 − L ( p 9 ) − p 3 p 9 , l 31 , . . . .
{\displaystyle l_{2}=a_{00}-{\mathcal {L}}(p_{6})-p_{3}p_{6},l_{3}=a_{00}-{\mathcal {L}}(p_{9})-p_{3}p_{9},l_{31},....} are invariants under gauge transformations. Definition Invariants l 2 = a 00 − L ( p 6 ) − p 3 p 6 , l 3 = a 00 − L ( p 9 ) − p 3 p 9 , l 31 , . . . . . {\displaystyle l_{2}=a_{00}-{\mathcal {L}}(p_{6})-p_{3}p_{6},l_{3}=a_{00}-{\mathcal {L}}(p_{9})-p_{3}p_{9},l_{31},.....} are called generalized invariants of a bivariate operator of arbitrary order. In the particular case of the bivariate hyperbolic operator, its generalized invariants coincide with the Laplace invariants (see Laplace invariant). Corollary If an operator A ~ {\displaystyle {\tilde {\mathcal {A}}}} is factorizable, then all operators equivalent to it are also factorizable. Equivalent operators are easy to compute: e − φ ∂ x e φ = ∂ x + φ x , e − φ ∂ y e φ = ∂ y + φ y , {\displaystyle e^{-\varphi }\partial _{x}e^{\varphi }=\partial _{x}+\varphi _{x},\quad e^{-\varphi }\partial _{y}e^{\varphi }=\partial _{y}+\varphi _{y},} e − φ ∂ x ∂ y e φ = e − φ ∂ x e φ e − φ ∂ y e φ = ( ∂ x + φ x ) ∘ ( ∂ y + φ y ) {\displaystyle e^{-\varphi }\partial _{x}\partial _{y}e^{\varphi }=e^{-\varphi }\partial _{x}e^{\varphi }e^{-\varphi }\partial _{y}e^{\varphi }=(\partial _{x}+\varphi _{x})\circ (\partial _{y}+\varphi _{y})} and so on.
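These gauge-conjugation identities can be checked symbolically. A small sketch (symbols and function names are our own choices) confirms the first- and second-order rules on a generic test function:

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Function('phi')(x, y)
u = sp.Function('u')(x, y)

# e^{-phi} Dx (e^{phi} u)  ==  (Dx + phi_x) u
conj_x = sp.exp(-phi) * sp.diff(sp.exp(phi) * u, x)
assert sp.simplify(conj_x - (sp.diff(u, x) + sp.diff(phi, x)*u)) == 0

# e^{-phi} Dx Dy (e^{phi} u)  ==  (Dx + phi_x)(Dy + phi_y) u
conj_xy = sp.exp(-phi) * sp.diff(sp.exp(phi) * u, x, y)
step = sp.diff(u, y) + sp.diff(phi, y)*u            # (Dy + phi_y) u
direct = sp.diff(step, x) + sp.diff(phi, x)*step    # then apply (Dx + phi_x)
assert sp.simplify(conj_xy - direct) == 0
```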
Some examples are given below: A 1 = ∂ x ∂ y + x ∂ x + 1 = ∂ x ( ∂ y + x ) , l 2 ( A 1 ) = 1 − 1 − 0 = 0 ; {\displaystyle A_{1}=\partial _{x}\partial _{y}+x\partial _{x}+1=\partial _{x}(\partial _{y}+x),\quad l_{2}(A_{1})=1-1-0=0;} A 2 = ∂ x ∂ y + x ∂ x + ∂ y + x + 1 , A 2 = e − x A 1 e x ; l 2 ( A 2 ) = ( x + 1 ) − 1 − x = 0 ; {\displaystyle A_{2}=\partial _{x}\partial _{y}+x\partial _{x}+\partial _{y}+x+1,\quad A_{2}=e^{-x}A_{1}e^{x};\quad l_{2}(A_{2})=(x+1)-1-x=0;} A 3 = ∂ x ∂ y + 2 x ∂ x + ( y + 1 ) ∂ y + 2 ( x y + x + 1 ) , A 3 = e − x y A 2 e x y ; l 2 ( A 3 ) = 2 ( x + 1 + x y ) − 2 − 2 x ( y + 1 ) = 0 ; {\displaystyle A_{3}=\partial _{x}\partial _{y}+2x\partial _{x}+(y+1)\partial _{y}+2(xy+x+1),\quad A_{3}=e^{-xy}A_{2}e^{xy};\quad l_{2}(A_{3})=2(x+1+xy)-2-2x(y+1)=0;} A 4 = ∂ x ∂ y + x ∂ x + ( cos ⁡ x + 1 ) ∂ y + x cos ⁡ x + x + 1 , A 4 = e − sin ⁡ x A 2 e sin ⁡ x ; l 2 ( A 4 ) = 0. {\displaystyle A_{4}=\partial _{x}\partial _{y}+x\partial _{x}+(\cos x+1)\partial _{y}+x\cos x+x+1,\quad A_{4}=e^{-\sin x}A_{2}e^{\sin x};\quad l_{2}(A_{4})=0.} == Transpose == Factorization of an operator is the first step on the way to solving the corresponding equation. But for a solution we need right factors, whereas BK-factorization constructs left factors, which are easier to construct. On the other hand, the existence of a certain right factor of a LPDO is equivalent to the existence of a corresponding left factor of the transpose of that operator. Definition The transpose A t {\displaystyle {\mathcal {A}}^{t}} of an operator A = ∑ a α ∂ α , ∂ α = ∂ 1 α 1 ⋯ ∂ n α n . {\displaystyle {\mathcal {A}}=\sum a_{\alpha }\partial ^{\alpha },\qquad \partial ^{\alpha }=\partial _{1}^{\alpha _{1}}\cdots \partial _{n}^{\alpha _{n}}.} is defined as A t u = ∑ ( − 1 ) | α | ∂ α ( a α u ) .
{\displaystyle {\mathcal {A}}^{t}u=\sum (-1)^{|\alpha |}\partial ^{\alpha }(a_{\alpha }u).} and the identity ∂ γ ( u v ) = ∑ ( γ α ) ∂ α u ∂ γ − α v {\displaystyle \partial ^{\gamma }(uv)=\sum {\binom {\gamma }{\alpha }}\partial ^{\alpha }u\,\partial ^{\gamma -\alpha }v} implies that A t = ∑ ( − 1 ) | α + β | ( α + β α ) ( ∂ β a α + β ) ∂ α . {\displaystyle {\mathcal {A}}^{t}=\sum (-1)^{|\alpha +\beta |}{\binom {\alpha +\beta }{\alpha }}(\partial ^{\beta }a_{\alpha +\beta })\partial ^{\alpha }.} Now the coefficients are A t = ∑ a ~ α ∂ α , {\displaystyle {\mathcal {A}}^{t}=\sum {\tilde {a}}_{\alpha }\partial ^{\alpha },} a ~ α = ∑ ( − 1 ) | α + β | ( α + β α ) ∂ β ( a α + β ) . {\displaystyle {\tilde {a}}_{\alpha }=\sum (-1)^{|\alpha +\beta |}{\binom {\alpha +\beta }{\alpha }}\partial ^{\beta }(a_{\alpha +\beta }).} with a standard convention for binomial coefficients in several variables (see Binomial coefficient), e.g. in two variables ( α β ) = ( ( α 1 , α 2 ) ( β 1 , β 2 ) ) = ( α 1 β 1 ) ( α 2 β 2 ) . {\displaystyle {\binom {\alpha }{\beta }}={\binom {(\alpha _{1},\alpha _{2})}{(\beta _{1},\beta _{2})}}={\binom {\alpha _{1}}{\beta _{1}}}\,{\binom {\alpha _{2}}{\beta _{2}}}.} In particular, for the operator A 2 {\displaystyle {\mathcal {A}}_{2}} the coefficients are a ~ j k = a j k , j + k = 2 ; a ~ 10 = − a 10 + 2 ∂ x a 20 + ∂ y a 11 , a ~ 01 = − a 01 + ∂ x a 11 + 2 ∂ y a 02 , {\displaystyle {\tilde {a}}_{jk}=a_{jk},\quad j+k=2;{\tilde {a}}_{10}=-a_{10}+2\partial _{x}a_{20}+\partial _{y}a_{11},{\tilde {a}}_{01}=-a_{01}+\partial _{x}a_{11}+2\partial _{y}a_{02},} a ~ 00 = a 00 − ∂ x a 10 − ∂ y a 01 + ∂ x 2 a 20 + ∂ x ∂ y a 11 + ∂ y 2 a 02 .
{\displaystyle {\tilde {a}}_{00}=a_{00}-\partial _{x}a_{10}-\partial _{y}a_{01}+\partial _{x}^{2}a_{20}+\partial _{x}\partial _{y}a_{11}+\partial _{y}^{2}a_{02}.} For instance, the operator ∂ x x − ∂ y y + y ∂ x + x ∂ y + 1 4 ( y 2 − x 2 ) − 1 {\displaystyle \partial _{xx}-\partial _{yy}+y\partial _{x}+x\partial _{y}+{\frac {1}{4}}(y^{2}-x^{2})-1} is factorizable as [ ∂ x + ∂ y + 1 2 ( y − x ) ] [ . . . ] {\displaystyle {\big [}\partial _{x}+\partial _{y}+{\tfrac {1}{2}}(y-x){\big ]}\,{\big [}...{\big ]}} and its transpose A 1 t {\displaystyle {\mathcal {A}}_{1}^{t}} is then factorizable as [ . . . ] [ ∂ x − ∂ y + 1 2 ( y + x ) ] . {\displaystyle {\big [}...{\big ]}\,{\big [}\partial _{x}-\partial _{y}+{\tfrac {1}{2}}(y+x){\big ]}.} == See also == Partial derivative Invariant (mathematics) Invariant theory Characteristic polynomial == Notes == == References == J. Weiss. Bäcklund transformation and the Painlevé property. J. Math. Phys. 27, 1293-1305 (1986). R. Beals, E. Kartashova. Constructively factoring linear partial differential operators in two variables. Theor. Math. Phys. 145(2), pp. 1510-1523 (2005) E. Kartashova. A Hierarchy of Generalized Invariants for Linear Partial Differential Operators. Theor. Math. Phys. 147(3), pp. 839-846 (2006) E. Kartashova, O. Rudenko. Invariant Form of BK-factorization and its Applications. Proc. GIFT-2006, pp. 225–241, Eds.: J. Calmet, R. W. Tucker, Karlsruhe University Press (2006); arXiv
Invariant set postulate
The invariant set postulate concerns the possible relationship between fractal geometry and quantum mechanics, and in particular the hypothesis that the former can assist in resolving some of the challenges posed by the latter. It is underpinned by nonlinear dynamical systems theory and black hole thermodynamics. == Author == The proposer of the postulate is climate scientist and physicist Tim Palmer. Palmer completed a PhD at the University of Oxford under Dennis Sciama, who had also supervised Stephen Hawking, and then worked with Hawking himself at the University of Cambridge on supergravity theory. He later switched to meteorology and has established a reputation pioneering ensemble forecasting. He now works at the European Centre for Medium-Range Weather Forecasts in Reading, England. == Overview == Palmer argues that the postulate may help to resolve some of the paradoxes of quantum mechanics that have been discussed since the Bohr–Einstein debates of the 1920s and 30s and which remain unresolved. The idea backs Einstein's view that quantum theory is incomplete, but also agrees with Bohr's contention that quantum systems are not independent of the observer. The key idea involved is that there exists a state space for the Universe, and that the state of the entire Universe can be expressed as a point in this state space. This state space can then be divided into "real" and "unreal" sets (parts), where, for example, the states in which the Nazis lost WW2 are in the "real" set, and the states in which the Nazis won WW2 are in the "unreal" set of points. The partition of state space into these two sets is unchanging, making the sets invariant. If the Universe is a complex system affected by chaos, then its invariant set (the set of states preserved by the dynamics) is likely to be a fractal.
According to Palmer this could resolve problems posed by the Kochen–Specker theorem, which appears to indicate that physics may have to abandon the idea of any kind of objective reality, and the apparent paradox of action at a distance. In a paper submitted to the Proceedings of the Royal Society he indicates how the idea can account for quantum uncertainty and problems of "contextuality". For example, exploring the quantum problem of wave-particle duality, one of the central mysteries of quantum theory, the author claims that "in terms of the Invariant Set Postulate, the paradox is easily resolved, in principle at least". The paper and related talks given at the Perimeter Institute and the University of Oxford also explore the role of gravity in quantum physics. == Critical reception == New Scientist quotes Bob Coecke of Oxford University as stating "What makes this really interesting is that it gets away from the usual debates over multiple universes and hidden variables and so on. It suggests there might be an underlying physical geometry that physics has just missed, which is radical and very positive". He added that "Palmer manages to explain some quantum phenomena, but he hasn't yet derived the whole rigid structure of the theory. This is really necessary." Robert Spekkens has said: "I think his approach is really interesting and novel. Other physicists have shown how you can find a way out of the Kochen–Specker theorem, but this work actually provides a mechanism to explain the theorem." According to Todd Brun, making a serious rival to quantum mechanics, a really predictive theory, out of Palmer's ideas is a tall order, and this goal has not yet been achieved. == See also == Fractal cosmology == References ==
Invariant subspace
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually. == For a single operator == Consider a vector space V {\displaystyle V} and a linear map T : V → V . {\displaystyle T:V\to V.} A subspace W ⊆ V {\displaystyle W\subseteq V} is called an invariant subspace for T {\displaystyle T} , or equivalently, T-invariant, if T transforms any vector v ∈ W {\displaystyle \mathbf {v} \in W} back into W. In formulas, this can be written v ∈ W ⟹ T ( v ) ∈ W {\displaystyle \mathbf {v} \in W\implies T(\mathbf {v} )\in W} or T W ⊆ W . {\displaystyle TW\subseteq W{\text{.}}} In this case, T restricts to an endomorphism of W: T | W : W → W ; T | W ( w ) = T ( w ) . {\displaystyle T|_{W}:W\to W{\text{;}}\quad T|_{W}(\mathbf {w} )=T(\mathbf {w} ){\text{.}}} The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator T has the form T = [ T | W T 12 0 T 22 ] {\displaystyle T={\begin{bmatrix}T|_{W}&T_{12}\\0&T_{22}\end{bmatrix}}} for some T12 and T22, where T | W {\displaystyle T|_{W}} here denotes the matrix of T | W {\displaystyle T|_{W}} with respect to the basis C. == Examples == Any linear map T : V → V {\displaystyle T:V\to V} admits the following invariant subspaces: The vector space V {\displaystyle V} , because T {\displaystyle T} maps every vector in V {\displaystyle V} into V . {\displaystyle V.} The set { 0 } {\displaystyle \{0\}} , because T ( 0 ) = 0 {\displaystyle T(0)=0} . These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace.
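The rotation examples just mentioned can be illustrated numerically. In this sketch (the angle and variable names are our own choices), a plane rotation has only complex eigenvalues, so it preserves no real line, while the axis of a rotation of 3-space is visibly invariant:

```python
import numpy as np

theta = 0.7
R2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
# The eigenvalues cos(theta) +/- i*sin(theta) are non-real, so R2 has no
# real eigenvector and hence no 1-dimensional real invariant subspace.
assert np.iscomplex(np.linalg.eigvals(R2)).all()

R3 = np.eye(3)
R3[:2, :2] = R2                      # rotate the x-y plane, fix the z-axis
e3 = np.array([0.0, 0.0, 1.0])
assert np.allclose(R3 @ e3, e3)      # the axis spans an invariant subspace
```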
=== 1-dimensional subspaces === If U is a 1-dimensional invariant subspace for operator T with vector v ∈ U, then the vectors v and Tv must be linearly dependent. Thus ∀ v ∈ U ∃ α ∈ R : T v = α v . {\displaystyle \forall \mathbf {v} \in U\;\exists \alpha \in \mathbb {R} :T\mathbf {v} =\alpha \mathbf {v} {\text{.}}} In fact, the scalar α does not depend on v. The equation above formulates an eigenvalue problem. Any eigenvector for T spans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1. As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace. == Diagonalization via projections == Determining whether a given subspace W is invariant under T is ostensibly a problem of a geometric nature. Matrix representation allows one to phrase this problem algebraically. Write V as the direct sum W ⊕ W′; a suitable W′ can always be chosen by extending a basis of W. The associated projection operator P onto W has matrix representation P = [ 1 0 0 0 ] : W ⊕ W ′ → W ⊕ W ′ . {\displaystyle P={\begin{bmatrix}1&0\\0&0\end{bmatrix}}:{\begin{matrix}W\\\oplus \\W'\end{matrix}}\rightarrow {\begin{matrix}W\\\oplus \\W'\end{matrix}}.} A straightforward calculation shows that W is T-invariant if and only if PTP = TP. If 1 is the identity operator, then 1 − P is the projection onto W′. The equation TP = PT holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has matrix representation T = [ T 11 0 0 T 22 ] : im ⁡ ( P ) ⊕ im ⁡ ( 1 − P ) → im ⁡ ( P ) ⊕ im ⁡ ( 1 − P ) .
{\displaystyle T={\begin{bmatrix}T_{11}&0\\0&T_{22}\end{bmatrix}}:{\begin{matrix}\operatorname {im} (P)\\\oplus \\\operatorname {im} (1-P)\end{matrix}}\rightarrow {\begin{matrix}\operatorname {im} (P)\\\oplus \\\operatorname {im} (1-P)\end{matrix}}\;.} Colloquially, a projection that commutes with T "diagonalizes" T. == Lattice of subspaces == As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T. The set of T-invariant subspaces of V is sometimes called the invariant-subspace lattice of T and written Lat(T). As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in Lat(T) is said to be a minimal invariant subspace. In the study of infinite-dimensional operators, Lat(T) is sometimes restricted to only the closed invariant subspaces. == For multiple operators == Given a collection T of operators, a subspace is called T-invariant if it is invariant under each T ∈ T. As in the single-operator case, the invariant-subspace lattice of T, written Lat(T), is the set of all T-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection L a t ( T ) = ⋂ T ∈ T L a t ( T ) . {\displaystyle \mathrm {Lat} ({\mathcal {T}})=\bigcap _{T\in {\mathcal {T}}}{\mathrm {Lat} (T)}{\text{.}}} === Examples === Let End(V) be the set of all linear operators on V. Then Lat(End(V))={0,V}. Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G.
If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra. As another example, let T ∈ End(V) and Σ be the algebra generated by {1, T }, where 1 is the identity operator. Then Lat(T) = Lat(Σ). === Fundamental theorem of noncommutative algebra === Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ. One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that End(V) is not commutative when dim(V) ≥ 2. == Left ideals == If A is an algebra, one can define a left regular representation Φ on A: Φ(a)b = ab defines a homomorphism from A to L(A), the algebra of linear transformations on A. The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M. If M is a left ideal of A, then the left regular representation Φ descends to a representation Φ' on the quotient vector space A/M. If [b] denotes an equivalence class in A/M, then Φ'(a)[b] = [ab]. The kernel of the representation Φ' is the set {a ∈ A | ab ∈ M for all b}. The representation Φ' is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under {Φ'(a) | a ∈ A} if and only if its preimage under the quotient map, V + M, is a left ideal in A.
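Returning to the matrix formulation above, the algebraic criterion PTP = TP for invariance of im(P) can be checked numerically. A minimal sketch (the matrices are our own example) for W = span{e1, e2} inside R^3:

```python
import numpy as np

# A block upper-triangular T maps W = span{e1, e2} into itself.
T = np.array([[1.0, 2.0, 5.0],
              [0.0, 3.0, 6.0],
              [0.0, 0.0, 4.0]])
P = np.diag([1.0, 1.0, 0.0])      # projection onto W along span{e3}

assert np.allclose(P @ T @ P, T @ P)   # PTP = TP, so W is T-invariant

# The stronger condition TP = PT would also require span{e3} to be
# invariant, which fails here because of the off-diagonal block (5, 6)^T.
assert not np.allclose(T @ P, P @ T)
```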
== Invariant subspace problem == The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. It is unsolved. In the more general case where V is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without an invariant subspace. A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read. == Almost-invariant halfspaces == Related to invariant subspaces are so-called almost-invariant-halfspaces (AIHS's). A closed subspace Y {\displaystyle Y} of a Banach space X {\displaystyle X} is said to be almost-invariant under an operator T ∈ B ( X ) {\displaystyle T\in {\mathcal {B}}(X)} if T Y ⊆ Y + E {\displaystyle TY\subseteq Y+E} for some finite-dimensional subspace E {\displaystyle E} ; equivalently, Y {\displaystyle Y} is almost-invariant under T {\displaystyle T} if there is a finite-rank operator F ∈ B ( X ) {\displaystyle F\in {\mathcal {B}}(X)} such that ( T + F ) Y ⊆ Y {\displaystyle (T+F)Y\subseteq Y} , i.e. if Y {\displaystyle Y} is invariant (in the usual sense) under T + F {\displaystyle T+F} . In this case, the minimum possible dimension of E {\displaystyle E} (or rank of F {\displaystyle F} ) is called the defect. Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that Y {\displaystyle Y} is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension. The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if X {\displaystyle X} is a complex infinite-dimensional Banach space and T ∈ B ( X ) {\displaystyle T\in {\mathcal {B}}(X)} then T {\displaystyle T} admits an AIHS of defect at most 1. 
It is not currently known whether the same holds if X {\displaystyle X} is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space. == See also == Invariant manifold Lomonosov's invariant subspace theorem == References == == Sources == Abramovich, Yuri A.; Aliprantis, Charalambos D. (2002). An Invitation to Operator Theory. American Mathematical Society. ISBN 978-0-8218-2146-6. Beauzamy, Bernard (1988). Introduction to Operator Theory and Invariant Subspaces. North Holland. Enflo, Per; Lomonosov, Victor (2001). "Some aspects of the invariant subspace problem". Handbook of the geometry of Banach spaces. Vol. I. Amsterdam: North-Holland. pp. 533–559. Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2006). Invariant Subspaces of Matrices with Applications. Classics in Applied Mathematics. Vol. 51 (Reprint, with list of errata and new preface, of the 1986 Wiley ed.). Society for Industrial and Applied Mathematics (SIAM). pp. xxii+692. ISBN 978-0-89871-608-5. Lyubich, Yurii I. (1988). Introduction to the Theory of Banach Representations of Groups (Translated from the 1985 Russian-language ed.). Kharkov, Ukraine: Birkhäuser Verlag. Radjavi, Heydar; Rosenthal, Peter (2003). Invariant Subspaces (Update of 1973 Springer-Verlag ed.). Dover Publications. ISBN 0-486-42822-2. Roman, Stephen (2008). Advanced Linear Algebra. Graduate Texts in Mathematics (Third ed.). Springer. ISBN 978-0-387-72828-5.
Invariants of tensors
In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of the second rank tensor A {\displaystyle \mathbf {A} } are the coefficients of the characteristic polynomial p ( λ ) = det ( A − λ I ) {\displaystyle \ p(\lambda )=\det(\mathbf {A} -\lambda \mathbf {I} )} , where I {\displaystyle \mathbf {I} } is the identity operator and λ i ∈ C {\displaystyle \lambda _{i}\in \mathbb {C} } are the roots of the polynomial p {\displaystyle \ p} and the eigenvalues of A {\displaystyle \mathbf {A} } . More broadly, any scalar-valued function f ( A ) {\displaystyle f(\mathbf {A} )} is an invariant of A {\displaystyle \mathbf {A} } if and only if f ( Q A Q T ) = f ( A ) {\displaystyle f(\mathbf {Q} \mathbf {A} \mathbf {Q} ^{T})=f(\mathbf {A} )} for all orthogonal Q {\displaystyle \mathbf {Q} } . This means that a formula expressing an invariant in terms of components, A i j {\displaystyle A_{ij}} , will give the same result for all Cartesian bases. For example, even though individual diagonal components of A {\displaystyle \mathbf {A} } will change with a change in basis, the sum of diagonal components will not change. == Properties == The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference) and any function of the principal invariants is also objective. == Calculation of the invariants of rank two tensors == In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy-Green deformation tensor C {\displaystyle \mathbf {C} } which has the eigenvalues λ 1 2 {\displaystyle \lambda _{1}^{2}} , λ 2 2 {\displaystyle \lambda _{2}^{2}} , and λ 3 2 {\displaystyle \lambda _{3}^{2}} . 
Here λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} , and λ 3 {\displaystyle \lambda _{3}} are the principal stretches, i.e. the eigenvalues of U = C {\displaystyle \mathbf {U} ={\sqrt {\mathbf {C} }}} . === Principal invariants === For such tensors, the principal invariants are given by: I 1 = t r ( A ) = A 11 + A 22 + A 33 = λ 1 + λ 2 + λ 3 I 2 = 1 2 ( ( t r ( A ) ) 2 − t r ( A 2 ) ) = A 11 A 22 + A 22 A 33 + A 11 A 33 − A 12 A 21 − A 23 A 32 − A 13 A 31 = λ 1 λ 2 + λ 1 λ 3 + λ 2 λ 3 I 3 = det ( A ) = − A 13 A 22 A 31 + A 12 A 23 A 31 + A 13 A 21 A 32 − A 11 A 23 A 32 − A 12 A 21 A 33 + A 11 A 22 A 33 = λ 1 λ 2 λ 3 {\displaystyle {\begin{aligned}I_{1}&=\mathrm {tr} (\mathbf {A} )=A_{11}+A_{22}+A_{33}=\lambda _{1}+\lambda _{2}+\lambda _{3}\\I_{2}&={\frac {1}{2}}\left((\mathrm {tr} (\mathbf {A} ))^{2}-\mathrm {tr} \left(\mathbf {A} ^{2}\right)\right)=A_{11}A_{22}+A_{22}A_{33}+A_{11}A_{33}-A_{12}A_{21}-A_{23}A_{32}-A_{13}A_{31}=\lambda _{1}\lambda _{2}+\lambda _{1}\lambda _{3}+\lambda _{2}\lambda _{3}\\I_{3}&=\det(\mathbf {A} )=-A_{13}A_{22}A_{31}+A_{12}A_{23}A_{31}+A_{13}A_{21}A_{32}-A_{11}A_{23}A_{32}-A_{12}A_{21}A_{33}+A_{11}A_{22}A_{33}=\lambda _{1}\lambda _{2}\lambda _{3}\end{aligned}}} For symmetric tensors these expressions simplify, since A i j = A j i {\displaystyle A_{ij}=A_{ji}} (for example, A 12 A 21 = A 12 2 {\displaystyle A_{12}A_{21}=A_{12}^{2}} ). The correspondence between the principal invariants and the characteristic polynomial of a tensor, together with the Cayley–Hamilton theorem, shows that A 3 − I 1 A 2 + I 2 A − I 3 I = 0 {\displaystyle \ \mathbf {A} ^{3}-I_{1}\mathbf {A} ^{2}+I_{2}\mathbf {A} -I_{3}\mathbf {I} =0} where I {\displaystyle \mathbf {I} } is the second-order identity tensor. 
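As a concrete check of the formulas above, the sketch below evaluates the principal invariants in exact rational arithmetic, verifies the Cayley–Hamilton identity, and confirms that a rotation leaves the invariants unchanged. The tensor A and the 3-4-5 rotation Q are hypothetical examples chosen for illustration.

```python
from fractions import Fraction as F

def matmul(X, Y):
    """3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [list(r) for r in zip(*X)]

def trace(X):
    return sum(X[i][i] for i in range(3))

def det3(M):
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def principal_invariants(A):
    """I1 = tr(A), I2 = (tr(A)^2 - tr(A^2))/2, I3 = det(A)."""
    A2 = matmul(A, A)
    return trace(A), (trace(A)**2 - trace(A2)) / 2, det3(A)

# A hypothetical symmetric tensor with exact rational entries.
A = [[F(2), F(1), F(0)],
     [F(1), F(3), F(1)],
     [F(0), F(1), F(4)]]
I1, I2, I3 = principal_invariants(A)  # 9, 24, 18

# Cayley-Hamilton: A^3 - I1 A^2 + I2 A - I3 I = 0.
A2, A3 = matmul(A, A), matmul(matmul(A, A), A)
Id = [[F(i == j) for j in range(3)] for i in range(3)]
assert all(A3[i][j] - I1*A2[i][j] + I2*A[i][j] - I3*Id[i][j] == 0
           for i in range(3) for j in range(3))

# Invariance: a rational orthogonal matrix (a 3-4-5 rotation) leaves the
# invariants of Q A Q^T unchanged, although individual components change.
Q = [[F(3, 5), F(-4, 5), F(0)],
     [F(4, 5), F(3, 5),  F(0)],
     [F(0),    F(0),     F(1)]]
A_rot = matmul(matmul(Q, A), transpose(Q))
assert principal_invariants(A_rot) == (I1, I2, I3)
assert A_rot[0][0] != A[0][0]  # the diagonal entries themselves do change
```

Exact `Fraction` arithmetic is used so that the rotational invariance holds as an identity rather than only up to floating-point error.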
=== Main invariants === In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants J 1 = λ 1 + λ 2 + λ 3 = I 1 J 2 = λ 1 2 + λ 2 2 + λ 3 2 = I 1 2 − 2 I 2 J 3 = λ 1 3 + λ 2 3 + λ 3 3 = I 1 3 − 3 I 1 I 2 + 3 I 3 {\displaystyle {\begin{aligned}J_{1}&=\lambda _{1}+\lambda _{2}+\lambda _{3}=I_{1}\\J_{2}&=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}=I_{1}^{2}-2I_{2}\\J_{3}&=\lambda _{1}^{3}+\lambda _{2}^{3}+\lambda _{3}^{3}=I_{1}^{3}-3I_{1}I_{2}+3I_{3}\end{aligned}}} which are functions of the principal invariants above. These are related to the coefficients of the characteristic polynomial of the deviator A − ( t r ( A ) / 3 ) I {\displaystyle \mathbf {A} -(\mathrm {tr} (\mathbf {A} )/3)\mathbf {I} } , which is traceless by construction. The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric, providing shear effects. === Mixed invariants === Furthermore, mixed invariants between pairs of rank two tensors may also be defined. == Calculation of the invariants of order two tensors of higher dimension == These may be extracted by evaluating the characteristic polynomial directly, for example using the Faddeev–LeVerrier algorithm. == Calculation of the invariants of higher order tensors == The invariants of rank three, four, and higher order tensors may also be determined. == Engineering applications == A scalar function f {\displaystyle f} that depends entirely on the principal invariants of a tensor is objective, i.e., independent of rotations of the coordinate system. This property is commonly used in formulating closed-form expressions for the strain energy density, or Helmholtz free energy, of a nonlinear material possessing isotropic symmetry. This technique was first introduced into isotropic turbulence by Howard P. 
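The relations between the main and principal invariants can be checked on a hypothetical tensor with eigenvalues 1, 2, 3 (for instance a diagonal matrix), where both sides are easy to evaluate by hand:

```python
# Hypothetical eigenvalues of a tensor, e.g. diag(1, 2, 3).
l1, l2, l3 = 1, 2, 3

# Principal invariants: elementary symmetric polynomials of the eigenvalues.
I1 = l1 + l2 + l3            # 6
I2 = l1*l2 + l1*l3 + l2*l3   # 11
I3 = l1 * l2 * l3            # 6

# Main invariants via the formulas above.
J1 = I1
J2 = I1**2 - 2*I2
J3 = I1**3 - 3*I1*I2 + 3*I3

# They equal the power sums of the eigenvalues.
assert J1 == l1 + l2 + l3
assert J2 == l1**2 + l2**2 + l3**2
assert J3 == l1**3 + l2**3 + l3**3
```

These identities are instances of Newton's identities relating power sums to elementary symmetric polynomials.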
Robertson in 1940 where he was able to derive Kármán–Howarth equation from the invariant principle. George Batchelor and Subrahmanyan Chandrasekhar exploited this technique and developed an extended treatment for axisymmetric turbulence. === Invariants of non-symmetric tensors === A real tensor A {\displaystyle \mathbf {A} } in 3D (i.e., one with a 3x3 component matrix) has as many as six independent invariants, three being the invariants of its symmetric part and three characterizing the orientation of the axial vector of the skew-symmetric part relative to the principal directions of the symmetric part. For example, if the Cartesian components of A {\displaystyle \mathbf {A} } are [ A ] = [ 931 5480 − 717 − 5120 1650 1090 1533 − 610 1169 ] , {\displaystyle [A]={\begin{bmatrix}931&5480&-717\\-5120&1650&1090\\1533&-610&1169\end{bmatrix}},} the first step would be to evaluate the axial vector w {\displaystyle \mathbf {w} } associated with the skew-symmetric part. Specifically, the axial vector has components w 1 = A 32 − A 23 2 = − 850 w 2 = A 13 − A 31 2 = − 1125 w 3 = A 21 − A 12 2 = − 5300 {\displaystyle {\begin{aligned}w_{1}&={\frac {A_{32}-A_{23}}{2}}=-850\\w_{2}&={\frac {A_{13}-A_{31}}{2}}=-1125\\w_{3}&={\frac {A_{21}-A_{12}}{2}}=-5300\end{aligned}}} The next step finds the principal values of the symmetric part of A {\displaystyle \mathbf {A} } . Even though the eigenvalues of a real non-symmetric tensor might be complex, the eigenvalues of its symmetric part will always be real and therefore can be ordered from largest to smallest. The corresponding orthonormal principal basis directions can be assigned senses to ensure that the axial vector w {\displaystyle \mathbf {w} } points within the first octant. 
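The first step of this example, the symmetric/skew split and the axial vector, can be reproduced directly in a few lines; the principal-basis computation that follows requires an eigensolver and is omitted here:

```python
# The example matrix from the text.
A = [[  931,  5480,  -717],
     [-5120,  1650,  1090],
     [ 1533,  -610,  1169]]
n = 3

# Symmetric and skew-symmetric parts: A = S + W.
S = [[(A[i][j] + A[j][i]) / 2 for j in range(n)] for i in range(n)]
W = [[(A[i][j] - A[j][i]) / 2 for j in range(n)] for i in range(n)]
assert all(S[i][j] + W[i][j] == A[i][j] for i in range(n) for j in range(n))

# Axial vector of the skew part (1-based indices as in the text,
# 0-based in the code):
w = [(A[2][1] - A[1][2]) / 2,   # w1 = (A32 - A23)/2
     (A[0][2] - A[2][0]) / 2,   # w2 = (A13 - A31)/2
     (A[1][0] - A[0][1]) / 2]   # w3 = (A21 - A12)/2

assert w == [-850, -1125, -5300]  # matches the values in the text
```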
With respect to that special basis, the components of A {\displaystyle \mathbf {A} } are [ A ′ ] = [ 1875 − 2500 3125 2500 1250 − 3750 − 3125 3750 625 ] , {\displaystyle [A']={\begin{bmatrix}1875&-2500&3125\\2500&1250&-3750\\-3125&3750&625\end{bmatrix}},} The first three invariants of A {\displaystyle \mathbf {A} } are the diagonal components of this matrix: a 1 = A 11 ′ = 1875 , a 2 = A 22 ′ = 1250 , a 3 = A 33 ′ = 625 {\displaystyle a_{1}=A'_{11}=1875,a_{2}=A'_{22}=1250,a_{3}=A'_{33}=625} (equal to the ordered principal values of the tensor's symmetric part). The remaining three invariants are the axial vector's components in this basis: w 1 ′ = A 32 ′ = 3750 , w 2 ′ = A 13 ′ = 3125 , w 3 ′ = A 21 ′ = 2500 {\displaystyle w'_{1}=A'_{32}=3750,w'_{2}=A'_{13}=3125,w'_{3}=A'_{21}=2500} . Note: the magnitude of the axial vector, w ⋅ w {\displaystyle {\sqrt {\mathbf {w} \cdot \mathbf {w} }}} , is the sole invariant of the skew part of A {\displaystyle \mathbf {A} } , whereas these three distinct invariants characterize (in a sense) "alignment" between the symmetric and skew parts of A {\displaystyle \mathbf {A} } . Incidentally, it is a common misconception that a tensor is positive definite if its eigenvalues are positive. Instead, it is positive definite if and only if the eigenvalues of its symmetric part are positive. == See also == Symmetric polynomial Elementary symmetric polynomial Newton's identities Invariant theory == References ==
Inverse element
In mathematics, the concept of an inverse element generalises the concepts of opposite (−x) and reciprocal (1/x) of numbers. Given an operation denoted here ∗, and an identity element denoted e, if x ∗ y = e, one says that x is a left inverse of y, and that y is a right inverse of x. (An identity element is an element such that x * e = x and e * y = y for all x and y for which the left-hand sides are defined.) When the operation ∗ is associative, if an element x has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse. Often an adjective is added for specifying the operation, such as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition). Inverses are commonly used in groups, where every element is invertible, and in rings, where invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions. This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism. The word 'inverse' is derived from the Latin inversus, meaning 'turned upside down' or 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of x y {\displaystyle {\tfrac {x}{y}}} is y x {\displaystyle {\tfrac {y}{x}}} ). == Definitions and basic properties == The concepts of inverse element and invertible element are commonly defined for binary operations that are everywhere defined (that is, the operation is defined for any two elements of its domain). 
However, these concepts are also commonly used with partial operations, that is operations that are not defined everywhere. Common examples are matrix multiplication, function composition and composition of morphisms in a category. It follows that the common definitions of associativity and identity element must be extended to partial operations; this is the object of the first subsections. In this section, X is a set (possibly a proper class) on which a partial operation (possibly total) is defined, which is denoted with ∗ . {\displaystyle *.} === Associativity === A partial operation is associative if x ∗ ( y ∗ z ) = ( x ∗ y ) ∗ z {\displaystyle x*(y*z)=(x*y)*z} for every x, y, z in X for which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined. Examples of non-total associative operations are multiplication of matrices of arbitrary size, and function composition. === Identity elements === Let ∗ {\displaystyle *} be a possibly partial associative operation on a set X. An identity element, or simply an identity is an element e such that x ∗ e = x and e ∗ y = y {\displaystyle x*e=x\quad {\text{and}}\quad e*y=y} for every x and y for which the left-hand sides of the equalities are defined. If e and f are two identity elements such that e ∗ f {\displaystyle e*f} is defined, then e = f . {\displaystyle e=f.} (This results immediately from the definition, by e = e ∗ f = f . {\displaystyle e=e*f=f.} ) It follows that a total operation has at most one identity element, and if e and f are different identities, then e ∗ f {\displaystyle e*f} is not defined. For example, in the case of matrix multiplication, there is one n×n identity matrix for every positive integer n, and two identity matrices of different size cannot be multiplied together. 
Similarly, identity functions are identity elements for function composition, and the composition of the identity functions of two different sets are not defined. === Left and right inverses === If x ∗ y = e , {\displaystyle x*y=e,} where e is an identity element, one says that x is a left inverse of y, and y is a right inverse of x. Left and right inverses do not always exist, even when the operation is total and associative. For example, addition is a total associative operation on nonnegative integers, which has 0 as additive identity, and 0 is the only element that has an additive inverse. This lack of inverses is the main motivation for extending the natural numbers into the integers. An element can have several left inverses and several right inverses, even when the operation is total and associative. For example, consider the functions from the integers to the integers. The doubling function x ↦ 2 x {\displaystyle x\mapsto 2x} has infinitely many left inverses under function composition, which are the functions that divide by two the even numbers, and give any value to odd numbers. Similarly, every function that maps n to either 2 n {\displaystyle 2n} or 2 n + 1 {\displaystyle 2n+1} is a right inverse of the function n ↦ ⌊ n 2 ⌋ , {\textstyle n\mapsto \left\lfloor {\frac {n}{2}}\right\rfloor ,} the floor function that maps n to n 2 {\textstyle {\frac {n}{2}}} or n − 1 2 , {\textstyle {\frac {n-1}{2}},} depending on whether n is even or odd. More generally, a function has a left inverse for function composition if and only if it is injective, and it has a right inverse if and only if it is surjective. In category theory, right inverses are also called sections, and left inverses are called retractions. === Inverses === An element is invertible under an operation if it has a left inverse and a right inverse. In the common case where the operation is associative, the left and right inverse of an element are equal and unique. 
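The doubling/halving example can be sketched in a few lines; the values returned on odd inputs (0 and 99 here) are arbitrary choices, which is exactly why the left inverse is not unique:

```python
# The doubling function on the integers.
double = lambda n: 2 * n

# Two distinct left inverses of `double`: each halves even numbers and
# assigns an arbitrary value (here 0 or 99) to odd numbers.
half_a = lambda n: n // 2 if n % 2 == 0 else 0
half_b = lambda n: n // 2 if n % 2 == 0 else 99

for n in range(-10, 11):
    assert half_a(double(n)) == n and half_b(double(n)) == n  # half ∘ double = id

# Neither is a right inverse: composing the other way fails on odd numbers.
assert double(half_a(3)) != 3

# The floor function n -> floor(n/2) has many right inverses: both
# n -> 2n and n -> 2n + 1 compose with it to the identity.
floor_half = lambda n: n // 2  # Python's // is floor division
for n in range(-10, 11):
    assert floor_half(2 * n) == n and floor_half(2 * n + 1) == n
```

This mirrors the general statement in the text: `double` is injective but not surjective (left inverses only), while `floor_half` is surjective but not injective (right inverses only).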
Indeed, if l and r are respectively a left inverse and a right inverse of x, then l = l ∗ ( x ∗ r ) = ( l ∗ x ) ∗ r = r . {\displaystyle l=l*(x*r)=(l*x)*r=r.} The inverse of an invertible element is its unique left or right inverse. If the operation is denoted as an addition, the inverse, or additive inverse, of an element x is denoted − x . {\displaystyle -x.} Otherwise, the inverse of x is generally denoted x − 1 , {\displaystyle x^{-1},} or, in the case of a commutative multiplication 1 x . {\textstyle {\frac {1}{x}}.} When there may be a confusion between several operations, the symbol of the operation may be added before the exponent, such as in x ∗ − 1 . {\displaystyle x^{*-1}.} The notation f ∘ − 1 {\displaystyle f^{\circ -1}} is not commonly used for function composition, since 1 f {\textstyle {\frac {1}{f}}} can be used for the multiplicative inverse. If x and y are invertible, and x ∗ y {\displaystyle x*y} is defined, then x ∗ y {\displaystyle x*y} is invertible, and its inverse is y − 1 x − 1 . {\displaystyle y^{-1}x^{-1}.} An invertible homomorphism is called an isomorphism. In category theory, an invertible morphism is also called an isomorphism. == In groups == A group is a set with an associative operation that has an identity element, and for which every element has an inverse. Thus, the inverse is a function from the group to itself that may also be considered as an operation of arity one. It is also an involution, since the inverse of the inverse of an element is the element itself. A group may act on a set as transformations of this set. In this case, the inverse g − 1 {\displaystyle g^{-1}} of a group element g {\displaystyle g} defines a transformation that is the inverse of the transformation defined by g , {\displaystyle g,} that is, the transformation that "undoes" the transformation defined by g . {\displaystyle g.} For example, the Rubik's cube group represents the finite sequences of elementary moves. 
The inverse of such a sequence is obtained by applying the inverse of each move in the reverse order. == In monoids == A monoid is a set with an associative operation that has an identity element. The invertible elements in a monoid form a group under the monoid operation. A ring is a monoid for ring multiplication. In this case, the invertible elements are also called units and form the group of units of the ring. If a monoid is not commutative, there may exist non-invertible elements that have a left inverse or a right inverse (not both, as, otherwise, the element would be invertible). For example, the set of the functions from a set to itself is a monoid under function composition. In this monoid, the invertible elements are the bijective functions; the elements that have left inverses are the injective functions, and those that have right inverses are the surjective functions. Given a monoid, one may want to extend it by adding inverses to some elements. This is generally impossible for non-commutative monoids, but, in a commutative monoid, it is possible to add inverses to the elements that have the cancellation property (an element x has the cancellation property if x y = x z {\displaystyle xy=xz} implies y = z , {\displaystyle y=z,} and y x = z x {\displaystyle yx=zx} implies y = z {\displaystyle y=z} ). This extension of a monoid is allowed by the Grothendieck group construction. This is the method that is commonly used for constructing the integers from the natural numbers, the rational numbers from the integers and, more generally, the field of fractions of an integral domain, and localizations of commutative rings. == In rings == A ring is an algebraic structure with two operations, addition and multiplication, which are denoted as the usual operations on numbers. 
Under addition, a ring is an abelian group, which means that addition is commutative and associative; it has an identity, called the additive identity, and denoted 0; and every element x has an inverse, called its additive inverse and denoted −x. Because of commutativity, the concepts of left and right inverses are meaningless since they do not differ from inverses. Under multiplication, a ring is a monoid; this means that multiplication is associative and has an identity called the multiplicative identity and denoted 1. An invertible element for multiplication is called a unit. The inverse or multiplicative inverse (for avoiding confusion with additive inverses) of a unit x is denoted x − 1 , {\displaystyle x^{-1},} or, when the multiplication is commutative, 1 x . {\textstyle {\frac {1}{x}}.} The additive identity 0 is never a unit, except when the ring is the zero ring, which has 0 as its unique element. If 0 is the only non-unit, the ring is a field if the multiplication is commutative, or a division ring otherwise. In a noncommutative ring (that is, a ring whose multiplication is not commutative), a non-invertible element may have one or several left or right inverses. This is, for example, the case of the linear functions from an infinite-dimensional vector space to itself. A commutative ring (that is, a ring whose multiplication is commutative) may be extended by adding inverses to elements that are not zero divisors (that is, their product with a nonzero element cannot be 0). This is the process of localization, which produces, in particular, the field of rational numbers from the ring of integers, and, more generally, the field of fractions of an integral domain. Localization is also used with zero divisors, but, in this case the original ring is not a subring of the localisation; instead, it is mapped non-injectively to the localization. 
== Matrices == Matrix multiplication is commonly defined for matrices over a field, and straightforwardly extended to matrices over rings, rngs and semirings. However, in this section, only matrices over a commutative ring are considered, because of the use of the concepts of rank and determinant. If A is an m×n matrix (that is, a matrix with m rows and n columns), and B is a p×q matrix, the product AB is defined if n = p, and only in this case. An identity matrix, that is, an identity element for matrix multiplication, is a square matrix (same number of rows and columns) whose entries of the main diagonal are all equal to 1, and all other entries are 0. An invertible matrix is an invertible element under matrix multiplication. A matrix over a commutative ring R is invertible if and only if its determinant is a unit in R (that is, is invertible in R). In this case, its inverse matrix can be computed with Cramer's rule. If R is a field, the determinant is invertible if and only if it is not zero. As the case of fields is more common, one often sees invertible matrices defined as matrices with a nonzero determinant, but this is incorrect over general rings. In the case of integer matrices (that is, matrices with integer entries), an invertible matrix is a matrix that has an inverse that is also an integer matrix. Such a matrix is called a unimodular matrix for distinguishing it from matrices that are invertible over the real numbers. A square integer matrix is unimodular if and only if its determinant is 1 or −1, since these two numbers are the only units in the ring of integers. A matrix has a left inverse if and only if its rank equals its number of columns. This left inverse is not unique except for square matrices, where the left inverse equals the inverse matrix. Similarly, a right inverse exists if and only if the rank equals the number of rows; it is not unique in the case of a rectangular matrix, and equals the inverse matrix in the case of a square matrix. 
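A small sketch with hypothetical 2×2 matrices: over the integers, a matrix is invertible (unimodular) exactly when its determinant is a unit of ℤ, i.e. ±1, and only then is the inverse produced by Cramer's rule again an integer matrix:

```python
from fractions import Fraction as F

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate (Cramer's rule)."""
    d = F(det2(M))
    return [[ M[1][1]/d, -M[0][1]/d],
            [-M[1][0]/d,  M[0][0]/d]]

# det = 1, a unit in Z, so M is unimodular: its inverse is an integer matrix.
M = [[2, 1],
     [1, 1]]
assert det2(M) == 1
assert inv2(M) == [[1, -1], [-1, 2]]

# det = 2 is nonzero, hence a unit in Q but not in Z: N is invertible over
# the rationals, but its inverse is not an integer matrix.
N = [[2, 0],
     [0, 1]]
assert det2(N) == 2
assert any(x.denominator != 1 for row in inv2(N) for x in row)
```

The second matrix illustrates why "nonzero determinant" is the right invertibility criterion over a field but not over a general ring.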
== Functions, homomorphisms and morphisms == Composition is a partial operation that generalizes to homomorphisms of algebraic structures and morphisms of categories into operations that are also called composition, and share many properties with function composition. In all cases, composition is associative. If f : X → Y {\displaystyle f\colon X\to Y} and g : Y ′ → Z , {\displaystyle g\colon Y'\to Z,} the composition g ∘ f {\displaystyle g\circ f} is defined if and only if Y ′ = Y {\displaystyle Y'=Y} or, in the function and homomorphism cases, Y ⊂ Y ′ . {\displaystyle Y\subset Y'.} In the function and homomorphism cases, this means that the codomain of f {\displaystyle f} equals or is included in the domain of g. In the morphism case, this means that the codomain of f {\displaystyle f} equals the domain of g. There is an identity id X : X → X {\displaystyle \operatorname {id} _{X}\colon X\to X} for every object X (set, algebraic structure or object), which is also called an identity function in the function case. A function is invertible if and only if it is a bijection. An invertible homomorphism or morphism is called an isomorphism. A homomorphism of algebraic structures is an isomorphism if and only if it is a bijection. The inverse of a bijection is called an inverse function. In the other cases, one talks of inverse isomorphisms. A function has a left inverse or a right inverse if and only if it is injective or surjective, respectively. A homomorphism of algebraic structures that has a left inverse or a right inverse is respectively injective or surjective, but the converse is not true in some algebraic structures. For example, the converse is true for vector spaces but not for modules over a ring: a homomorphism of modules that has a left inverse or a right inverse is called respectively a split monomorphism or a split epimorphism. This terminology is also used for morphisms in any category. 
== Generalizations == === In a unital magma === Let S {\displaystyle S} be a unital magma, that is, a set with a binary operation ∗ {\displaystyle *} and an identity element e ∈ S {\displaystyle e\in S} . If, for a , b ∈ S {\displaystyle a,b\in S} , we have a ∗ b = e {\displaystyle a*b=e} , then a {\displaystyle a} is called a left inverse of b {\displaystyle b} and b {\displaystyle b} is called a right inverse of a {\displaystyle a} . If an element x {\displaystyle x} is both a left inverse and a right inverse of y {\displaystyle y} , then x {\displaystyle x} is called a two-sided inverse, or simply an inverse, of y {\displaystyle y} . An element with a two-sided inverse in S {\displaystyle S} is called invertible in S {\displaystyle S} . An element with an inverse element only on one side is left invertible or right invertible. Elements of a unital magma ( S , ∗ ) {\displaystyle (S,*)} may have multiple left, right or two-sided inverses. For example, in the magma given by the Cayley table the elements 2 and 3 each have two two-sided inverses. A unital magma in which all elements are invertible need not be a loop. For example, in the magma ( S , ∗ ) {\displaystyle (S,*)} given by the Cayley table every element has a unique two-sided inverse (namely itself), but ( S , ∗ ) {\displaystyle (S,*)} is not a loop because the Cayley table is not a Latin square. Similarly, a loop need not have two-sided inverses. For example, in the loop given by the Cayley table the only element with a two-sided inverse is the identity element 1. If the operation ∗ {\displaystyle *} is associative then if an element has both a left inverse and a right inverse, they are equal. In other words, in a monoid (an associative unital magma) every element has at most one inverse (as defined in this section). In a monoid, the set of invertible elements is a group, called the group of units of S {\displaystyle S} , and denoted by U ( S ) {\displaystyle U(S)} or H1. 
=== In a semigroup === The definition in the previous section generalizes the notion of inverse in a group relative to the notion of identity. It is also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity; that is, in a semigroup. In a semigroup S an element x is called (von Neumann) regular if there exists some element z in S such that xzx = x; z is sometimes called a pseudoinverse. An element y is called (simply) an inverse of x if xyx = x and y = yxy. Every regular element has at least one inverse: if x = xzx then it is easy to verify that y = zxz is an inverse of x as defined in this section. Another fact that is easy to prove: if y is an inverse of x then e = xy and f = yx are idempotents, that is ee = e and ff = f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, and ex = xf = x, ye = fy = y, and e acts as a left identity on x, while f acts as a right identity, and the left/right roles are reversed for y. This simple observation can be generalized using Green's relations: every idempotent e in an arbitrary semigroup is a left identity for Re and a right identity for Le. An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity, and respectively, a local right identity. In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in the Green class H1 have an inverse from the unital magma perspective, whereas for any idempotent e, the elements of He have an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. 
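A small sketch in the multiplicative semigroup of 2×2 integer matrices (x and z below are hypothetical examples): x has determinant 0, so it is not invertible in the usual sense, yet it is von Neumann regular, and y = zxz is an inverse of it in the semigroup sense, with xy and yx idempotent:

```python
def matmul2(X, Y):
    """2x2 matrix multiplication: the semigroup operation here."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# x is singular (det = 0) but regular: x z x = x.
x = [[0, 1], [0, 0]]
z = [[0, 0], [1, 0]]
assert matmul2(matmul2(x, z), x) == x

# y = z x z is then a semigroup inverse of x: xyx = x and yxy = y.
y = matmul2(matmul2(z, x), z)
assert matmul2(matmul2(x, y), x) == x
assert matmul2(matmul2(y, x), y) == y

# e = xy and f = yx are idempotents, with e a local left identity for x
# and f a local right identity for x.
e = matmul2(x, y)
f = matmul2(y, x)
assert matmul2(e, e) == e and matmul2(f, f) == f
assert matmul2(e, x) == x and matmul2(x, f) == x
```

Here y happens to equal z itself, and e and f are the two rank-one projections onto the coordinate axes.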
If every element has exactly one inverse as defined in this section, then the semigroup is called an inverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have an absorbing element 0 because 000 = 0, whereas a group may not. Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. This is generally justified because in most applications (for example, all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity (see Generalized inverse). === U-semigroups === A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)° = a for all a in S; this endows S with a type ⟨2,1⟩ algebra. A semigroup endowed with such an operation is called a U-semigroup. Although it may seem that a° will be the inverse of a, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes of U-semigroups have been studied: I-semigroups, in which the interaction axiom is aa°a = a *-semigroups, in which the interaction axiom is (ab)° = b°a°. Such an operation is called an involution, and typically denoted by a* Clearly a group is both an I-semigroup and a *-semigroup. A class of semigroups important in semigroup theory are completely regular semigroups; these are I-semigroups in which one additionally has aa° = a°a; in other words, every element has a commuting pseudoinverse a°. There are few concrete examples of such semigroups, however; most are completely simple semigroups. In contrast, a subclass of *-semigroups, the *-regular semigroups (in the sense of Drazin), yield one of the best known examples of a (unique) pseudoinverse, the Moore–Penrose inverse. In this case, however, the involution a* is not the pseudoinverse. 
Rather, the pseudoinverse of x is the unique element y such that xyx = x, yxy = y, (xy)* = xy, (yx)* = yx. Since *-regular semigroups generalize inverse semigroups, the unique element defined this way in a *-regular semigroup is called the generalized inverse or Moore–Penrose inverse. === Semirings === === Examples === All examples in this section involve associative operators. ==== Galois connections ==== The lower and upper adjoints in a (monotone) Galois connection, L and G are quasi-inverses of each other; that is, LGL = L and GLG = G and one uniquely determines the other. They are not left or right inverses of each other however. ==== Generalized inverses of matrices ==== A square matrix M {\displaystyle M} with entries in a field K {\displaystyle K} is invertible (in the set of all square matrices of the same size, under matrix multiplication) if and only if its determinant is different from zero. If the determinant of M {\displaystyle M} is zero, it is impossible for it to have a one-sided inverse; therefore a left inverse or right inverse implies the existence of the other one. See invertible matrix for more. More generally, a square matrix over a commutative ring R {\displaystyle R} is invertible if and only if its determinant is invertible in R {\displaystyle R} . 
Non-square matrices of full rank have several one-sided inverses: For A : m × n ∣ m > n {\displaystyle A:m\times n\mid m>n} we have left inverses; for example, ( A T A ) − 1 A T ⏟ A left − 1 A = I n {\displaystyle \underbrace {\left(A^{\text{T}}A\right)^{-1}A^{\text{T}}} _{A_{\text{left}}^{-1}}A=I_{n}} For A : m × n ∣ m < n {\displaystyle A:m\times n\mid m<n} we have right inverses; for example, A A T ( A A T ) − 1 ⏟ A right − 1 = I m {\displaystyle A\underbrace {A^{\text{T}}\left(AA^{\text{T}}\right)^{-1}} _{A_{\text{right}}^{-1}}=I_{m}} The left inverse can be used to determine the least norm solution of A x = b {\displaystyle Ax=b} , which is also the least squares formula for regression and is given by x = ( A T A ) − 1 A T b . {\displaystyle x=\left(A^{\text{T}}A\right)^{-1}A^{\text{T}}b.} No rank deficient matrix has any (even one-sided) inverse. However, the Moore–Penrose inverse exists for all matrices, and coincides with the left or right (or true) inverse when it exists. As an example of matrix inverses, consider: A : 2 × 3 = [ 1 2 3 4 5 6 ] {\displaystyle A:2\times 3={\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}}} So, as m < n, we have a right inverse, A right − 1 = A T ( A A T ) − 1 . 
{\displaystyle A_{\text{right}}^{-1}=A^{\text{T}}\left(AA^{\text{T}}\right)^{-1}.} By components it is computed as A A T = [ 1 2 3 4 5 6 ] [ 1 4 2 5 3 6 ] = [ 14 32 32 77 ] ( A A T ) − 1 = [ 14 32 32 77 ] − 1 = 1 54 [ 77 − 32 − 32 14 ] A T ( A A T ) − 1 = 1 54 [ 1 4 2 5 3 6 ] [ 77 − 32 − 32 14 ] = 1 18 [ − 17 8 − 2 2 13 − 4 ] = A right − 1 {\displaystyle {\begin{aligned}AA^{\text{T}}&={\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}}{\begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix}}={\begin{bmatrix}14&32\\32&77\end{bmatrix}}\\[3pt]\left(AA^{\text{T}}\right)^{-1}&={\begin{bmatrix}14&32\\32&77\end{bmatrix}}^{-1}={\frac {1}{54}}{\begin{bmatrix}77&-32\\-32&14\end{bmatrix}}\\[3pt]A^{\text{T}}\left(AA^{\text{T}}\right)^{-1}&={\frac {1}{54}}{\begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix}}{\begin{bmatrix}77&-32\\-32&14\end{bmatrix}}={\frac {1}{18}}{\begin{bmatrix}-17&8\\-2&2\\13&-4\end{bmatrix}}=A_{\text{right}}^{-1}\end{aligned}}} The left inverse doesn't exist, because A T A = [ 1 4 2 5 3 6 ] [ 1 2 3 4 5 6 ] = [ 17 22 27 22 29 36 27 36 45 ] {\displaystyle A^{\text{T}}A={\begin{bmatrix}1&4\\2&5\\3&6\end{bmatrix}}{\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}}={\begin{bmatrix}17&22&27\\22&29&36\\27&36&45\end{bmatrix}}} which is a singular matrix, and cannot be inverted. == See also == Division ring Latin square property Loop (algebra) Unit (ring theory) == Notes == == References == M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7, p. 15 (def in unital magma) and p. 33 (def in semigroup) Howie, John M. (1995). Fundamentals of Semigroup Theory. Clarendon Press. ISBN 0-19-851194-9. contains all of the semigroup material herein except *-regular semigroups. Drazin, M.P., Regular semigroups with involution, Proc. Symp. 
on Regular Semigroups (DeKalb, 1979), 29–46 Miyuki Yamada, P-systems in regular semigroups, Semigroup Forum, 24(1), December 1982, pp. 173–187 Nordahl, T.E., and H.E. Scheiblich, Regular * Semigroups, Semigroup Forum, 16(1978), 369–377.
Wikipedia:Inverse function rule#0
In calculus, the inverse function rule is a formula that expresses the derivative of the inverse of a bijective and differentiable function f in terms of the derivative of f. More precisely, if the inverse of f {\displaystyle f} is denoted as f − 1 {\displaystyle f^{-1}} , where f − 1 ( y ) = x {\displaystyle f^{-1}(y)=x} if and only if f ( x ) = y {\displaystyle f(x)=y} , then the inverse function rule is, in Lagrange's notation, [ f − 1 ] ′ ( y ) = 1 f ′ ( f − 1 ( y ) ) {\displaystyle \left[f^{-1}\right]'(y)={\frac {1}{f'\left(f^{-1}(y)\right)}}} . This formula holds in general whenever f {\displaystyle f} is continuous and injective on an interval I, with f {\displaystyle f} being differentiable at f − 1 ( y ) {\displaystyle f^{-1}(y)} ( ∈ I {\displaystyle \in I} ) and where f ′ ( f − 1 ( y ) ) ≠ 0 {\displaystyle f'(f^{-1}(y))\neq 0} . The same formula is also equivalent to the expression D [ f − 1 ] = 1 ( D f ) ∘ ( f − 1 ) , {\displaystyle {\mathcal {D}}\left[f^{-1}\right]={\frac {1}{({\mathcal {D}}f)\circ \left(f^{-1}\right)}},} where D {\displaystyle {\mathcal {D}}} denotes the unary derivative operator (on the space of functions) and ∘ {\displaystyle \circ } denotes function composition. Geometrically, a function and its inverse function have graphs that are reflections of each other in the line y = x {\displaystyle y=x} . This reflection operation turns the gradient of any line into its reciprocal. Assuming that f {\displaystyle f} has an inverse in a neighbourhood of x {\displaystyle x} and that its derivative at that point is non-zero, its inverse is guaranteed to be differentiable at x {\displaystyle x} and have a derivative given by the above formula. The inverse function rule may also be expressed in Leibniz's notation. As that notation suggests, d x d y ⋅ d y d x = 1. 
{\displaystyle {\frac {dx}{dy}}\,\cdot \,{\frac {dy}{dx}}=1.} This relation is obtained by differentiating the equation f − 1 ( y ) = x {\displaystyle f^{-1}(y)=x} with respect to x and applying the chain rule, yielding that: d x d y ⋅ d y d x = d x d x {\displaystyle {\frac {dx}{dy}}\,\cdot \,{\frac {dy}{dx}}={\frac {dx}{dx}}} considering that the derivative of x with respect to x is 1. == Derivation == Let f {\displaystyle f} be an invertible (bijective) function, let x {\displaystyle x} be in the domain of f {\displaystyle f} , and let y = f ( x ) . {\displaystyle y=f(x).} Let g = f − 1 . {\displaystyle g=f^{-1}.} So, f ( g ( y ) ) = y . {\displaystyle f(g(y))=y.} Differentiating this equation with respect to ⁠ y {\displaystyle y} ⁠, and using the chain rule, one gets f ′ ( g ( y ) ) ⋅ g ′ ( y ) = 1. {\displaystyle f'(g(y))\cdot g'(y)=1.} That is, g ′ ( y ) = 1 f ′ ( g ( y ) ) {\displaystyle g'(y)={\frac {1}{f'(g(y))}}} or ( f − 1 ) ′ ( y ) = 1 f ′ ( f − 1 ( y ) ) . {\displaystyle (f^{-1})^{\prime }(y)={\frac {1}{f^{\prime }(f^{-1}(y))}}.} == Examples == y = x 2 {\displaystyle y=x^{2}} (for positive x) has inverse x = y {\displaystyle x={\sqrt {y}}} . d y d x = 2 x ; d x d y = 1 2 y = 1 2 x {\displaystyle {\frac {dy}{dx}}=2x{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\frac {dx}{dy}}={\frac {1}{2{\sqrt {y}}}}={\frac {1}{2x}}} d y d x ⋅ d x d y = 2 x ⋅ 1 2 x = 1. {\displaystyle {\frac {dy}{dx}}\,\cdot \,{\frac {dx}{dy}}=2x\cdot {\frac {1}{2x}}=1.} At x = 0 {\displaystyle x=0} , however, there is a problem: the graph of the square root function becomes vertical, corresponding to a horizontal tangent for the square function. 
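The rule can be checked numerically on the square/square-root example above; the following sketch uses a central-difference approximation (the helper deriv is ours):

```python
def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def f(x):
    return x * x     # y = x^2 on the positive reals

def finv(y):
    return y ** 0.5  # its inverse, x = sqrt(y)

x = 3.0
y = f(x)
# (f^{-1})'(y) = 1 / f'(f^{-1}(y)); both sides should be 1/(2x) = 1/6
assert abs(deriv(finv, y) - 1.0 / deriv(f, x)) < 1e-5

# At x = 0 the hypothesis f'(x) != 0 fails: the square function has a
# horizontal tangent there, matching the vertical tangent of sqrt at 0.
assert deriv(f, 0.0) == 0.0
```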
y = e x {\displaystyle y=e^{x}} (for real x) has inverse x = ln ⁡ y {\displaystyle x=\ln {y}} (for positive y {\displaystyle y} ) d y d x = e x ; d x d y = 1 y = e − x {\displaystyle {\frac {dy}{dx}}=e^{x}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\frac {dx}{dy}}={\frac {1}{y}}=e^{-x}} d y d x ⋅ d x d y = e x ⋅ e − x = 1. {\displaystyle {\frac {dy}{dx}}\,\cdot \,{\frac {dx}{dy}}=e^{x}\cdot e^{-x}=1.} == Additional properties == Integrating this relationship gives f − 1 ( x ) = ∫ 1 f ′ ( f − 1 ( x ) ) d x + C . {\displaystyle {f^{-1}}(x)=\int {\frac {1}{f'({f^{-1}}(x))}}\,{dx}+C.} This is only useful if the integral exists. In particular, we need f ′ ( x ) {\displaystyle f'(x)} to be non-zero across the range of integration. It follows that a function that has a continuous derivative has an inverse in a neighbourhood of every point where the derivative is non-zero. This need not be true if the derivative is not continuous. Another useful property is the following: ∫ f − 1 ( x ) d x = x f − 1 ( x ) − F ( f − 1 ( x ) ) + C {\displaystyle \int f^{-1}(x)\,{dx}=xf^{-1}(x)-F(f^{-1}(x))+C} where F {\displaystyle F} denotes an antiderivative of f {\displaystyle f} . The inverse of the derivative of f(x) is also of interest, as it is used in showing the convexity of the Legendre transform. Let z = f ′ ( x ) {\displaystyle z=f'(x)} ; then we have, assuming f ″ ( x ) ≠ 0 {\displaystyle f''(x)\neq 0} : d ( f ′ ) − 1 ( z ) d z = 1 f ″ ( x ) {\displaystyle {\frac {d(f')^{-1}(z)}{dz}}={\frac {1}{f''(x)}}} This can be shown using the previous notation y = f ( x ) {\displaystyle y=f(x)} . 
Then we have: f ′ ( x ) = d y d x = d y d z d z d x = d y d z f ″ ( x ) ⇒ d y d z = f ′ ( x ) f ″ ( x ) {\displaystyle f'(x)={\frac {dy}{dx}}={\frac {dy}{dz}}{\frac {dz}{dx}}={\frac {dy}{dz}}f''(x)\Rightarrow {\frac {dy}{dz}}={\frac {f'(x)}{f''(x)}}} Therefore: d ( f ′ ) − 1 ( z ) d z = d x d z = d y d z d x d y = f ′ ( x ) f ″ ( x ) 1 f ′ ( x ) = 1 f ″ ( x ) {\displaystyle {\frac {d(f')^{-1}(z)}{dz}}={\frac {dx}{dz}}={\frac {dy}{dz}}{\frac {dx}{dy}}={\frac {f'(x)}{f''(x)}}{\frac {1}{f'(x)}}={\frac {1}{f''(x)}}} By induction, we can generalize this result for any integer n ≥ 1 {\displaystyle n\geq 1} , with z = f ( n ) ( x ) {\displaystyle z=f^{(n)}(x)} , the nth derivative of f(x), and y = f ( n − 1 ) ( x ) {\displaystyle y=f^{(n-1)}(x)} , assuming f ( i ) ( x ) ≠ 0 for 0 < i ≤ n + 1 {\displaystyle f^{(i)}(x)\neq 0{\text{ for }}0<i\leq n+1} : d ( f ( n ) ) − 1 ( z ) d z = 1 f ( n + 1 ) ( x ) {\displaystyle {\frac {d(f^{(n)})^{-1}(z)}{dz}}={\frac {1}{f^{(n+1)}(x)}}} == Higher derivatives == The chain rule given above is obtained by differentiating the identity f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}(f(x))=x} with respect to x. One can continue the same process for higher derivatives. Differentiating the identity twice with respect to x, one obtains d 2 y d x 2 ⋅ d x d y + d d x ( d x d y ) ⋅ ( d y d x ) = 0 , {\displaystyle {\frac {d^{2}y}{dx^{2}}}\,\cdot \,{\frac {dx}{dy}}+{\frac {d}{dx}}\left({\frac {dx}{dy}}\right)\,\cdot \,\left({\frac {dy}{dx}}\right)=0,} that is simplified further by the chain rule as d 2 y d x 2 ⋅ d x d y + d 2 x d y 2 ⋅ ( d y d x ) 2 = 0. {\displaystyle {\frac {d^{2}y}{dx^{2}}}\,\cdot \,{\frac {dx}{dy}}+{\frac {d^{2}x}{dy^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{2}=0.} Replacing the first derivative, using the identity obtained earlier, we get d 2 y d x 2 = − d 2 x d y 2 ⋅ ( d y d x ) 3 . 
{\displaystyle {\frac {d^{2}y}{dx^{2}}}=-{\frac {d^{2}x}{dy^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{3}.} Similarly for the third derivative: d 3 y d x 3 = − d 3 x d y 3 ⋅ ( d y d x ) 4 − 3 d 2 x d y 2 ⋅ d 2 y d x 2 ⋅ ( d y d x ) 2 {\displaystyle {\frac {d^{3}y}{dx^{3}}}=-{\frac {d^{3}x}{dy^{3}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{4}-3{\frac {d^{2}x}{dy^{2}}}\,\cdot \,{\frac {d^{2}y}{dx^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{2}} or using the formula for the second derivative, d 3 y d x 3 = − d 3 x d y 3 ⋅ ( d y d x ) 4 + 3 ( d 2 x d y 2 ) 2 ⋅ ( d y d x ) 5 {\displaystyle {\frac {d^{3}y}{dx^{3}}}=-{\frac {d^{3}x}{dy^{3}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{4}+3\left({\frac {d^{2}x}{dy^{2}}}\right)^{2}\,\cdot \,\left({\frac {dy}{dx}}\right)^{5}} These formulas are generalized by the Faà di Bruno's formula. These formulas can also be written using Lagrange's notation. If f and g are inverses, then g ″ ( x ) = − f ″ ( g ( x ) ) [ f ′ ( g ( x ) ) ] 3 {\displaystyle g''(x)={\frac {-f''(g(x))}{[f'(g(x))]^{3}}}} == Example == y = e x {\displaystyle y=e^{x}} has the inverse x = ln ⁡ y {\displaystyle x=\ln y} . Using the formula for the second derivative of the inverse function, d y d x = d 2 y d x 2 = e x = y ; ( d y d x ) 3 = y 3 ; {\displaystyle {\frac {dy}{dx}}={\frac {d^{2}y}{dx^{2}}}=e^{x}=y{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}\left({\frac {dy}{dx}}\right)^{3}=y^{3};} so that d 2 x d y 2 ⋅ y 3 + y = 0 ; d 2 x d y 2 = − 1 y 2 {\displaystyle {\frac {d^{2}x}{dy^{2}}}\,\cdot \,y^{3}+y=0{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }};{\mbox{ }}{\mbox{ }}{\mbox{ }}{\mbox{ }}{\frac {d^{2}x}{dy^{2}}}=-{\frac {1}{y^{2}}}} , which agrees with the direct calculation. 
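The closed-form answer in the example can be compared against a direct numerical estimate; a sketch using a second-order central difference (the helper second_deriv is ours):

```python
import math

def second_deriv(f, x, h=1e-4):
    """Central-difference approximation to f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

y = 2.0
# Direct numerical value of (ln)''(y); the exact answer is -1/y^2 = -0.25
direct = second_deriv(math.log, y)

# Value predicted by g''(y) = -f''(g(y)) / [f'(g(y))]^3 with f = exp, g = ln
g = math.log(y)
predicted = -math.exp(g) / math.exp(g) ** 3

assert abs(predicted - (-1.0 / y ** 2)) < 1e-12
assert abs(direct - predicted) < 1e-4
```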
== See also == Calculus – Branch of mathematics Chain rule – For derivatives of composed functions Differentiation of trigonometric functions – Mathematical process of finding the derivative of a trigonometric function Differentiation rules – Rules for computing derivatives of functions Implicit function theorem – On converting relations to functions of several real variables Integration of inverse functions – Mathematical theorem, used in calculus Inverse function – Mathematical concept Inverse function theorem – Theorem in mathematics Table of derivatives – Rules for computing derivatives of functions Vector calculus identities – Mathematical identities == References == Marsden, Jerrold E.; Weinstein, Alan (1981). "Chapter 8: Inverse Functions and the Chain Rule". Calculus unlimited (PDF). Menlo Park, Calif.: Benjamin/Cummings Pub. Co. ISBN 0-8053-6932-5.
Wikipedia:Inverse limit#0
In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise gluing process being specified by morphisms between the objects. Thus, inverse limits can be defined in any category although their existence depends on the category that is considered. They are a special case of the concept of limit in category theory. By working in the dual category, that is by reversing the arrows, an inverse limit becomes a direct limit or inductive limit, and a limit becomes a colimit. == Formal definition == === Algebraic objects === We start with the definition of an inverse system (or projective system) of groups and homomorphisms. Let ( I , ≤ ) {\displaystyle (I,\leq )} be a directed poset (not all authors require I to be directed). Let (Ai)i∈I be a family of groups and suppose we have a family of homomorphisms f i j : A j → A i {\displaystyle f_{ij}:A_{j}\to A_{i}} for all i ≤ j {\displaystyle i\leq j} (note the order) with the following properties: f i i {\displaystyle f_{ii}} is the identity on A i {\displaystyle A_{i}} , f i k = f i j ∘ f j k for all i ≤ j ≤ k . {\displaystyle f_{ik}=f_{ij}\circ f_{jk}\quad {\text{for all }}i\leq j\leq k.} Then the pair ( ( A i ) i ∈ I , ( f i j ) i ≤ j ∈ I ) {\displaystyle ((A_{i})_{i\in I},(f_{ij})_{i\leq j\in I})} is called an inverse system of groups and morphisms over I {\displaystyle I} , and the morphisms f i j {\displaystyle f_{ij}} are called the transition morphisms of the system. We define the inverse limit of the inverse system ( ( A i ) i ∈ I , ( f i j ) i ≤ j ∈ I ) {\displaystyle ((A_{i})_{i\in I},(f_{ij})_{i\leq j\in I})} as a particular subgroup of the direct product of the A i {\displaystyle A_{i}} 's: A = lim ← i ∈ I ⁡ A i = { a → ∈ ∏ i ∈ I A i | a i = f i j ( a j ) for all i ≤ j in I } . 
{\displaystyle A=\varprojlim _{i\in I}{A_{i}}=\left\{\left.{\vec {a}}\in \prod _{i\in I}A_{i}\;\right|\;a_{i}=f_{ij}(a_{j}){\text{ for all }}i\leq j{\text{ in }}I\right\}.} The inverse limit A {\displaystyle A} comes equipped with natural projections πi: A → Ai which pick out the ith component of the direct product for each i {\displaystyle i} in I {\displaystyle I} . The inverse limit and the natural projections satisfy a universal property described in the next section. This same construction may be carried out if the A i {\displaystyle A_{i}} 's are sets, semigroups, topological spaces, rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category. === General definition === The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. Let ( X i , f i j ) {\textstyle (X_{i},f_{ij})} be an inverse system of objects and morphisms in a category C (same definition as above). The inverse limit of this system is an object X in C together with morphisms πi: X → Xi (called projections) satisfying πi = f i j {\displaystyle f_{ij}} ∘ πj for all i ≤ j. The pair (X, πi) must be universal in the sense that for any other such pair (Y, ψi) there exists a unique morphism u: Y → X such that the diagram commutes for all i ≤ j. The inverse limit is often denoted X = lim ← ⁡ X i {\displaystyle X=\varprojlim X_{i}} with the inverse system ( X i , f i j ) {\textstyle (X_{i},f_{ij})} and the canonical projections π i {\displaystyle \pi _{i}} being understood. In some categories, the inverse limit of certain inverse systems does not exist. If it does, however, it is unique in a strong sense: given any two inverse limits X and X' of an inverse system, there exists a unique isomorphism X′ → X commuting with the projection maps. 
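The subgroup-of-the-product definition can be made concrete by brute force for a small truncated system; in the following sketch the finite index set {1, 2, 3} stands in for a general directed poset:

```python
from itertools import product

# Truncated inverse system A_3 -> A_2 -> A_1 with A_n = Z/2^n Z and
# transition maps f_ij(a) = a mod 2^i for i <= j ("take remainder").
moduli = [2, 4, 8]

def compatible(vec):
    """a_i = f_ij(a_j) for all i <= j, i.e. vec[i] == vec[j] mod moduli[i]."""
    return all(vec[i] == vec[j] % moduli[i]
               for j in range(len(vec)) for i in range(j + 1))

limit = [vec for vec in product(*(range(m) for m in moduli)) if compatible(vec)]

# The index set has a greatest element (n = 3), so projecting a thread onto
# its top component is a bijection: each a_3 determines the whole thread.
assert len(limit) == 8
assert sorted(vec[-1] for vec in limit) == list(range(8))
```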
Inverse systems and inverse limits in a category C admit an alternative description in terms of functors. Any partially ordered set I can be considered as a small category where the morphisms consist of arrows i → j if and only if i ≤ j. An inverse system is then just a contravariant functor I → C. Let C I o p {\displaystyle C^{I^{\mathrm {op} }}} be the category of these functors (with natural transformations as morphisms). An object X of C can be considered a trivial inverse system, where all objects are equal to X and all arrows are the identity of X. This defines a "trivial functor" from C to C I o p . {\displaystyle C^{I^{\mathrm {op} }}.} The inverse limit, if it exists, is defined as a right adjoint of this trivial functor. == Examples == The ring of p-adic integers is the inverse limit of the rings Z / p n Z {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} } (see modular arithmetic) with the index set being the natural numbers with the usual order, and the morphisms being "take remainder". That is, one considers sequences of integers ( n 1 , n 2 , … ) {\displaystyle (n_{1},n_{2},\dots )} such that each element of the sequence "projects" down to the previous ones, namely, that n i ≡ n j mod p i {\displaystyle n_{i}\equiv n_{j}{\mbox{ mod }}p^{i}} whenever i < j . {\displaystyle i<j.} The natural topology on the p-adic integers is the one implied here, namely the product topology with cylinder sets as the open sets. The p-adic solenoid is the inverse limit of the topological groups R / p n Z {\displaystyle \mathbb {R} /p^{n}\mathbb {Z} } with the index set being the natural numbers with the usual order, and the morphisms being "take remainder". That is, one considers sequences of real numbers ( x 1 , x 2 , … ) {\displaystyle (x_{1},x_{2},\dots )} such that each element of the sequence "projects" down to the previous ones, namely, that x i ≡ x j mod p i {\displaystyle x_{i}\equiv x_{j}{\mbox{ mod }}p^{i}} whenever i < j . 
{\displaystyle i<j.} Its elements are exactly of the form n + r {\displaystyle n+r} , where n {\displaystyle n} is a p-adic integer, and r ∈ [ 0 , 1 ) {\displaystyle r\in [0,1)} is the "remainder". The ring R [ [ t ] ] {\displaystyle \textstyle R[[t]]} of formal power series over a commutative ring R can be thought of as the inverse limit of the rings R [ t ] / t n R [ t ] {\displaystyle \textstyle R[t]/t^{n}R[t]} , indexed by the natural numbers as usually ordered, with the morphisms from R [ t ] / t n + j R [ t ] {\displaystyle \textstyle R[t]/t^{n+j}R[t]} to R [ t ] / t n R [ t ] {\displaystyle \textstyle R[t]/t^{n}R[t]} given by the natural projection. Pro-finite groups are defined as inverse limits of (discrete) finite groups. Let the index set I of an inverse system (Xi, f i j {\displaystyle f_{ij}} ) have a greatest element m. Then the natural projection πm: X → Xm is an isomorphism. In the category of sets, every inverse system has an inverse limit, which can be constructed in an elementary manner as a subset of the product of the sets forming the inverse system. The inverse limit of any inverse system of non-empty finite sets is non-empty. This is a generalization of Kőnig's lemma in graph theory and may be proved with Tychonoff's theorem, viewing the finite sets as compact discrete spaces, and then applying the finite intersection property characterization of compactness. In the category of topological spaces, every inverse system has an inverse limit. It is constructed by placing the initial topology on the underlying set-theoretic inverse limit. This is known as the limit topology. The set of infinite strings is the inverse limit of the set of finite strings, and is thus endowed with the limit topology. As the original spaces are discrete, the limit space is totally disconnected. This is one way of realizing the p-adic numbers and the Cantor set (as infinite strings). 
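The p-adic example can be sampled directly: a compatible thread need not come from any ordinary integer, which is what makes the limit strictly larger than Z. A sketch for p = 2 (Python 3.8+ is assumed, for modular inverses via the three-argument pow):

```python
# A point of Z_2 = lim Z/2^n Z is a compatible sequence of residues; here we
# build the thread of inverses of 3, i.e. the 2-adic integer "1/3".
N = 12
thread = [pow(3, -1, 2 ** (n + 1)) for n in range(N)]

# Compatibility with the "take remainder" transition maps a -> a mod 2^i:
assert all(thread[n + 1] % 2 ** (n + 1) == thread[n] for n in range(N - 1))

# Each level genuinely inverts 3. An ordinary integer k could satisfy
# 3k = 1 (mod 2^n) for every n only if 3k = 1, which is impossible in Z,
# so this thread is an element of the limit that comes from no integer.
assert all(3 * thread[n] % 2 ** (n + 1) == 1 for n in range(N))
```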
== Derived functors of the inverse limit == For an abelian category C, the inverse limit functor lim ← : C I → C {\displaystyle \varprojlim :C^{I}\rightarrow C} is left exact. If I is ordered (not simply partially ordered) and countable, and C is the category Ab of abelian groups, the Mittag-Leffler condition is a condition on the transition morphisms fij that ensures the exactness of lim ← {\displaystyle \varprojlim } . Specifically, Eilenberg constructed a functor lim ← ⁡ 1 : Ab I → Ab {\displaystyle \varprojlim {}^{1}:\operatorname {Ab} ^{I}\rightarrow \operatorname {Ab} } (pronounced "lim one") such that if (Ai, fij), (Bi, gij), and (Ci, hij) are three inverse systems of abelian groups, and 0 → A i → B i → C i → 0 {\displaystyle 0\rightarrow A_{i}\rightarrow B_{i}\rightarrow C_{i}\rightarrow 0} is a short exact sequence of inverse systems, then 0 → lim ← ⁡ A i → lim ← ⁡ B i → lim ← ⁡ C i → lim ← ⁡ 1 A i {\displaystyle 0\rightarrow \varprojlim A_{i}\rightarrow \varprojlim B_{i}\rightarrow \varprojlim C_{i}\rightarrow \varprojlim {}^{1}A_{i}} is an exact sequence in Ab. === Mittag-Leffler condition === If the ranges of the morphisms of an inverse system of abelian groups (Ai, fij) are stationary, that is, for every k there exists j ≥ k such that for all i ≥ j : f k j ( A j ) = f k i ( A i ) {\displaystyle f_{kj}(A_{j})=f_{ki}(A_{i})} one says that the system satisfies the Mittag-Leffler condition. The name "Mittag-Leffler" for this condition was given by Bourbaki in their chapter on uniform structures for a similar result about inverse limits of complete Hausdorff uniform spaces. Mittag-Leffler used a similar argument in the proof of Mittag-Leffler's theorem. The following situations are examples where the Mittag-Leffler condition is satisfied: a system in which the morphisms fij are surjective a system of finite-dimensional vector spaces or finite abelian groups or modules of finite length or Artinian modules. 
An example where lim ← ⁡ 1 {\displaystyle \varprojlim {}^{1}} is non-zero is obtained by taking I to be the non-negative integers, letting Ai = piZ, Bi = Z, and Ci = Bi / Ai = Z/piZ. Then lim ← ⁡ 1 A i = Z p / Z {\displaystyle \varprojlim {}^{1}A_{i}=\mathbf {Z} _{p}/\mathbf {Z} } where Zp denotes the p-adic integers. === Further results === More generally, if C is an arbitrary abelian category that has enough injectives, then so does CI, and the right derived functors of the inverse limit functor can thus be defined. The nth right derived functor is denoted R n lim ← : C I → C . {\displaystyle R^{n}\varprojlim :C^{I}\rightarrow C.} In the case where C satisfies Grothendieck's axiom (AB4*), Jan-Erik Roos generalized the functor lim1 on AbI to series of functors limn such that lim ← ⁡ n ≅ R n lim ← . {\displaystyle \varprojlim {}^{n}\cong R^{n}\varprojlim .} It was thought for almost 40 years that Roos had proved (in Sur les foncteurs dérivés de lim. Applications.) that lim1 Ai = 0 for (Ai, fij) an inverse system with surjective transition morphisms and I the set of non-negative integers (such inverse systems are often called "Mittag-Leffler sequences"). However, in 2002, Amnon Neeman and Pierre Deligne constructed an example of such a system in a category satisfying (AB4) (in addition to (AB4*)) with lim1 Ai ≠ 0. Roos has since shown (in "Derived functors of inverse limits revisited") that his result is correct if C has a set of generators (in addition to satisfying (AB3) and (AB4*)). Barry Mitchell has shown (in "The cohomological dimension of a directed set") that if I has cardinality ℵ d {\displaystyle \aleph _{d}} (the dth infinite cardinal), then Rnlim is zero for all n ≥ d + 2. 
This applies to the I-indexed diagrams in the category of R-modules, with R a commutative ring; it is not necessarily true in an arbitrary abelian category (see Roos' "Derived functors of inverse limits revisited" for examples of abelian categories in which limn, on diagrams indexed by a countable set, is nonzero for n > 1). == Related concepts and generalizations == The categorical dual of an inverse limit is a direct limit (or inductive limit). More general concepts are the limits and colimits of category theory. The terminology is somewhat confusing: inverse limits are a class of limits, while direct limits are a class of colimits. == Notes == == References == Bourbaki, Nicolas (1989), Algebra I, Springer, ISBN 978-3-540-64243-5, OCLC 40551484 Bourbaki, Nicolas (1989), General topology: Chapters 1-4, Springer, ISBN 978-3-540-64241-1, OCLC 40551485 Mac Lane, Saunders (September 1998), Categories for the Working Mathematician (2nd ed.), Springer, ISBN 0-387-98403-8 Mitchell, Barry (1972), "Rings with several objects", Advances in Mathematics, 8: 1–161, doi:10.1016/0001-8708(72)90002-3, MR 0294454 Neeman, Amnon (2002), "A counterexample to a 1961 "theorem" in homological algebra (with appendix by Pierre Deligne)", Inventiones Mathematicae, 148 (2): 397–420, doi:10.1007/s002220100197, MR 1906154 Roos, Jan-Erik (1961), "Sur les foncteurs dérivés de lim. Applications", C. R. Acad. Sci. Paris, 252: 3702–3704, MR 0132091 Roos, Jan-Erik (2006), "Derived functors of inverse limits revisited", J. London Math. Soc., Series 2, 73 (1): 65–83, doi:10.1112/S0024610705022416, MR 2197371 Section 3.5 of Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
Wikipedia:Inverse trigonometric functions#0
In mathematics, the inverse trigonometric functions (occasionally also called antitrigonometric, cyclometric, or arcus functions) are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry. == Notation == Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships: when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, the cosine of x function is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan. The notations sin−1(x), cos−1(x), tan−1(x), etc., as introduced by John Herschel in 1813, are often used as well in English-language sources, much more than the also established sin[−1](x), cos[−1](x), tan[−1](x) – conventions consistent with the notation of an inverse function, that is useful (for example) to define the multivalued version of each inverse trigonometric function: tan − 1 ⁡ ( x ) = { arctan ⁡ ( x ) + π k ∣ k ∈ Z } . 
{\displaystyle \tan ^{-1}(x)=\{\arctan(x)+\pi k\mid k\in \mathbb {Z} \}~.} However, this might appear to conflict logically with the common semantics for expressions such as sin2(x) (although only sin2 x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function. The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, (cos(x))−1 = sec(x). Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a “−1” superscript: Sin−1(x), Cos−1(x), Tan−1(x), etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by sin−1(x), cos−1(x), etc., or, better, by sin−1 x, cos−1 x, etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case. Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions. == Basic concepts == === Principal values === Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. Therefore, the result ranges of the inverse functions are proper (i.e. strict) subsets of the domains of the original functions. For example, using function in the sense of multivalued functions, just as the square root function y = x {\displaystyle y={\sqrt {x}}} could be defined from y 2 = x , {\displaystyle y^{2}=x,} the function y = arcsin ⁡ ( x ) {\displaystyle y=\arcsin(x)} is defined so that sin ⁡ ( y ) = x . 
{\displaystyle \sin(y)=x.} For a given real number x , {\displaystyle x,} with − 1 ≤ x ≤ 1 , {\displaystyle -1\leq x\leq 1,} there are multiple (in fact, countably infinitely many) numbers y {\displaystyle y} such that sin ⁡ ( y ) = x {\displaystyle \sin(y)=x} ; for example, sin ⁡ ( 0 ) = 0 , {\displaystyle \sin(0)=0,} but also sin ⁡ ( π ) = 0 , {\displaystyle \sin(\pi )=0,} sin ⁡ ( 2 π ) = 0 , {\displaystyle \sin(2\pi )=0,} etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table. Note: Some authors define the range of arcsecant to be ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π ≤ y < 3 π 2 {\textstyle \pi \leq y<{\frac {3\pi }{2}}} ), because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, tan ⁡ ( arcsec ⁡ ( x ) ) = x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))={\sqrt {x^{2}-1}},} whereas with the range ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π 2 < y ≤ π {\textstyle {\frac {\pi }{2}}<y\leq \pi } ), we would have to write tan ⁡ ( arcsec ⁡ ( x ) ) = ± x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))=\pm {\sqrt {x^{2}-1}},} since tangent is nonnegative on 0 ≤ y < π 2 , {\textstyle 0\leq y<{\frac {\pi }{2}},} but nonpositive on π 2 < y ≤ π . {\textstyle {\frac {\pi }{2}}<y\leq \pi .} For a similar reason, the same authors define the range of arccosecant to be ( − π < y ≤ − π 2 {\textstyle (-\pi <y\leq -{\frac {\pi }{2}}} or 0 < y ≤ π 2 ) . {\textstyle 0<y\leq {\frac {\pi }{2}}).} ==== Domains ==== If x is allowed to be a complex number, then the range of y applies only to its real part. 
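These principal-value conventions can be probed numerically with Python's math module; in the sketch below the arcsec helper is ours (the standard library has no arcsecant), and it encodes the arccos-based convention with range [0, π] excluding π/2:

```python
import math

# The stdlib principal values follow the standard ranges:
assert abs(math.asin(0.5) - math.pi / 6) < 1e-12   # arcsin: [-pi/2, pi/2]
assert abs(math.acos(-1.0) - math.pi) < 1e-12      # arccos: [0, pi]
assert abs(math.atan(1.0) - math.pi / 4) < 1e-12   # arctan: (-pi/2, pi/2)

def arcsec(x):
    """Arcsecant with range [0, pi/2) U (pi/2, pi], via arcsec(x) = arccos(1/x)."""
    return math.acos(1.0 / x)

# With this convention tan(arcsec(x)) = +/- sqrt(x^2 - 1); the sign is
# negative for x < -1 because tangent is nonpositive on (pi/2, pi].
x = -2.0
assert math.tan(arcsec(x)) < 0
assert abs(math.tan(arcsec(x)) + math.sqrt(x * x - 1)) < 1e-9
```

Under the alternative convention mentioned in the note, with range [0, π/2) ∪ [π, 3π/2), the tangent would instead come out nonnegative everywhere.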
The table below displays names and domains of the inverse trigonometric functions along with the range of their usual principal values in radians. The symbol R = ( − ∞ , ∞ ) {\displaystyle \mathbb {R} =(-\infty ,\infty )} denotes the set of all real numbers and Z = { … , − 2 , − 1 , 0 , 1 , 2 , … } {\displaystyle \mathbb {Z} =\{\ldots ,\,-2,\,-1,\,0,\,1,\,2,\,\ldots \}} denotes the set of all integers. The set of all integer multiples of π {\displaystyle \pi } is denoted by π Z := { π n : n ∈ Z } = { … , − 2 π , − π , 0 , π , 2 π , … } . {\displaystyle \pi \mathbb {Z} ~:=~\{\pi n\;:\;n\in \mathbb {Z} \}~=~\{\ldots ,\,-2\pi ,\,-\pi ,\,0,\,\pi ,\,2\pi ,\,\ldots \}.} The symbol ∖ {\displaystyle \,\setminus \,} denotes set subtraction so that, for instance, R ∖ ( − 1 , 1 ) = ( − ∞ , − 1 ] ∪ [ 1 , ∞ ) {\displaystyle \mathbb {R} \setminus (-1,1)=(-\infty ,-1]\cup [1,\infty )} is the set of points in R {\displaystyle \mathbb {R} } (that is, real numbers) that are not in the interval ( − 1 , 1 ) . {\displaystyle (-1,1).} The Minkowski sum notation π Z + ( 0 , π ) {\textstyle \pi \mathbb {Z} +(0,\pi )} and π Z + ( − π 2 , π 2 ) {\displaystyle \pi \mathbb {Z} +{\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}} that is used above to concisely write the domains of cot , csc , tan , and sec {\displaystyle \cot ,\csc ,\tan ,{\text{ and }}\sec } is now explained. Domain of cotangent cot {\displaystyle \cot } and cosecant csc {\displaystyle \csc } : The domains of cot {\displaystyle \,\cot \,} and csc {\displaystyle \,\csc \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which sin ⁡ θ ≠ 0 , {\displaystyle \sin \theta \neq 0,} i.e. 
all real numbers that are not of the form π n {\displaystyle \pi n} for some integer n , {\displaystyle n,} π Z + ( 0 , π ) = ⋯ ∪ ( − 2 π , − π ) ∪ ( − π , 0 ) ∪ ( 0 , π ) ∪ ( π , 2 π ) ∪ ⋯ = R ∖ π Z {\displaystyle {\begin{aligned}\pi \mathbb {Z} +(0,\pi )&=\cdots \cup (-2\pi ,-\pi )\cup (-\pi ,0)\cup (0,\pi )\cup (\pi ,2\pi )\cup \cdots \\&=\mathbb {R} \setminus \pi \mathbb {Z} \end{aligned}}} Domain of tangent tan {\displaystyle \tan } and secant sec {\displaystyle \sec } : The domains of tan {\displaystyle \,\tan \,} and sec {\displaystyle \,\sec \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which cos ⁡ θ ≠ 0 , {\displaystyle \cos \theta \neq 0,} π Z + ( − π 2 , π 2 ) = ⋯ ∪ ( − 3 π 2 , − π 2 ) ∪ ( − π 2 , π 2 ) ∪ ( π 2 , 3 π 2 ) ∪ ⋯ = R ∖ ( π 2 + π Z ) {\displaystyle {\begin{aligned}\pi \mathbb {Z} +\left(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right)&=\cdots \cup {\bigl (}{-{\tfrac {3\pi }{2}}},{-{\tfrac {\pi }{2}}}{\bigr )}\cup {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}\cup {\bigl (}{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}}{\bigr )}\cup \cdots \\&=\mathbb {R} \setminus \left({\tfrac {\pi }{2}}+\pi \mathbb {Z} \right)\\\end{aligned}}} === Solutions to elementary trigonometric equations === Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2 π : {\displaystyle 2\pi :} Sine and cosecant begin their period at 2 π k − π 2 {\textstyle 2\pi k-{\frac {\pi }{2}}} (where k {\displaystyle k} is an integer), finish it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then reverse themselves over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cosine and secant begin their period at 2 π k , {\displaystyle 2\pi k,} finish it at 2 π k + π . {\displaystyle 2\pi k+\pi .} and then reverse themselves over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . 
{\displaystyle 2\pi k+2\pi .} Tangent begins its period at 2 π k − π 2 , {\textstyle 2\pi k-{\frac {\pi }{2}},} finishes it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then repeats it (forward) over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cotangent begins its period at 2 π k , {\displaystyle 2\pi k,} finishes it at 2 π k + π , {\displaystyle 2\pi k+\pi ,} and then repeats it (forward) over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . {\displaystyle 2\pi k+2\pi .} This periodicity is reflected in the general inverses, where k {\displaystyle k} is some integer. The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions. It is assumed that the given values θ , {\displaystyle \theta ,} r , {\displaystyle r,} s , {\displaystyle s,} x , {\displaystyle x,} and y {\displaystyle y} all lie within appropriate ranges so that the relevant expressions below are well-defined. Note that "for some k ∈ Z {\displaystyle k\in \mathbb {Z} } " is just another way of saying "for some integer k . {\displaystyle k.} " The symbol ⟺ {\displaystyle \iff } is logical equality and indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side (see this footnote for more details and an example illustrating this concept). where the first four solutions can be written in expanded form as: For example, if cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} then θ = π + 2 π k = − π + 2 π ( 1 + k ) {\displaystyle \theta =\pi +2\pi k=-\pi +2\pi (1+k)} for some k ∈ Z .
{\displaystyle k\in \mathbb {Z} .} While if sin ⁡ θ = ± 1 {\displaystyle \sin \theta =\pm 1} then θ = π 2 + π k = − π 2 + π ( k + 1 ) {\textstyle \theta ={\frac {\pi }{2}}+\pi k=-{\frac {\pi }{2}}+\pi (k+1)} for some k ∈ Z , {\displaystyle k\in \mathbb {Z} ,} where k {\displaystyle k} will be even if sin ⁡ θ = 1 {\displaystyle \sin \theta =1} and it will be odd if sin ⁡ θ = − 1. {\displaystyle \sin \theta =-1.} The equations sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} and csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} have the same solutions as cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} and sin ⁡ θ = ± 1 , {\displaystyle \sin \theta =\pm 1,} respectively. In all equations above except for those just solved (i.e. except for sin {\displaystyle \sin } / csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} and cos {\displaystyle \cos } / sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} ), the integer k {\displaystyle k} in the solution's formula is uniquely determined by θ {\displaystyle \theta } (for fixed r , s , x , {\displaystyle r,s,x,} and y {\displaystyle y} ). With the help of integer parity Parity ⁡ ( h ) = { 0 if h is even 1 if h is odd {\displaystyle \operatorname {Parity} (h)={\begin{cases}0&{\text{if }}h{\text{ is even }}\\1&{\text{if }}h{\text{ is odd }}\\\end{cases}}} it is possible to write a solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} that doesn't involve the "plus or minus" ± {\displaystyle \,\pm \,} symbol: cos ⁡ θ = x {\displaystyle \cos \theta =x\quad } if and only if θ = ( − 1 ) h arccos ⁡ ( x ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\arccos(x)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z .
{\displaystyle h\in \mathbb {Z} .} And similarly for the secant function, sec ⁡ θ = r {\displaystyle \sec \theta =r\quad } if and only if θ = ( − 1 ) h arcsec ⁡ ( r ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\operatorname {arcsec}(r)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z , {\displaystyle h\in \mathbb {Z} ,} where π h + π Parity ⁡ ( h ) {\displaystyle \pi h+\pi \operatorname {Parity} (h)} equals π h {\displaystyle \pi h} when the integer h {\displaystyle h} is even, and equals π h + π {\displaystyle \pi h+\pi } when it's odd. ==== Detailed example and explanation of the "plus or minus" symbol ± ==== The solutions to cos ⁡ θ = x {\displaystyle \cos \theta =x} and sec ⁡ θ = x {\displaystyle \sec \theta =x} involve the "plus or minus" symbol ± , {\displaystyle \,\pm ,\,} whose meaning is now clarified. Only the solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} will be discussed since the discussion for sec ⁡ θ = x {\displaystyle \sec \theta =x} is the same. We are given x {\displaystyle x} with − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} and we know that there is an angle θ {\displaystyle \theta } in some interval that satisfies cos ⁡ θ = x . {\displaystyle \cos \theta =x.} We want to find this θ . {\displaystyle \theta .} The table above indicates that the solution is θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which is a shorthand way of saying that (at least) one of the following statements is true: θ = arccos ⁡ x + 2 π k {\displaystyle \,\theta =\arccos x+2\pi k\,} for some integer k , {\displaystyle k,} or θ = − arccos ⁡ x + 2 π k {\displaystyle \,\theta =-\arccos x+2\pi k\,} for some integer k .
{\displaystyle k.} As mentioned above, if arccos ⁡ x = π {\displaystyle \,\arccos x=\pi \,} (which by definition only happens when x = cos ⁡ π = − 1 {\displaystyle x=\cos \pi =-1} ) then both statements (1) and (2) hold, although with different values for the integer k {\displaystyle k} : if K {\displaystyle K} is the integer from statement (1), meaning that θ = π + 2 π K {\displaystyle \theta =\pi +2\pi K} holds, then the integer k {\displaystyle k} for statement (2) is K + 1 {\displaystyle K+1} (because θ = − π + 2 π ( 1 + K ) {\displaystyle \theta =-\pi +2\pi (1+K)} ). However, if x ≠ − 1 {\displaystyle x\neq -1} then the integer k {\displaystyle k} is unique and completely determined by θ . {\displaystyle \theta .} If arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} (which by definition only happens when x = cos ⁡ 0 = 1 {\displaystyle x=\cos 0=1} ) then ± arccos ⁡ x = 0 {\displaystyle \,\pm \arccos x=0\,} (because + arccos ⁡ x = + 0 = 0 {\displaystyle \,+\arccos x=+0=0\,} and − arccos ⁡ x = − 0 = 0 {\displaystyle \,-\arccos x=-0=0\,} so in both cases ± arccos ⁡ x {\displaystyle \,\pm \arccos x\,} is equal to 0 {\displaystyle 0} ) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold). Having considered the cases arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} and arccos ⁡ x = π , {\displaystyle \,\arccos x=\pi ,\,} we now focus on the case where arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and arccos ⁡ x ≠ π . {\displaystyle \,\arccos x\neq \pi .\,} Assume this from now on. The solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} is still θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which as before is shorthand for saying that one of statements (1) and (2) is true.
However this time, because arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and 0 < arccos ⁡ x < π , {\displaystyle \,0<\arccos x<\pi ,\,} statements (1) and (2) are different and furthermore, exactly one of the two equalities holds (not both). Additional information about θ {\displaystyle \theta } is needed to determine which one holds. For example, suppose that x = 0 {\displaystyle x=0} and that all that is known about θ {\displaystyle \theta } is that − π ≤ θ ≤ π {\displaystyle \,-\pi \leq \theta \leq \pi \,} (and nothing more is known). Then arccos ⁡ x = arccos ⁡ 0 = π 2 {\displaystyle \arccos x=\arccos 0={\frac {\pi }{2}}} and moreover, in this particular case k = 0 {\displaystyle k=0} (for both the + {\displaystyle \,+\,} case and the − {\displaystyle \,-\,} case) and so consequently, θ = ± arccos ⁡ x + 2 π k = ± ( π 2 ) + 2 π ( 0 ) = ± π 2 . {\displaystyle \theta ~=~\pm \arccos x+2\pi k~=~\pm \left({\frac {\pi }{2}}\right)+2\pi (0)~=~\pm {\frac {\pi }{2}}.} This means that θ {\displaystyle \theta } could be either π / 2 {\displaystyle \,\pi /2\,} or − π / 2. {\displaystyle \,-\pi /2.} Without additional information it is not possible to determine which of these values θ {\displaystyle \theta } has. An example of some additional information that could determine the value of θ {\displaystyle \theta } would be knowing that the angle is above the x {\displaystyle x} -axis (in which case θ = π / 2 {\displaystyle \theta =\pi /2} ) or alternatively, knowing that it is below the x {\displaystyle x} -axis (in which case θ = − π / 2 {\displaystyle \theta =-\pi /2} ). ==== Equal identical trigonometric functions ==== The table below shows how two angles θ {\displaystyle \theta } and φ {\displaystyle \varphi } must be related if their values under a given trigonometric function are equal or negatives of each other. 
The vertical double arrow ⇕ {\displaystyle \Updownarrow } in the last row indicates that θ {\displaystyle \theta } and φ {\displaystyle \varphi } satisfy | sin ⁡ θ | = | sin ⁡ φ | {\displaystyle \left|\sin \theta \right|=\left|\sin \varphi \right|} if and only if they satisfy | cos ⁡ θ | = | cos ⁡ φ | . {\displaystyle \left|\cos \theta \right|=\left|\cos \varphi \right|.} Set of all solutions to elementary trigonometric equations Thus given a single solution θ {\displaystyle \theta } to an elementary trigonometric equation ( sin ⁡ θ = y {\displaystyle \sin \theta =y} is such an equation, for instance, and because sin ⁡ ( arcsin ⁡ y ) = y {\displaystyle \sin(\arcsin y)=y} always holds, θ := arcsin ⁡ y {\displaystyle \theta :=\arcsin y} is always a solution), the set of all solutions to it are: === Transforming equations === The equations above can be transformed by using the reflection and shift identities: These formulas imply, in particular, that the following hold: sin ⁡ θ = − sin ⁡ ( − θ ) = − sin ⁡ ( π + θ ) = − sin ⁡ ( π − θ ) = − cos ⁡ ( π 2 + θ ) = − cos ⁡ ( π 2 − θ ) = − cos ⁡ ( − π 2 − θ ) = − cos ⁡ ( − π 2 + θ ) = − cos ⁡ ( 3 π 2 − θ ) = − cos ⁡ ( − 3 π 2 + θ ) cos ⁡ θ = − cos ⁡ ( − θ ) = − cos ⁡ ( π + θ ) = − cos ⁡ ( π − θ ) = − sin ⁡ ( π 2 + θ ) = − sin ⁡ ( π 2 − θ ) = − sin ⁡ ( − π 2 − θ ) = − sin ⁡ ( − π 2 + θ ) = − sin ⁡ ( 3 π 2 − θ ) = − sin ⁡ ( − 3 π 2 + θ ) tan ⁡ θ = − tan ⁡ ( − θ ) = − tan ⁡ ( π + θ ) = − tan ⁡ ( π − θ ) = − cot ⁡ ( π 2 + θ ) = − cot ⁡ ( π 2 − θ ) = − cot ⁡ ( − π 2 − θ ) = − cot ⁡ ( − π 2 + θ ) = − cot ⁡ ( 3 π 2 − θ ) = − cot ⁡ ( − 3 π 2 + θ ) {\displaystyle {\begin{aligned}\sin \theta &=-\sin(-\theta )&&=-\sin(\pi +\theta )&&={\phantom {-}}\sin(\pi -\theta )\\&=-\cos \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cos \left({\frac {\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {\pi }{2}}-\theta \right)\\&={\phantom {-}}\cos \left(-{\frac {\pi }{2}}+\theta \right)&&=-\cos \left({\frac {3\pi }{2}}-\theta \right)&&=-\cos 
\left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\cos \theta &={\phantom {-}}\cos(-\theta )&&=-\cos(\pi +\theta )&&=-\cos(\pi -\theta )\\&={\phantom {-}}\sin \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\sin \left({\frac {\pi }{2}}-\theta \right)&&=-\sin \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\sin \left(-{\frac {\pi }{2}}+\theta \right)&&=-\sin \left({\frac {3\pi }{2}}-\theta \right)&&={\phantom {-}}\sin \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\tan \theta &=-\tan(-\theta )&&={\phantom {-}}\tan(\pi +\theta )&&=-\tan(\pi -\theta )\\&=-\cot \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {\pi }{2}}-\theta \right)&&={\phantom {-}}\cot \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\cot \left(-{\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {3\pi }{2}}-\theta \right)&&=-\cot \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\end{aligned}}} where swapping sin ↔ csc , {\displaystyle \sin \leftrightarrow \csc ,} swapping cos ↔ sec , {\displaystyle \cos \leftrightarrow \sec ,} and swapping tan ↔ cot {\displaystyle \tan \leftrightarrow \cot } gives the analogous equations for csc , sec , and cot , {\displaystyle \csc ,\sec ,{\text{ and }}\cot ,} respectively. 
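These reflection and shift identities are easy to spot-check numerically; a minimal Python sketch (the test angle 0.7 is an arbitrary choice for illustration):

```python
import math

# Spot-check several of the reflection/shift identities at one angle.
theta = 0.7  # arbitrary test angle

checks = [
    (math.sin(theta), -math.sin(-theta)),               # sin t = -sin(-t)
    (math.sin(theta), math.sin(math.pi - theta)),       # sin t =  sin(pi - t)
    (math.sin(theta), math.cos(math.pi / 2 - theta)),   # sin t =  cos(pi/2 - t)
    (math.cos(theta), -math.cos(math.pi - theta)),      # cos t = -cos(pi - t)
    (math.cos(theta), math.sin(math.pi / 2 + theta)),   # cos t =  sin(pi/2 + t)
    (math.tan(theta), -math.tan(math.pi - theta)),      # tan t = -tan(pi - t)
    (math.tan(theta), 1 / math.tan(math.pi / 2 - theta)),  # tan t = cot(pi/2 - t)
]
assert all(abs(a - b) < 1e-12 for a, b in checks)
```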
So for example, by using the equality sin ⁡ ( π 2 − θ ) = cos ⁡ θ , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=\cos \theta ,} the equation cos ⁡ θ = x {\displaystyle \cos \theta =x} can be transformed into sin ⁡ ( π 2 − θ ) = x , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=x,} which allows for the solution to the equation sin ⁡ φ = x {\displaystyle \;\sin \varphi =x\;} (where φ := π 2 − θ {\textstyle \varphi :={\frac {\pi }{2}}-\theta } ) to be used; that solution being: φ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z , {\displaystyle \varphi =(-1)^{k}\arcsin(x)+\pi k\;{\text{ for some }}k\in \mathbb {Z} ,} which becomes: π 2 − θ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z {\displaystyle {\frac {\pi }{2}}-\theta ~=~(-1)^{k}\arcsin(x)+\pi k\quad {\text{ for some }}k\in \mathbb {Z} } where using the fact that ( − 1 ) k = ( − 1 ) − k {\displaystyle (-1)^{k}=(-1)^{-k}} and substituting h := − k {\displaystyle h:=-k} proves that another solution to cos ⁡ θ = x {\displaystyle \;\cos \theta =x\;} is: θ = ( − 1 ) h + 1 arcsin ⁡ ( x ) + π h + π 2 for some h ∈ Z . {\displaystyle \theta ~=~(-1)^{h+1}\arcsin(x)+\pi h+{\frac {\pi }{2}}\quad {\text{ for some }}h\in \mathbb {Z} .} The substitution arcsin ⁡ x = π 2 − arccos ⁡ x {\displaystyle \;\arcsin x={\frac {\pi }{2}}-\arccos x\;} may be used to express the right hand side of the above formula in terms of arccos ⁡ x {\displaystyle \;\arccos x\;} instead of arcsin ⁡ x . {\displaystyle \;\arcsin x.\;} === Relationships between trigonometric functions and inverse trigonometric functions === Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of length x , {\displaystyle x,} then applying the Pythagorean theorem and definitions of the trigonometric ratios.
It is worth noting that for arcsecant and arccosecant, the diagram assumes that x {\displaystyle x} is positive, and thus the result has to be corrected through the use of absolute values and the signum (sgn) operation. === Relationships among the inverse trigonometric functions === Complementary angles: arccos ⁡ ( x ) = π 2 − arcsin ⁡ ( x ) arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) arccsc ⁡ ( x ) = π 2 − arcsec ⁡ ( x ) {\displaystyle {\begin{aligned}\arccos(x)&={\frac {\pi }{2}}-\arcsin(x)\\[0.5em]\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\\[0.5em]\operatorname {arccsc}(x)&={\frac {\pi }{2}}-\operatorname {arcsec}(x)\end{aligned}}} Negative arguments: arcsin ⁡ ( − x ) = − arcsin ⁡ ( x ) arccsc ⁡ ( − x ) = − arccsc ⁡ ( x ) arccos ⁡ ( − x ) = π − arccos ⁡ ( x ) arcsec ⁡ ( − x ) = π − arcsec ⁡ ( x ) arctan ⁡ ( − x ) = − arctan ⁡ ( x ) arccot ⁡ ( − x ) = π − arccot ⁡ ( x ) {\displaystyle {\begin{aligned}\arcsin(-x)&=-\arcsin(x)\\\operatorname {arccsc}(-x)&=-\operatorname {arccsc}(x)\\\arccos(-x)&=\pi -\arccos(x)\\\operatorname {arcsec}(-x)&=\pi -\operatorname {arcsec}(x)\\\arctan(-x)&=-\arctan(x)\\\operatorname {arccot}(-x)&=\pi -\operatorname {arccot}(x)\end{aligned}}} Reciprocal arguments: arcsin ⁡ ( 1 x ) = arccsc ⁡ ( x ) arccsc ⁡ ( 1 x ) = arcsin ⁡ ( x ) arccos ⁡ ( 1 x ) = arcsec ⁡ ( x ) arcsec ⁡ ( 1 x ) = arccos ⁡ ( x ) arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) , if x > 0 arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) − π = − π 2 − arctan ⁡ ( x ) , if x < 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) = π 2 − arccot ⁡ ( x ) , if x > 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) + π = 3 π 2 − arccot ⁡ ( x ) , if x < 0 {\displaystyle {\begin{aligned}\arcsin \left({\frac {1}{x}}\right)&=\operatorname {arccsc}(x)&\\[0.3em]\operatorname {arccsc} \left({\frac {1}{x}}\right)&=\arcsin(x)&\\[0.3em]\arccos \left({\frac {1}{x}}\right)&=\operatorname {arcsec}(x)&\\[0.3em]\operatorname {arcsec} \left({\frac {1}{x}}\right)&=\arccos(x)&\\[0.3em]\arctan \left({\frac 
{1}{x}}\right)&=\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x>0\\[0.3em]\arctan \left({\frac {1}{x}}\right)&=\operatorname {arccot}(x)-\pi &=-{\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x<0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)&={\frac {\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x>0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)+\pi &={\frac {3\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x<0\end{aligned}}} The identities above can be used with (and derived from) the fact that sin {\displaystyle \sin } and csc {\displaystyle \csc } are reciprocals (i.e. csc = 1 sin {\displaystyle \csc ={\tfrac {1}{\sin }}} ), as are cos {\displaystyle \cos } and sec , {\displaystyle \sec ,} and tan {\displaystyle \tan } and cot . {\displaystyle \cot .} Useful identities if one only has a fragment of a sine table: arcsin ⁡ ( x ) = 1 2 arccos ⁡ ( 1 − 2 x 2 ) , if 0 ≤ x ≤ 1 arcsin ⁡ ( x ) = arctan ⁡ ( x 1 − x 2 ) arccos ⁡ ( x ) = 1 2 arccos ⁡ ( 2 x 2 − 1 ) , if 0 ≤ x ≤ 1 arccos ⁡ ( x ) = arctan ⁡ ( 1 − x 2 x ) arccos ⁡ ( x ) = arcsin ⁡ ( 1 − x 2 ) , if 0 ≤ x ≤ 1 , from which you get arccos ( 1 − x 2 1 + x 2 ) = arcsin ⁡ ( 2 x 1 + x 2 ) , if 0 ≤ x ≤ 1 arcsin ( 1 − x 2 ) = π 2 − sgn ⁡ ( x ) arcsin ⁡ ( x ) arctan ⁡ ( x ) = arcsin ⁡ ( x 1 + x 2 ) arccot ⁡ ( x ) = arccos ⁡ ( x 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&={\frac {1}{2}}\arccos \left(1-2x^{2}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin(x)&=\arctan \left({\frac {x}{\sqrt {1-x^{2}}}}\right)\\\arccos(x)&={\frac {1}{2}}\arccos \left(2x^{2}-1\right)\,,{\text{ if }}0\leq x\leq 1\\\arccos(x)&=\arctan \left({\frac {\sqrt {1-x^{2}}}{x}}\right)\\\arccos(x)&=\arcsin \left({\sqrt {1-x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1{\text{ , from which you get }}\\\arccos &\left({\frac {1-x^{2}}{1+x^{2}}}\right)=\arcsin \left({\frac {2x}{1+x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin &\left({\sqrt {1-x^{2}}}\right)={\frac {\pi 
}{2}}-\operatorname {sgn}(x)\arcsin(x)\\\arctan(x)&=\arcsin \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\\\operatorname {arccot}(x)&=\arccos \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\end{aligned}}} Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real). A useful form that follows directly from the table above is arctan ⁡ ( x ) = arccos ⁡ ( 1 1 + x 2 ) , if x ≥ 0 {\displaystyle \arctan(x)=\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\,,{\text{ if }}x\geq 0} . It is obtained by recognizing that cos ⁡ ( arctan ⁡ ( x ) ) = 1 1 + x 2 = cos ⁡ ( arccos ⁡ ( 1 1 + x 2 ) ) {\displaystyle \cos \left(\arctan \left(x\right)\right)={\sqrt {\frac {1}{1+x^{2}}}}=\cos \left(\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\right)} . From the half-angle formula, tan ⁡ ( θ 2 ) = sin ⁡ ( θ ) 1 + cos ⁡ ( θ ) {\displaystyle \tan \left({\tfrac {\theta }{2}}\right)={\tfrac {\sin(\theta )}{1+\cos(\theta )}}} , we get: arcsin ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 − x 2 ) arccos ⁡ ( x ) = 2 arctan ⁡ ( 1 − x 2 1 + x ) , if − 1 < x ≤ 1 arctan ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1-x^{2}}}}}\right)\\[0.5em]\arccos(x)&=2\arctan \left({\frac {\sqrt {1-x^{2}}}{1+x}}\right)\,,{\text{ if }}-1<x\leq 1\\[0.5em]\arctan(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1+x^{2}}}}}\right)\end{aligned}}} === Arctangent addition formula === arctan ⁡ ( u ) ± arctan ⁡ ( v ) = arctan ⁡ ( u ± v 1 ∓ u v ) ( mod π ) , u v ≠ 1 . {\displaystyle \arctan(u)\pm \arctan(v)=\arctan \left({\frac {u\pm v}{1\mp uv}}\right){\pmod {\pi }}\,,\quad uv\neq 1\,.} This is derived from the tangent addition formula tan ⁡ ( α ± β ) = tan ⁡ ( α ) ± tan ⁡ ( β ) 1 ∓ tan ⁡ ( α ) tan ⁡ ( β ) , {\displaystyle \tan(\alpha \pm \beta )={\frac {\tan(\alpha )\pm \tan(\beta )}{1\mp \tan(\alpha )\tan(\beta )}}\,,} by letting α = arctan ⁡ ( u ) , β = arctan ⁡ ( v ) . 
{\displaystyle \alpha =\arctan(u)\,,\quad \beta =\arctan(v)\,.} == In calculus == === Derivatives of inverse trigonometric functions === The derivatives for complex values of z are as follows: d d z arcsin ⁡ ( z ) = 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arccos ⁡ ( z ) = − 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arctan ⁡ ( z ) = 1 1 + z 2 ; z ≠ − i , + i d d z arccot ⁡ ( z ) = − 1 1 + z 2 ; z ≠ − i , + i d d z arcsec ⁡ ( z ) = 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 d d z arccsc ⁡ ( z ) = − 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 {\displaystyle {\begin{aligned}{\frac {d}{dz}}\arcsin(z)&{}={\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arccos(z)&{}=-{\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arctan(z)&{}={\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arccot}(z)&{}=-{\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arcsec}(z)&{}={\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\\{\frac {d}{dz}}\operatorname {arccsc}(z)&{}=-{\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\end{aligned}}} Only for real values of x: d d x arcsec ⁡ ( x ) = 1 | x | x 2 − 1 ; | x | > 1 d d x arccsc ⁡ ( x ) = − 1 | x | x 2 − 1 ; | x | > 1 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arcsec}(x)&{}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\\{\frac {d}{dx}}\operatorname {arccsc}(x)&{}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\end{aligned}}} These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if x = sin ⁡ θ {\displaystyle x=\sin \theta } , then d x / d θ = cos ⁡ θ = 1 − x 2 , {\textstyle dx/d\theta =\cos \theta ={\sqrt {1-x^{2}}},} so d d x arcsin ⁡ ( x ) = d θ d x = 1 d x / d θ = 1 1 − x 2 . 
{\displaystyle {\frac {d}{dx}}\arcsin(x)={\frac {d\theta }{dx}}={\frac {1}{dx/d\theta }}={\frac {1}{\sqrt {1-x^{2}}}}.} === Expression as definite integrals === Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral: arcsin ⁡ ( x ) = ∫ 0 x 1 1 − z 2 d z , | x | ≤ 1 arccos ⁡ ( x ) = ∫ x 1 1 1 − z 2 d z , | x | ≤ 1 arctan ⁡ ( x ) = ∫ 0 x 1 z 2 + 1 d z , arccot ⁡ ( x ) = ∫ x ∞ 1 z 2 + 1 d z , arcsec ⁡ ( x ) = ∫ 1 x 1 z z 2 − 1 d z = π + ∫ − x − 1 1 z z 2 − 1 d z , x ≥ 1 arccsc ⁡ ( x ) = ∫ x ∞ 1 z z 2 − 1 d z = ∫ − ∞ − x 1 z z 2 − 1 d z , x ≥ 1 {\displaystyle {\begin{aligned}\arcsin(x)&{}=\int _{0}^{x}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arccos(x)&{}=\int _{x}^{1}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arctan(x)&{}=\int _{0}^{x}{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arccot}(x)&{}=\int _{x}^{\infty }{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arcsec}(x)&{}=\int _{1}^{x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\pi +\int _{-x}^{-1}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\operatorname {arccsc}(x)&{}=\int _{x}^{\infty }{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\int _{-\infty }^{-x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\end{aligned}}} When x equals 1, the integrals with limited domains are improper integrals, but still well-defined. === Infinite series === Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, 1 1 − z 2 {\textstyle {\tfrac {1}{\sqrt {1-z^{2}}}}} , as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative 1 1 + z 2 {\textstyle {\frac {1}{1+z^{2}}}} in a geometric series, and applying the integral definition above (see Leibniz series). 
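The term-by-term integration just described gives the alternating arctangent series z − z³/3 + z⁵/5 − ⋯, whose partial sums are easy to check numerically; a small Python sketch:

```python
import math

def arctan_partial(z, terms):
    """Partial sum of the series z - z^3/3 + z^5/5 - ... (valid for |z| <= 1)."""
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

# For |z| < 1 the error shrinks geometrically with the number of terms.
z = 0.5
assert abs(arctan_partial(z, 50) - math.atan(z)) < 1e-12

# At z = 1 the same series is the slowly converging Leibniz formula for pi/4.
assert abs(arctan_partial(1.0, 100_000) - math.pi / 4) < 1e-4
```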
arcsin ⁡ ( z ) = z + ( 1 2 ) z 3 3 + ( 1 ⋅ 3 2 ⋅ 4 ) z 5 5 + ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) z 7 7 + ⋯ = ∑ n = 0 ∞ ( 2 n − 1 ) ! ! ( 2 n ) ! ! z 2 n + 1 2 n + 1 = ∑ n = 0 ∞ ( 2 n ) ! ( 2 n n ! ) 2 z 2 n + 1 2 n + 1 ; | z | ≤ 1 {\displaystyle {\begin{aligned}\arcsin(z)&=z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {z^{7}}{7}}+\cdots \\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n-1)!!}{(2n)!!}}{\frac {z^{2n+1}}{2n+1}}\\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n)!}{(2^{n}n!)^{2}}}{\frac {z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\end{aligned}}} arctan ⁡ ( z ) = z − z 3 3 + z 5 5 − z 7 7 + ⋯ = ∑ n = 0 ∞ ( − 1 ) n z 2 n + 1 2 n + 1 ; | z | ≤ 1 z ≠ i , − i {\displaystyle \arctan(z)=z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq i,-i} Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example, arccos ⁡ ( x ) = π / 2 − arcsin ⁡ ( x ) {\displaystyle \arccos(x)=\pi /2-\arcsin(x)} , arccsc ⁡ ( x ) = arcsin ⁡ ( 1 / x ) {\displaystyle \operatorname {arccsc}(x)=\arcsin(1/x)} , and so on. Another series is given by: 2 ( arcsin ⁡ ( x 2 ) ) 2 = ∑ n = 1 ∞ x 2 n n 2 ( 2 n n ) . {\displaystyle 2\left(\arcsin \left({\frac {x}{2}}\right)\right)^{2}=\sum _{n=1}^{\infty }{\frac {x^{2n}}{n^{2}{\binom {2n}{n}}}}.} Leonhard Euler found a series for the arctangent that converges more quickly than its Taylor series: arctan ⁡ ( z ) = z 1 + z 2 ∑ n = 0 ∞ ∏ k = 1 n 2 k z 2 ( 2 k + 1 ) ( 1 + z 2 ) . {\displaystyle \arctan(z)={\frac {z}{1+z^{2}}}\sum _{n=0}^{\infty }\prod _{k=1}^{n}{\frac {2kz^{2}}{(2k+1)(1+z^{2})}}.} (The term in the sum for n = 0 is the empty product, so is 1.) Alternatively, this can be expressed as arctan ⁡ ( z ) = ∑ n = 0 ∞ 2 2 n ( n ! ) 2 ( 2 n + 1 ) ! z 2 n + 1 ( 1 + z 2 ) n + 1 . 
{\displaystyle \arctan(z)=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}{\frac {z^{2n+1}}{(1+z^{2})^{n+1}}}.} Another series for the arctangent function is given by arctan ⁡ ( z ) = i ∑ n = 1 ∞ 1 2 n − 1 ( 1 ( 1 + 2 i / z ) 2 n − 1 − 1 ( 1 − 2 i / z ) 2 n − 1 ) , {\displaystyle \arctan(z)=i\sum _{n=1}^{\infty }{\frac {1}{2n-1}}\left({\frac {1}{(1+2i/z)^{2n-1}}}-{\frac {1}{(1-2i/z)^{2n-1}}}\right),} where i = − 1 {\displaystyle i={\sqrt {-1}}} is the imaginary unit. ==== Continued fractions for arctangent ==== Two alternatives to the power series for arctangent are these generalized continued fractions: arctan ⁡ ( z ) = z 1 + ( 1 z ) 2 3 − 1 z 2 + ( 3 z ) 2 5 − 3 z 2 + ( 5 z ) 2 7 − 5 z 2 + ( 7 z ) 2 9 − 7 z 2 + ⋱ = z 1 + ( 1 z ) 2 3 + ( 2 z ) 2 5 + ( 3 z ) 2 7 + ( 4 z ) 2 9 + ⋱ {\displaystyle \arctan(z)={\frac {z}{1+{\cfrac {(1z)^{2}}{3-1z^{2}+{\cfrac {(3z)^{2}}{5-3z^{2}+{\cfrac {(5z)^{2}}{7-5z^{2}+{\cfrac {(7z)^{2}}{9-7z^{2}+\ddots }}}}}}}}}}={\frac {z}{1+{\cfrac {(1z)^{2}}{3+{\cfrac {(2z)^{2}}{5+{\cfrac {(3z)^{2}}{7+{\cfrac {(4z)^{2}}{9+\ddots }}}}}}}}}}} The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series. 
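Euler's accelerated series translates directly into code, since consecutive terms of the product differ only by the factor 2n z² / ((2n + 1)(1 + z²)); a sketch:

```python
import math

def arctan_euler(z, terms=60):
    """Euler's series: arctan(z) = z/(1+z^2) * sum over n >= 0 of
    prod_{k=1..n} 2k*z^2 / ((2k+1)*(1+z^2)); the n = 0 term is the empty product, 1."""
    w = z * z / (1 + z * z)
    total, term = 0.0, 1.0
    for n in range(1, terms + 1):
        total += term
        term *= 2 * n * w / (2 * n + 1)  # ratio between consecutive product terms
    return z / (1 + z * z) * total

# Converges quickly even at z = 1, where the Taylor series converges very slowly.
assert abs(arctan_euler(1.0) - math.pi / 4) < 1e-12
assert abs(arctan_euler(0.3) - math.atan(0.3)) < 1e-12
```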
=== Indefinite integrals of inverse trigonometric functions === For real and complex values of z: ∫ arcsin ⁡ ( z ) d z = z arcsin ⁡ ( z ) + 1 − z 2 + C ∫ arccos ⁡ ( z ) d z = z arccos ⁡ ( z ) − 1 − z 2 + C ∫ arctan ⁡ ( z ) d z = z arctan ⁡ ( z ) − 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arccot ⁡ ( z ) d z = z arccot ⁡ ( z ) + 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arcsec ⁡ ( z ) d z = z arcsec ⁡ ( z ) − ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C ∫ arccsc ⁡ ( z ) d z = z arccsc ⁡ ( z ) + ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C {\displaystyle {\begin{aligned}\int \arcsin(z)\,dz&{}=z\,\arcsin(z)+{\sqrt {1-z^{2}}}+C\\\int \arccos(z)\,dz&{}=z\,\arccos(z)-{\sqrt {1-z^{2}}}+C\\\int \arctan(z)\,dz&{}=z\,\arctan(z)-{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arccot}(z)\,dz&{}=z\,\operatorname {arccot}(z)+{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arcsec}(z)\,dz&{}=z\,\operatorname {arcsec}(z)-\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\\\int \operatorname {arccsc}(z)\,dz&{}=z\,\operatorname {arccsc}(z)+\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\end{aligned}}} For real x ≥ 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − ln ⁡ ( x + x 2 − 1 ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + ln ⁡ ( x + x 2 − 1 ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\end{aligned}}} For all real x not between -1 and 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {sgn}(x)\ln \left|x+{\sqrt {x^{2}-1}}\right|+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {sgn}(x)\ln \left|x+{\sqrt 
{x^{2}-1}}\right|+C\end{aligned}}} The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − arcosh ⁡ ( | x | ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + arcosh ⁡ ( | x | ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {arcosh} (|x|)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {arcosh} (|x|)+C\\\end{aligned}}} The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above. All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above. ==== Example ==== Using ∫ u d v = u v − ∫ v d u {\displaystyle \int u\,dv=uv-\int v\,du} (i.e. integration by parts), set u = arcsin ⁡ ( x ) d v = d x d u = d x 1 − x 2 v = x {\displaystyle {\begin{aligned}u&=\arcsin(x)&dv&=dx\\du&={\frac {dx}{\sqrt {1-x^{2}}}}&v&=x\end{aligned}}} Then ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) − ∫ x 1 − x 2 d x , {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)-\int {\frac {x}{\sqrt {1-x^{2}}}}\,dx,} which by the simple substitution w = 1 − x 2 , d w = − 2 x d x {\displaystyle w=1-x^{2},\ dw=-2x\,dx} yields the final result: ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) + 1 − x 2 + C {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)+{\sqrt {1-x^{2}}}+C} == Extension to the complex plane == Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. 
This results in functions with multiple sheets and branch points. One possible way of defining the extension is: arctan ⁡ ( z ) = ∫ 0 z d x 1 + x 2 z ≠ − i , + i {\displaystyle \arctan(z)=\int _{0}^{z}{\frac {dx}{1+x^{2}}}\quad z\neq -i,+i} where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is the branch cut between the principal sheet and other sheets. The path of the integral must not cross a branch cut. For z not on a branch cut, a straight line path from 0 to z is such a path. For z on a branch cut, the path must approach from Re[x] > 0 for the upper branch cut and from Re[x] < 0 for the lower branch cut. The arcsine function may then be defined as: arcsin ⁡ ( z ) = arctan ⁡ ( z 1 − z 2 ) z ≠ − 1 , + 1 {\displaystyle \arcsin(z)=\arctan \left({\frac {z}{\sqrt {1-z^{2}}}}\right)\quad z\neq -1,+1} where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets; arccos ⁡ ( z ) = π 2 − arcsin ⁡ ( z ) z ≠ − 1 , + 1 {\displaystyle \arccos(z)={\frac {\pi }{2}}-\arcsin(z)\quad z\neq -1,+1} which has the same cut as arcsin; arccot ⁡ ( z ) = π 2 − arctan ⁡ ( z ) z ≠ − i , i {\displaystyle \operatorname {arccot}(z)={\frac {\pi }{2}}-\arctan(z)\quad z\neq -i,i} which has the same cut as arctan; arcsec ⁡ ( z ) = arccos ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arcsec}(z)=\arccos \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets; arccsc ⁡ ( z ) = arcsin ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arccsc}(z)=\arcsin \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} which has the same cut as arcsec. === Logarithmic forms === These functions may also be expressed using complex logarithms. 
This extends their domains to the complex plane in a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts. arcsin ⁡ ( z ) = − i ln ⁡ ( 1 − z 2 + i z ) = i ln ⁡ ( 1 − z 2 − i z ) = arccsc ⁡ ( 1 z ) arccos ⁡ ( z ) = − i ln ⁡ ( i 1 − z 2 + z ) = π 2 − arcsin ⁡ ( z ) = arcsec ⁡ ( 1 z ) arctan ⁡ ( z ) = − i 2 ln ⁡ ( i − z i + z ) = − i 2 ln ⁡ ( 1 + i z 1 − i z ) = arccot ⁡ ( 1 z ) arccot ⁡ ( z ) = − i 2 ln ⁡ ( z + i z − i ) = − i 2 ln ⁡ ( i z − 1 i z + 1 ) = arctan ⁡ ( 1 z ) arcsec ⁡ ( z ) = − i ln ⁡ ( i 1 − 1 z 2 + 1 z ) = π 2 − arccsc ⁡ ( z ) = arccos ⁡ ( 1 z ) arccsc ⁡ ( z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) = i ln ⁡ ( 1 − 1 z 2 − i z ) = arcsin ⁡ ( 1 z ) {\displaystyle {\begin{aligned}\arcsin(z)&{}=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)=i\ln \left({\sqrt {1-z^{2}}}-iz\right)&{}=\operatorname {arccsc} \left({\frac {1}{z}}\right)\\[10pt]\arccos(z)&{}=-i\ln \left(i{\sqrt {1-z^{2}}}+z\right)={\frac {\pi }{2}}-\arcsin(z)&{}=\operatorname {arcsec} \left({\frac {1}{z}}\right)\\[10pt]\arctan(z)&{}=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)=-{\frac {i}{2}}\ln \left({\frac {1+iz}{1-iz}}\right)&{}=\operatorname {arccot} \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccot}(z)&{}=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)=-{\frac {i}{2}}\ln \left({\frac {iz-1}{iz+1}}\right)&{}=\arctan \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arcsec}(z)&{}=-i\ln \left(i{\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {1}{z}}\right)={\frac {\pi }{2}}-\operatorname {arccsc}(z)&{}=\arccos \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccsc}(z)&{}=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)=i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}-{\frac {i}{z}}\right)&{}=\arcsin \left({\frac {1}{z}}\right)\end{aligned}}} ==== Generalization ==== Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by using Euler's formula to 
form a right triangle in the complex plane. Algebraically, this gives us: c e i θ = c cos ⁡ ( θ ) + i c sin ⁡ ( θ ) {\displaystyle ce^{i\theta }=c\cos(\theta )+ic\sin(\theta )} or c e i θ = a + i b {\displaystyle ce^{i\theta }=a+ib} where a {\displaystyle a} is the adjacent side, b {\displaystyle b} is the opposite side, and c {\displaystyle c} is the hypotenuse. From here, we can solve for θ {\displaystyle \theta } . e ln ⁡ ( c ) + i θ = a + i b ln ⁡ c + i θ = ln ⁡ ( a + i b ) θ = Im ⁡ ( ln ⁡ ( a + i b ) ) {\displaystyle {\begin{aligned}e^{\ln(c)+i\theta }&=a+ib\\\ln c+i\theta &=\ln(a+ib)\\\theta &=\operatorname {Im} \left(\ln(a+ib)\right)\end{aligned}}} or θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\frac {a+ib}{c}}\right)} Simply taking the imaginary part works for any real-valued a {\displaystyle a} and b {\displaystyle b} , but if a {\displaystyle a} or b {\displaystyle b} is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part of ln ⁡ ( a + b i ) {\displaystyle \ln(a+bi)} also removes c {\displaystyle c} from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our input z {\displaystyle z} , we obtain a formula for one of the inverse trig functions, for a total of six equations. 
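For real inputs, the last equation reduces to taking the imaginary part of a logarithm, which can be checked against the standard arctangent. A minimal sketch (test values arbitrary):

```python
import cmath
import math

# With a = 1 and b = z, theta = Im(ln(1 + iz)) should equal arctan(z) for real z
for z in (-2.0, 0.3, 5.0):
    theta = cmath.log(1 + 1j * z).imag
    assert abs(theta - math.atan(z)) < 1e-12
```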
Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using the Pythagorean Theorem relation a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions for θ {\displaystyle \theta } that result from plugging the values into the equations θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\tfrac {a+ib}{c}}\right)} above and simplifying. a b c − i ln ⁡ ( a + i b c ) θ θ a , b ∈ R arcsin ⁡ ( z ) 1 − z 2 z 1 − i ln ⁡ ( 1 − z 2 + i z 1 ) = − i ln ⁡ ( 1 − z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − z 2 + i z ) ) arccos ⁡ ( z ) z 1 − z 2 1 − i ln ⁡ ( z + i 1 − z 2 1 ) = − i ln ⁡ ( z + z 2 − 1 ) Im ⁡ ( ln ⁡ ( z + z 2 − 1 ) ) arctan ⁡ ( z ) 1 z 1 + z 2 − i ln ⁡ ( 1 + i z 1 + z 2 ) = − i 2 ln ⁡ ( i − z i + z ) Im ⁡ ( ln ⁡ ( 1 + i z ) ) arccot ⁡ ( z ) z 1 z 2 + 1 − i ln ⁡ ( z + i z 2 + 1 ) = − i 2 ln ⁡ ( z + i z − i ) Im ⁡ ( ln ⁡ ( z + i ) ) arcsec ⁡ ( z ) 1 z 2 − 1 z − i ln ⁡ ( 1 + i z 2 − 1 z ) = − i ln ⁡ ( 1 z + 1 z 2 − 1 ) Im ⁡ ( ln ⁡ ( 1 z + 1 z 2 − 1 ) ) arccsc ⁡ ( z ) z 2 − 1 1 z − i ln ⁡ ( z 2 − 1 + i z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − 1 z 2 + i z ) ) {\displaystyle {\begin{aligned}&a&&b&&c&&-i\ln \left({\frac {a+ib}{c}}\right)&&\theta &&\theta _{a,b\in \mathbb {R} }\\\arcsin(z)\ \ &{\sqrt {1-z^{2}}}&&z&&1&&-i\ln \left({\frac {{\sqrt {1-z^{2}}}+iz}{1}}\right)&&=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-z^{2}}}+iz\right)\right)\\\arccos(z)\ \ &z&&{\sqrt {1-z^{2}}}&&1&&-i\ln \left({\frac {z+i{\sqrt {1-z^{2}}}}{1}}\right)&&=-i\ln \left(z+{\sqrt {z^{2}-1}}\right)&&\operatorname {Im} \left(\ln \left(z+{\sqrt {z^{2}-1}}\right)\right)\\\arctan(z)\ \ &1&&z&&{\sqrt {1+z^{2}}}&&-i\ln \left({\frac {1+iz}{\sqrt {1+z^{2}}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)&&\operatorname {Im} \left(\ln 
\left(1+iz\right)\right)\\\operatorname {arccot}(z)\ \ &z&&1&&{\sqrt {z^{2}+1}}&&-i\ln \left({\frac {z+i}{\sqrt {z^{2}+1}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)&&\operatorname {Im} \left(\ln \left(z+i\right)\right)\\\operatorname {arcsec}(z)\ \ &1&&{\sqrt {z^{2}-1}}&&z&&-i\ln \left({\frac {1+i{\sqrt {z^{2}-1}}}{z}}\right)&&=-i\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)&&\operatorname {Im} \left(\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)\right)\\\operatorname {arccsc}(z)\ \ &{\sqrt {z^{2}-1}}&&1&&z&&-i\ln \left({\frac {{\sqrt {z^{2}-1}}+i}{z}}\right)&&=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)\right)\\\end{aligned}}} The particular form of the simplified expression can cause the output to differ from the usual principal branch of each of the inverse trig functions. The formulations given will output the usual principal branch when using the Im ⁡ ( ln ⁡ z ) ∈ ( − π , π ] {\displaystyle \operatorname {Im} \left(\ln z\right)\in (-\pi ,\pi ]} and Re ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Re} \left({\sqrt {z}}\right)\geq 0} principal branch for every function except arccotangent in the θ {\displaystyle \theta } column. Arccotangent in the θ {\displaystyle \theta } column will output on its usual principal branch by using the Im ⁡ ( ln ⁡ z ) ∈ [ 0 , 2 π ) {\displaystyle \operatorname {Im} \left(\ln z\right)\in [0,2\pi )} and Im ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Im} \left({\sqrt {z}}\right)\geq 0} convention. In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definitions work for any complex-valued z {\displaystyle z} , the definitions allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions.
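Individual rows of the table can be spot-checked numerically. The sketch below assumes the principal branches used by Python's `cmath` (Im(log) in (−π, π], Re(sqrt) ≥ 0) and avoids points on the branch cuts:

```python
import cmath
import math

# arcsin row: -i ln(sqrt(1 - z^2) + iz) against the library arcsine
for z in (0.5, -0.3 + 0.7j, 1 + 1j):
    val = -1j * cmath.log(cmath.sqrt(1 - z * z) + 1j * z)
    assert abs(val - cmath.asin(z)) < 1e-12

# arccos row, real-argument column: Im(ln(z + sqrt(z^2 - 1))) for z in [-1, 1]
for z in (-0.9, 0.0, 0.5):
    theta = cmath.log(z + cmath.sqrt(z * z - 1)).imag
    assert abs(theta - math.acos(z)) < 1e-12
```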
It's possible to algebraically prove these relations by starting with the exponential forms of the trigonometric functions and solving for the inverse function. ==== Example proof ==== sin ⁡ ( ϕ ) = z ϕ = arcsin ⁡ ( z ) {\displaystyle {\begin{aligned}\sin(\phi )&=z\\\phi &=\arcsin(z)\end{aligned}}} Using the exponential definition of sine, and letting ξ = e i ϕ , {\displaystyle \xi =e^{i\phi },} z = e i ϕ − e − i ϕ 2 i 2 i z = ξ − 1 ξ 0 = ξ 2 − 2 i z ξ − 1 ξ = i z ± 1 − z 2 ϕ = − i ln ⁡ ( i z ± 1 − z 2 ) {\displaystyle {\begin{aligned}z&={\frac {e^{i\phi }-e^{-i\phi }}{2i}}\\[10mu]2iz&=\xi -{\frac {1}{\xi }}\\[5mu]0&=\xi ^{2}-2iz\xi -1\\[5mu]\xi &=iz\pm {\sqrt {1-z^{2}}}\\[5mu]\phi &=-i\ln \left(iz\pm {\sqrt {1-z^{2}}}\right)\end{aligned}}} (the positive branch is chosen) ϕ = arcsin ⁡ ( z ) = − i ln ⁡ ( i z + 1 − z 2 ) {\displaystyle \phi =\arcsin(z)=-i\ln \left(iz+{\sqrt {1-z^{2}}}\right)} == Applications == === Finding the angle of a right triangle === Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) . {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right).} Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the Pythagorean Theorem: a 2 + b 2 = h 2 {\displaystyle a^{2}+b^{2}=h^{2}} where h {\displaystyle h} is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed. θ = arctan ⁡ ( opposite adjacent ) . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)\,.} For example, suppose a roof drops 8 feet as it runs out 20 feet. 
The roof makes an angle θ with the horizontal, where θ may be computed as follows: θ = arctan ⁡ ( opposite adjacent ) = arctan ⁡ ( rise run ) = arctan ⁡ ( 8 20 ) ≈ 21.8 ∘ . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)=\arctan \left({\frac {\text{rise}}{\text{run}}}\right)=\arctan \left({\frac {8}{20}}\right)\approx 21.8^{\circ }\,.} === In computer science and engineering === ==== Two-argument variant of arctangent ==== The two-argument atan2 function computes the arctangent of y/x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering. In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows: atan2 ⁡ ( y , x ) = { arctan ⁡ ( y x ) x > 0 arctan ⁡ ( y x ) + π y ≥ 0 , x < 0 arctan ⁡ ( y x ) − π y < 0 , x < 0 π 2 y > 0 , x = 0 − π 2 y < 0 , x = 0 undefined y = 0 , x = 0 {\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan \left({\frac {y}{x}}\right)&\quad x>0\\\arctan \left({\frac {y}{x}}\right)+\pi &\quad y\geq 0,\;x<0\\\arctan \left({\frac {y}{x}}\right)-\pi &\quad y<0,\;x<0\\{\frac {\pi }{2}}&\quad y>0,\;x=0\\-{\frac {\pi }{2}}&\quad y<0,\;x=0\\{\text{undefined}}&\quad y=0,\;x=0\end{cases}}} It also equals the principal value of the argument of the complex number x + iy. This limited version of the function above may also be defined using the tangent half-angle formulae as follows: atan2 ⁡ ( y , x ) = 2 arctan ⁡ ( y x 2 + y 2 + x ) {\displaystyle \operatorname {atan2} (y,x)=2\arctan \left({\frac {y}{{\sqrt {x^{2}+y^{2}}}+x}}\right)} provided that either x > 0 or y ≠ 0. 
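The tangent half-angle form just given can be compared against a library `atan2` away from its failure case. A sketch (test points arbitrary):

```python
import math

def atan2_halfangle(y, x):
    # valid provided x > 0 or y != 0; fails when x <= 0 and y == 0
    return 2.0 * math.atan(y / (math.hypot(x, y) + x))

for y, x in ((1.0, 1.0), (1.0, -1.0), (-2.0, 0.5), (3.0, -0.1)):
    assert abs(atan2_halfangle(y, x) - math.atan2(y, x)) < 1e-12
```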
However, this fails when x ≤ 0 and y = 0, so the expression is unsuitable for computational use. The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y) so some caution is warranted. (See variations at atan2 § Realizations of the function in common computer languages.) ==== Arctangent function with location parameter ==== In many applications the solution y {\displaystyle y} of the equation x = tan ⁡ ( y ) {\displaystyle x=\tan(y)} should come as close as possible to a given value − ∞ < η < ∞ {\displaystyle -\infty <\eta <\infty } . The appropriate solution is produced by the parameter-modified arctangent function y = arctan η ⁡ ( x ) := arctan ⁡ ( x ) + π rni ⁡ ( η − arctan ⁡ ( x ) π ) . {\displaystyle y=\arctan _{\eta }(x):=\arctan(x)+\pi \,\operatorname {rni} \left({\frac {\eta -\arctan(x)}{\pi }}\right)\,.} The function rni {\displaystyle \operatorname {rni} } rounds to the nearest integer. ==== Numerical accuracy ==== For angles near 0 and π, arccosine is ill-conditioned, and similarly with arcsine for angles near −π/2 and π/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods. == See also == == Notes == == References == Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications. ISBN 978-0-486-61272-0. == External links == Weisstein, Eric W. "Inverse Tangent". MathWorld.
Wikipedia:Inversion transformation#0
In mathematical physics, inversion transformations are a natural extension of Poincaré transformations to include all conformal, one-to-one transformations on coordinate space-time. They are less studied in physics because, unlike the rotations and translations of Poincaré symmetry, an object cannot be physically transformed by the inversion symmetry. Some physical theories are invariant under this symmetry; in these cases it is what is known as a 'hidden symmetry'. Other hidden symmetries of physics include gauge symmetry and general covariance. == Early use == In 1831 the mathematician Ludwig Immanuel Magnus began to publish on transformations of the plane generated by inversion in a circle of radius R. His work initiated a large body of publications, now called inversive geometry. The mathematician most prominently associated with it became August Ferdinand Möbius once he reduced the planar transformations to complex number arithmetic. Among the physicists employing the inversion transformation early on was Lord Kelvin, and the association with him leads it to be called the Kelvin transform. == Transformation on coordinates == In the following we shall use imaginary time ( t ′ = i t {\displaystyle t'=it} ) so that space-time is Euclidean and the equations are simpler. The Poincaré transformations are given by the coordinate transformation on space-time parametrized by the 4-vectors V V μ ′ = O μ ν V ν + P μ {\displaystyle V_{\mu }^{\prime }=O_{\mu }^{\nu }V_{\nu }+P_{\mu }\,} where O {\displaystyle O} is an orthogonal matrix and P {\displaystyle P} is a 4-vector. Applying this transformation twice on a 4-vector gives a third transformation of the same form. The basic invariant under this transformation is the space-time length given by the distance between two space-time points given by 4-vectors x and y: r = | x − y | . {\displaystyle r=|x-y|.\,} These transformations are subgroups of general 1-1 conformal transformations on space-time.
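In this Euclidean setting, the invariance of the length |x − y| under a map V′ = OV + P is easy to illustrate in two dimensions. A sketch (rotation angle and translation are arbitrary example values):

```python
import math

# An example map V' = O V + P in 2D: a rotation plus a translation
theta = 0.7
O = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
P = [3.0, -1.0]

def transform(v):
    return [O[0][0] * v[0] + O[0][1] * v[1] + P[0],
            O[1][0] * v[0] + O[1][1] * v[1] + P[1]]

def dist(u, v):
    return math.hypot(u[0] - v[0], u[1] - v[1])

x, y = [1.0, 2.0], [-2.0, 0.5]
assert abs(dist(transform(x), transform(y)) - dist(x, y)) < 1e-12
```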
It is possible to extend these transformations to include all 1-1 conformal transformations on space-time V μ ′ = ( A τ ν V ν + B τ ) ( C τ μ ν V ν + D τ μ ) − 1 . {\displaystyle V_{\mu }^{\prime }=\left(A_{\tau }^{\nu }V_{\nu }+B_{\tau }\right)\left(C_{\tau \mu }^{\nu }V_{\nu }+D_{\tau \mu }\right)^{-1}.} We must also have an equivalent condition to the orthogonality condition of the Poincaré transformations: A A T + B C = D D T + C B {\displaystyle AA^{T}+BC=DD^{T}+CB\,} Because one can divide the top and bottom of the transformation by D , {\displaystyle D,} we lose no generality by setting D {\displaystyle D} to the unit matrix. We end up with V μ ′ = ( O μ ν V ν + P τ ) ( δ τ μ + Q τ μ ν V ν ) − 1 . {\displaystyle V_{\mu }^{\prime }=\left(O_{\mu }^{\nu }V_{\nu }+P_{\tau }\right)\left(\delta _{\tau \mu }+Q_{\tau \mu }^{\nu }V_{\nu }\right)^{-1}.\,} Applying this transformation twice on a 4-vector gives a transformation of the same form. The new symmetry of 'inversion' is given by the 3-tensor Q . {\displaystyle Q.} This symmetry becomes Poincaré symmetry if we set Q = 0. {\displaystyle Q=0.} When Q = 0 {\displaystyle Q=0} the second condition requires that O {\displaystyle O} is an orthogonal matrix. This transformation is 1-1, meaning that each point is mapped to a unique point, but only if we include the points at infinity. == Invariants == The invariants for this symmetry in 4 dimensions are unknown; however, it is known that any invariant requires a minimum of 4 space-time points. In one dimension, the invariant is the well-known cross-ratio from Möbius transformations: ( x − X ) ( y − Y ) ( x − Y ) ( y − X ) . {\displaystyle {\frac {(x-X)(y-Y)}{(x-Y)(y-X)}}.} Because the only invariants under this symmetry involve a minimum of 4 points, this symmetry cannot be a symmetry of point particle theory. Point particle theory relies on knowing the lengths of paths of particles through space-time (e.g., from x {\displaystyle x} to y {\displaystyle y} ).
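The one-dimensional cross-ratio invariant can be demonstrated numerically. This sketch applies an example Möbius map (coefficients arbitrary, with ad − bc ≠ 0) to four points and checks that the cross-ratio is unchanged:

```python
def cross_ratio(x, X, y, Y):
    # the invariant (x - X)(y - Y) / ((x - Y)(y - X))
    return (x - X) * (y - Y) / ((x - Y) * (y - X))

def mobius(t, a=2.0, b=1.0, c=1.0, d=3.0):
    # an example Mobius transformation t -> (at + b)/(ct + d)
    return (a * t + b) / (c * t + d)

pts = (0.5, 1.5, -2.0, 4.0)
assert abs(cross_ratio(*pts) - cross_ratio(*map(mobius, pts))) < 1e-12
```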
The symmetry can be a symmetry of a string theory in which the strings are uniquely determined by their endpoints. The propagator for this theory for a string starting at the endpoints ( x , X ) {\displaystyle (x,X)} and ending at the endpoints ( y , Y ) {\displaystyle (y,Y)} is a conformal function of the 4-dimensional invariant. A string field in endpoint-string theory is a function over the endpoints. ϕ ( x , X ) . {\displaystyle \phi (x,X).\,} == Physical evidence == Although it is natural to generalize the Poincaré transformations in order to find hidden symmetries in physics and thus narrow down the number of possible theories of high-energy physics, it is difficult to experimentally examine this symmetry as it is not possible to transform an object under this symmetry. The indirect evidence of this symmetry is given by how accurately fundamental theories of physics that are invariant under this symmetry make predictions. Other indirect evidence is whether theories that are invariant under this symmetry lead to contradictions such as giving probabilities greater than 1. So far there has been no direct evidence that the fundamental constituents of the Universe are strings. The symmetry could also be a broken symmetry meaning that although it is a symmetry of physics, the Universe has 'frozen out' in one particular direction so this symmetry is no longer evident. == See also == Rotation group SO(3) Coordinate rotations and reflections Spacetime symmetries CPT symmetry Field (physics) superstrings == References ==
Wikipedia:Investigations in Mathematics Learning#0
Investigations in Mathematics Learning is the official research journal of the Research Council for Mathematics Learning. RCML seeks to stimulate, generate, coordinate, and disseminate research efforts designed to understand and/or influence factors that affect mathematics learning. == References ==
Wikipedia:Involution (mathematics)#0
In mathematics, an involution, involutory function, or self-inverse function is a function f that is its own inverse, f(f(x)) = x for all x in the domain of f. Equivalently, applying f twice produces the original value. == General properties == Any involution is a bijection. The identity map is a trivial example of an involution. Examples of nontrivial involutions include negation (x ↦ −x), reciprocation (x ↦ 1/x), and complex conjugation (z ↦ z) in arithmetic; reflection, half-turn rotation, and circle inversion in geometry; complementation in set theory; and reciprocal ciphers such as the ROT13 transformation and the Beaufort polyalphabetic cipher. The composition g ∘ f of two involutions f and g is an involution if and only if they commute: g ∘ f = f ∘ g. == Involutions on finite sets == The number of involutions, including the identity involution, on a set with n = 0, 1, 2, ... elements is given by a recurrence relation found by Heinrich August Rothe in 1800: a 0 = a 1 = 1 {\displaystyle a_{0}=a_{1}=1} and a n = a n − 1 + ( n − 1 ) a n − 2 {\displaystyle a_{n}=a_{n-1}+(n-1)a_{n-2}} for n > 1. {\displaystyle n>1.} The first few terms of this sequence are 1, 1, 2, 4, 10, 26, 76, 232 (sequence A000085 in the OEIS); these numbers are called the telephone numbers, and they also count the number of Young tableaux with a given number of cells. The number an can also be expressed by non-recursive formulas, such as the sum a n = ∑ m = 0 ⌊ n 2 ⌋ n ! 2 m m ! ( n − 2 m ) ! . {\displaystyle a_{n}=\sum _{m=0}^{\lfloor {\frac {n}{2}}\rfloor }{\frac {n!}{2^{m}m!(n-2m)!}}.} The number of fixed points of an involution on a finite set and its number of elements have the same parity. Thus the number of fixed points of all the involutions on a given finite set have the same parity. In particular, every involution on an odd number of elements has at least one fixed point. This can be used to prove Fermat's two squares theorem. 
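Rothe's recurrence and the closed sum above can be cross-checked in a few lines; a sketch using exact integer arithmetic:

```python
from math import factorial

def telephone(n):
    # a_n = a_{n-1} + (n - 1) * a_{n-2}, with a_0 = a_1 = 1
    a, b = 1, 1
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b

def telephone_sum(n):
    # a_n = sum over m of n! / (2^m m! (n - 2m)!)
    return sum(factorial(n) // (2**m * factorial(m) * factorial(n - 2 * m))
               for m in range(n // 2 + 1))

assert [telephone(n) for n in range(8)] == [1, 1, 2, 4, 10, 26, 76, 232]
assert all(telephone(n) == telephone_sum(n) for n in range(15))
```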
== Involution throughout the fields of mathematics == === Real-valued functions === The graph of an involution (on the real numbers) is symmetric across the line y = x. This is because the inverse of any function is its reflection across the line y = x, which can be seen by "swapping" x with y. If, in particular, the function is an involution, then its graph is its own reflection. Some basic examples of involutions include the functions f ( x ) = a − x , f ( x ) = b x − a + a {\displaystyle {\begin{alignedat}{1}f(x)&=a-x\;,\\f(x)&={\frac {b}{x-a}}+a\end{alignedat}}} Moreover, we can construct an involution by wrapping an involution g in a bijection h and its inverse ( h − 1 ∘ g ∘ h {\displaystyle h^{-1}\circ g\circ h} ). For instance: f ( x ) = 1 − x 2 on [ 0 ; 1 ] ( g ( x ) = 1 − x and h ( x ) = x 2 ) , f ( x ) = ln ⁡ ( e x + 1 e x − 1 ) ( g ( x ) = x + 1 x − 1 = 2 x − 1 + 1 and h ( x ) = e x ) {\displaystyle {\begin{alignedat}{2}f(x)&={\sqrt {1-x^{2}}}\quad {\textrm {on}}\;[0;1]&{\bigl (}g(x)=1-x\quad {\textrm {and}}\quad h(x)=x^{2}{\bigr )},\\f(x)&=\ln \left({\frac {e^{x}+1}{e^{x}-1}}\right)&{\bigl (}g(x)={\frac {x+1}{x-1}}={\frac {2}{x-1}}+1\quad {\textrm {and}}\quad h(x)=e^{x}{\bigr )}\\\end{alignedat}}} === Euclidean geometry === A simple example of an involution of the three-dimensional Euclidean space is reflection through a plane. Performing a reflection twice brings a point back to its original coordinates. Another involution is reflection through the origin; this is not a reflection in the above sense, and so is a distinct example. These transformations are examples of affine involutions. === Projective geometry === An involution is a projectivity of period 2, that is, a projectivity that interchanges pairs of points.: 24 Any projectivity that interchanges two points is an involution. The three pairs of opposite sides of a complete quadrangle meet any line (not through a vertex) in three pairs of an involution.
This theorem has been called Desargues's Involution Theorem. Its origins can be seen in Lemma IV of the lemmas to the Porisms of Euclid in Volume VII of the Collection of Pappus of Alexandria. If an involution has one fixed point, it has another, and consists of the correspondence between harmonic conjugates with respect to these two points. In this instance the involution is termed "hyperbolic", while if there are no fixed points it is "elliptic". In the context of projectivities, fixed points are called double points.: 53 Another type of involution occurring in projective geometry is a polarity that is a correlation of period 2. === Linear algebra === In linear algebra, an involution is a linear operator T on a vector space, such that T2 = I. Except for in characteristic 2, such operators are diagonalizable for a given basis with just 1s and −1s on the diagonal of the corresponding matrix. If the operator is orthogonal (an orthogonal involution), it is orthonormally diagonalizable. For example, suppose that a basis for a vector space V is chosen, and that e1 and e2 are basis elements. There exists a linear transformation f that sends e1 to e2, and sends e2 to e1, and that is the identity on all other basis vectors. It can be checked that f(f(x)) = x for all x in V. That is, f is an involution of V. For a specific basis, any linear operator can be represented by a matrix T. Every matrix has a transpose, obtained by swapping rows for columns. This transposition is an involution on the set of matrices. Since elementwise complex conjugation is an independent involution, the conjugate transpose or Hermitian adjoint is also an involution. The definition of involution extends readily to modules. Given a module M over a ring R, an R endomorphism f of M is called an involution if f2 is the identity homomorphism on M. Involutions are related to idempotents; if 2 is invertible then they correspond in a one-to-one manner. 
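The basis-swap operator described above (e1 ↔ e2, all other basis vectors fixed) can be written out concretely. A minimal sketch with plain nested lists, no external libraries:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# swap e1 and e2, fix e3: an involution, so T composed with itself is the identity
T = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(T, T) == I3
```

The matrix T is symmetric with eigenvalues ±1, illustrating the diagonalizability claim.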
In functional analysis, Banach *-algebras and C*-algebras are special types of Banach algebras with involutions. === Quaternion algebra, groups, semigroups === In a quaternion algebra, an (anti-)involution is defined by the following axioms: if we consider a transformation x ↦ f ( x ) {\displaystyle x\mapsto f(x)} then it is an involution if f ( f ( x ) ) = x {\displaystyle f(f(x))=x} (it is its own inverse) f ( x 1 + x 2 ) = f ( x 1 ) + f ( x 2 ) {\displaystyle f(x_{1}+x_{2})=f(x_{1})+f(x_{2})} and f ( λ x ) = λ f ( x ) {\displaystyle f(\lambda x)=\lambda f(x)} (it is linear) f ( x 1 x 2 ) = f ( x 1 ) f ( x 2 ) {\displaystyle f(x_{1}x_{2})=f(x_{1})f(x_{2})} An anti-involution does not obey the last axiom but instead f ( x 1 x 2 ) = f ( x 2 ) f ( x 1 ) {\displaystyle f(x_{1}x_{2})=f(x_{2})f(x_{1})} This law is sometimes called antidistributive. It also appears in groups as (xy)−1 = (y)−1(x)−1. Taken as an axiom, it leads to the notion of semigroup with involution, of which there are natural examples that are not groups, for example square matrix multiplication (i.e. the full linear monoid) with transpose as the involution. === Ring theory === In ring theory, the word involution is customarily taken to mean an antihomomorphism that is its own inverse function. Examples of involutions in common rings: complex conjugation on the complex plane, and its equivalent in the split-complex numbers; taking the transpose in a matrix ring. === Group theory === In group theory, an element of a group is an involution if it has order 2; that is, an involution is an element a such that a ≠ e and a2 = e, where e is the identity element. Originally, this definition agreed with the first definition above, since members of groups were always bijections from a set into itself; that is, group was taken to mean permutation group. By the end of the 19th century, group was defined more broadly, and accordingly so was involution.
A permutation is an involution if and only if it can be written as a finite product of disjoint transpositions. The involutions of a group have a large impact on the group's structure. The study of involutions was instrumental in the classification of finite simple groups. An element x of a group G is called strongly real if there is an involution t with xt = x−1 (where xt := t−1 ⋅ x ⋅ t). Coxeter groups are groups generated by a set S of involutions subject only to relations involving powers of pairs of elements of S. Coxeter groups can be used, among other things, to describe the possible regular polyhedra and their generalizations to higher dimensions. === Mathematical logic === The operation of complement in Boolean algebras is an involution. Accordingly, negation in classical logic satisfies the law of double negation: ¬¬A is equivalent to A. Generally in non-classical logics, negation that satisfies the law of double negation is called involutive. In algebraic semantics, such a negation is realized as an involution on the algebra of truth values. Examples of logics that have involutive negation are Kleene and Bochvar three-valued logics, Łukasiewicz many-valued logic, the fuzzy logic 'involutive monoidal t-norm logic' (IMTL), etc. Involutive negation is sometimes added as an additional connective to logics with non-involutive negation; this is usual, for example, in t-norm fuzzy logics. The involutiveness of negation is an important characterization property for logics and the corresponding varieties of algebras. For instance, involutive negation characterizes Boolean algebras among Heyting algebras. Correspondingly, classical Boolean logic arises by adding the law of double negation to intuitionistic logic. The same relationship holds also between MV-algebras and BL-algebras (and so correspondingly between Łukasiewicz logic and fuzzy logic BL), IMTL and MTL, and other pairs of important varieties of algebras (respectively, corresponding logics).
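The earlier characterization of involutive permutations as products of disjoint transpositions can be illustrated directly; the permutations below are arbitrary examples, represented as dictionaries:

```python
# (0 1)(2 3) as a product of disjoint transpositions, with 4 a fixed point
p = {0: 1, 1: 0, 2: 3, 3: 2, 4: 4}
assert all(p[p[x]] == x for x in p)

# by contrast, a 3-cycle is not an involution
q = {0: 1, 1: 2, 2: 0}
assert not all(q[q[x]] == x for x in q)
```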
In the study of binary relations, every relation has a converse relation. Since the converse of the converse is the original relation, the conversion operation is an involution on the category of relations. Binary relations are ordered through inclusion. While this ordering is reversed with the complementation involution, it is preserved under conversion. === Computer science === The XOR bitwise operation with a given value for one parameter is an involution on the other parameter. XOR masks in some instances were used to draw graphics on images in such a way that drawing them twice on the background reverts the background to its original state. Two special cases of this, which are also involutions, are the bitwise NOT operation, which is XOR with an all-ones value, and stream cipher encryption, which is an XOR with a secret keystream. This predates binary computers; practically all mechanical cipher machines implement a reciprocal cipher, an involution on each typed-in letter. Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way. Another involution used in computers is an order-2 bitwise permutation. For example, a color value stored as three integers in the form (R, G, B) could have its R and B exchanged, resulting in the form (B, G, R): f(f(RGB)) = RGB, f(f(BGR)) = BGR. === Physics === The Legendre transformation, which converts between the Lagrangian and the Hamiltonian, is an involutive operation. Integrability, a central notion of physics and in particular of the subfield of integrable systems, is closely related to involution, for example in the context of Kramers–Wannier duality. == See also == Atbash Automorphism Idempotence ROT13 == References == == Further reading == Ell, Todd A.; Sangwine, Stephen J. (2007). "Quaternion involutions and anti-involutions". Computers & Mathematics with Applications. 53 (1): 137–143. arXiv:math/0506034. doi:10.1016/j.camwa.2006.10.029.
S2CID 45639619. Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998), The book of involutions, Colloquium Publications, vol. 44, With a preface by J. Tits, Providence, RI: American Mathematical Society, ISBN 0-8218-0904-0, Zbl 0955.16001 "Involution", Encyclopedia of Mathematics, EMS Press, 2001 [1994] == External links == Media related to Involution at Wikimedia Commons
Wikipedia:Ioan Dzițac#0
Ioan Dzițac (14 February 1953 – 6 February 2021) was a Romanian professor of mathematics and computer science, of Ukrainian descent. He obtained his B.S. and M.Sc. in Mathematics (1977) and PhD in Computer Science (2002) from Babeș-Bolyai University of Cluj-Napoca. He was a professor at the Aurel Vlaicu University of Arad and part of the leadership of Agora University in Oradea until his sudden death in 2021. == Education and career == Dzițac was born in Poienile de sub Munte, Maramureș County. After attending elementary school in Repedea (1960–1968), he studied at the Dragoș Vodă High School in Sighetu Marmației (1968–1972) and then at the Faculty of Mathematics, Babeș-Bolyai University (1972–1977). In 2002 he obtained his PhD in computer science at the Faculty of Mathematics and Informatics, Babeș-Bolyai University, with the thesis "Methods for parallel and distributed computing in solving operational equations" under the supervision of Grigor Moldovan. Between 1977 and 1991, Dzițac taught mathematics in pre-university education, obtaining a permanent teacher certification at all levels (1980), second grade teacher certification (1985), and first grade teacher certification (1990). In 1986 he received the title of Distinguished Professor. From 1991 he held, through competitive appointment, positions in higher education: lecturer (1991–2003) and associate professor (2003–2005) at the University of Oradea, associate professor (2005–2009) at Agora University, and then professor (2009–2021) at Aurel Vlaicu University of Arad. At Agora University he founded, together with Florin Gheorghe Filip and Mișu-Jan Manolescu, the International Conference on Computers, Communications & Control (ICCCC) and the International Journal of Computers, Communications & Control (IJCCC), which in less than two years came to be covered by Thomson ISI. From 2006 until his sudden death in 2021, he was associate editor-in-chief of the IJCCC.
Dzițac was a visiting professor at the Chinese Academy of Sciences (2013–2016), as well as a consulting member of Hoseo University in South Korea. == Management positions == In 1996, Dzițac was elected vice president of the Romanian Society of Applied and Industrial Mathematics (ROMAI), a position he occupied until 2011 (being re-elected between 1999 and 2009). In April 2004, he was elected as director of the Department of Mathematics and Computer Science of the University of Oradea, a position he occupied for a year, and in October 2005 he was elected head of department at Agora University. From October 2009 he was the director of the Centre "Agora Research & Development". As of 2012, Dzițac was rector of Agora University. == Awards == In recognition of his merits, Dzițac was awarded the following degrees and titles: Title of “Distinguished Professor” awarded by the Romanian Ministry of Education (1988) The Award for Young Researcher awarded by the Romanian Society of Applied and Industrial Mathematics (2003) “SIVECO” Popularity Award awarded for an informatics product for E-Learning (2006) “Excellence Diploma” awarded by the Associations of the Economics Faculties from Romania (2007) “Excellence Diploma” EWNLC 2008 (2008) Title of “Teacher of the Year 2008” awarded by the Agora University Senate “Excellence Diploma” awarded by the “Aurel Vlaicu” University of Arad (2010) Senior Member of IEEE (2011) == Published works == Dzițac was the author or co-author of over 50 scientific papers in mathematics, computer science, and didactics, including over 15 in ISI-indexed journals. He was also the author, co-author, or editor of over 20 books on mathematics, computer science and didactics.
Among them: Monte Carlo Method: Hazard and Determinism (in collaboration, University of Oradea Publishing House, 2000) Parallel Computing (University of Oradea Publishing House, 2001) Didactics of Informatics (in collaboration, University of Oradea Publishing House, 2003) Proceedings of the 11th Conference on applied and industrial mathematics (CAIM 2003), Vol. 1–2 (in collaboration, University of Oradea Publishing House, 2003) Proceedings of International Conference on Computers and Communications-ICCC 2004 (in collaboration, University of Oradea Publishing House, 2004) Economic Mathematics (Agora University Publishing House, 2005) Distributed Systems: Information Models (in collaboration, Agora University Publishing House, 2006) Distributed Systems: Mathematical Models (in collaboration, Agora University Publishing House, 2006) Parallel and Distributed Methods for Algebraic Systems Resolution (Agora University Publishing House, 2006) Proceedings of International Conference on Computers, Communications & Control-ICCCC 2006 (in collaboration, Agora University Publishing House, 2006), Proceedings of International Conference on Computers, Communications & Control-ICCCC 2008 (in collaboration, Agora University Publishing House, 2008), From Natural Language to Soft Computing: New Paradigms in Artificial Intelligence (in collaboration, Romanian Academy Publishing House, 2008) etc. == Family life == Dzițac was first married to Karla Dzițac, a marriage that resulted in two children: Renata Moca and Cristian Dzițac. Renata is a commodity specialist at Plexus in Oradea and Cristian is an engineer. Dzițac divorced his first wife in the 1990s. On 14 February 1996, he married Simona Dzițac (an assistant professor of engineering at the University of Oradea), with whom he had another daughter, Domnica Ioana Dzițac (born January 30, 1999). He considered his youngest daughter, Domnica, the heir to his academic and professional work.
She completed an International Baccalaureate Diploma in Denmark and then followed in her father's footsteps with an undergraduate degree in Computer Science from New York University Abu Dhabi (September 2017 – May 2021). == Death == Dzițac died on 6 February 2021 of a heart attack in Oradea, Romania, eight days short of his 68th birthday. He was deeply mourned by family, friends, collaborators, mentees and students, and his work remains as his legacy. == Notes == Adelina Georgescu, Cătălin-Liviu Bichir, George Cârlig, Romanian Mathematicians from everywhere, Ed. Power Flower, 2004, Archived 2013-05-25 at the Wayback Machine Personal website of Ioan Dzițac Romanian National Library Ioan Dzițac profile at Ad Astra ROMAI website Agora University ISI Web of Science RID "Aurel Vlaicu" University of Arad Prof.univ.Phd. Ioan Dzițac, Archived 2010-07-19 at the Wayback Machine Ioan Dzițac in Google Scholar Citations == External links == Website International Conference on Computers, Communications & Control Website International Journal of Computers, Communications & Control
Wikipedia:Ioana Dumitriu#0
Ioana Dumitriu (born July 6, 1976) is a Romanian-American mathematician who works as a professor of mathematics at the University of California, San Diego. Her research interests include the theory of random matrices, numerical analysis, scientific computing, and game theory. == Life == Dumitriu is the daughter of two Romanian electrical engineering professors from Bucharest. Early in her life she was identified as having mathematical talent, and at age 11 won a national mathematics contest. She entered mathematics training camps in preparation for participation on the Romanian team at the International Mathematical Olympiad, although her highest level of participation in the olympiad was the national semifinal. As a 19-year-old freshman at New York University (NYU), Dumitriu already was taking graduate-level classes in mathematics. She graduated summa cum laude from NYU in 1999 with a B.A. in mathematics and a minor in computer science. She earned her Ph.D. in 2003 from the Massachusetts Institute of Technology under the supervision of Alan Edelman, with a thesis on Eigenvalue statistics for beta-ensembles. After postdoctoral research as a Miller Research Fellow at the University of California, Berkeley, she joined the faculty of the University of Washington in 2006, moving to UC San Diego in 2019. == Awards and honors == Dumitriu won the Alice T. Schafer prize for excellence in mathematics by an undergraduate woman in 1996. Also in 1996, as a sophomore at New York University, Dumitriu became the first woman to become a Putnam Fellow, meaning that she earned one of the top five scores at the William Lowell Putnam Mathematical Competition. In 1995, 1996, and 1997 she won the Elizabeth Lowell Putnam Award that is given to the top woman in the contest, a record that was not matched until ten years later when Alison Miller also won the same award in three consecutive years. 
She won the Leslie Fox Prize for Numerical Analysis (given to a young numerical analysis researcher who excels both mathematically and in presentation skills) in 2007. In 2009 she received a CAREER Award from the National Science Foundation. In 2012, she became one of the inaugural fellows of the American Mathematical Society. == References == == Selected publications == Dumitriu, Ioana; Edelman, Alan (2002). "Matrix models for beta ensembles". Journal of Mathematical Physics. 43 (11): 5830–5847. arXiv:math-ph/0206043. Bibcode:2002JMP....43.5830D. doi:10.1063/1.1507823. MR 1936554. S2CID 15758222. Demmel, James; Dumitriu, Ioana; Holtz, Olga (2007). "Fast linear algebra is stable". Numerische Mathematik. 108 (1): 59–91. arXiv:math/0612264. doi:10.1007/s00211-007-0114-x. MR 2350185. S2CID 6057731. == External links == Ioana Dumitriu publications indexed by Google Scholar
Wikipedia:Ion Ghica#0
Ion Ghica (Romanian pronunciation: [iˈon ˈɡika] ; 12 August 1816 – 7 May 1897) was a Romanian statesman, mathematician, diplomat and politician, who was Prime Minister of Romania five times. He was a full member of the Romanian Academy and its president many times (1876–1882, 1884–1887, 1890–1893 and 1894–1895). He was the older brother and associate of Pantazi Ghica, a prolific writer and politician. == Early life and Revolution == He was born in Bucharest, Wallachia, to the prominent Ghica boyar family, and was the nephew of both Grigore Alexandru Ghica (who was to become Prince of Wallachia in the 1840s and 1850s) and Ion Câmpineanu, a Carbonari-inspired radical. His father was Dimitrie (Tache) Ghica and his mother was Maria née Câmpineanu. Ion Ghica was educated in Bucharest and in Western Europe, studying engineering and mathematics at the School of Mines in Paris, France, from 1837 to 1840. After finishing his studies in Paris, he left for Moldavia and was involved in the failed Frăția ("Brotherhood") conspiracy of 1848, which was intended to bring about the union of Wallachia and Moldavia under one native Romanian leader, Prince Mihai Sturdza. Ion Ghica became a professor of geology and mineralogy and later professor of political economy at the Academia Mihăileană, which was founded by the same Prince Sturdza in Iași (the future University of Iași). He is considered the first great Romanian economist. He joined the Wallachian revolutionary camp, and, in the name of the Provisional Government then established in Bucharest, went to Istanbul to approach the Ottoman Imperial government; he, Nicolae Bălcescu, and General Gheorghe Magheru were instrumental in mediating negotiations between the Transylvanian Romanian leader Avram Iancu and the Hungarian Revolutionary government of Lajos Kossuth.
== Prince of Samos == While in Istanbul, he was appointed Bey (governor) of Samos (1854–1859), where he proved his leadership skills by extirpating local piracy (most of which was aimed at transports supplying the Crimean War). After completing the task, Ghica was awarded the honorary title of Bey of Samos by Sultan Abd-ul-Mejid I in 1856. == Political career in Romania == In 1859, after the union of Moldavia and Wallachia had been effected, Prince Alexandru Ioan Cuza asked Ion Ghica to return. Later (1866), despite being trusted by Prince Cuza, Ghica took an active part in the secret grouping that secured Cuza's overthrow. He was the first prime minister under Prince of Romania (afterwards King of Romania) Carol of Hohenzollern. In 1866, Ghica became the first chairman of the newly established Bank of Romania. He is also noted as one of the first major Liberal figures in the Kingdom of Romania, and one of the leaders of the incipient Liberal Party. His group's radicalism, with its boyar leadership that had engineered the defunct Revolution, surfaced as republicanism whenever Carol approached the Conservatives; Ghica joined the anti-dynastic movement of 1870–1871 that had surfaced with the Republic of Ploiești. The matter of the Liberals' loyalty was ultimately settled in 1876, with the exceptionally long Liberal Ministry of Ion Brătianu. In 1881, Ghica was appointed Romanian Minister in London, an office he retained until 1889; he died in Ghergani, Dâmbovița County. Furthermore, Ghica was a member of the Macedo-Romanian Cultural Society.
He was also the author of Amintiri din pribegie ("Recollections from Exile"), in 1848, and of Convorbiri Economice ("Conversations on Economics"), dealing with major economic issues. He was the first to advocate the favoring of local initiatives over foreign investments in industry and commerce – to a certain extent, this took the form of protectionism (a characteristic of the Liberal Party throughout the coming period, and until World War II). == Footnotes == == References == Gaster, Moses (1911). "Ghica" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 11 (11th ed.). Cambridge University Press. p. 922. Mamina, Ion; Bulei, Ion (1994), Guverne și guvernanți (1866–1916) (in Romanian), București: Silex Publishing Neagoe, Stelian (1995), Istoria guvernelor României de la începuturi – 1859 până în zilele noastre – 1995 (in Romanian), București: Machiavelli Publishing
Wikipedia:Irene M. Gamba#0
Irene Martínez Gamba (born 1957) is an Argentine–American mathematician. She works as a professor of mathematics at the University of Texas at Austin, where she holds the W.A. Tex Moncrief, Jr. Chair in Computational Engineering and Sciences and is head of the Applied Mathematics Group in the Oden Institute for Computational Engineering and Sciences. == Education and career == Gamba graduated from the University of Buenos Aires in 1981. She went to the University of Chicago for her graduate studies, earning a master's degree in 1985 and a Ph.D. in 1989, under the supervision of Jim Douglas, Jr. After postdoctoral studies at Purdue University and the Courant Institute of Mathematical Sciences of New York University, she became an assistant professor at NYU in 1994 and associate professor in 1996. She became a professor at the University of Texas at Austin in 1997. At the University of Texas, she was the Joe B. and Louise Cook Professor from 2007 to 2013 and the John T. Stuart III Centennial Professor from 2013 to 2014, and has held the W.A. Tex Moncrief, Jr. Chair in Computational Sciences and Engineering III since 2014. == Recognition == In 2012, Gamba became a fellow of the Society for Industrial and Applied Mathematics, and one of the inaugural fellows of the American Mathematical Society. The Association for Women in Mathematics selected her as their 2014 Sonia Kovalevsky Lecturer. She is also a former AMS Council Member at large. == References == == External links == Home page Irene M. Gamba publications indexed by Google Scholar
Wikipedia:Irene Moroz#0
Irene Margaret Moroz is a British applied mathematician whose research interests include differential equations including the Schrödinger–Newton equation, attractors, synchronization of chaos, and applications to geophysical fluid dynamics, voice analysis, the population dynamics of plankton, and dynamo theory. She is Professor of Mathematics and Applied Mathematics in the Mathematical Institute, University of Oxford and Senior Mathematics Fellow at St Hilda's College, Oxford. Moroz was educated at the University of Leeds, and was formerly affiliated with the University of East Anglia before becoming Applied Mathematics Fellow at St. Hilda's in 1992. At the Mathematical Institute, she is group lead for the Mathematical Geoscience Group. Her work in dynamo theory, with collaborators including Raymond Hide and Andrew Soward, involved the derivation of simple systems of coupled differential equations for dynamos such as the Earth's magnetic field, that can model phenomena involving intermittent collapses of these fields. == References == == External links == Home page Irene Moroz publications indexed by Google Scholar Irene Moroz at the Mathematics Genealogy Project
Wikipedia:Irene Sabadini#0
Irene Maria Sabadini is an Italian mathematician specializing in complex analysis, hypercomplex analysis and the analysis of superoscillations. She is a professor of mathematics at the Polytechnic University of Milan, and head of the department of mathematics there. == Education == Sabadini earned her PhD at the University of Milan in 1996. Her dissertation, Toward a Theory of Quaternionic Hyperfunctions, was supervised by Daniele C. Struppa. == Books == Sabadini is the author of multiple books in mathematics including: Analysis of Dirac systems and computational algebra (with Colombo, Sommen, and Struppa, Birkhäuser 2004) Noncommutative functional calculus: Theory and applications of slice hyperholomorphic functions (with Colombo and Struppa, Birkhäuser/Springer, 2011) Entire slice regular functions (with Colombo and Struppa, Springer, 2016) Slice hyperholomorphic Schur analysis (with Alpay and Colombo, Birkhäuser/Springer, 2016) The mathematics of superoscillations (with Aharonov, Colombo, Struppa, and Tollaksen, American Mathematical Society, 2017) Quaternionic approximation: With application to slice regular functions (with Gal, Birkhäuser/Springer, 2019) Quaternionic de Branges spaces and characteristic operator function (Springer, 2020) Michele Sce's works in hypercomplex analysis: A translation with commentaries (with Colombo and Struppa, Birkhäuser/Springer, 2020) She is also the editor or coeditor of multiple edited volumes. == References == == External links == Irene Sabadini publications indexed by Google Scholar
Wikipedia:Irene Sciriha#0
Irene Sciriha Aquilina is a Maltese mathematician specializing in spectral graph theory and chemical graph theory. A particular topic of her research has been the singular graphs, graphs whose adjacency matrix is a singular matrix, and the nut graphs, singular graphs all of whose nontrivial induced subgraphs are non-singular. She is a professor of mathematics at the University of Malta. She is a Fellow of the Institute of Combinatorics and its Applications. == Education and career == Sciriha studied mathematics at the University of Malta, earning bachelor's and master's degrees as the only woman studying mathematics or physics there at that time. She completed a PhD in 1998 at the University of Reading in England. Her dissertation, On some aspects of graph spectra, was jointly supervised by Anthony Hilton and Stanley Fiorini, and she also worked with Nash-Williams, David Stirling and Peter Rowlinson. She began teaching at the University of Malta in 1971, and is a professor of mathematics there. She was convenor of European Women in Mathematics from 2000 to 2001. She was also a representative for Malta on the Helsinki Group of the European Commission. == Recognition == Sciriha is a Fellow of the Institute of Combinatorics and its Applications. One of her students, chemist Martha Borg, won the Turner Prize at the University of Sheffield for a doctoral dissertation co-advised by Sciriha and Patrick W. Fowler. == References == == External links == Irene Sciriha presenting on Eigenvalues in Graphs and Molecules for the Malta Mathematical Society, 1 May 2021, via YouTube Irene Sciriha publications indexed by Google Scholar Home page
Wikipedia:Irina Mitrea#0
Irina Mitrea is a Romanian-American mathematician who works as professor and department chair at the Department of Mathematics of Temple University. She is known for her contributions to harmonic analysis, particularly on the interface of this field with partial differential equations, geometric measure theory, scattering theory, complex analysis and validated numerics. She is also known for her efforts to promote mathematics among young women. == Education and career == Mitrea earned a master's degree from the University of Bucharest in 1993, and completed her doctorate in 2000 at the University of Minnesota under the supervision of Carlos Kenig and Mikhail Safonov. Her dissertation was Spectral Properties of Elliptic Layer Potentials on Non-Smooth Domains. Her publications include over fifty research articles and three books published by Springer‐Verlag, Birkhäuser, and De Gruyter. After temporary positions at the Institute for Advanced Study and Cornell University, she joined the faculty of the University of Virginia in 2004, and earned tenure there in 2007. She also taught at the Worcester Polytechnic Institute before moving to Temple. She is the founder of the Girls and Mathematics Program at Temple University, a week-long summer camp in mathematics for middle-school girls. She is a member of the National Alliance for Doctoral Studies in the Mathematical Sciences, an organization providing mentorship to "build a national community of students, faculty, and staff who will work together to transform our departments, colleges, and universities into institutions where all students are welcome." == Recognition == In 2008, Mitrea won the Ruth I. Michler Memorial Prize of the Association for Women in Mathematics. In 2014, she was elected as a fellow of the American Mathematical Society "for contributions to partial differential equations and related fields as well as outreach to women and under-represented minorities at all educational levels." 
Also in 2014, Mitrea was awarded a Von Neumann Fellowship at the Institute for Advanced Study in Princeton, New Jersey. In 2015 she received the AWM Service Award from the Association for Women in Mathematics. She is part of the 2019 class of fellows of the Association for Women in Mathematics. == References == == External links == Home page
Wikipedia:Irina Shevtsova#0
Irina Shevtsova (Russian: Ири́на Генна́дьевна Шевцо́ва) (born 1983) is a Russian mathematician, Dr.Sc., and Professor at Moscow State University. She graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 2004, and has been working at Moscow State University since 2006. She defended the thesis "Optimization of the structure of moment estimates of the accuracy of normal approximation for distributions of sums of independent random variables" for the degree of Doctor of Physical and Mathematical Sciences in 2013. Shevtsova is the author of two books and more than 70 scientific articles. Her areas of research include central limit theorems of probability theory, estimates of the rate of convergence, and the analytical methods of probability theory. A number of her papers are devoted to the refinement of estimates of the rate of convergence in the central limit theorem for sums of independent random variables under different moment conditions, and also to the study of exact and asymptotically exact constants in these estimates. In particular, the upper bound for the absolute constant in the classical Berry-Esseen inequality has been refined, and two-sided estimates for asymptotically exact constants in the Berry-Esseen inequality have been obtained in the absence of the third moment. == Bibliography == Grigoriev, Evgeny, ed. (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory. Moscow: Publishing house of Moscow University. pp. 325–326. ISBN 978-5-211-05838-5. == References == == External links == Irina Shevtsova publications indexed by Google Scholar
Wikipedia:Irreducible polynomial#0
In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials. The property of irreducibility depends on the nature of the coefficients that are accepted for the possible factors, that is, the ring to which the coefficients of the polynomial and its possible factors are supposed to belong. For example, the polynomial x2 − 2 is a polynomial with integer coefficients, but, as every integer is also a real number, it is also a polynomial with real coefficients. It is irreducible if it is considered as a polynomial with integer coefficients, but it factors as ( x − 2 ) ( x + 2 ) {\displaystyle \left(x-{\sqrt {2}}\right)\left(x+{\sqrt {2}}\right)} if it is considered as a polynomial with real coefficients. One says that the polynomial x2 − 2 is irreducible over the integers but not over the reals. Polynomial irreducibility can be considered for polynomials with coefficients in an integral domain, and there are two common definitions. Most often, a polynomial over an integral domain R is said to be irreducible if it is not the product of two polynomials that have their coefficients in R, and that are not units in R. Equivalently, for this definition, an irreducible polynomial is an irreducible element in a ring of polynomials over R. If R is a field, the two definitions of irreducibility are equivalent. For the second definition, a polynomial is irreducible if it cannot be factored into polynomials with coefficients in the same domain that both have a positive degree. Equivalently, a polynomial is irreducible if it is irreducible over the field of fractions of the integral domain. For example, the polynomial 2 ( x 2 − 2 ) ∈ Z [ x ] {\displaystyle 2(x^{2}-2)\in \mathbb {Z} [x]} is irreducible for the second definition, and not for the first one.
On the other hand, x 2 − 2 {\displaystyle x^{2}-2} is irreducible in Z [ x ] {\displaystyle \mathbb {Z} [x]} for the two definitions, while it is reducible in R [ x ] . {\displaystyle \mathbb {R} [x].} A polynomial that is irreducible over any field containing the coefficients is absolutely irreducible. By the fundamental theorem of algebra, a univariate polynomial is absolutely irreducible if and only if its degree is one. On the other hand, with several indeterminates, there are absolutely irreducible polynomials of any degree, such as x 2 + y n − 1 , {\displaystyle x^{2}+y^{n}-1,} for any positive integer n. A polynomial that is not irreducible is sometimes said to be a reducible polynomial. Irreducible polynomials appear naturally in the study of polynomial factorization and algebraic field extensions. It is helpful to compare irreducible polynomials to prime numbers: prime numbers (together with the corresponding negative numbers of equal magnitude) are the irreducible integers. They exhibit many of the general properties of the concept of "irreducibility" that equally apply to irreducible polynomials, such as the essentially unique factorization into prime or irreducible factors. When the coefficient ring is a field or other unique factorization domain, an irreducible polynomial is also called a prime polynomial, because it generates a prime ideal. == Definition == If F is a field, a non-constant polynomial is irreducible over F if its coefficients belong to F and it cannot be factored into the product of two non-constant polynomials with coefficients in F. A polynomial with integer coefficients, or, more generally, with coefficients in a unique factorization domain R, is sometimes said to be irreducible (or irreducible over R) if it is an irreducible element of the polynomial ring, that is, it is not invertible, not zero, and cannot be factored into the product of two non-invertible polynomials with coefficients in R. 
This definition generalizes the definition given for the case of coefficients in a field, because, over a field, the non-constant polynomials are exactly the polynomials that are non-invertible and non-zero. Another definition is frequently used, saying that a polynomial is irreducible over R if it is irreducible over the field of fractions of R (the field of rational numbers, if R is the integers). This second definition is not used in this article. The equivalence of the two definitions depends on R. == Simple examples == The following six polynomials demonstrate some elementary properties of reducible and irreducible polynomials: p 1 ( x ) = x 2 + 4 x + 4 = ( x + 2 ) 2 p 2 ( x ) = x 2 − 4 = ( x − 2 ) ( x + 2 ) p 3 ( x ) = 9 x 2 − 3 = 3 ( 3 x 2 − 1 ) = 3 ( x 3 − 1 ) ( x 3 + 1 ) p 4 ( x ) = x 2 − 4 9 = ( x − 2 3 ) ( x + 2 3 ) p 5 ( x ) = x 2 − 2 = ( x − 2 ) ( x + 2 ) p 6 ( x ) = x 2 + 1 = ( x − i ) ( x + i ) {\displaystyle {\begin{aligned}p_{1}(x)&=x^{2}+4x+4\,={(x+2)^{2}}\\p_{2}(x)&=x^{2}-4\,={(x-2)(x+2)}\\p_{3}(x)&=9x^{2}-3\,=3\left(3x^{2}-1\right)\,=3\left(x{\sqrt {3}}-1\right)\left(x{\sqrt {3}}+1\right)\\p_{4}(x)&=x^{2}-{\frac {4}{9}}\,=\left(x-{\frac {2}{3}}\right)\left(x+{\frac {2}{3}}\right)\\p_{5}(x)&=x^{2}-2\,=\left(x-{\sqrt {2}}\right)\left(x+{\sqrt {2}}\right)\\p_{6}(x)&=x^{2}+1\,={(x-i)(x+i)}\end{aligned}}} Over the integers, the first three polynomials are reducible (the third one is reducible because the factor 3 is not invertible in the integers); the last two are irreducible. (The fourth, of course, is not a polynomial over the integers.) Over the rational numbers, the first two and the fourth polynomials are reducible, but the other three polynomials are irreducible (as a polynomial over the rationals, 3 is a unit, and, therefore, does not count as a factor). Over the real numbers, the first five polynomials are reducible, but p 6 ( x ) {\displaystyle p_{6}(x)} is irreducible. Over the complex numbers, all six polynomials are reducible. 
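For quadratics with integer coefficients, the classification in the examples above can be carried out mechanically: ax2 + bx + c (with a ≠ 0) is reducible over the rationals exactly when its discriminant b2 − 4ac is a perfect square, and reducible over the reals exactly when the discriminant is non-negative. A minimal Python sketch of this test (function names are illustrative):

```python
import math

def reducible_over_Q(a, b, c):
    """ax^2 + bx + c (integer coefficients) has rational roots iff
    its discriminant b^2 - 4ac is a perfect square."""
    d = b * b - 4 * a * c
    if d < 0:
        return False
    r = math.isqrt(d)  # integer square root
    return r * r == d

def reducible_over_R(a, b, c):
    """Over the reals, a quadratic is reducible iff its discriminant is non-negative."""
    return b * b - 4 * a * c >= 0

# p1 = x^2 + 4x + 4 and p2 = x^2 - 4: reducible over the rationals.
print(reducible_over_Q(1, 4, 4), reducible_over_Q(1, 0, -4))   # True True
# p5 = x^2 - 2: irreducible over the rationals, reducible over the reals.
print(reducible_over_Q(1, 0, -2), reducible_over_R(1, 0, -2))  # False True
# p6 = x^2 + 1: irreducible even over the reals.
print(reducible_over_R(1, 0, 1))                               # False
```

Note that this only tests reducibility over the rationals and the reals; over the integers, the content of the polynomial also matters, as the example 9x2 − 3 = 3(3x2 − 1) shows.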
== Over the complex numbers == Over the complex field, and, more generally, over an algebraically closed field, a univariate polynomial is irreducible if and only if its degree is one. This fact is known as the fundamental theorem of algebra in the case of the complex numbers and, in general, as the condition of being algebraically closed. It follows that every nonconstant univariate polynomial can be factored as a ( x − z 1 ) ⋯ ( x − z n ) {\displaystyle a\left(x-z_{1}\right)\cdots \left(x-z_{n}\right)} where n {\displaystyle n} is the degree, a {\displaystyle a} is the leading coefficient and z 1 , … , z n {\displaystyle z_{1},\dots ,z_{n}} are the zeros of the polynomial (not necessarily distinct, and not necessarily having explicit algebraic expressions). There are irreducible multivariate polynomials of every degree over the complex numbers. For example, the polynomial x n + y n − 1 , {\displaystyle x^{n}+y^{n}-1,} which defines a Fermat curve, is irreducible for every positive integer n. == Over the reals == Over the field of reals, the degree of an irreducible univariate polynomial is either one or two. More precisely, the irreducible polynomials are the polynomials of degree one and the quadratic polynomials a x 2 + b x + c {\displaystyle ax^{2}+bx+c} that have a negative discriminant b 2 − 4 a c . {\displaystyle b^{2}-4ac.} It follows that every non-constant univariate polynomial can be factored as a product of polynomials of degree at most two. For example, x 4 + 1 {\displaystyle x^{4}+1} factors over the real numbers as ( x 2 + √2 x + 1 ) ( x 2 − √2 x + 1 ) , {\displaystyle \left(x^{2}+{\sqrt {2}}x+1\right)\left(x^{2}-{\sqrt {2}}x+1\right),} and it cannot be factored further, as both factors have a negative discriminant: ( ± √2 ) 2 − 4 = − 2 < 0.
{\displaystyle \left(\pm {\sqrt {2}}\right)^{2}-4=-2<0.} == Unique factorization property == Every polynomial over a field F may be factored into a product of a non-zero constant and a finite number of irreducible (over F) polynomials. This decomposition is unique up to the order of the factors and the multiplication of the factors by non-zero constants whose product is 1. Over a unique factorization domain the same theorem is true, but is more accurately formulated by using the notion of primitive polynomial. A primitive polynomial is a polynomial over a unique factorization domain, such that 1 is a greatest common divisor of its coefficients. Let F be a unique factorization domain. A non-constant irreducible polynomial over F is primitive. A primitive polynomial over F is irreducible over F if and only if it is irreducible over the field of fractions of F. Every polynomial over F may be decomposed into the product of a non-zero constant and a finite number of non-constant irreducible primitive polynomials. The non-zero constant may itself be decomposed into the product of a unit of F and a finite number of irreducible elements of F. Both factorizations are unique up to the order of the factors and the multiplication of the factors by a unit of F. It is this theorem that motivates the convention that the definition of an irreducible polynomial over a unique factorization domain often supposes the polynomial to be non-constant. All algorithms which are presently implemented for factoring polynomials over the integers and over the rational numbers use this result (see Factorization of polynomials). == Over the integers and finite fields == The irreducibility of a polynomial over the integers Z {\displaystyle \mathbb {Z} } is related to that over the field F p {\displaystyle \mathbb {F} _{p}} of p {\displaystyle p} elements (for a prime p {\displaystyle p} ).
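The relation between irreducibility over the integers and over F_p can be explored by brute force for small degrees, since F_p has only finitely many monic polynomials of each degree. A sketch in Python (helper names are illustrative; coefficient lists run from the constant term upward):

```python
# Sketch: brute-force irreducibility test over F_p for small degrees.
# A polynomial of degree n is reducible iff it has a monic factor of
# degree between 1 and n//2, so we try all candidates.
from itertools import product

def poly_mod(f, g, p):
    """Remainder of f divided by the monic polynomial g, over F_p."""
    f = [c % p for c in f]
    dg = len(g) - 1
    while len(f) > dg:
        lead = f[-1]
        if lead:
            for i in range(dg + 1):
                idx = len(f) - 1 - dg + i
                f[idx] = (f[idx] - lead * g[i]) % p
        f.pop()
    return f

def irreducible_mod_p(f, p):
    """True if f (with nonzero leading coefficient mod p) is irreducible over F_p."""
    deg = len(f) - 1
    for d in range(1, deg // 2 + 1):
        for tail in product(range(p), repeat=d):
            g = list(tail) + [1]          # monic candidate divisor of degree d
            if not any(poly_mod(f, g, p)):
                return False
    return True

# x^2 + x + 1 is irreducible mod 2, hence irreducible over Z.
print(irreducible_mod_p([1, 1, 1], 2))        # True
# x^4 + 1 is reducible mod 2 (as noted below, it is reducible mod every prime).
print(irreducible_mod_p([1, 0, 0, 0, 1], 2))  # False
```

This exhaustive search is only practical for tiny p and degree; real computer algebra systems use far faster factorization algorithms over finite fields.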
In particular, if a univariate polynomial f over Z {\displaystyle \mathbb {Z} } is irreducible over F p {\displaystyle \mathbb {F} _{p}} for some prime p {\displaystyle p} that does not divide the leading coefficient of f (the coefficient of the highest power of the variable), then f is irreducible over Z {\displaystyle \mathbb {Z} } (that is, it is not the product of two non-constant polynomials with integer coefficients). Eisenstein's criterion is a variant of this property in which divisibility by p 2 {\displaystyle p^{2}} is also involved. The converse, however, is not true: there are polynomials of arbitrarily large degree that are irreducible over the integers and reducible over every finite field. A simple example of such a polynomial is x 4 + 1. {\displaystyle x^{4}+1.} The relationship between irreducibility over the integers and irreducibility modulo p is deeper than the previous result: to date, all implemented algorithms for factorization and irreducibility over the integers and over the rational numbers use the factorization over finite fields as a subroutine. The number of degree n irreducible monic polynomials over a field F q {\displaystyle \mathbb {F} _{q}} for q a prime power is given by Moreau's necklace-counting function: M ( q , n ) = 1 n ∑ d ∣ n μ ( d ) q n d , {\displaystyle M(q,n)={\frac {1}{n}}\sum _{d\mid n}\mu (d)q^{\frac {n}{d}},} where μ is the Möbius function. For q = 2, such polynomials are commonly used to generate pseudorandom binary sequences. In some sense, almost all polynomials with coefficients zero or one are irreducible over the integers. More precisely, if a version of the Riemann hypothesis for Dedekind zeta functions is assumed, the probability of being irreducible over the integers for a polynomial with random coefficients in {0, 1} tends to one when the degree increases. == Algorithms == The unique factorization property of polynomials does not mean that the factorization of a given polynomial may always be computed.
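Moreau's necklace-counting formula is straightforward to evaluate; a small Python sketch (the Möbius function is computed by trial division, which is adequate for small n):

```python
# Sketch: counting monic irreducible polynomials of degree n over F_q
# with Moreau's necklace-counting function M(q, n).

def mobius(n):
    """Moebius function mu(n), via trial factorization (fine for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # squared prime factor => mu = 0
                return 0
            result = -result
        d += 1
    if n > 1:                   # leftover prime factor
        result = -result
    return result

def num_irreducible(q, n):
    """M(q, n) = (1/n) * sum over d | n of mu(d) * q^(n/d)."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

# Over F_2: x and x+1 in degree 1; only x^2 + x + 1 in degree 2.
print([num_irreducible(2, n) for n in range(1, 6)])  # [2, 1, 2, 3, 6]
```

For instance M(2, 4) = (2^4 − 2^2)/4 = 3, matching the three irreducible quartics over F_2.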
Even the irreducibility of a polynomial may not always be proved by a computation: there are fields over which no algorithm can exist for deciding the irreducibility of arbitrary polynomials. Algorithms for factoring polynomials and deciding irreducibility are known and implemented in computer algebra systems for polynomials over the integers, the rational numbers, finite fields and finitely generated field extensions of these fields. All these algorithms use the algorithms for factorization of polynomials over finite fields. == Field extension == The notions of irreducible polynomial and of algebraic field extension are strongly related, in the following way. Let x be an element of an extension L of a field K. This element is said to be algebraic if it is a root of a nonzero polynomial with coefficients in K. Among the polynomials of which x is a root, there is exactly one which is monic and of minimal degree, called the minimal polynomial of x. The minimal polynomial of an algebraic element x of L is irreducible, and is the unique monic irreducible polynomial of which x is a root. The minimal polynomial of x divides every polynomial which has x as a root (this is Abel's irreducibility theorem). Conversely, if P ( X ) ∈ K [ X ] {\displaystyle P(X)\in K[X]} is a univariate polynomial over a field K, let L = K [ X ] / P ( X ) {\displaystyle L=K[X]/P(X)} be the quotient ring of the polynomial ring K [ X ] {\displaystyle K[X]} by the ideal generated by P. Then L is a field if and only if P is irreducible over K. In this case, if x is the image of X in L, the minimal polynomial of x is the quotient of P by its leading coefficient. An example of the above is the standard definition of the complex numbers as C = R [ X ] / ( X 2 + 1 ) .
{\displaystyle \mathbb {C} =\mathbb {R} [X]\;/\left(X^{2}+1\right).} If a polynomial P has an irreducible factor Q over K of degree greater than one, one may apply to Q the preceding construction of an algebraic extension, to get an extension in which P has at least one more root than in K. Iterating this construction, one eventually obtains a field over which P factors into linear factors. This field, unique up to a field isomorphism, is called the splitting field of P. == Over an integral domain == If R is an integral domain, an element f of R that is neither zero nor a unit is called irreducible if there are no non-units g and h with f = gh. One can show that every prime element is irreducible; the converse is not true in general but holds in unique factorization domains. The polynomial ring F[x] over a field F (or any unique-factorization domain) is again a unique factorization domain. Inductively, this means that the polynomial ring in n indeterminates (over a ring R) is a unique factorization domain if the same is true for R. == See also == Gauss's lemma (polynomial) Rational root theorem, a method of finding whether a polynomial has a linear factor with rational coefficients Eisenstein's criterion Perron's irreducibility criterion Hilbert's irreducibility theorem Cohn's irreducibility criterion Irreducible component of a topological space Factorization of polynomials over finite fields Quartic function § Reducible quartics Cubic function § Factorization Casus irreducibilis, the irreducible cubic with three real roots Quadratic equation § Quadratic factorization == Notes == == References == Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556. This classical book covers most of the content of this article.
Gallian, Joseph (2012), Contemporary Abstract Algebra (8th ed.), Cengage Learning, ISBN 978-1285402734 Lidl, Rudolf; Niederreiter, Harald (1997), Finite fields (2nd ed.), Cambridge University Press, ISBN 978-0-521-39231-0, p. 91. Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (3rd ed.), American Mathematical Society, ISBN 9780821816462 Menezes, Alfred J.; Van Oorschot, Paul C.; Vanstone, Scott A. (1997), Handbook of applied cryptography, CRC Press, ISBN 978-0-8493-8523-0, p. 154. == External links == Weisstein, Eric W. "Irreducible Polynomial". MathWorld. irreducible polynomial at PlanetMath. Information on Primitive and Irreducible Polynomials, The (Combinatorial) Object Server.
Wikipedia:Irving Kaplansky#0
Irving Kaplansky (March 22, 1917 – June 25, 2006) was a mathematician, college professor, author, and amateur musician. == Biography == Kaplansky, or "Kap" as his friends and colleagues called him, was born in Toronto, Ontario, Canada, to Polish-Jewish immigrants. His father worked as a tailor, and his mother ran a grocery and, eventually, a chain of bakeries. He attended Harbord Collegiate Institute, receiving the Prince of Wales Scholarship as a teenager. He attended the University of Toronto as an undergraduate and finished first in his class for three consecutive years. In his senior year, he competed in the first William Lowell Putnam Mathematical Competition, becoming one of the first five recipients of the Putnam Fellowship, which paid for graduate studies at Harvard University. Administered by the Mathematical Association of America, the competition is widely considered to be the most difficult mathematics examination in the world and "its difficulty is such that the median score is often zero or one (out of 120) despite being attempted by students specializing in mathematics." After receiving his Ph.D. from Harvard in 1941 as Saunders Mac Lane's first student, he remained at Harvard as a Benjamin Peirce Instructor, and in 1944 moved with Mac Lane to Columbia University for one year of World War II work with the Applied Mathematics Panel on "miscellaneous studies in mathematics applied to warfare analysis with emphasis upon aerial gunnery, studies of fire control equipment, and rocketry and toss bombing". He was a member of the Institute for Advanced Study and attended the 1946 Princeton University Bicentennial. He was professor of mathematics at the University of Chicago from 1945 to 1984, and Chair of the department from 1962 to 1967.
In 1968, Kaplansky was presented an honorary doctoral degree from Queen's University with the university noting "we honour as a Canadian whose clarity of lectures, elegance of writing, and profundity of research have won him widespread acclaim as the greatest mathematician this country has so far produced." From 1967 to 1969, Kaplansky wrote the mathematics section of Encyclopædia Britannica. Kaplansky was the Director of the Mathematical Sciences Research Institute from 1984 to 1992, and the President of the American Mathematical Society from 1985 to 1986. Kaplansky was also an accomplished amateur musician. He had perfect pitch, studied piano until the age of 15, earned money in high school as a dance band musician, taught Tom Lehrer, and played in Harvard's jazz band in graduate school. He also had a regular program on Harvard's student radio station. After moving to the University of Chicago, he stopped playing for two decades, but then returned to music as an accompanist for student-run Gilbert and Sullivan productions and as a calliope player in football game parades. He often composed music based on mathematical themes. One of those compositions, A Song About Pi, is a melody based on assigning notes to the first 14 decimal places of pi, and has occasionally been performed by his daughter, singer-songwriter Lucy Kaplansky. == Mathematical contributions == Kaplansky made major contributions to group theory, ring theory, the theory of operator algebras and field theory and created the Kaplansky density theorem, Kaplansky's game and Kaplansky conjecture. He published more than 150 articles and 11 mathematical books. Kaplansky was the doctoral supervisor of 55 students including notable mathematicians Hyman Bass, Susanna S. Epp, Günter Lumer, Eben Matlis, Donald Ornstein, Ed Posner, Alex F. T. W. Rosenberg, Judith D. Sally, and Harold Widom. He has over 950 academic descendants, including many through his academic grandchildren David J. 
Foulis (who studied with Kaplansky at the University of Chicago before completing his doctorate under the supervision of Kaplansky's student Fred Wright, Jr.) and Carl Pearcy (the student of H. Arlen Brown, who had been jointly supervised by Kaplansky and Paul Halmos). == Awards and honors == Kaplansky was a member of the National Academy of Sciences and the American Academy of Arts and Sciences, Director of the Mathematical Sciences Research Institute, and President of the American Mathematical Society. He was the plenary speaker at the British Mathematical Colloquium in 1966. He won the William Lowell Putnam Mathematical Competition and received a Guggenheim Fellowship, the Jeffery–Williams Prize, and the Leroy P. Steele Prize. == Selected publications == === Books === Kaplansky, Irving (1954). Infinite Abelian groups. revised edn. 1971 with several later reprintings —— (1955). An introduction to differential algebra. University of Chicago Press. 2nd edn. Paris: Hermann. 1957. —— (1966). Introdução à teoria de Galois, por I. Kaplansky. Pref. de Elon Lages Lima. —— (1968). Rings of operators. —— (1969). Fields and rings. 2nd edn. 1972 —— (1969). Linear algebra and geometry; a second course. revised edn. 1974 —— (1970). Algebraic and analytic aspects of operator algebras. American Mathematical Soc. ISBN 9780821816509. —— (1971). Lie Algebras and Locally Compact Groups. University of Chicago Press. ISBN 0-226-42453-7. several later reprintings —— (1972). Set theory and metric spaces. 2nd edn. 1977 —— (1974). Commutative Rings. Lectures in Mathematics. University of Chicago Press. ISBN 0-226-42454-5. 1st edn. 1966; revised 1974 with several later reprintings with I. N. Herstein: —— (1974). Matters mathematical. New York, Harper & Row. ISBN 9780060428037. 2nd edn. 1978 —— (1995). Selected papers and other writings. Springer. ISBN 9780387944067. === Articles === Kaplansky, Irving (1944). "Symbolic solution of certain problems in permutations". Bull. Amer. Math. Soc.
50 (12): 906–914. doi:10.1090/s0002-9904-1944-08261-x. MR 0011393. —— (1945). "A note on groups without isomorphic subgroups". Bull. Amer. Math. Soc. 51 (8): 529–530. doi:10.1090/s0002-9904-1945-08382-7. MR 0012267. with I. S. Cohen: Cohen, I. S.; Kaplansky, Irving (1946). "Rings with a finite number of primes. I". Trans. Amer. Math. Soc. 60: 468–477. doi:10.1090/s0002-9947-1946-0019595-7. MR 0019595. —— (1946). "On a problem of Kurosch and Jacobson". Bull. Amer. Math. Soc. 52 (6): 496–500. doi:10.1090/s0002-9904-1946-08600-0. MR 0016758. —— (1947). "Lattices of continuous functions". Bull. Amer. Math. Soc. 53 (6): 617–623. doi:10.1090/s0002-9904-1947-08856-x. MR 0020715. with Richard F. Arens: Arens, Richard F.; Kaplansky, Irving (1948). "Topological representations of algebras". Trans. Amer. Math. Soc. 63 (3): 457–481. doi:10.1090/s0002-9947-1948-0025453-6. MR 0025453. —— (1948). "Rings with a polynomial identity". Bull. Amer. Math. Soc. 54 (6): 575–580. doi:10.1090/s0002-9904-1948-09049-8. MR 0025451. "Topological rings". Bull. Amer. Math. Soc. 54: 909–916. 1948. doi:10.1090/S0002-9904-1948-09096-6. MR 0027269. —— (1949). "Elementary divisors and modules". Trans. Amer. Math. Soc. 66 (2): 464–491. doi:10.1090/s0002-9947-1949-0031470-3. MR 0031470. —— (1949). "Primary ideals in group algebras". Proc Natl Acad Sci U S A. 35 (3): 133–136. Bibcode:1949PNAS...35..133K. doi:10.1073/pnas.35.3.133. PMC 1062983. PMID 16588871. —— (1950). "Topological representations of algebras. II". Trans. Amer. Math. Soc. 68: 62–75. doi:10.1090/s0002-9947-1950-0032612-4. MR 0032612. —— (1950). "The Weierstrass theorem in fields with valuations". Proc. Amer. Math. Soc. 1 (3): 356–357. doi:10.1090/s0002-9939-1950-0035760-3. MR 0035760. —— (1951). "The structure of certain operator algebras". Trans. Amer. Math. Soc. 70 (2): 219–255. doi:10.1090/s0002-9947-1951-0042066-0. MR 0042066. —— (1952). "Modules over Dedekind rings and valuations rings". Trans. Amer. Math. Soc. 72 (2): 327–340. 
doi:10.1090/s0002-9947-1952-0046349-0. MR 0046349. —— (1952). "Orthogonal similarity in infinite dimensional spaces". Proc. Amer. Math. Soc. 3: 16–25. doi:10.1090/s0002-9939-1952-0046564-1. MR 0046564. —— (1952). "Symmetry of Banach algebras". Proc. Amer. Math. Soc. 3 (3): 396–399. doi:10.1090/s0002-9939-1952-0048711-4. MR 0048711. —— (1952). "Some results on abelian groups". Proc Natl Acad Sci U S A. 38 (6): 538–540. Bibcode:1952PNAS...38..538K. doi:10.1073/pnas.38.6.538. PMC 1063607. PMID 16589142. —— (1953). "Infinite dimensional quadratic forms admitting composition". Proc. Amer. Math. Soc. 4 (6): 956–960. doi:10.1090/s0002-9939-1953-0059895-7. MR 0059895. —— (1953). "Dual modules over a valuation ring. I". Proc. Amer. Math. Soc. 4 (2): 213–219. doi:10.1090/s0002-9939-1953-0053092-7. MR 0053092. —— (1958). "Lie algebras of characteristic p". Trans. Amer. Math. Soc. 89: 149–183. doi:10.1090/s0002-9947-1958-0099359-7. MR 0099359. —— (1962). "Decomposability of modules". Proc. Amer. Math. Soc. 13 (4): 532–535. doi:10.1090/s0002-9939-1962-0137738-6. MR 0137738. —— (1980). "Superalgebras". Pacific J. Math. 86 (1): 93–98. doi:10.2140/pjm.1980.86.93. —— (1994). "A quasi-commutative ring that is not neo-commutative". Proc. Amer. Math. Soc. 122: 321. doi:10.1090/s0002-9939-1994-1257114-3. MR 1257114. "The forms x+32y2 and x+64y2 ". Proc. Amer. Math. Soc. 131: 2299–2300. 2003. doi:10.1090/s0002-9939-03-07022-9. MR 1963780. == See also == Kaplansky's theorem on projective modules == Notes == == References == Albert, Nancy E. (2007). "Irving Kaplansky: Some reflections on his early years". Celebratio Mathematica. Retrieved 2025-05-09. Peterson, Ivars. (2013). "A Song about Pi" http://mtarchive.blogspot.com/2013/09/a-song-about-pi.html?m=1 Freund, Peter G. O. Irving Kaplansky and Supersymmetry. arXiv:physics/0703037 Bass, Hyman; Lam, T.Y. (December 2007). "Irving Kaplansky (1917–2006)" (PDF). Notices of the American Mathematical Society. 54 (11): 1477–1493. 
Retrieved 2008-01-05. Kadison, Richard V. (February 2008). "Irving Kaplansky's Role in Mid-Twentieth Century Functional Analysis" (PDF). Notices of the AMS. 55 (2): 216–225. Retrieved 2008-01-05. == External links == O'Connor, John J.; Robertson, Edmund F., "Irving Kaplansky", MacTutor History of Mathematics Archive, University of St Andrews Pearce, Jeremy (July 13, 2006). "Irving Kaplansky, 89, a Pioneer in Mathematical Exploration". The New York Times. p. C15. Retrieved 2008-01-05. Irving Kaplansky + Ternary Quadratic Forms Irving Kaplansky + Lie Superalgebras search on author Irving Kaplansky from Google Scholar
Wikipedia:Irène Gijbels#0
Irène Gijbels is a mathematical statistician at KU Leuven in Belgium, and an expert on nonparametric statistics. She has also collaborated with TopSportLab, a KU Leuven spin-off, on software for risk assessment of sports injuries. == Education and career == Gijbels earned her Ph.D. in 1990 from Limburgs Universitair Centrum. Her dissertation, supervised by Noël Veraverbeke, was Asymptotic Representations under Random Censoring. She joined KU Leuven after postdoctoral research as a Fulbright scholar at the University of North Carolina at Chapel Hill and the Mathematical Sciences Research Institute. == Book == With Jianqing Fan, Gijbels is the author of Local Polynomial Modelling and Its Applications (CRC Press, 1996). == Recognition == Gijbels is an elected member of the International Statistical Institute and the Royal Flemish Academy of Belgium for Science and the Arts, and a fellow of the American Statistical Association and the Institute of Mathematical Statistics. == References == == External links == Irène Gijbels publications indexed by Google Scholar
Wikipedia:Irène Waldspurger#0
Irène Waldspurger is a French mathematician and a researcher at the Research Centre in Mathematics of Decision (CEREMADE), where her research focuses on algorithms for solving phase problems, a class of problems relevant to a large number of imaging techniques used in science and medicine. She is also a professor at Paris Sciences et Lettres University. == Education and career == Waldspurger competed for France in the 2006 International Mathematical Olympiad, winning a bronze medal. Waldspurger was a student of the prestigious École Normale Supérieure, in Paris, France, where she was ranked first at the entrance exam in 2008. She pursued her doctoral research at École Normale Supérieure, working on phase retrieval techniques using wavelet transforms under the supervision of Stéphane Mallat, which she completed in 2015. She then joined the Massachusetts Institute of Technology for a postdoctoral fellowship, before returning to France in 2017 to join the French National Centre for Scientific Research. == Recognition == In 2020, Waldspurger was one of the Peccot Lecturers and Peccot Prize winners of the Collège de France, and won the CNRS Bronze Medal. == References == == External links == Irène Waldspurger
Wikipedia:Isaac Jacob Schoenberg#0
Isaac Jacob Schoenberg (April 21, 1903 – February 21, 1990) was a Romanian-American mathematician, known for his invention of splines. == Life and career == Schoenberg was born in Galați to a Jewish family, the youngest of four children. He studied at the University of Iași, receiving his M.A. in 1922. From 1922 to 1925 he studied at the Universities of Berlin and Göttingen, working on a topic in analytic number theory suggested by Issai Schur. He presented his thesis to the University of Iași, obtaining his Ph.D. in 1926. In Göttingen, he met Edmund Landau, who arranged a visit for Schoenberg to the Hebrew University of Jerusalem in 1928. During this visit, Schoenberg began his work on total positivity and variation-diminishing linear transformations. In 1930, he returned from Jerusalem, and married Landau's daughter Charlotte in Berlin. In 1930, he was awarded a Rockefeller Fellowship, which enabled him to go to the United States, visiting the University of Chicago, Harvard, and the Institute for Advanced Study in Princeton, New Jersey. From 1935, he taught at Swarthmore College and Colby College. In 1941, he was appointed to the faculty at the University of Pennsylvania. During 1943–1945 he was released from U. Penn. in order to perform war work as a mathematician at the Aberdeen Proving Ground. It was during this time that he initiated the work for which he is most famous, the theory of splines. In 1966 he moved to the University of Wisconsin–Madison where he became a member of the Mathematics Research Center. He remained there until he retired in 1973. In 1974 he won a Lester R. Ford Award. == Books == Schoenberg, I. J. (1973), Cardinal Spline Interpolation, Society for Industrial and Applied Mathematics Schoenberg, I. J. (1982), Mathematical time exposures, Mathematical Association of America, ISBN 0-88385-438-4, Unknown ID:loc=82-062766 Schoenberg, I. J. (1988), Selected Papers, Vol.1 and 2 (Ed. C. 
de Boor), Birkhäuser == Papers == He wrote about 175 papers on many disparate subjects. Around 50 of these were on Splines. He also wrote on Approximation theory, the Kakeya problem, Polya frequency functions, and a problem of Edmund Landau. His coauthors included John von Neumann, Hans Rademacher, Theodore Motzkin, George Polya, A. S. Besicovitch, Gábor Szegő, Donald J. Newman, Richard Askey, Bernard Epstein and Carl de Boor. == See also == Perfect spline == References == Schoenberg, Contributions to the problem of approximation of equidistant data by analytic functions, Quart. Appl. Math., vol. 4, pp. 45–99 and 112–141, 1946. == External links == O'Connor, John J.; Robertson, Edmund F., "Isaac Jacob Schoenberg", MacTutor History of Mathematics Archive, University of St Andrews Isaac Jacob Schoenberg at the Mathematics Genealogy Project Schoenberg, Isaac Jacob (HAT = History of Approximation Theory website) Archives Spotlight: The Isaac Jacob Schoenberg Papers
Wikipedia:Isaac Namioka#0
Isaac Namioka (April 25, 1928 – September 25, 2019) was a Japanese-American mathematician who worked in general topology and functional analysis. He was a professor emeritus of mathematics at the University of Washington. He died at home in Seattle on September 25, 2019. == Early life and education == Namioka was born in Tōno, not far from Namioka in the north of Honshu, Japan. When he was young his parents moved farther south, to Himeji. He attended graduate school at the University of California, Berkeley, earning a doctorate in 1956 under the supervision of John L. Kelley. As a graduate student, Namioka married Chinese-American mathematics student Lensey Namioka, later to become a well-known novelist who used Namioka's Japanese heritage in some of her novels. == Career == Namioka taught at Cornell University until 1963, when he moved to the University of Washington. There he was the doctoral advisor to four students. He has over 20 academic descendants, largely through his student Joseph Rosenblatt, who became a professor at the University of Illinois at Urbana–Champaign. == Contributions == Namioka's book Linear Topological Spaces with Kelley has become a "standard text". Although his doctoral work and this book both concerned general topology, his interests later shifted to functional analysis. With Asplund in 1967, Namioka gave one of the first complete proofs of the Ryll-Nardzewski fixed-point theorem. Following his 1974 paper "separate continuity and joint continuity", a Namioka space has come to mean a topological space X with the property that whenever Y is a compact space and f is a function from the Cartesian product of X and Y to a metric space Z that is separately continuous in X and Y, there must exist a dense Gδ set within X whose Cartesian product with Y is a subset of the set of points of continuity of f.
In 1975, Namioka and Phelps established one side of the theorem that a space is an Asplund space if and only if its dual space has the Radon–Nikodým property. The other side was completed in 1978 by Stegall. == Awards and honors == A special issue of the Journal of Mathematical Analysis and Applications was dedicated to Namioka to honor his 80th birthday. In 2012, he became one of the inaugural fellows of the American Mathematical Society. == Selected publications == Books Partially Ordered Linear Topological Spaces (Memoirs of the American Mathematical Society 14, 1957) Linear Topological Spaces (with John L. Kelley, Van Nostrand, 1963; Graduate Texts in Mathematics 36, Springer-Verlag, 1976) Research papers Namioka, I.; Asplund, E. (1967), "A geometric proof of Ryll-Nardzewski's fixed point theorem", Bulletin of the American Mathematical Society, 73 (3): 443–445, doi:10.1090/s0002-9904-1967-11779-8, MR 0209904. Namioka, I. (1974), "Separate continuity and joint continuity", Pacific Journal of Mathematics, 51 (2): 515–531, doi:10.2140/pjm.1974.51.515, MR 0370466. Namioka, I.; Phelps, R. R. (1975), "Banach spaces which are Asplund spaces", Duke Mathematical Journal, 42 (4): 735–750, doi:10.1215/s0012-7094-75-04261-1, MR 0390721. == References ==
Wikipedia:Isaak Russman#0
Isaak Borisovich Russman (Russian: Исаак Борисович Руссман; 7 March 1938 – 11 July 2005) was a Russian mathematician and economist. He studied and worked at Voronezh State University. Isaak Borisovich Russman was born on March 7, 1938, in Voronezh. Although his childhood dream was studying astronomy, in 1955 he entered Voronezh State University where he studied in the Physics and Mathematics department. Starting in 1969 and until the end of his life, Russman conducted research in operations research at the same institution where he had studied. Russman taught discrete mathematics, the theory of algorithms and mathematical logic, probability theory, economic cybernetics, and systems analysis. Russman conducted research on topics related to simulation-targeted systems (economic, social, institutional), quality assessment, and building valuation models. He is famous for creating the concept "difficulty in achieving the objectives", a concept which is used to assess the value of a certain specified requirement. This approach was usefully applied to models of control and management of organizational systems and portfolio optimization models. == External links == In memory of Isaak Russman (Russian language) Scientific contributions (translated to English via Google)
Wikipedia:Isabel Dotti#0
Isabel Graciela Dotti de Miatello (born 1947) is an Argentine mathematician specializing in the connections between group theory and differential topology, including the theory of complex nilmanifolds, nilpotent Lie groups, hypercomplex manifolds, and hyperkähler manifolds. She is a professor in the Faculty of Mathematics, Astronomy and Physics of the National University of Córdoba. == Education and career == Dotti was born on 21 June 1947 in Freyre, a town in San Justo Department, Córdoba. She earned a bachelor's degree in mathematics in 1970 at the National University of Córdoba, and completed a doctorate at Rutgers University in the United States in 1976. Her dissertation, Extension of Actions on Stiefel Manifolds, was supervised by Glen Bredon. After temporary positions at the Federal University of Pernambuco in Brazil, at Rutgers, and at the National University of Córdoba, she obtained a permanent faculty position at the National University of Córdoba in 1983. == Recognition == Dotti is a numbered member of the National Academy of Sciences of Argentina, elected in 2007. == References == == External links == Home page
Wikipedia:Isabella Bashmakova#0
Isabella Grigoryevna Bashmakova (Russian: Изабелла Григорьевна Башмакова, 1921–2005) was a Russian historian of mathematics. In 2001, she was a recipient of the Alexander Koyré Medal of the International Academy of the History of Science. == Education and career == Bashmakova was born on January 3, 1921, in Rostov-on-Don, to a family of Armenian descent. Her father, Grigory Georgiyevich Bashmakov, was a lawyer. Her family moved to Moscow in 1932. She began studies in the Faculty of Mechanics and Mathematics at Moscow State University in 1938, but was evacuated from Moscow during World War II, during which she served as a nurse in Samarkand. She completed a Ph.D. in 1948, under the supervision of Sofya Yanovskaya. She continued at Moscow State as an assistant professor, and in 1949 was promoted to associate professor. In 1950 her husband, mathematician Andrei I. Lapin, was arrested for his opposition to Lysenkoism, but in part due to Bashmakova's efforts he was freed again in 1952. Bashmakova completed her D.Sc. in 1961 and became a full professor in 1968. She retired and became a professor emeritus in 1999, and died on July 17, 2005, while vacationing in Zvenigorod. == Contributions == Bashmakova's dissertation concerned the history of definitions of integers and rational numbers, from Euclid and Eudoxus to Zolotarev, Dedekind, and Kronecker. Her later research contributions include a comparison of the tools used by Diophantus to solve Diophantine equations, versus more modern methods; following a line of thought suggested by Jacobi, she argued that Diophantus' methods were more sophisticated than previously thought, but that their sophistication had been hidden by the emphasis on specific cases in Diophantus's writings. She used complex numbers to reinterpret the geometric transformations studied by François Viète. She has also studied the history of algebraic curves, and translated the works of Fermat into Russian.
== Books == Bashmakova's books include: Диофант и диофантовы уравнения, Nauka, 1972; Diophant und diophantische Gleichungen, Birkhäuser, 1974; Diophantus and Diophantine Equations, Mathematical Association of America, 1997. Становление алгебры: Из истории математических идей [The development of algebra: From the history of mathematical ideas], Znanie, 1979. История диофантова анализа: От Диофанта до Ферма [History of Diophantine analysis: From Diophantus to Fermat], Nauka, 1984. The beginnings and evolution of algebra (with Galina Smirnova), Mathematical Association of America, 2000. == Recognition == In 1986, the International Congress of Mathematicians initially published a list of speakers that included no women. After protests, the executive committee of the congress invited six women to speak at the congress. Bashmakova was one of those six; she was unable to travel to the congress, but her paper appears in its proceedings. The International Academy of the History of Science elected her as a corresponding member in 1966 and a full member in 1971. She was awarded honorary diplomas in 1971, 1976, and 1980. In 2001, she was awarded the Alexander Koyré Medal of the International Academy of the History of Science. In 2011, a conference of the Russian Academy of Sciences was dedicated in her honor. == References ==
Wikipedia:Isabella Novik#0
Isabella Novik (Hebrew: איזבלה נוביק; born 1971) is a mathematician who works at the University of Washington as the Robert R. & Elaine F. Phelps Professor in Mathematics. Her research concerns algebraic combinatorics and polyhedral combinatorics. Novik earned her Ph.D. from the Hebrew University of Jerusalem in 1999, under the supervision of Gil Kalai. Her doctoral dissertation, Face Numbers of Polytopes and Manifolds, won the Haim Nessyahu Prize in Mathematics, awarded by the Israel Mathematical Union for the best annual doctoral dissertations in mathematics. She was an Alfred P. Sloan Research Fellow for 2006–2008, and was elected as a member of the 2017 class of Fellows of the American Mathematical Society "for contributions to algebraic and geometric combinatorics". == References ==
Wikipedia:Isaiah Kantor#0
Isaiah Kantor (or Issai Kantor, or Isai Lʹvovich Kantor) (1936–2006) was a mathematician who introduced the Kantor–Koecher–Tits construction, and the Kantor double, a Jordan superalgebra constructed from a Poisson algebra. == References == Kantor, I. L.; Solodovnikov, A. S. (1989) [1973], Hypercomplex numbers, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96980-0, MR 0347870
Wikipedia:Isidor Natanson#0
Isidor Pavlovich Natanson (Russian: Исидор Павлович Натансон; February 8, 1906 in Zurich – July 3, 1964 in Leningrad) was a Swiss-born Soviet mathematician known for contributions to real analysis and constructive function theory, in particular, for his textbooks on these subjects. His son, Garal'd Natanson (1930–2003), was also a noted mathematician. == Selected publications == Natanson, I. P. (1955). Theory of functions of a real variable. New York: Frederick Ungar Publishing Co. MR 0067952. Konstruktive Funktionentheorie. Berlin: Akademie Verlag. 1955. Natanson, I. P. (1964). Constructive function theory. Vol. I. Uniform approximation. Translated by Alexis N. Obolensky. New York: Frederick Ungar Publishing Co. MR 0196340. Zbl 0133.31101. Natanson, I. P. (1965). Constructive function theory. Vol. II. Approximation in mean. New York: Frederick Ungar Publishing Co. MR 0196341. Zbl 0136.36302. Natanson, I. P. (1965). Constructive function theory. Vol. III. Interpolation and approximation quadratures. New York: Ungar Publishing Co. MR 0196342. Zbl 0178.39701. == References == == External links == "Isidor Pavlovich Natanson" (in Russian). St. Petersburg University. Isidor Natanson at the Mathematics Genealogy Project
Wikipedia:Islamic geometric patterns#0
Islamic geometric patterns are one of the major forms of Islamic ornament, which tends to avoid using figurative images, as many Islamic traditions hold that depicting important religious figures is forbidden. The geometric designs in Islamic art are often built on combinations of repeated squares and circles, which may be overlapped and interlaced, as can arabesques (with which they are often combined), to form intricate and complex patterns, including a wide variety of tessellations. These may constitute the entire decoration, may form a framework for floral or calligraphic embellishments, or may retreat into the background around other motifs. The complexity and variety of patterns used evolved from simple stars and lozenges in the ninth century, through a variety of 6- to 13-point patterns by the 13th century, and finally came to include 14- and 16-point stars in the sixteenth century. Geometric patterns occur in a variety of forms in Islamic art and architecture. These include kilim carpets, Persian girih and Moroccan zellij tilework, muqarnas decorative vaulting, jali pierced stone screens, ceramics, leather, stained glass, woodwork, and metalwork. Interest in Islamic geometric patterns is increasing in the West, both among craftsmen and artists like M. C. Escher in the twentieth century, and among mathematicians and physicists such as Peter J. Lu and Paul Steinhardt. == Background == === Islamic decoration === Islamic geometric patterns are derived from simpler designs used in earlier cultures: Greek, Roman, and Sasanian. They are one of three forms of Islamic decoration, the others being the arabesque based on curving and branching plant forms, and Islamic calligraphy; all three are frequently used together. From the 9th century onward, a range of sophisticated geometric patterns based on polygonal tessellation began to appear in Islamic art, eventually becoming dominant.
Islamic art mostly avoids figurative images to avoid becoming objects of worship. This aniconism in Islamic culture caused artists to explore non-figural art, and created a general aesthetic shift toward mathematically based decoration. === Purpose === Authors such as Keith Critchlow argue that Islamic patterns are created to lead the viewer to an understanding of the underlying reality, rather than being mere decoration, as writers interested only in pattern sometimes imply. In Islamic culture, the patterns are believed to be the bridge to the spiritual realm, the instrument to purify the mind and the soul. David Wade states that "Much of the art of Islam, whether in architecture, ceramics, textiles or books, is the art of decoration – which is to say, of transformation." Wade argues that the aim is to transfigure, turning mosques "into lightness and pattern", while "the decorated pages of a Qur’an can become windows onto the infinite." Against this, Doris Behrens-Abouseif states in her book Beauty in Arabic Culture that a "major difference" between the philosophical thinking of Medieval Europe and the Islamic world is exactly that the concepts of the good and the beautiful are separated in Arabic culture. She argues that beauty, whether in poetry or in the visual arts, was enjoyed "for its own sake, without commitment to religious or moral criteria". Styles of Islamic geometric decoration == Pattern formation == Many Islamic designs are built on squares and circles, typically repeated, overlapped and interlaced to form intricate and complex patterns. A recurring motif is the 8-pointed star, often seen in Islamic tilework; it is made of two squares, one rotated 45 degrees with respect to the other. The fourth basic shape is the polygon, including pentagons and octagons. All of these can be combined and reworked to form complicated patterns with a variety of symmetries including reflections and rotations. 
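The two-squares construction described above (an 8-pointed star made of one square rotated 45 degrees against another) can be sketched computationally. The following is a minimal illustration, not a reconstruction of any historical method; the helper name is ours:

```python
import math

def square_vertices(rotation=0.0):
    """Vertices of a square inscribed in the unit circle, rotated by `rotation` radians."""
    return [(math.cos(rotation + k * math.pi / 2),
             math.sin(rotation + k * math.pi / 2)) for k in range(4)]

# The classic 8-pointed star: two squares, one rotated 45 degrees (pi/4).
star = square_vertices() + square_vertices(math.pi / 4)

# The 8 star points are evenly spaced on the circumscribing circle:
angles = sorted(math.atan2(y, x) % (2 * math.pi) for x, y in star)
gaps = [round(b - a, 6) for a, b in zip(angles, angles[1:])]
print(gaps)  # seven equal gaps of pi/4 ≈ 0.785398
```

The even angular spacing is what allows the motif to be drawn with ruler and compass alone and to tessellate with the residual shapes between stars.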
Such patterns can be seen as mathematical tessellations, which can extend indefinitely and thus suggest infinity. They are constructed on grids that require only ruler and compass to draw. Artist and educator Roman Verostko argues that such constructions are in effect algorithms, making Islamic geometric patterns forerunners of modern algorithmic art. The circle symbolizes unity and diversity in nature, and many Islamic patterns are drawn starting with a circle. For example, the decoration of the 15th-century mosque in Yazd, Persia is based on a circle, divided into six by six circles drawn around it, all touching at its centre and each touching its two neighbours' centres to form a regular hexagon. On this basis is constructed a six-pointed star surrounded by six smaller irregular hexagons to form a tessellating star pattern. This forms the basic design which is outlined in white on the wall of the mosque. That design, however, is overlaid with an intersecting tracery in blue around tiles of other colours, forming an elaborate pattern that partially conceals the original and underlying design. A similar design forms the logo of the Mohammed Ali Research Center. One of the early Western students of Islamic patterns, Ernest Hanbury Hankin, defined a "geometrical arabesque" as a pattern formed "with the help of construction lines consisting of polygons in contact." He observed that many different combinations of polygons can be used as long as the residual spaces between the polygons are reasonably symmetrical. For example, a grid of octagons in contact has squares (of the same side as the octagons) as the residual spaces. Every octagon is the basis for an 8-point star, as seen at Akbar's tomb, Sikandra (1605–1613). Hankin considered the "skill of the Arabian artists in discovering suitable combinations of polygons .. almost astounding." 
He further records that if a star occurs in a corner, exactly one quarter of it should be shown; if along an edge, exactly one half of it. The Topkapı Scroll, made in Timurid dynasty Iran in the late-15th century or beginning of the 16th century, contains 114 patterns including coloured designs for girih tilings and muqarnas quarter or semidomes. The mathematical properties of the decorative tile and stucco patterns of the Alhambra palace in Granada, Spain have been extensively studied. Some authors have claimed on dubious grounds to have found most or all of the 17 wallpaper groups there. Moroccan geometric woodwork from the 14th to 19th centuries makes use of only 5 wallpaper groups, mainly p4mm and c2mm, with p6mm and p2mm occasionally and p4gm rarely; it is claimed that the "Hasba" (measure) method of construction, which starts with n-fold rosettes, can however generate all 17 groups. Methods of construction == Evolution == === Early stage === The earliest geometrical forms in Islamic art were occasional isolated geometric shapes such as 8-pointed stars and lozenges containing squares. These date from 836 in the Great Mosque of Kairouan, Tunisia, and since then have spread all across the Islamic world. === Middle stage === The next development, marking the middle stage of Islamic geometric pattern usage, was of 6- and 8-point stars, which appear in 879 at the Ibn Tulun Mosque, Cairo, and then became widespread. A wider variety of patterns were used from the 11th century. Abstract 6- and 8-point shapes appear in the Tower of Kharaqan at Qazvin, Persia in 1067, and the Al-Juyushi Mosque, Egypt in 1085, again becoming widespread from there, though 6-point patterns are rare in Turkey. In 1086, 7- and 10-point girih patterns (with heptagons, 5- and 6-pointed stars, triangles and irregular hexagons) appear in the Jameh Mosque of Isfahan. 10-point girih became widespread in the Islamic world, except in the Spanish Al-Andalus. 
Soon afterwards, sweeping 9-, 11-, and 13-point girih patterns were used in the Barsian Mosque, also in Persia, in 1098; these, like 7-point geometrical patterns, are rarely used outside Persia and central Asia. Finally, marking the end of the middle stage, 8- and 12-point girih rosette patterns appear in the Alâeddin Mosque at Konya, Turkey in 1220, and in the Abbasid palace in Baghdad in 1230, going on to become widespread across the Islamic world. === Late stage === The beginning of the late stage is marked by the use of simple 16-point patterns at the Hasan Sadaqah mausoleum in Cairo in 1321, and in the Alhambra in Spain in 1338–1390. These patterns are rarely found outside these two regions. More elaborate combined 16-point geometrical patterns are found in the Sultan Hassan complex in Cairo in 1363, but rarely elsewhere. Finally, 14-point patterns appear in the Jama Masjid at Fatehpur Sikri in India in 1571–1596, but in few other places. == Artforms == Several artforms in different parts of the Islamic world make use of geometric patterns. These include ceramics, girih strapwork, jali pierced stone screens, kilim rugs, leather, metalwork, muqarnas vaulting, shakaba stained glass, woodwork, and zellij tiling. === Ceramics === Ceramics lend themselves to circular motifs, whether radial or tangential. Bowls or plates can be decorated inside or out with radial stripes; these may be partly figurative, representing stylised leaves or flower petals, while circular bands can run around a bowl or jug. Patterns of these types were employed on Islamic ceramics from the Ayyubid period, 13th century. Radially symmetric flowers with, say, 6 petals lend themselves to increasingly stylised geometric designs which can combine geometric simplicity with recognisably naturalistic motifs, brightly coloured glazes, and a radial composition that ideally suits circular crockery. Potters often chose patterns suited to the shape of the vessel they were making. 
Thus an unglazed earthenware water flask from Aleppo in the shape of a vertical circle (with handles and neck above) is decorated with a ring of moulded braiding around an Arabic inscription with a small 8-petalled flower at the centre. === Girih tilings and woodwork === Girih are elaborate interlacing patterns formed of five standardized shapes. The style is used in Persian Islamic architecture and also in decorative woodwork. Girih designs are traditionally made in different media including cut brickwork, stucco, and mosaic faience tilework. In woodwork, especially in the Safavid period, it could be applied either as lattice frames, left plain or inset with panels such as of coloured glass; or as mosaic panels used to decorate walls and ceilings, whether sacred or secular. In architecture, girih forms decorative interlaced strapwork surfaces from the 15th century to the 20th century. Most designs are based on a partially hidden geometric grid which provides a regular array of points; this is made into a pattern using 2-, 3-, 4-, and 6-fold rotational symmetries which can fill the plane. The visible pattern superimposed on the grid is also geometric, with 6-, 8-, 10- and 12-pointed stars and a variety of convex polygons, joined by straps which typically seem to weave over and under each other. The visible pattern does not coincide with the underlying construction lines of the tiling. The visible patterns and the underlying tiling represent a bridge linking the invisible to the visible, analogous to the "epistemological quest" in Islamic culture, the search for the nature of knowledge. === Jali === Jali are pierced stone screens with regularly repeating patterns. They are characteristic of Indo-Islamic architecture, for example in the Mughal dynasty buildings at Fatehpur Sikri and the Taj Mahal. The geometric designs combine polygons such as octagons and pentagons with other shapes such as 5- and 8-pointed stars. 
The patterns emphasized symmetries and suggested infinity by repetition. Jali functioned as windows or room dividers, providing privacy but allowing in air and light. Jali forms a prominent element of the architecture of India. The use of perforated walls has declined with modern building standards and the need for security. Modern, simplified jali walls, for example made with pre-moulded clay or cement blocks, have been popularised by the architect Laurie Baker. Pierced windows in girih style are sometimes found elsewhere in the Islamic world, such as in windows of the Mosque of Ibn Tulun in Cairo. === Kilim === A kilim is an Islamic flatwoven carpet (without a pile), whether for household use or a prayer mat. The pattern is made by winding the weft threads back over the warp threads when a colour boundary is reached. This technique leaves a gap or vertical slit, so kilims are sometimes called slit-woven textiles. Kilims are often decorated with geometric patterns with 2- or 4-fold mirror or rotational symmetries. Because weaving uses vertical and horizontal threads, curves are difficult to generate, and patterns are accordingly formed mainly with straight edges. Kilim patterns are often characteristic of specific regions. Kilim motifs are often symbolic as well as decorative. For example, the wolf's mouth or wolf's foot motif (Turkish: Kurt Aǧzi, Kurt İzi) expresses the tribal weavers' desires for protection of their families' flocks from wolves. === Leather === Islamic leather is often embossed with patterns similar to those already described. Leather book covers, starting with the Quran where figurative artwork was excluded, were decorated with a combination of kufic script, medallions and geometric patterns, typically bordered by geometric braiding. === Metalwork === Metal artefacts share the same geometric designs that are used in other forms of Islamic art. 
However, in the view of Hamilton Gibb, the emphasis differs: geometric patterns tend to be used for borders, and if they are in the main decorative area they are most often used in combination with other motifs such as floral designs, arabesques, animal motifs, or calligraphic script. Geometric designs in Islamic metalwork can form a grid decorated with these other motifs, or they can form the background pattern. Even where metal objects such as bowls and dishes do not seem to have geometric decoration, still the designs, such as arabesques, are often set in octagonal compartments or arranged in concentric bands around the object. Both closed designs (which do not repeat) and open or repetitive patterns are used. Patterns such as interlaced six-pointed stars were especially popular from the 12th century. Eva Baer notes that while this design was essentially simple, it was elaborated by metalworkers into intricate patterns interlaced with arabesques, sometimes organised around further basic Islamic patterns, such as the hexagonal pattern of six overlapping circles. === Muqarnas === Muqarnas are elaborately carved ceilings to semi-domes, often used in mosques. They are typically made of stucco (and thus do not have a structural function), but can also be of wood, brick, and stone. They are characteristic of Islamic architecture of the Middle Ages from Spain and Morocco in the west to Persia in the east. Architecturally they form multiple tiers of squinches, diminishing in size as they rise. They are often elaborately decorated. === Stained glass === Geometrically patterned stained glass is used in a variety of settings in Islamic architecture. It is found in the surviving summer residence of the Palace of Shaki Khans, Azerbaijan, constructed in 1797. Patterns in the "shabaka" windows include 6-, 8-, and 12-point stars. These wood-framed decorative windows are distinctive features of the palace's architecture. 
Shabaka are still constructed the traditional way in Sheki in the 21st century. Traditions of stained glass set in wooden frames (not lead as in Europe) survive in workshops in Iran as well as Azerbaijan. Glazed windows set in stucco arranged in girih-like patterns are found both in Turkey and the Arab lands; a late example, without the traditional balance of design elements, was made in Tunisia for the International Colonial Exhibition in Amsterdam in 1883. The old city of Sana'a in Yemen has stained glass windows in its tall buildings. === Zellij === Zellij (Arabic: الزَّلِيْج) is geometric tilework with glazed terracotta tiles set into plaster, forming colourful mosaic patterns including regular and semiregular tessellations. The tradition is characteristic of Morocco, but is also found in Moorish Spain. Zellij is used to decorate mosques, public buildings and wealthy private houses. === Illustrations === Media used for Islamic geometric patterns == Outside Islamic art == === In Western culture === It is sometimes supposed in Western society that mistakes in repetitive Islamic patterns such as those on carpets were intentionally introduced as a show of humility by artists who believed only Allah can produce perfection, but this theory is disputed. Major Western collections hold many objects of widely varying materials with Islamic geometric patterns. The Victoria and Albert Museum in London holds at least 283 such objects, of materials including wallpaper, carved wood, inlaid wood, tin- or lead-glazed earthenware, brass, stucco, glass, woven silk, ivory, and pen or pencil drawings. The Metropolitan Museum of Art in New York has among other relevant holdings 124 mediaeval (1000–1400 A.D.) objects bearing Islamic geometric patterns, including a pair of Egyptian minbar (pulpit) doors almost 2 m high in rosewood and mulberry inlaid with ivory and ebony; and an entire mihrab (prayer niche) from Isfahan, decorated with polychrome mosaic, and weighing over 2,000 kg.
Islamic decoration and craftsmanship had a significant influence on Western art when Venetian merchants brought goods of many types back to Italy from the 14th century onwards. The Dutch artist M. C. Escher was inspired by the Alhambra's intricate decorative designs to study the mathematics of tessellation, transforming his style and influencing the rest of his artistic career. In his own words it was "the richest source of inspiration I have ever tapped." === Influence on the sciences === Cultural organisations such as the Mathematical Sciences Research Institute and the Institute for Advanced Study run events on geometric patterns and related aspects of Islamic art. In 2013 the Istanbul Center of Design and the Ensar Foundation ran what they claimed was the first ever symposium of Islamic Arts and Geometric Patterns, in Istanbul. The panel included the experts on Islamic geometric pattern Carol Bier, Jay Bonner, Eric Broug, Hacali Necefoğlu and Reza Sarhangi. In Britain, The Prince's School of Traditional Arts runs a range of courses in Islamic art including geometry, calligraphy, and arabesque (vegetal forms), tile-making, and plaster carving. Computer graphics and computer-aided manufacturing make it possible to design and produce Islamic geometric patterns effectively and economically. Craig S. Kaplan explains and illustrates in his Ph.D. thesis how Islamic star patterns can be generated algorithmically. Two physicists, Peter J. Lu and Paul Steinhardt, attracted controversy in 2007 by claiming that girih designs such as that used on the Darb-e Imam shrine in Isfahan were able to create quasi-periodic tilings resembling those discovered by Roger Penrose in 1973. They showed that rather than the traditional ruler and compass construction, it was possible to create girih designs using a set of five "girih tiles", all equilateral polygons, secondarily decorated with lines (for the strapwork). 
In 2016, Ahmad Rafsanjani described the use of Islamic geometric patterns from tomb towers in Iran to create auxetic materials from perforated rubber sheets. These are stable in either a contracted or an expanded state, and can switch between the two, which might be useful for surgical stents or for spacecraft components. When a conventional material is stretched along one axis, it contracts along other axes (at right angles to the stretch). But auxetic materials expand at right angles to the pull. The internal structure that enables this unusual behaviour is inspired by two of the 70 Islamic patterns that Rafsanjani noted on the tomb towers. == Notes == == References == == External links == Museum with no Frontiers: Geometric Decoration Victoria and Albert Museum: Teachers' resource: Maths and Islamic art & design
Wikipedia:Ismail Mustafa al-Falaki#0
Ismail Mustafa, Ismail Effendi Mustafa, Ismail Bey Mustapha, Ismail Mustafa al-Falaki or Ismail Pasha al-Falaki (1825 – 27 July 1901) was an Egyptian astronomer and mathematician. Effendi, Bey and Pasha correspond to the different ranks he attained over his career; "al-Falaki", literally meaning "the astronomer", was added to his name. He was born in Cairo to a family of Turkish origin and was educated in Paris, France. == Scientific career == Egyptian astronomy has ancient roots, which were revived in the 19th century by the modernist impetus of Muhammad Ali, who founded an observatory in Sabtieh, in Cairo's Boulaq district, and was keen to keep it abreast of ongoing progress in the science. The staff at this establishment were recruited from among the best students of the Boulaq École polytechnique (Polytechnic), headed by Charles Joseph Lambert, a French engineer. Thus, Ismail Mustafa entered the Observatory after his technical studies. Charles Joseph Lambert, wanting to give greater impetus to the intellectual movement germinating in the country and to respond to the aspirations of Viceroy Abbas I, easily obtained the sovereign's consent to send three young engineers, chosen from among the best graduates of the Bulaq École polytechnique, to Europe in 1850. Mahmoud Hamdi (Mahmoud Pasha al-Falaki), Ismail Mustafa and Hussein Ibrahim were appointed to complete their studies in France. Mahmoud and Ismail devoted themselves to the in-depth study of astronomy, and their erudition earned them enduring public recognition under the title of al-Falaki (the astronomer). After completing his practical and theoretical studies, Ismail Mustafa was given the special mission of overseeing the construction of astronomical instruments, so as to ensure the future functioning and repair of the Egyptian Observatory's devices.
To this end, he devoted himself for an entire year to the study of the construction and repair of precision instruments in Brunner's workshops in Paris. In 1858, a Technical Commission was set up to continue the cadastre work begun under Muhammad Ali by means of the Kassaba, adopting the procedures instituted in Europe. This Commission suggested to Viceroy Mohammed Sa'id Pasha the idea of building geodetic instruments, which were ordered in France. While Mahmoud al-Falaki directed the work on the general map in Egypt, the viceroy entrusted Ismail with studying, in Europe, the precision apparatus, calibrated against the metre and intended to measure the geodesic bases, which had already been built by Jean Brunner in Paris. Ismail Mustafa's task was to carry out the experiments needed to determine the expansion coefficients of the two rules, of platinum and of brass, and to compare the Egyptian standard with a known standard. The Spanish standard designed by Carlos Ibáñez e Ibáñez de Ibero and Frutos Saavedra Meneses was chosen for this purpose, as it had served as a model for the construction of the Egyptian standard. In addition, the Spanish standard had been compared with Borda's double-toise N° 1, which served as a comparison module for the measurement of all geodesic bases in France. On Ismail Mustafa's return to Egypt after 14 years in Europe, the Khedive Isma'il Pasha, heeding the advice that Urbain Le Verrier had given his predecessor to put the Boulaq Observatory on the same level as similar establishments in Europe, instructed him to set up a new observatory, which was established in 1868 at Abbassia and would later be transferred to Helwan in 1903. Ismail Mustafa al-Falaki took charge of the Abbassia Observatory, which became the Khedival Observatory.
In 1873 Ismail Mustafa was delegated to the International Statistical Congress in Moscow, where the Tsar conferred on him the rank of Commander in the Imperial Order of Saint Anna. In 1883 he was appointed director of the École polytechnique. He was also appointed director of the School of Land Surveying, which he had founded. He taught cosmography, geodesy and astronomy at the Military Academy and in the two schools he directed. He was the author of books in Arabic, including an elementary treatise on astronomy and the first volume of a larger work on the same subject and on geodesy. He retired in 1886. Until his death he published Arabic almanacs and European calendars on behalf of the Egyptian state. In 1899, he was awarded the insignia of Grand Officer in the Imperial Order of the Medjidie. == Legacy == After the United States in the Americas and Spain in Europe, Egypt was the first country in Africa to use a geodetic standard calibrated against the metre. The history of the metre reveals that it was then chosen as an international scientific unit of length by the European Arc Measurement, which would later become the International Association of Geodesy. The inspiration for the creation of this association came to Johann Jacob Baeyer following the measurement of the Struve Geodetic Arc. In 1954, the connection of the southerly extension of the Struve Arc with an arc running north from South Africa through Egypt brought the course of a major meridian arc back to the land where Eratosthenes had founded geodesy. == References == == Bibliography ==
Wikipedia:Isomorphism class#0
In mathematics, an isomorphism is a structure-preserving mapping or morphism between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word is derived from Ancient Greek ἴσος (isos) 'equal' and μορφή (morphe) 'form, shape'. The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may often be identified. In mathematical jargon, one says that two objects are the same up to an isomorphism. A common example where isomorphic structures cannot be identified is when the structures are substructures of a larger one. For example, all subspaces of dimension one of a vector space are isomorphic and cannot be identified. An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number p, all fields with p elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique. The term isomorphism is mainly used for algebraic structures and categories. In the case of algebraic structures, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: An isometry is an isomorphism of metric spaces. A homeomorphism is an isomorphism of topological spaces. 
A diffeomorphism is an isomorphism of spaces equipped with a differential structure, typically differentiable manifolds. A symplectomorphism is an isomorphism of symplectic manifolds. A permutation is an automorphism of a set. In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations. Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. == Examples == === Logarithm and exponential === Let R + {\displaystyle \mathbb {R} ^{+}} be the multiplicative group of positive real numbers, and let R {\displaystyle \mathbb {R} } be the additive group of real numbers. The logarithm function log : R + → R {\displaystyle \log :\mathbb {R} ^{+}\to \mathbb {R} } satisfies log ⁡ ( x y ) = log ⁡ x + log ⁡ y {\displaystyle \log(xy)=\log x+\log y} for all x , y ∈ R + , {\displaystyle x,y\in \mathbb {R} ^{+},} so it is a group homomorphism. The exponential function exp : R → R + {\displaystyle \exp :\mathbb {R} \to \mathbb {R} ^{+}} satisfies exp ⁡ ( x + y ) = ( exp ⁡ x ) ( exp ⁡ y ) {\displaystyle \exp(x+y)=(\exp x)(\exp y)} for all x , y ∈ R , {\displaystyle x,y\in \mathbb {R} ,} so it too is a homomorphism. The identities log ⁡ exp ⁡ x = x {\displaystyle \log \exp x=x} and exp ⁡ log ⁡ y = y {\displaystyle \exp \log y=y} show that log {\displaystyle \log } and exp {\displaystyle \exp } are inverses of each other. Since log {\displaystyle \log } is a homomorphism that has an inverse that is also a homomorphism, log {\displaystyle \log } is an isomorphism of groups, i.e., R + ≅ R {\displaystyle \mathbb {R} ^{+}\cong \mathbb {R} } via the isomorphism log ⁡ x {\displaystyle \log x} . The log {\displaystyle \log } function is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. 
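The homomorphism and inverse identities above can be checked numerically. A quick sketch with Python's standard math module (the specific values 2 and 8 are illustrative):

```python
import math

# log turns multiplication into addition; exp turns addition into multiplication.
x, y = 2.0, 8.0
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))
assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))

# The two maps are mutually inverse, so log is a group isomorphism
# from (R+, *) to (R, +).
assert math.isclose(math.exp(math.log(x)), x)
assert math.isclose(math.log(math.exp(y)), y)

# Historical use: multiply via addition of logarithms.
product = math.exp(math.log(x) + math.log(y))
print(product)  # ≈ 16, up to floating-point rounding
```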
This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale. === Integers modulo 6 === Consider the group ( Z 6 , + ) , {\displaystyle (\mathbb {Z} _{6},+),} the integers from 0 to 5 with addition modulo 6. Also consider the group ( Z 2 × Z 3 , + ) , {\displaystyle \left(\mathbb {Z} _{2}\times \mathbb {Z} _{3},+\right),} the ordered pairs where the x coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme: ( 0 , 0 ) ↦ 0 ( 1 , 1 ) ↦ 1 ( 0 , 2 ) ↦ 2 ( 1 , 0 ) ↦ 3 ( 0 , 1 ) ↦ 4 ( 1 , 2 ) ↦ 5 {\displaystyle {\begin{alignedat}{4}(0,0)&\mapsto 0\\(1,1)&\mapsto 1\\(0,2)&\mapsto 2\\(1,0)&\mapsto 3\\(0,1)&\mapsto 4\\(1,2)&\mapsto 5\\\end{alignedat}}} or in general ( a , b ) ↦ ( 3 a + 4 b ) mod 6. {\displaystyle (a,b)\mapsto (3a+4b)\mod 6.} For example, ( 1 , 1 ) + ( 1 , 0 ) = ( 0 , 1 ) , {\displaystyle (1,1)+(1,0)=(0,1),} which translates in the other system as 1 + 3 = 4. {\displaystyle 1+3=4.} Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. More generally, the direct product of two cyclic groups Z m {\displaystyle \mathbb {Z} _{m}} and Z n {\displaystyle \mathbb {Z} _{n}} is isomorphic to ( Z m n , + ) {\displaystyle (\mathbb {Z} _{mn},+)} if and only if m and n are coprime, per the Chinese remainder theorem. 
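The correspondence (a, b) ↦ (3a + 4b) mod 6 can be verified mechanically. A brief Python sketch (an illustration, not from the article) checks that the map is a bijection and that it respects the two group operations:

```python
from itertools import product

def phi(a, b):
    """The map (a, b) ↦ (3a + 4b) mod 6 from Z2 × Z3 to Z6."""
    return (3 * a + 4 * b) % 6

pairs = list(product(range(2), range(3)))  # the six elements of Z2 × Z3

# Bijective: the six pairs hit all six residues exactly once.
assert sorted(phi(a, b) for a, b in pairs) == list(range(6))

# Homomorphism: phi(u + v) == phi(u) + phi(v), with componentwise
# addition mod 2 and mod 3 on the left, addition mod 6 on the right.
for (a1, b1), (a2, b2) in product(pairs, pairs):
    u_plus_v = ((a1 + a2) % 2, (b1 + b2) % 3)
    assert phi(*u_plus_v) == (phi(a1, b1) + phi(a2, b2)) % 6
```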
=== Relation-preserving isomorphism === If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S, then an isomorphism from X to Y is a bijective function f : X → Y {\displaystyle f:X\to Y} such that: S ⁡ ( f ( u ) , f ( v ) ) if and only if R ⁡ ( u , v ) {\displaystyle \operatorname {S} (f(u),f(v))\quad {\text{ if and only if }}\quad \operatorname {R} (u,v)} S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, if R is an ordering ≤ and S an ordering ⊑ , {\displaystyle \scriptstyle \sqsubseteq ,} then an isomorphism from X to Y is a bijective function f : X → Y {\displaystyle f:X\to Y} such that f ( u ) ⊑ f ( v ) if and only if u ≤ v . {\displaystyle f(u)\sqsubseteq f(v)\quad {\text{ if and only if }}\quad u\leq v.} Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism. If X = Y , {\displaystyle X=Y,} then this is a relation-preserving automorphism. == Applications == In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example: Linear isomorphisms between vector spaces; they are specified by invertible matrices. Group isomorphisms between groups; the classification of isomorphism classes of finite groups is an open problem. Ring isomorphisms between rings. Field isomorphisms are the same as ring isomorphisms between fields; their study, and more specifically the study of field automorphisms, is an important part of Galois theory. Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group.
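The relation-preserving condition can be illustrated concretely for finite sets. In the Python sketch below (not part of the article; the helper is_relation_isomorphism is a hypothetical name), the map x ↦ 2^x is checked to be an order isomorphism from ({0, 1, 2, 3}, ≤) to ({1, 2, 4, 8}, divisibility):

```python
def is_relation_isomorphism(f, X, Y, R, S):
    """Check that the map f (a dict on X) is a bijection onto Y with
    S(f(u), f(v)) if and only if R(u, v) for all u, v in X."""
    if sorted(f[x] for x in X) != sorted(Y):
        return False  # not a bijection onto Y
    return all(S(f[u], f[v]) == R(u, v) for u in X for v in X)

X = [0, 1, 2, 3]
Y = [1, 2, 4, 8]
f = {x: 2 ** x for x in X}          # x ↦ 2^x

leq = lambda u, v: u <= v           # ordering on X
divides = lambda u, v: v % u == 0   # ordering on Y

# x ≤ y holds exactly when 2^x divides 2^y, so f is an order isomorphism.
assert is_relation_isomorphism(f, X, Y, leq, divides)
```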
In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the "edge structure" in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from f ( u ) {\displaystyle f(u)} to f ( v ) {\displaystyle f(v)} in H. See graph isomorphism. In order theory, an isomorphism between two partially ordered sets P and Q is a bijective map f {\displaystyle f} from P to Q that preserves the order structure in the sense that for any elements x {\displaystyle x} and y {\displaystyle y} of P we have x {\displaystyle x} less than y {\displaystyle y} in P if and only if f ( x ) {\displaystyle f(x)} is less than f ( y ) {\displaystyle f(y)} in Q. As an example, the set {1,2,3,6} of whole numbers ordered by the is-a-factor-of relation is isomorphic to the set {O, A, B, AB} of blood types ordered by the can-donate-to relation. See order isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's Introduction to Mathematical Philosophy. In cybernetics, the good regulator theorem or Conant–Ashby theorem is stated as "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. 
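The order-theory example above — {1, 2, 3, 6} under is-a-factor-of versus the blood types under can-donate-to — can be checked exhaustively. A Python sketch (not from the article; the donation table encodes the standard ABO compatibility rules as an assumption):

```python
# Divisibility order on {1, 2, 3, 6}.
numbers = [1, 2, 3, 6]
is_factor_of = lambda u, v: v % u == 0

# Can-donate-to order on ABO blood types (Rh factor ignored):
# O donates to everyone, A and B to themselves and AB, AB only to AB.
can_donate = {
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}
donates = lambda u, v: v in can_donate[u]

def preserves_order(f):
    """Check f: numbers → blood types against both relations."""
    return all(donates(f[u], f[v]) == is_factor_of(u, v)
               for u in numbers for v in numbers)

# One explicit order isomorphism: 1 ↦ O, 2 ↦ A, 3 ↦ B, 6 ↦ AB.
assert preserves_order({1: "O", 2: "A", 3: "B", 6: "AB"})
```

Swapping A and B gives a second isomorphism, reflecting the symmetry of the two middle elements in both orders.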
== Category theoretic view == In category theory, given a category C, an isomorphism is a morphism f : a → b {\displaystyle f:a\to b} that has an inverse morphism g : b → a , {\displaystyle g:b\to a,} that is, f g = 1 b {\displaystyle fg=1_{b}} and g f = 1 a . {\displaystyle gf=1_{a}.} Two categories C and D are isomorphic if there exist functors F : C → D {\displaystyle F:C\to D} and G : D → C {\displaystyle G:D\to C} which are mutually inverse to each other, that is, F G = 1 D {\displaystyle FG=1_{D}} (the identity functor on D) and G F = 1 C {\displaystyle GF=1_{C}} (the identity functor on C). === Isomorphism vs. bijective morphism === In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces). == Isomorphism class == Since a composition of isomorphisms is an isomorphism, since the identity is an isomorphism and since the inverse of an isomorphism is an isomorphism, the relation that two mathematical objects are isomorphic is an equivalence relation. An equivalence class given by isomorphisms is commonly called an isomorphism class. === Examples === Examples of isomorphism classes are plentiful in mathematics. Two sets are isomorphic if there is a bijection between them. The isomorphism class of a finite set can be identified with the non-negative integer representing the number of elements it contains. 
The isomorphism class of a finite-dimensional vector space can be identified with the non-negative integer representing its dimension. The classification of finite simple groups enumerates the isomorphism classes of all finite simple groups. The classification of closed surfaces enumerates the isomorphism classes of all connected closed surfaces. Ordinals are essentially defined as isomorphism classes of well-ordered sets (though there are technical issues involved). There are three isomorphism classes of the planar subalgebras of M(2,R), the 2 × 2 real matrices. However, there are circumstances in which the isomorphism class of an object conceals vital information about it. Given a mathematical structure, it is common that two substructures belong to the same isomorphism class. However, the way they are included in the whole structure cannot be studied if they are identified. For example, in a finite-dimensional vector space, all subspaces of the same dimension are isomorphic, but must be distinguished to consider their intersection, sum, etc. In homotopy theory, the fundamental group of a space X {\displaystyle X} at a point p {\displaystyle p} , though technically denoted π 1 ( X , p ) {\displaystyle \pi _{1}(X,p)} to emphasize the dependence on the base point, is often written lazily as simply π 1 ( X ) {\displaystyle \pi _{1}(X)} if X {\displaystyle X} is path connected. The reason for this is that the existence of a path between two points allows one to identify loops at one with loops at the other; however, unless π 1 ( X , p ) {\displaystyle \pi _{1}(X,p)} is abelian this isomorphism is non-unique.
Furthermore, the classification of covering spaces makes strict reference to particular subgroups of π 1 ( X , p ) {\displaystyle \pi _{1}(X,p)} , specifically distinguishing between isomorphic but conjugate subgroups, and therefore amalgamating the elements of an isomorphism class into a single featureless object seriously decreases the level of detail provided by the theory. == Relation to equality == Although there are cases where isomorphic objects can be considered equal, one must distinguish equality and isomorphism. Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure. For example, the sets A = { x ∈ Z ∣ x 2 < 2 } and B = { − 1 , 0 , 1 } {\displaystyle A=\left\{x\in \mathbb {Z} \mid x^{2}<2\right\}\quad {\text{ and }}\quad B=\{-1,0,1\}} are equal; they are merely different representations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets { A , B , C } {\displaystyle \{A,B,C\}} and { 1 , 2 , 3 } {\displaystyle \{1,2,3\}} are not equal since they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is A ↦ 1 , B ↦ 2 , C ↦ 3 , {\displaystyle {\text{A}}\mapsto 1,{\text{B}}\mapsto 2,{\text{C}}\mapsto 3,} while another is A ↦ 3 , B ↦ 2 , C ↦ 1 , {\displaystyle {\text{A}}\mapsto 3,{\text{B}}\mapsto 2,{\text{C}}\mapsto 1,} and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them identical: one can choose an isomorphism between them, but that is a weaker claim than identity and valid only in the context of the chosen isomorphism. 
Also, integers and even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other. On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties. For example, the rational numbers are formally defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. Given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. The real numbers that can be expressed as a quotient of integers form the smallest subfield of the reals. There is thus a unique isomorphism from this subfield of the reals to the rational numbers defined by equivalence classes. == See also == == Notes == == References == == Further reading == Mazur, Barry (12 June 2007), When is one thing equal to some other thing? (PDF) == External links == "Isomorphism", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Isomorphism". MathWorld.
Wikipedia:Isoperimetric dimension#0
In mathematics, the isoperimetric dimension of a manifold is a notion of dimension that tries to capture how the large-scale behavior of the manifold resembles that of a Euclidean space (unlike the topological dimension or the Hausdorff dimension which compare different local behaviors against those of the Euclidean space). In the Euclidean space, the isoperimetric inequality says that of all bodies with the same volume, the ball has the smallest surface area. In other manifolds it is usually very difficult to find the precise body minimizing the surface area, and this is not what the isoperimetric dimension is about. The question we will ask is, what is approximately the minimal surface area, whatever the body realizing it might be. == Formal definition == We say about a differentiable manifold M that it satisfies a d-dimensional isoperimetric inequality if for any open set D in M with a smooth boundary one has area ⁡ ( ∂ D ) ≥ C vol ⁡ ( D ) ( d − 1 ) / d . {\displaystyle \operatorname {area} (\partial D)\geq C\operatorname {vol} (D)^{(d-1)/d}.} The notations vol and area refer to the regular notions of volume and surface area on the manifold, or more precisely, if the manifold has n topological dimensions then vol refers to n-dimensional volume and area refers to (n − 1)-dimensional volume. C here refers to some constant, which does not depend on D (it may depend on the manifold and on d). The isoperimetric dimension of M is the supremum of all values of d such that M satisfies a d-dimensional isoperimetric inequality. == Examples == A d-dimensional Euclidean space has isoperimetric dimension d. This is the well known isoperimetric problem — as discussed above, for the Euclidean space the constant C is known precisely since the minimum is achieved for the ball. An infinite cylinder (i.e. a product of the circle and the line) has topological dimension 2 but isoperimetric dimension 1. 
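A heuristic calculation (not in the original article) shows why the cylinder's isoperimetric dimension cannot exceed 1: take D to be a tube of length L around the cylinder. Its volume grows linearly in L, while its boundary consists of two circles of fixed length.

```latex
% Tube D of length L on the unit cylinder S^1 \times \mathbb{R}:
%   vol(D) = 2\pi L grows with L, but area(\partial D) = 4\pi is constant.
% For the inequality to hold for all L, the exponent cannot be positive:
\operatorname{area}(\partial D) = 4\pi \ \ge\ C\,(2\pi L)^{(d-1)/d}
\quad\text{for all } L
\ \Longrightarrow\ \frac{d-1}{d}\le 0
\ \Longrightarrow\ d\le 1.
```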
Indeed, multiplying any manifold with a compact manifold does not change the isoperimetric dimension (it only changes the value of the constant C). Any compact manifold has isoperimetric dimension 0. It is also possible for the isoperimetric dimension to be larger than the topological dimension. The simplest example is the infinite jungle gym, which has topological dimension 2 and isoperimetric dimension 3. See [1] for pictures and Mathematica code. The hyperbolic plane has topological dimension 2 and isoperimetric dimension infinity. In fact the hyperbolic plane has positive Cheeger constant. This means that it satisfies the inequality area ⁡ ( ∂ D ) ≥ C vol ⁡ ( D ) , {\displaystyle \operatorname {area} (\partial D)\geq C\operatorname {vol} (D),} which obviously implies infinite isoperimetric dimension. == Consequences of isoperimetry == A simple integration over r (or sum in the case of graphs) shows that a d-dimensional isoperimetric inequality implies a d-dimensional volume growth, namely vol ⁡ B ( x , r ) ≥ C r d {\displaystyle \operatorname {vol} B(x,r)\geq Cr^{d}} where B(x,r) denotes the ball of radius r around the point x in the Riemannian distance or in the graph distance. In general, the opposite is not true, i.e. even uniformly exponential volume growth does not imply any kind of isoperimetric inequality. A simple example can be had by taking the graph Z (i.e. all the integers with edges between n and n + 1) and connecting to the vertex n a complete binary tree of height |n|. Both properties (exponential growth and isoperimetric dimension 1) are easy to verify. An interesting exception is the case of groups. It turns out that a group with polynomial growth of order d has isoperimetric dimension d. This holds both for the case of Lie groups and for the Cayley graph of a finitely generated group. A theorem of Varopoulos connects the isoperimetric dimension of a graph to the rate of escape of random walk on the graph.
The result states: Varopoulos' theorem: If G is a graph satisfying a d-dimensional isoperimetric inequality then p n ( x , y ) ≤ C n − d / 2 {\displaystyle p_{n}(x,y)\leq Cn^{-d/2}} where p n ( x , y ) {\textstyle p_{n}(x,y)} is the probability that a random walk on G starting from x will be in y after n steps, and C is some constant. == References == Isaac Chavel, Isoperimetric Inequalities: Differential geometric and analytic perspectives, Cambridge University Press, Cambridge, UK (2001), ISBN 0-521-80267-9. Discusses the topic in the context of manifolds, no mention of graphs. N. Th. Varopoulos, Isoperimetric inequalities and Markov chains, J. Funct. Anal. 63:2 (1985), 215–239. Thierry Coulhon and Laurent Saloff-Coste, Isopérimétrie pour les groupes et les variétés, Rev. Mat. Iberoamericana 9:2 (1993), 293–314. This paper contains the result that on groups of polynomial growth, volume growth and isoperimetric inequalities are equivalent. In French. Fan Chung, Discrete Isoperimetric Inequalities. Surveys in Differential Geometry IX, International Press, (2004), 53–82. http://math.ucsd.edu/~fan/wp/iso.pdf. This paper contains a precise definition of the isoperimetric dimension of a graph, and establishes many of its properties.
Wikipedia:Israel Gelfand#0
Israel Moiseevich Gelfand, also written Israïl Moyseyovich Gel'fand, or Izrail M. Gelfand (Yiddish: ישראל געלפֿאַנד, Russian: Изра́иль Моисе́евич Гельфа́нд, Ukrainian: Ізраїль Мойсейович Гельфанд; 2 September [O.S. 20 August] 1913 – 5 October 2009) was a prominent Soviet and American mathematician, one of the greatest mathematicians of the 20th century, biologist, teacher and organizer of mathematical education. He made significant contributions to many branches of mathematics, including group theory, representation theory and functional analysis. The recipient of many awards, including the Order of Lenin and the first Wolf Prize, he was a Foreign Fellow of the Royal Society and professor at Moscow State University and, after immigrating to the United States shortly before his 76th birthday, at Rutgers University. Gelfand is also a 1994 MacArthur Fellow. His legacy continues through his students, who include Endre Szemerédi, Alexandre Kirillov, Edward Frenkel, Joseph Bernstein, David Kazhdan, as well as his own son, Sergei Gelfand. == Early years == A native of Kherson Governorate, Russian Empire (now, Odesa Oblast, Ukraine), Gelfand was born into a Jewish family in the small southern Ukrainian town of Okny. According to his own account, Gelfand was expelled from high school under the Soviets because his father had been a mill owner. Bypassing both high school and college, he proceeded to postgraduate study at the age of 19 at Moscow State University, where his advisor was the preeminent mathematician Andrei Kolmogorov. He received his PhD in 1935. Gelfand immigrated to the United States in 1989. == Work == Gelfand is known for many developments including: the book Calculus of Variations (1963), which he co-authored with Sergei Fomin; Gelfand's formula, which expresses the spectral radius as a limit of matrix norms. 
the Gelfand representation in Banach algebra theory; the Gelfand–Mazur theorem in Banach algebra theory; the Gelfand–Naimark theorem; the Gelfand–Naimark–Segal construction; Gelfand–Shilov spaces; the Gelfand–Pettis integral; the representation theory of the complex classical Lie groups; contributions to the theory of Verma modules in the representation theory of semisimple Lie algebras (with I. N. Bernstein and S. I. Gelfand); contributions to distribution theory and measures on infinite-dimensional spaces; the first observation of the connection of automorphic forms with representations (with Sergei Fomin); conjectures about the Atiyah–Singer index theorem; ordinary differential equations (Gelfand–Levitan theory); work on calculus of variations and soliton theory (Gelfand–Dikii equations); contributions to the philosophy of cusp forms; Gelfand–Fuchs cohomology of Lie algebras; Gelfand–Kirillov dimension; integral geometry; combinatorial definition of the Pontryagin class; Coxeter functors; general hypergeometric functions; Gelfand–Tsetlin patterns; Gelfand–Lokutsievski method; and many other results, particularly in the representation theory of classical groups. Gelfand ran a seminar at Moscow State University from 1943 until May 1989 (when it continued at Rutgers University), which covered a wide range of topics and was an important school for many mathematicians. == Influence outside mathematics == The Gelfand–Tsetlin (also spelled Zetlin) basis is a widely used tool in theoretical physics and the result of Gelfand's work on the representation theory of the unitary group and Lie groups in general. Gelfand also published works on biology and medicine. For a long time he took an interest in cell biology and organized a research seminar on the subject. He worked extensively in mathematics education, particularly with correspondence education. In 1994, he was awarded a MacArthur Fellowship for this work.
== Personal life == Gelfand was married to Zorya Shapiro, and their two sons, Sergei and Vladimir both live in the United States. The third son, Aleksandr, died of leukemia. Following the divorce from his first wife, Gelfand married his second wife, Tatiana; together they had a daughter, Tatiana. The family also includes four grandchildren and three great-grandchildren. Memories about I. Gelfand are collected at a dedicated website handled by his family. Gelfand was an advocate of animal rights. He became a vegetarian in 1994 and vegan in 2000. == Honors and awards == Gelfand held several honorary degrees and was awarded the Order of Lenin three times for his research. In 1977 he was elected a Foreign Member of the Royal Society. He won the Wolf Prize in 1978, Kyoto Prize in 1989 and MacArthur Foundation Fellowship in 1994. He held the presidency of the Moscow Mathematical Society between 1968 and 1970, and was elected a foreign member of the U.S. National Academy of Science, the American Academy of Arts and Sciences, the Royal Irish Academy, the American Mathematical Society and the London Mathematical Society. In an October 2003 article in The New York Times, written on the occasion of his 90th birthday, Gelfand is described as a scholar who is considered "among the greatest mathematicians of the 20th century", having exerted a tremendous influence on the field both through his own works and those of his students. == Death == Gelfand died at the Robert Wood Johnson University Hospital near his home in Highland Park, New Jersey. He was less than five weeks past his 96th birthday. His death was first reported on the blog of his former collaborator Andrei Zelevinsky and confirmed a few hours later by an obituary in the Russian online newspaper Polit.ru. == Publications == Gelfand, I. M. (1998), Lectures on linear algebra, Courier Dover Publications, ISBN 978-0-486-66082-0 Gelfand, I. M.; Fomin, Sergei V. (1963), Silverman, Richard A. 
(ed.), Calculus of variations, Englewood Cliffs, N.J.: Prentice-Hall Inc., ISBN 978-0-486-41448-5, MR 0160139 Gelfand, I.; Raikov, D.; Shilov, G. (1964) [1960], Commutative normed rings, Translated from the Russian, with a supplementary chapter, New York: Chelsea Publishing Co., ISBN 978-0-8218-2022-3, MR 0205105 Gel'fand, I. M.; Shilov, G. E. (1964) [1958], Generalized functions. Vol. I: Properties and operations, Translated by Eugene Saletan, Boston, MA: Academic Press, ISBN 978-0-12-279501-5, MR 0166596 Gelfand, I. M.; Shilov, G. E. (1968) [1958], Generalized functions. Vol. 2. Spaces of fundamental and generalized functions, Translated from the Russian by Morris D. Friedman, Amiel Feinstein and Christian P. Peltzer, Boston, MA: Academic Press, ISBN 978-0-12-279502-2, MR 0230128 Gelfand, I. M.; Shilov, G. E. (1967) [1958], Generalized functions. Vol. 3: Theory of differential equations, Translated from the Russian by Meinhard E. Mayer, Boston, MA: Academic Press, MR 0217416 Gelfand, I. M.; Vilenkin, N. Ya. (1964) [1961], Generalized functions. Vol. 4: Applications of harmonic analysis, Translated by Amiel Feinstein, Boston, MA: Academic Press, ISBN 978-0-12-279504-6, MR 0173945 Gelfand, I. M.; Graev, M. I.; Vilenkin, N. Ya. (1966) [1962], Generalized functions. Vol. 5: Integral geometry and representation theory, Translated from the Russian by Eugene Saletan, Boston, MA: Academic Press, ISBN 978-0-12-279505-3, MR 0207913 Gelfand, I. M.; Graev, M. I.; Pyatetskii-Shapiro, I. I. (1969), Representation theory and automorphic functions, Translated from the Russian by K. A. Hirsch, Philadelphia, Pa.: W. B. Saunders Co., ISBN 978-0-12-279506-0, MR 0233772 Gelfand, Izrail M. (1987), Gindikin, S. G.; Guillemin, V. W.; Kirillov, A.
A.; Kostant, Bertram; Sternberg, Shlomo (eds.), Collected papers. Vol. I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-13619-4, MR 0929821 Gelfand, Izrail M. (1988), Gindikin, S. G.; Guillemin, V. W.; Kirillov, A. A.; Kostant, Bertram; Sternberg, Shlomo (eds.), Collected papers. Vol. II, Berlin, New York: Springer-Verlag, ISBN 978-3-540-19035-6, MR 0929821 Gelfand, I. M.; Shen, A. (1993), Algebra, Boston: Birkhäuser, ISBN 978-0-8176-3677-7 Gelfand, Izrail M. (1989), Gindikin, S. G.; Guillemin, V. W.; Kirillov, A. A.; Kostant, Bertram; Sternberg, Shlomo (eds.), Collected papers. Vol. III, Berlin, New York: Springer-Verlag, ISBN 978-3-540-19399-9, MR 0997939 Gelfand, I. M.; Kapranov, M. M.; Zelevinsky, A. V. (1994), Discriminants, resultants, and multidimensional determinants, Boston: Birkhäuser, ISBN 978-0-8176-3660-9 Gelfand, I. M.; Saul, M. (2001), Trigonometry, Boston: Birkhäuser, doi:10.1007/978-1-4612-0149-6, ISBN 978-0-8176-3914-3 Gelfand, I. M.; Gindikin, S. G.; Graev, M. I. (2003), Selected topics in integral geometry, Translations of Mathematical Monographs, vol. 220, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2932-5, MR 2000133 Borovik, Alexandre V.; Gelfand, I. M.; White, Neil (2003), Coxeter matroids, Progress in Mathematics, vol. 216, Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-3764-4, MR 1989953 Generalized Functions, Volumes 1–6, American Mathematical Society (2015) == See also == Gelfand duality Gelfand–Levitan–Marchenko equation Gelfand pair Gelfand mapping Gelfand ring Gelfand triple Anti-cosmopolitan campaign == References == === Citations === === Sources === == External links == Israel Moiseevich Gelfand, dedicated site, maintained by Tatiana V. Gelfand and Tatiana I.
Gelfand Israel Gelfand – Daily Telegraph obituary Israel Gelfand – Guardian obituary Israel Gelfand at the Mathematics Genealogy Project O'Connor, John J.; Robertson, Edmund F., "Israel Gelfand", MacTutor History of Mathematics Archive, University of St Andrews Web page at Rutgers List of publications. Steele Prize citation. The unity of mathematics – In honor of the ninetieth birthday of I. M. Gelfand Interview: "A talk with professor I. M. Gelfand.", recorded by V. Retakh and A. Sosinsky, Kvant (1989), no. 1, 3–12 (in Russian). English translation in: Quantum (1991), no. 1, 20–26. (Link)
Wikipedia:Israel Gohberg#0
Israel Gohberg (Hebrew: ישראל גוכברג; Russian: Изра́иль Цу́дикович Го́хберг; 23 August 1928 – 12 October 2009) was a Bessarabian-born Soviet and Israeli mathematician, best known for his work in operator theory and functional analysis, in particular linear operators and integral equations. == Biography == Gohberg was born in Tarutino to parents Tsudik and Haya Gohberg. His father owned a small typography shop and his mother was a midwife. The young Gohberg studied in a Hebrew school in Tarutino and then a Romanian school in Orhei, where he was influenced by the tutelage of Modest Shumbarsky, a student of the renowned topologist Karol Borsuk. He studied at the Kyrgyz Pedagogical Institute in Bishkek and at Moldova State University in Chișinău, completed his doctorate at Leningrad State University on a thesis advised by Mark Krein (1954), and attended Moscow State University for his habilitation degree. Gohberg joined the faculty at the Teachers college in Soroca and the Teachers college in Bălți before returning to Chișinău, where he was elected into the Academy of Sciences and appointed head of functional analysis at Moldova State University (1964–73). After moving to Israel, he joined Tel Aviv University (1974) and was at the Weizmann Institute at Rehovot. He later also held positions at Vrije Universiteit in Amsterdam (1983), as well as at the University of Calgary and the University of Maryland, College Park. He founded the journal Integral Equations and Operator Theory (1983). Gohberg was the visionary and driving force of the International Workshop on Operator Theory and its Applications, or IWOTA, starting with its first meeting on August 1, 1981. He became a lifetime president of the IWOTA Steering Committee and a founder of the Springer / Birkhäuser Verlag book series Operator Theory: Advances and Applications (OTAA). Gohberg was awarded the Humboldt Prize in 1992.
He received honorary doctorates from the Darmstadt University of Technology in 1997; from the Vienna University of Technology in 2001; from West University of Timișoara in 2002; from Moldova State University, Chișinău, Moldova in 2002; from Alecu Russo State University, Bălți, Moldova in 2002; and from Technion, June 2008. He also was awarded the M.G. Krein Prize of the Ukrainian Academy of Sciences in 2008, and was elected SIAM Fellow in 2009. He died in Ra'anana in 2009. == Publications == Gohberg authored nearly five hundred articles in his field. Books, a selection: 1986. Invariant subspaces of matrices with applications. With Peter Lancaster and Leiba Rodman. Vol. 51. SIAM, 1986. 2003. Basic classes of linear operators. With Rien Kaashoek and Seymour Goldberg. Springer, 2003. 2005. Convolution equations and projection methods for their solution. With Izrail Aronovich Felʹdman. Vol. 41. AMS Bookstore. 2009. Matrix polynomials. With Peter Lancaster and Leiba Rodman. Vol. 58. SIAM, 2009. Articles, a selection: Gohberg, Israel C., and Kreĭn, Mark Grigor'evič. Introduction to the theory of linear nonselfadjoint operators in Hilbert space. Vol. 18. American Mathematical Soc., 1969. Gohberg, I.; Kaashoek, M. A. (1984). "Time varying linear systems with boundary conditions and integral operators. I. The transfer operator and its properties". Integral Equations and Operator Theory. 7 (3): 325. doi:10.1007/BF01208381. S2CID 118696780. Branges, Louis de (1994). "Book Review: Classes of linear operators, Volume 2". Bulletin of the American Mathematical Society. 31 (2): 236–244. doi:10.1090/S0273-0979-1994-00526-9. == References == == External links == pictures of Gohberg
Wikipedia:Israel Halperin#0
Israel Halperin (January 5, 1911 – March 8, 2007) was a Canadian mathematician and social activist. == Early life and education == Israel Halperin was born in Toronto, Ontario, the son of Russian Jewish immigrants Solomon Halperin and Fanny Lundy. Halperin attended Malvern Collegiate Institute, Victoria University in the University of Toronto, graduated from the University of Toronto in 1932, and later was a graduate student of John von Neumann at Princeton University, where he received his doctorate in mathematics. == Early career == After completing his doctorate in mathematics at Princeton, Halperin took a faculty position at Queen's University beginning in 1939. Halperin enlisted with the Canadian Army in 1942, serving until 1945 in Ottawa. Under the Royal Canadian Artillery, Halperin assisted with the Canadian Armament Research and Development Establishment (CARDE). He then returned to Queen's. == Arrest and release == In February 1946, Halperin was arrested and accused of espionage in Canada, in connection with the defection of Igor Gouzenko, a Soviet cipher clerk, which occurred in Ottawa in September 1945. Gouzenko's defection and subsequent investigation showed that the Soviet Union was carrying on large-scale spying in Canada and the United States, including nuclear weapons espionage. After some arduous questioning and confinement lasting several weeks, under a Royal Commission appointed by Justice Minister Louis St-Laurent, followed by a trial in early 1947, Halperin was eventually cleared and freed. He resumed teaching at Queen's, but not until 1948, following more legal hurdles which were raised by Queen's University leadership. Queen's Principal Robert Charles Wallace advocated his return. == Later career == Following von Neumann's death in 1957, Halperin completed two of his unfinished papers, leaving them under von Neumann's name alone. Halperin taught at Queen's until 1966, earning tenure as a full professor. 
He then moved to the University of Toronto until his retirement in 1976, by which time he had authored more than 100 academic papers. In 1980, the Israel Halperin Prize was set up by the Canadian Annual Symposium on Operator Theory and Operator Algebras to be awarded to a member of the Canadian mathematical community who has recently obtained a doctorate and has made contributions to operator theory or operator algebras, in honor of Halperin. Halperin was awarded an honorary doctorate of laws from Queen's in 1989, and was made a Member of the Order of Canada, both for his humanitarian work. == Honours == Halperin was elected a Fellow of the Royal Society of Canada in 1953, and won the Henry Marshall Tory Medal in 1967. == Personal life == Halperin was the father of four children, all of whom went on to become professors: William Halperin, Connie Eaves, Stephen Halperin, and Mary Hannah. Halperin died in 2007 at age 96. == Notes == == References == Beck, Sara (February 2008). "A Question of Treason". Queen's Alumni Review: 14–20, 52. Retrieved January 8, 2024. == Further reading == Clément, Dominique. "Israel Halperin". Canada's Human Rights History. == External links == Israel Halperin at the Mathematics Genealogy Project
Wikipedia:Israel Michael Sigal#0
Israel Michael Sigal (born 31 August 1945 in Kiev, Ukrainian SSR) is a Canadian mathematician specializing in mathematical physics. He is a professor at the University of Toronto Department of Mathematics. He was an invited speaker at the International Congress of Mathematicians (Kyoto, 1990) and at the International Congress on Mathematical Physics (Lausanne, 1979; West Berlin, 1981; Marseille, 1986). == Education == Born in Kiev, Ukrainian SSR, Sigal obtained his bachelor's degree at Gorky University and his Ph.D. at Tel-Aviv University. == Research interests == Partial differential equations of quantum physics, quantum mechanics and quantum information theory, quantum field theory, statistical mechanics, non-linear equations, mathematical biology, pattern recognition. == Awards == The Jeffrey-Williams Lectureship, CMS Summer Meeting, 1992. John L. Synge Award, 1993. Fellow of the Royal Society of Canada, 1993. University Professor, 1997. Norman Stuart Robertson Chair in Applied Mathematics, 1998. CRM-Fields-PIMS prize, 2000. Fellow of the American Mathematical Society, 2012. == Selected works == Mathematical foundations of quantum scattering theory for multiparticle systems. Mem. Amer. Math. Soc. 1978. MR 0508478. Sigal, I. M. (1982). "Mathematical theory of single channel systems. Analyticity of scattering matrix". Trans. Amer. Math. Soc. 270 (2): 409–437. doi:10.1090/s0002-9947-1982-0645323-x. MR 0645323. Scattering theory for many body quantum-mechanical systems: rigorous results. Lecture Notes in Mathematics 1011. Springer Verlag. 1983. Bach, V.; Fröhlich, J.; Sigal, I. M. (1995). "Mathematical theory of nonrelativistic matter and radiation". Lett. Math. Phys. 34 (3): 183–201. Bibcode:1995LMaPh..34..183B. CiteSeerX 10.1.1.52.2248. doi:10.1007/bf01872776. S2CID 17664339. with Peter D. Hislop: Introduction to spectral theory: with applications to Schrödinger operators. Springer Verlag. 1996. with F. Ting: Sigal, I. M.; Ting, F. (2005).
"Pinning of magnetic vortices by an external potential". St. Petersburg Math. J. 16 (1): 211–236. doi:10.1090/s1061-0022-04-00848-9. MR 2069485. with Stephen J. Gustafson: Mathematical concepts of quantum mechanics (2nd ed.). Springer Verlag. 2011. == References == == External links == Israel Michael Sigal at the Mathematics Genealogy Project
Wikipedia:Israel Nathan Herstein#0
Israel Nathan Herstein (March 28, 1923 – February 9, 1988) was a mathematician, appointed as professor at the University of Chicago in 1962. He worked on a variety of areas of algebra, including ring theory, with over 100 research papers and over a dozen books. == Education and career == Herstein was born in Lublin, Poland, in 1923. His family emigrated to Canada in 1926, and he grew up in a harsh and underprivileged environment where, according to him, "you either became a gangster or a college professor." During his school years he played football, ice hockey, golf, tennis, and pool. He also worked as a steeplejack and as a barker at a fair. He received his B.S. degree from the University of Manitoba and his M.A. from the University of Toronto. He received his Ph.D. from Indiana University Bloomington in 1948. His advisor was Max Zorn. He held positions at the University of Kansas, Ohio State University, University of Pennsylvania, and Cornell University before permanently settling at the University of Chicago in 1962. He was a Guggenheim Fellow for the academic year 1960–1961. He is known for his lucid style of writing, as exemplified by his Topics in Algebra, an undergraduate introduction to abstract algebra that was first published in 1964, with a second edition in 1975. A more advanced text is his Noncommutative Rings in the Carus Mathematical Monographs series. His primary interest was in noncommutative ring theory, but he also wrote papers on finite groups, linear algebra, and mathematical economics. He had 30 Ph.D. students, traveled and lectured widely, and spoke Italian, Hebrew, Polish, and Portuguese. He died from cancer in Chicago, Illinois, in 1988. His doctoral students include Miriam Cohen, Wallace S. Martindale, Susan Montgomery, Karen Parshall and Claudio Procesi. == Selected publications == Herstein, I. N. (May 1954). "On the Lie ring of a simple ring". Proc Natl Acad Sci U S A. 40 (5): 305–306. Bibcode:1954PNAS...40..305H. 
doi:10.1073/pnas.40.5.305. PMC 534126. PMID 16589478. Herstein, I. N. (October 1965). "A counterexample in Noetherian rings". Proc Natl Acad Sci U S A. 54 (4): 1036–1037. Bibcode:1965PNAS...54.1036H. doi:10.1073/pnas.54.4.1036. PMC 219788. PMID 16578617. Topics in Algebra. Milton Keynes: John Wiley & Sons. 1991 [1964]. ISBN 978-0-471-01090-6. Rings with Involution. Chicago: Univ. of Chicago Press. 1976. ISBN 978-0-226-32806-5. Noncommutative Rings. Washington: American Mathematical Soc. 1994 [1968]. ISBN 978-0-88385-015-2. == Notes == == References == Gallian, Joseph A. (2006). Contemporary Abstract Algebra (Sixth ed.). Houghton Mifflin. ISBN 0-618-51471-6. == External links == O'Connor, John J.; Robertson, Edmund F., "Israel Nathan Herstein", MacTutor History of Mathematics Archive, University of St Andrews Israel Nathan Herstein at the Mathematics Genealogy Project
Wikipedia:Issachar ben Mordecai ibn Susan#0
Issachar ben Mordecai ibn Susan (fl. 1539–1572) (Hebrew: יששכר בן מרדכי אבן שושן) was a Jewish mathematician who lived in Ottoman Palestine. At a young age, he moved from Morocco—perhaps from Fes—to Jerusalem, where he became a pupil of Levi ibn Ḥabib. From there he went to Safed, where, under great hardship, he continued his studies. But his increasing poverty induced him, in 1539, to leave Safed and seek a living elsewhere. At this time he commenced work on the calendar, giving, among other things, tables which embraced the years 5299–6000 AM (1539–2240 CE). After his return to Safed he resumed his work on the calendar, in which he was assisted by the dayan Joshua. It was published at Salonica, in 1564, under the title Tikkun Yissakar. The second edition, under the title Ibbur Shanim (Venice, 1578), is not as rare as the first. The tables in both editions begin with the year of publication. The book also contains, in two appendixes, a treatise on rites ("minhagim") depending upon the variations in the calendar from year to year, and a treatise on the division of the weekly portions and the hafṭarot according to the ritual of the different congregations. For the latter treatise the author quotes as his source ancient manuscript commentaries, and holds that, according to the opinion of a certain scholar, the division of the weekly portions is to be traced back to Ezra. Rites, anonymously given, are, according to p. 51 of the 2nd edition, taken from Abudarham, to whom the author attributes great authority. == Jewish Encyclopedia bibliography == Fuenn, Keneset Yisrael, i.704; Fürst, Bibl. Jud. iii.396; Steinschneider, Cat. Bodl. col. 1061; idem, in Abhandlungen zur Gesch. der Mathematik, 1899, ix.479. == References == This article incorporates text from a publication now in the public domain: Singer, Isidore; et al., eds. (1901–1906). "Ibn Shoshan". The Jewish Encyclopedia. New York: Funk & Wagnalls.
Wikipedia:István Szalay#0
István Szalay (22 March 1944 – 1 September 2022) was a Hungarian mathematician and politician. A member of the Hungarian Socialist Party, he served in the National Assembly from 1998 to 2002. Prior to that, he was mayor of Szeged from 1994 to 1998. Szalay died on 1 September 2022, at the age of 78. == References ==
Wikipedia:Itai Benjamini#0
Itai Benjamini (Hebrew: איתי בנימין) is an Israeli mathematician who holds the Renee and Jay Weiss Chair in the Department of Mathematics at the Weizmann Institute of Science. == Education == Benjamini completed his Ph.D. in 1992 at the Hebrew University of Jerusalem, under the supervision of Benjamin Weiss. His dissertation was entitled "Random Walks on Graphs and Manifolds". In 2004 he won the Rollo Davidson Prize for young probability theorists "for his work across probability, including the analytic and geometric, particularly in the study of random processes associated with graphs". In the same year he also won the Morris L. Levinson Prize of the Weizmann Institute. He was an invited speaker at the International Congress of Mathematicians in 2010, speaking about "random planar metrics".[B] == Career == Benjamini was a long time collaborator of Oded Schramm. Their joint works included papers on limits of planar graphs,[BS] noise sensitivity of Boolean functions[BKS1] and first passage percolation[BKS2]. With Olle Häggström, Benjamini edited the selected works of Oded Schramm.[S] Benjamini has also made contributions to the study of the Biham–Middleton–Levine traffic model [AB] and isoperimetric inequalities on Riemannian manifolds[BC]. == Selected publications == == References == == External links == Home page
Wikipedia:Itala D'Ottaviano#0
Itala Maria Loffredo D'Ottaviano (born 1944) is a Brazilian mathematical logician who was president of the Brazilian Logic Society. Topics in her work have included non-classical logic, paraconsistent logic, many-valued logic, and the history of logic. == Education == After graduating from the Conservatório Musical Carlos Gomes, a music school in Campinas, in 1960, D'Ottaviano studied mathematics at the Pontifical Catholic University of Campinas, graduating in 1966. She earned a master's degree in mathematics at the University of Campinas in 1974, and completed a Ph.D. there in 1982, advised by Mário Tourasse Teixeira and Newton da Costa, respectively. Her doctoral dissertation, Sobre Uma Teoria de Modelos Trivalente, concerned the model theory of three-valued logic. She earned a habilitation at the University of Campinas in 1987. == Career == D'Ottaviano was a postdoctoral researcher at the University of California, Stanford University, and the University of Oxford. She taught mathematics at the University of Campinas beginning in 1969, and became a titular professor there in 1998. From 2013 to 2014 she was Provost of Graduate Studies at the university. She was president of the Brazilian Logic Society twice, from 1994 to 2003 and again from 2011 to 2014. She also headed the Committee on Logic in Latin America of the Association for Symbolic Logic from 1993 to 1999. == Book == With Roberto Cignoli and Daniele Mundici, D'Ottaviano is a coauthor of the book Algebraic Foundations of Many-Valued Reasoning (Kluwer, 2000). == Recognition == D'Ottaviano is a full member of the International Academy of Philosophy of Science. == References == == External links == Itala D'Ottaviano publications indexed by Google Scholar
Wikipedia:Italo Jose Dejter#0
Italo Jose Dejter (born December 17, 1939) is an Argentine-born American mathematician, a retired professor of mathematics and computer science at the University of Puerto Rico (August 1984 – February 2018) and a researcher in algebraic topology, differential topology, graph theory, coding theory and combinatorial designs. He obtained a Licentiate degree in mathematics from the University of Buenos Aires in 1967, arrived at Rutgers University in 1970 by means of a Guggenheim Fellowship and obtained a Ph.D. degree in mathematics in 1975 under the supervision of Professor Ted Petrie, with support of the National Science Foundation. He was a professor at the Federal University of Santa Catarina, Brazil, from 1977 to 1984, with grants from the National Council for Scientific and Technological Development (CNPq). Dejter has been a visiting scholar at a number of research institutions, including the University of São Paulo, Instituto Nacional de Matemática Pura e Aplicada, Federal University of Rio Grande do Sul, University of Cambridge, National Autonomous University of Mexico, Simon Fraser University, University of Victoria, New York University, University of Illinois at Urbana–Champaign, McMaster University, DIMACS, Autonomous University of Barcelona, Technical University of Denmark, Auburn University, Polytechnic University of Catalonia, Technical University of Madrid, Charles University, Ottawa University and Simón Bolívar University. The sections below describe the relevance of Dejter's work in the research areas mentioned in the first paragraph above. == Algebraic and differential topology == In 1971, Ted Petrie conjectured that if X is a closed, smooth 2n-dimensional homotopy complex projective space that admits a nontrivial smooth action of the circle, and if a function h, mapping X onto the 2n-dimensional complex projective space, is a homotopy equivalence, then h preserves the Pontrjagin classes.
In 1975, Dejter proved Petrie's conjecture for n=3, establishing in this way that every closed, smooth, 6-dimensional homotopy complex projective space must be the complex 3-dimensional projective space CP3. Dejter's result is most relevant in view of Petrie's exotic S1-actions on CP3 (apart from the trivial S1-actions on CP3). Let G be a compact Lie group, let Y be a smooth G-manifold and let F be a G-fibre map between G-vector bundles of the same dimension over Y which on each G-fibre is proper and has degree one. Petrie also asked: What are necessary and sufficient conditions for the existence of a smooth G-map properly G-homotopic to F and transverse to the zero-section? Dejter provided conditions of both types, which, due to a counterexample, do not combine into a single necessary and sufficient condition. The main tool involved in establishing the results above by reducing differential-topology problems to algebraic-topology ones is equivariant algebraic K-theory, where equivariance is understood with respect to the group given by the circle, i.e. the unit circle of the complex plane. == Graph theory == === Erdős–Pósa theorem and odd cycles === In 1962, Paul Erdős and Lajos Pósa proved that for every positive integer k there exists a positive integer k' such that for every graph G, either (i) G has k vertex-disjoint (long and/or even) cycles or (ii) there exists a subset X of fewer than k' vertices of G such that G \ X has no (long and/or even) cycles. This result, known today as the Erdős–Pósa theorem, cannot be extended to odd cycles. In fact, in 1987 Dejter and Víctor Neumann-Lara showed that given an integer k > 0, there exists a graph G not possessing disjoint odd cycles such that the number of vertices of G whose removal destroys all odd cycles of G is higher than k.
=== Ljubljana graph in binary 7-cube === In 1993, Brouwer, Dejter and Thomassen described an undirected, bipartite graph with 112 vertices and 168 edges (a semi-symmetric, that is edge-transitive but not vertex-transitive, cubic graph with diameter 8, radius 7, chromatic number 2, chromatic index 3, girth 10, with exactly 168 cycles of length 10 and 168 cycles of length 12), known since 2002 as the Ljubljana graph. They also established that the Dejter graph, obtained by deleting a copy of the Hamming code of length 7 from the binary 7-cube, admits a 3-factorization into two copies of the Ljubljana graph. Moreover, relations of this subject with square-blocking subsets and with perfect dominating sets (see below) in hypercubes were addressed by Dejter et al. since 1991. In fact, two questions were answered, namely: (a) How many colors are needed for a coloring of the n-cube without monochromatic 4-cycles or 6-cycles? Brouwer, Dejter and Thomassen showed that 4 colors suffice and thereby settled a problem of Erdős. (Independently found by F. R. K. Chung. Improving on this, Marston Conder in 1993 showed that for all n not less than 3 the edges of the n-cube can be 3-colored in such a way that there is no monochromatic 4-cycle or 6-cycle). (b) Which vertex-transitive induced subgraphs does a hypercube have? The Dejter graph mentioned above is 6-regular, vertex-transitive and, as suggested, its edges can be 2-colored so that the two resulting monochromatic subgraphs are isomorphic to the semi-symmetric Ljubljana graph of girth 10. In 1972, I. Z. Bouwer attributed a graph with the mentioned properties of the Ljubljana graph to R. M. Foster.
=== Coxeter graph and Klein graph === In 2012, Dejter showed that the 56-vertex Klein cubic graph F{56}B, denoted here Γ', can be obtained from the 28-vertex Coxeter cubic graph Γ by zipping adequately the squares of the 24 7-cycles of Γ endowed with an orientation obtained by considering Γ as a C {\displaystyle {\mathcal {C}}} -ultrahomogeneous digraph, where C {\displaystyle {\mathcal {C}}} is the collection formed both by the oriented 7-cycles and the 2-arcs that tightly fasten those oriented 7-cycles in Γ. In the process, it is seen that Γ' is a C'-ultrahomogeneous (undirected) graph, where C' is the collection formed by both the 7-cycles and the 1-paths that tightly fasten those 7-cycles in Γ'. This yields an embedding of Γ' into a 3-torus T3 which forms the Klein map of Coxeter notation (7,3)8. The dual graph of Γ' in T3 is the distance-regular Klein quartic graph, with corresponding dual map of Coxeter notation (3,7)8. Other aspects of this work are also cited in the following pages: Bitangents of a quartic, Coxeter graph, and Heawood graph. In 2010, Dejter adapted the notion of C {\displaystyle {\mathcal {C}}} -ultrahomogeneous graph for digraphs, and presented a strongly connected C → 4 {\displaystyle {\vec {C}}_{4}} -ultrahomogeneous oriented graph on 168 vertices and 126 pairwise arc-disjoint 4-cycles with regular indegree and outdegree 3 and no circuits of lengths 2 and 3, obtained by altering a definition of the Coxeter graph via pencils of ordered lines of the Fano plane in which pencils were replaced by ordered pencils. The study of ultrahomogeneous graphs (respectively, digraphs) can be traced back to Sheehan, Gardiner, Ronse, Cameron, Gol'fand and Klin (respectively, Fraïssé, Lachlan and Woodrow, Cherlin). See also page 77 in Bondy and Murty.
=== Kd-ultrahomogeneous configurations === Motivated in 2013 by the study of connected Menger graphs of self-dual 1-configurations (nd)1 expressible as Kd-ultrahomogeneous graphs, Dejter wondered for which values of n such graphs exist, as they would yield the most symmetrical, connected, edge-disjoint unions of n copies of Kd on n vertices in which the roles of vertices and copies of Kd are interchangeable. For d=4, the known values of n are 13, 21 and 42. A 2009 construction by Dejter yields, for n=42, a graph G for which each isomorphism between two of the 42 copies of K4 or two of the 21 copies of K2,2,2 in G extends to an automorphism of G. While it would be of interest to determine the spectrum and multiplicities of the involved values of n, Dejter contributed the value n=102 via the Biggs-Smith association scheme (presented via sextets mod 17), shown to control attachment of 102 (cuboctahedral) copies of the line graph of the 3-cube to the 102 (tetrahedral) copies of K4, these sharing each triangle with two of the cuboctahedral copies and guaranteeing that the distance-3 graph of the Biggs-Smith graph is the Menger graph of a self-dual 1-configuration (1024)1. This result was obtained as an application of a transformation of distance-transitive graphs into C-UH graphs that yielded the above-mentioned paper and also allowed comparing, as digraphs, the Pappus graph with the Desargues graph. These applications use the following definition. Given a family C of digraphs, a digraph G is said to be C-ultrahomogeneous if every isomorphism between two induced members of C in G extends to an automorphism of G.
It was shown that exactly 7 of the existing 12 distance-transitive cubic graphs possess a particular ultrahomogeneous property with respect to oriented cycles realizing the girth, a property that allows the construction of a related Cayley digraph with similar ultrahomogeneous properties in which those oriented cycles appear minimally "pulled apart", or "separated". === Hamiltonicity in graphs === In 1983, Dejter found that an equivalent condition for the existence of a Z4-Hamilton cycle on the graph of chessknight moves of the usual type (1,2) (resp. (1,4)) on the 2n × 2n board is that n is odd and larger than 2 (resp. 4). These results are cited by I. Parberry in relation to the algorithmic aspects of the knight's tour problem. In 1985, Dejter presented a construction technique for Hamilton cycles in the middle-levels graphs. The existence of such cycles had been conjectured by I. Havel in 1983 and by M. Buck and D. Wiedemann in 1984 (though Béla Bollobás presented it to Dejter as a conjecture of Paul Erdős in January 1983), and was established by T. Mütze in 2014. That technique was used by Dejter et al. In 2014, Dejter returned to this problem and established a canonical ordering of the vertices in a quotient graph (of each middle-levels graph under the action of a dihedral group) in one-to-one correspondence with an initial section of a system of numeration (present as sequence A239903 in the On-Line Encyclopedia of Integer Sequences by Neil Sloane) composed of restricted growth strings (with the k-th Catalan number expressed by means of the string 10...0 with k "zeros" and a single "one", as J. Arndt does on page 325) and related to Kierstead-Trotter lexical matching colors. This system of numeration may apply to a dihedral-symmetric restricted version of the middle-levels conjecture.
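The middle-levels graphs mentioned above can be illustrated concretely. The following Python sketch (illustrative only, not taken from Dejter's papers) builds the middle-levels graph for k = 2 — vertices are the 2- and 3-element subsets of a 5-element set, joined by containment — and finds a Hamilton cycle by brute-force backtracking, which is feasible for this 20-vertex case:

```python
from itertools import combinations

# Middle-levels graph for k = 2 (inside the 5-cube): vertices are the
# 2- and 3-element subsets of {0,...,4}, with edges given by containment.
k = 2
n = 2 * k + 1
lower = [frozenset(c) for c in combinations(range(n), k)]
upper = [frozenset(c) for c in combinations(range(n), k + 1)]
verts = lower + upper
# frozenset '<' is proper-subset comparison, so this links each k-set to
# the (k+1)-sets containing it; the graph is cubic and bipartite.
adj = {v: [u for u in verts if (v < u) or (u < v)] for v in verts}

def hamilton(path, seen):
    """Backtracking search for a Hamilton cycle starting at path[0]."""
    if len(path) == len(verts):
        return path if path[0] in adj[path[-1]] else None
    for u in adj[path[-1]]:
        if u not in seen:
            res = hamilton(path + [u], seen | {u})
            if res:
                return res
    return None

cycle = hamilton([verts[0]], {verts[0]})
assert cycle is not None and len(cycle) == 20
print("Hamilton cycle found in the middle-levels graph for k = 2")
```

Brute force only works for tiny k; the content of the conjecture (proved by Mütze) is that such cycles exist for every k.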
In 1988, Dejter showed that for any positive integer n, all 2-covering graphs of the complete graph Kn on n vertices can be determined; in addition, he showed that among them there is only one graph that is connected and has a maximal automorphism group, which happens to be bipartite; Dejter also showed that an i-covering graph of Kn is hamiltonian for i less than 4, and that properly minimal connected non-hamiltonian covering graphs of Kn are obtained which are 4-coverings of Kn; also, non-hamiltonian connected 6-coverings of Kn were constructed in that work. Also in 1988, Dejter showed that if k, n and q are integers such that 0 < 2k < n = 2kq ± 1, then the graph generated by the generalized chessknight moves of type (1,2k) on the 2n × 2n chessboard has Hamilton cycles invariant under quarter turns. For k=1, respectively 2, this extends to the following necessary and sufficient condition for the existence of such cycles: that n is odd and larger than 2k-1. In 1990, Dejter showed that if n and r are integers larger than 0 with n+r larger than 2, then the difference of two concentric square boards A and B with (n + 2r)² and n² entries respectively has a chessknight Hamilton cycle invariant under quarter-turns if and only if r is larger than 2 and either n or r is odd. In 1991, Dejter and Neumann-Lara showed that given a group Zn acting freely on a graph G, the notion of a voltage graph can be applied to the search for Hamilton cycles in G invariant under an action of Zn on G. As an application, for n = 2 and 4, equivalent conditions and lower bounds for chessknight Hamilton cycles containing paths spanning square quadrants and rectangular half-boards, respectively, were found. === Coloring the arcs of biregular graphs === Recalling that each edge of a graph H has two oppositely oriented arcs, each vertex v of H is identified with the set of arcs (v,e) departing from v along the edges e of H incident to v.
Let H be a (λ,μ)-biregular graph with bipartition (Y,X), where |Y| = kμ and |X| = kλ, with k > 0, λ and μ integers. Dejter considered the problem of assigning, for each edge e=yx of H, a color given by an element of Y, respectively X, to the arc (y,e), respectively (x,e), so that each color is assigned exactly once in the set of arcs departing from each vertex of H. Furthermore, Dejter required such an assignment to fulfill a specific bicolor weight function over a monotonic subset of Y×X, noting that this problem applies to the design of experiments for industrial chemistry, molecular biology, cellular neuroscience, etc. An algorithmic construction based on biregular graphs with bipartitions given by cyclic-group pairs is also presented in Dejter's work, as well as three essentially different solutions to the Great Circle Challenge Puzzle based on a different biregular graph whose bipartition is formed by the vertices and 5-cycles of the Petersen graph. == Perfect dominating sets == A perfect dominating set S of a graph G is a set of vertices of G such that every vertex of G is either in S or is adjacent to exactly one vertex of S. Weichsel showed that a perfect dominating set of the n-cube Qn induces a subgraph of Qn whose components are isomorphic to hypercubes and conjectured that each of these hypercubes has the same dimension. In 1993, Dejter and Weichsel presented the first known cases in which those components have the same dimension but different directions, namely in the 8-cube with components that are 1-cubes formed each by one edge, with the involved edges happening in: (a) four different directions, as told by Alexander Felzenbaum to Weichsel in Rehovot, Israel, 1988; (b) eight different directions, which involves the Hamming code of length 7, the Heawood graph, the Fano plane and the Steiner triple system of order 7.
The result of (a) above is immediately extended to perfect dominating sets in cubes of dimensions which are powers of 2, whose components each contain a single edge in half the coordinate directions. On the other hand, in 1991, Dejter and Phelps extended the result of (b) above, again to cubes whose dimensions are powers of 2, with components each composed of a single edge in all coordinate directions. (However, this result has not yet been extended to q-ary cubes, as planned by the authors). The Weichsel conjecture was answered in the affirmative by Östergård and Weakley, who found a perfect dominating set in the 13-cube whose components are 26 4-cubes and 288 isolated vertices. Dejter and Phelps gave a short and elegant proof of this result. === Efficient dominating sets === An E-chain is a countable family of nested graphs, each of which has an efficient dominating set. The Hamming codes in the n-cubes provide a classical example of E-chains. Dejter and Serra gave a construction tool to produce E-chains of Cayley graphs. This tool was used to construct infinite families of E-chains of Cayley graphs generated by transposition trees of diameter 2 on symmetric groups. These graphs, known as star graphs, had the efficient domination property established by Arumugam and Kala. In contrast, Dejter and O. Tomaiconza showed that there is no efficient dominating set in any Cayley graph generated by a transposition tree of diameter 3. Further study on threaded distance trees and E-sets of star graphs was conducted by Dejter. In 2012, Dejter adapted the results cited above to the case of digraphs. In fact, worst-case efficient dominating sets in digraphs are conceived so that their presence in certain strong digraphs corresponds to that of efficient dominating sets in star graphs. The fact that the star graphs form a so-called dense segmental neighborly E-chain is reflected in a corresponding fact for digraphs.
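The classical example above — the Hamming code as an efficient dominating set (every vertex of the n-cube has exactly one codeword in its closed neighborhood) — can be verified directly for n = 7. A minimal Python sketch (illustrative; the parity-check description of the [7,4] Hamming code used here is standard):

```python
# A 7-bit word w is a codeword of the [7,4] Hamming code iff its syndrome
# vanishes, where bit position i (1-based) contributes the value i.
def syndrome(w):
    s = 0
    for i in range(7):
        if (w >> i) & 1:
            s ^= i + 1
    return s

code = {w for w in range(128) if syndrome(w) == 0}
assert len(code) == 16  # 2^4 codewords

# Efficient domination in Q_7: the closed neighborhood of every vertex
# (itself plus its 7 bit-flip neighbors) contains exactly one codeword.
for v in range(128):
    ball = {v} | {v ^ (1 << i) for i in range(7)}
    assert sum(1 for u in ball if u in code) == 1

print("Hamming code of length 7 is an efficient dominating set of Q_7")
```

This is just the perfect-code property of the Hamming code restated in domination language: the 16 closed neighborhoods of size 8 tile all 128 vertices of the 7-cube.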
=== Quasiperfect dominating sets === In 2009, Dejter defined a vertex subset S of a graph G as a quasiperfect dominating set in G if each vertex v of G not in S is adjacent to dv ∈ {1,2} vertices of S, and then investigated perfect and quasiperfect dominating sets in the regular tessellation graph of Schläfli symbol {3,6} and in its toroidal quotient graphs, yielding the classification of their perfect dominating sets and most of their quasiperfect dominating sets S with induced components of the form Kν, where ν ∈ {1,2,3} depends only on S. == Coding theory == === Invariants of perfect error-correcting codes === Invariants of perfect error-correcting codes were addressed by Dejter, and by Dejter and Delgado, who showed that a perfect 1-error-correcting code C is 'foldable' over its kernel via the Steiner triple systems associated to its codewords. The resulting 'folding' produces a graph invariant for C via Pasch configurations and tensors. Moreover, the invariant is complete for Vasil'ev codes of length 15 as viewed by F. Hergert, showing the existence of nonadditive propelinear 1-perfect codes, and allowing one to visualize a propelinear code by means of the commutative group formed by its classes mod kernel, as well as to generalize the notion of a propelinear code by extending the involved composition of permutations to a more general group product. === Generalizing perfect Lee codes === Motivated by an application problem in computer architecture, Araujo, Dejter and Horak introduced a notion of perfect distance-dominating set, PDDS, in a graph, constituting a generalization of perfect Lee codes, diameter perfect codes, and other codes and dominating sets, and thus initiating a systematic study of such vertex sets. Some of these sets, related to the motivating application, were constructed, and the non-existence of others was demonstrated. In fact, an extension of the long-standing Golomb-Welch conjecture, in terms of PDDSs, was stated.
=== Total perfect codes === According to Dejter and Delgado, given a vertex subset S' of a side Pm of an m × n grid graph G, the perfect dominating sets S in G with S' being the intersection of S with V(Pm) can be determined via an exhaustive algorithm of running time O(2^(m+n)). Extending the algorithm to infinite-grid graphs of width m-1, periodicity makes the binary decision tree prunable into a finite threaded tree, a closed walk of which yields all such sets S. The graphs induced by the complements of such sets S can be codified by arrays of ordered pairs of positive integers, for the growth and determination of which a speedier algorithm exists. A recent characterization of grid graphs having total perfect codes S (i.e. with just 1-cubes as induced components, also called 1-PDDS and DPL(2,4)), due to Klostermeyer and Goldwasser, allowed Dejter and Delgado to show that these sets S are restrictions of only one total perfect code S1 in the planar integer lattice graph, with the extra bonus that the complement of S1 yields an aperiodic tiling, like the Penrose tiling. In contrast, the parallel, horizontal, total perfect codes in the planar integer lattice graph are in one-to-one correspondence with the doubly infinite {0,1}-sequences. Dejter showed that there is an uncountable number of parallel total perfect codes in the planar integer lattice graph L; in contrast, there is just one 1-perfect code, and just one total perfect code in L, the latter code restricting to total perfect codes of rectangular grid graphs (which yields an asymmetric, Penrose, tiling of the plane); in particular, Dejter characterized all cycle products Cm x Cn containing parallel total perfect codes, and the d-perfect and total perfect code partitions of L and Cm x Cn, the former having as quotient graph the undirected Cayley graphs of the cyclic group of order 2d² + 2d + 1 with generator set {1, 2d²}.
In 2012, Araujo and Dejter contributed a conjecture toward the classification of lattice-like total perfect codes in n-dimensional integer lattices via pairs (G,F) formed by abelian groups G and homomorphisms F from Zn onto G, in the line of the Araujo–Dejter–Horak work cited above. == Combinatorial designs == Since 1994, Dejter took part in several projects in combinatorial designs initially suggested by Alexander Rosa, C. C. Lindner and C. A. Rodger, and also worked on with E. Mendelsohn, F. Franek, D. Pike, P. A. Adams, E. J. Billington, D. G. Hoffman, M. Meszka and others, which produced results on the following subjects: invariants for 2-factorizations and cycle systems; triangles in 2-factorizations; the number of 4-cycles in 2-factorizations of complete graphs; the directed almost resolvable Hamilton–Waterloo problem; the number of 4-cycles in 2-factorizations of K2n minus a 1-factor; almost resolvable 4-cycle systems; critical sets for the completion of Latin squares; and almost resolvable maximum packings of complete graphs with 4-cycles. == References ==
Wikipedia:Iterated function#0
In mathematics, an iterated function is a function that is obtained by composing another function with itself two or several times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. For example, on the image on the right: L = F ( K ) , M = F ∘ F ( K ) = F 2 ( K ) . {\displaystyle L=F(K),\ M=F\circ F(K)=F^{2}(K).} Iterated functions are studied in computer science, fractals, dynamical systems, mathematics and renormalization group physics. == Definition == The formal definition of an iterated function on a set X follows. Let X be a set and f: X → X be a function. Define f n, the n-th iterate of f for a non-negative integer n, by: f 0 = d e f id X {\displaystyle f^{0}~{\stackrel {\mathrm {def} }{=}}~\operatorname {id} _{X}} and f n + 1 = d e f f ∘ f n , {\displaystyle f^{n+1}~{\stackrel {\mathrm {def} }{=}}~f\circ f^{n},} where idX is the identity function on X and (f ∘ {\displaystyle \circ } g)(x) = f (g(x)) denotes function composition. This notation has been traced to John Frederick William Herschel in 1813. Herschel credited Hans Heinrich Bürmann for it, but without giving a specific reference to the work of Bürmann, which remains undiscovered. Because the notation f n may refer either to iteration (composition) of the function f or to exponentiation of the function f (the latter is commonly used in trigonometry), some mathematicians choose to use ∘ to denote the compositional meaning, writing f∘n(x) for the n-th iterate of the function f(x), as in, for example, f∘3(x) meaning f(f(f(x))). For the same purpose, f [n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested nf(x) instead. == Abelian property and iteration sequences == In general, the following identity holds for all non-negative integers m and n, f m ∘ f n = f n ∘ f m = f m + n .
{\displaystyle f^{m}\circ f^{n}=f^{n}\circ f^{m}=f^{m+n}~.} This is structurally identical to the property of exponentiation that aman = am + n. In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, Tm(Tn(x)) = Tm n(x), since Tn(x) = cos(n arccos(x)). The relation (f m)n(x) = (f n)m(x) = f mn(x) also holds, analogous to the property of exponentiation that (am)n = (an)m = amn. The sequence of functions f n is called a Picard sequence, named after Charles Émile Picard. For a given x in X, the sequence of values fn(x) is called the orbit of x. If f n (x) = f n+m (x) for some integer m > 0, the orbit is called a periodic orbit. The smallest such value of m for a given x is called the period of the orbit. The point x itself is called a periodic point. The cycle detection problem in computer science is the algorithmic problem of finding the first periodic point in an orbit, and the period of the orbit. == Fixed points == If x = f(x) for some x in X (that is, the period of the orbit of x is 1), then x is called a fixed point of the iterated sequence. The set of fixed points is often denoted as Fix(f). There exist a number of fixed-point theorems that guarantee the existence of fixed points in various situations, including the Banach fixed point theorem and the Brouwer fixed point theorem. There are several techniques for convergence acceleration of the sequences produced by fixed point iteration. For example, the Aitken method applied to an iterated fixed point is known as Steffensen's method, and produces quadratic convergence. == Limiting behaviour == Upon iteration, one may find that there are sets that shrink and converge towards a single point. In such a case, the point that is converged to is known as an attractive fixed point. 
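The definitions above translate directly into code. The sketch below (function names are illustrative) builds the n-th iterate, checks the abelian property f m ∘ f n = f m + n, and applies Floyd's tortoise-and-hare method to the cycle-detection problem of finding the first periodic point and the period of an orbit:

```python
def iterate(f, n):
    """Return the n-th iterate f^n: f^0 is the identity, f^(n+1) = f o f^n."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

def floyd(f, x0):
    """Floyd's cycle detection on the orbit x0, f(x0), f^2(x0), ...
    Returns (lam, mu): mu is the index of the first periodic point,
    lam is the period of the orbit."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:                      # find a repetition x_i = x_2i
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0                         # locate the first periodic point
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)                   # measure the period
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

double = lambda x: 2 * x
assert iterate(double, 3)(5) == 40                                   # f^3(5)
assert iterate(double, 2)(iterate(double, 3)(1)) == iterate(double, 5)(1)
```

For example, the orbit of 1 under x ↦ 2x mod 20 is 1, 2, 4, 8, 16, 12, 4, …, so `floyd(lambda x: (2 * x) % 20, 1)` returns period 4 with preperiod 2.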
Conversely, iteration may give the appearance of points diverging away from a single point; this would be the case for an unstable fixed point. When the points of the orbit converge to one or more limits, the set of accumulation points of the orbit is known as the limit set or the ω-limit set. The ideas of attraction and repulsion generalize similarly; one may categorize iterates into stable sets and unstable sets, according to the behavior of small neighborhoods under iteration. Also see infinite compositions of analytic functions. Other limiting behaviors are possible; for example, wandering points are points that move away, and never come back even close to where they started. == Invariant measure == If one considers the evolution of a density distribution, rather than that of individual point dynamics, then the limiting behavior is given by the invariant measure. It can be visualized as the behavior of a point-cloud or dust-cloud under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues correspond to unstable, decaying states. In general, because repeated iteration corresponds to a shift, the transfer operator, and its adjoint, the Koopman operator can both be interpreted as shift operators action on a shift space. The theory of subshifts of finite type provides general insight into many iterated functions, especially those leading to chaos. == Fractional iterates and flows, and negative iterates == The notion f1/n must be used with care when the equation gn(x) = f(x) has multiple solutions, which is normally the case, as in Babbage's equation of the functional roots of the identity map. For example, for n = 2 and f(x) = 4x − 6, both g(x) = 6 − 2x and g(x) = 2x − 2 are solutions; so the expression f 1/2(x) does not denote a unique function, just as numbers have multiple algebraic roots. 
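Both functional square roots quoted above for f(x) = 4x − 6 are easy to check directly:

```python
f = lambda x: 4 * x - 6
g1 = lambda x: 6 - 2 * x      # one functional square root of f
g2 = lambda x: 2 * x - 2      # a second, distinct square root

# g1(g1(x)) = 6 - 2(6 - 2x) = 4x - 6 and g2(g2(x)) = 2(2x - 2) - 2 = 4x - 6
for x in range(-5, 6):
    assert g1(g1(x)) == f(x) == g2(g2(x))
```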
A trivial root of f can always be obtained if f's domain can be extended sufficiently, cf. picture. The roots chosen are normally the ones belonging to the orbit under study. Fractional iteration of a function can be defined: for instance, a half iterate of a function f is a function g such that g(g(x)) = f(x). This function g(x) can be written using the index notation as f 1/2(x) . Similarly, f 1/3(x) is the function defined such that f1/3(f1/3(f1/3(x))) = f(x), while f2/3(x) may be defined as equal to f 1/3(f1/3(x)), and so forth, all based on the principle, mentioned earlier, that f m ○ f n = f m + n. This idea can be generalized so that the iteration count n becomes a continuous parameter, a sort of continuous "time" of a continuous orbit. In such cases, one refers to the system as a flow (cf. section on conjugacy below.) If a function is bijective (and so possesses an inverse function), then negative iterates correspond to function inverses and their compositions. For example, f −1(x) is the normal inverse of f, while f −2(x) is the inverse composed with itself, i.e. f −2(x) = f −1(f −1(x)). Fractional negative iterates are defined analogously to fractional positive ones; for example, f −1/2(x) is defined such that f −1/2(f −1/2(x)) = f −1(x), or, equivalently, such that f −1/2(f 1/2(x)) = f 0(x) = x. === Some formulas for fractional iteration === One of several methods of finding a series formula for fractional iteration, making use of a fixed point, is as follows. First determine a fixed point for the function such that f(a) = a. Define f n(a) = a for all n belonging to the reals. This, in some ways, is the most natural extra condition to place upon the fractional iterates. 
Expand fn(x) around the fixed point a as a Taylor series, f n ( x ) = f n ( a ) + ( x − a ) d d x f n ( x ) | x = a + ( x − a ) 2 2 d 2 d x 2 f n ( x ) | x = a + ⋯ {\displaystyle f^{n}(x)=f^{n}(a)+(x-a)\left.{\frac {d}{dx}}f^{n}(x)\right|_{x=a}+{\frac {(x-a)^{2}}{2}}\left.{\frac {d^{2}}{dx^{2}}}f^{n}(x)\right|_{x=a}+\cdots } Expand out f n ( x ) = f n ( a ) + ( x − a ) f ′ ( a ) f ′ ( f ( a ) ) f ′ ( f 2 ( a ) ) ⋯ f ′ ( f n − 1 ( a ) ) + ⋯ {\displaystyle f^{n}(x)=f^{n}(a)+(x-a)f'(a)f'(f(a))f'(f^{2}(a))\cdots f'(f^{n-1}(a))+\cdots } Substitute in for fk(a) = a, for any k, f n ( x ) = a + ( x − a ) f ′ ( a ) n + ( x − a ) 2 2 ( f ″ ( a ) f ′ ( a ) n − 1 ) ( 1 + f ′ ( a ) + ⋯ + f ′ ( a ) n − 1 ) + ⋯ {\displaystyle f^{n}(x)=a+(x-a)f'(a)^{n}+{\frac {(x-a)^{2}}{2}}(f''(a)f'(a)^{n-1})\left(1+f'(a)+\cdots +f'(a)^{n-1}\right)+\cdots } Make use of the geometric progression to simplify terms, f n ( x ) = a + ( x − a ) f ′ ( a ) n + ( x − a ) 2 2 ( f ″ ( a ) f ′ ( a ) n − 1 ) f ′ ( a ) n − 1 f ′ ( a ) − 1 + ⋯ {\displaystyle f^{n}(x)=a+(x-a)f'(a)^{n}+{\frac {(x-a)^{2}}{2}}(f''(a)f'(a)^{n-1}){\frac {f'(a)^{n}-1}{f'(a)-1}}+\cdots } There is a special case when f '(a) = 1, f n ( x ) = x + ( x − a ) 2 2 ( n f ″ ( a ) ) + ( x − a ) 3 6 ( 3 2 n ( n − 1 ) f ″ ( a ) 2 + n f ‴ ( a ) ) + ⋯ {\displaystyle f^{n}(x)=x+{\frac {(x-a)^{2}}{2}}(nf''(a))+{\frac {(x-a)^{3}}{6}}\left({\frac {3}{2}}n(n-1)f''(a)^{2}+nf'''(a)\right)+\cdots } This can be carried on indefinitely, although inefficiently, as the latter terms become increasingly complicated. A more systematic procedure is outlined in the following section on Conjugacy. ==== Example 1 ==== For example, setting f(x) = Cx + D gives the fixed point a = D/(1 − C), so the above formula terminates to just f n ( x ) = D 1 − C + ( x − D 1 − C ) C n = C n x + 1 − C n 1 − C D , {\displaystyle f^{n}(x)={\frac {D}{1-C}}+\left(x-{\frac {D}{1-C}}\right)C^{n}=C^{n}x+{\frac {1-C^{n}}{1-C}}D~,} which is trivial to check. 
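The closed form of Example 1, and a fractional iterate built from it, can be checked numerically; the particular constants C, D below are arbitrary choices for illustration:

```python
C, D = 0.5, 3.0
f = lambda x: C * x + D

def f_n(x, n):
    """Closed-form n-th iterate of f(x) = Cx + D; n may be fractional."""
    return C**n * x + (1 - C**n) / (1 - C) * D

x = 2.0
for n in range(8):                        # integer iterates match direct iteration
    assert abs(f_n(2.0, n) - x) < 1e-12
    x = f(x)

half = lambda x: f_n(x, 0.5)              # a functional square root of f
assert abs(half(half(7.0)) - f(7.0)) < 1e-12
```

The last assertion is the defining property of a half iterate: composing f 1/2 with itself recovers f.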
==== Example 2 ==== Find the value of 2 2 2 ⋯ {\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdots }}}} where this is done n times (and possibly the interpolated values when n is not an integer). We have f(x) = (√2)^x. A fixed point is a = f(2) = 2. So set x = 1 and f n (1) expanded around the fixed point value of 2 is then an infinite series, 2 2 2 ⋯ = f n ( 1 ) = 2 − ( ln ⁡ 2 ) n + ( ln ⁡ 2 ) n + 1 ( ( ln ⁡ 2 ) n − 1 ) 4 ( ln ⁡ 2 − 1 ) − ⋯ {\displaystyle {\sqrt {2}}^{{\sqrt {2}}^{{\sqrt {2}}^{\cdots }}}=f^{n}(1)=2-(\ln 2)^{n}+{\frac {(\ln 2)^{n+1}((\ln 2)^{n}-1)}{4(\ln 2-1)}}-\cdots } which, taking just the first three terms, is correct to the first decimal place when n is positive. Also see Tetration: f n(1) = ⁿ(√2), the n-th tetration of √2. Using the other fixed point a = f(4) = 4 causes the series to diverge. For n = −1, the series computes the inverse function 2 ln x/ln 2. ==== Example 3 ==== With the function f(x) = x^b, expand around the fixed point 1 to get the series f n ( x ) = 1 + b n ( x − 1 ) + 1 2 b n ( b n − 1 ) ( x − 1 ) 2 + 1 3 ! b n ( b n − 1 ) ( b n − 2 ) ( x − 1 ) 3 + ⋯ , {\displaystyle f^{n}(x)=1+b^{n}(x-1)+{\frac {1}{2}}b^{n}(b^{n}-1)(x-1)^{2}+{\frac {1}{3!}}b^{n}(b^{n}-1)(b^{n}-2)(x-1)^{3}+\cdots ~,} which is simply the Taylor series of x^(b^n) expanded around 1. == Conjugacy == If f and g are two iterated functions, and there exists a homeomorphism h such that g = h−1 ○ f ○ h, then f and g are said to be topologically conjugate. Clearly, topological conjugacy is preserved under iteration, as gn = h−1 ○ f n ○ h. Thus, if one can solve for one iterated function system, one also has solutions for all topologically conjugate systems. For example, the tent map is topologically conjugate to the logistic map. As a special case, taking f(x) = x + 1, one has the iteration of g(x) = h−1(h(x) + 1) as gn(x) = h−1(h(x) + n), for any function h. Making the substitution x = h−1(y) = ϕ(y) yields g(ϕ(y)) = ϕ(y + 1), a form known as the Abel equation.
Even in the absence of a strict homeomorphism, near a fixed point, here taken to be at x = 0, f(0) = 0, one may often solve Schröder's equation for a function Ψ, which makes f(x) locally conjugate to a mere dilation, g(x) = f '(0) x, that is f(x) = Ψ−1(f '(0) Ψ(x)). Thus, its iteration orbit, or flow, under suitable provisions (e.g., f '(0) ≠ 1), amounts to the conjugate of the orbit of the monomial, Ψ−1(f '(0)^n Ψ(x)), where n in this expression serves as a plain exponent: functional iteration has been reduced to multiplication! Here, however, the exponent n no longer need be integer or positive, and is a continuous "time" of evolution for the full orbit: the monoid of the Picard sequence (cf. transformation semigroup) has generalized to a full continuous group. This method (perturbative determination of the principal eigenfunction Ψ, cf. Carleman matrix) is equivalent to the algorithm of the preceding section, albeit, in practice, more powerful and systematic. == Markov chains == If the function is linear and can be described by a stochastic matrix, that is, a matrix whose rows or columns sum to one, then the iterated system is known as a Markov chain. == Examples == There are many chaotic maps. Well-known iterated functions include the Mandelbrot set and iterated function systems. Ernst Schröder, in 1870, worked out special cases of the logistic map, such as the chaotic case f(x) = 4x(1 − x), so that Ψ(x) = arcsin(√x)^2, hence f n(x) = sin(2^n arcsin(√x))^2. A nonchaotic case Schröder also illustrated with his method, f(x) = 2x(1 − x), yielded Ψ(x) = −1/2 ln(1 − 2x), and hence f n(x) = −1/2((1 − 2x)^(2^n) − 1). If f is the action of a group element on a set, then the iterated function corresponds to a free group. Most functions do not have explicit general closed-form expressions for the n-th iterate. The table below lists some that do. Note that all these expressions are valid even for non-integer and negative n, as well as non-negative integer n.
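Schröder's closed forms for the two logistic cases quoted above can be checked numerically against direct iteration (a sketch; the tolerances allow for floating-point drift, which grows quickly in the chaotic case):

```python
import math

f = lambda x: 4 * x * (1 - x)                        # chaotic logistic map
f_n = lambda x, n: math.sin(2**n * math.asin(math.sqrt(x))) ** 2

g = lambda x: 2 * x * (1 - x)                        # nonchaotic case
g_n = lambda x, n: -0.5 * ((1 - 2 * x) ** (2**n) - 1)

x = y = 0.3
for n in range(10):
    assert abs(f_n(0.3, n) - x) < 1e-6               # chaos amplifies rounding
    assert abs(g_n(0.3, n) - y) < 1e-9
    x, y = f(x), g(y)
```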
Note: these two special cases of ax2 + bx + c are the only cases that have a closed-form solution. Choosing b = 2 = –a and b = 4 = –a, respectively, further reduces them to the nonchaotic and chaotic logistic cases discussed prior to the table. Some of these examples are related among themselves by simple conjugacies. == Means of study == Iterated functions can be studied with the Artin–Mazur zeta function and with transfer operators. == In computer science == In computer science, iterated functions occur as a special case of recursive functions, which in turn anchor the study of such broad topics as lambda calculus, or narrower ones, such as the denotational semantics of computer programs. == Definitions in terms of iterated functions == Two important functionals can be defined in terms of iterated functions. These are summation: { b + 1 , ∑ i = a b g ( i ) } ≡ ( { i , x } → { i + 1 , x + g ( i ) } ) b − a + 1 { a , 0 } {\displaystyle \left\{b+1,\sum _{i=a}^{b}g(i)\right\}\equiv \left(\{i,x\}\rightarrow \{i+1,x+g(i)\}\right)^{b-a+1}\{a,0\}} and the equivalent product: { b + 1 , ∏ i = a b g ( i ) } ≡ ( { i , x } → { i + 1 , x g ( i ) } ) b − a + 1 { a , 1 } {\displaystyle \left\{b+1,\prod _{i=a}^{b}g(i)\right\}\equiv \left(\{i,x\}\rightarrow \{i+1,xg(i)\}\right)^{b-a+1}\{a,1\}} == Functional derivative == The functional derivative of an iterated function is given by the recursive formula: δ f N ( x ) δ f ( y ) = f ′ ( f N − 1 ( x ) ) δ f N − 1 ( x ) δ f ( y ) + δ ( f N − 1 ( x ) − y ) {\displaystyle {\frac {\delta f^{N}(x)}{\delta f(y)}}=f'(f^{N-1}(x)){\frac {\delta f^{N-1}(x)}{\delta f(y)}}+\delta (f^{N-1}(x)-y)} == Lie's data transport equation == Iterated functions crop up in the series expansion of combined functions, such as g(f(x)). 
Given the iteration velocity, or beta function (physics), v ( x ) = ∂ f n ( x ) ∂ n | n = 0 {\displaystyle v(x)=\left.{\frac {\partial f^{n}(x)}{\partial n}}\right|_{n=0}} for the nth iterate of the function f, we have g ( f ( x ) ) = exp ⁡ [ v ( x ) ∂ ∂ x ] g ( x ) . {\displaystyle g(f(x))=\exp \left[v(x){\frac {\partial }{\partial x}}\right]g(x).} For example, for rigid advection, if f(x) = x + t, then v(x) = t. Consequently, g(x + t) = exp(t ∂/∂x) g(x), action by a plain shift operator. Conversely, one may specify f(x) given an arbitrary v(x), through the generic Abel equation discussed above, f ( x ) = h − 1 ( h ( x ) + 1 ) , {\displaystyle f(x)=h^{-1}(h(x)+1),} where h ( x ) = ∫ 1 v ( x ) d x . {\displaystyle h(x)=\int {\frac {1}{v(x)}}\,dx.} This is evident by noting that f n ( x ) = h − 1 ( h ( x ) + n ) . {\displaystyle f^{n}(x)=h^{-1}(h(x)+n)~.} For continuous iteration index t, then, now written as a subscript, this amounts to Lie's celebrated exponential realization of a continuous group, e t ∂ ∂ h ( x ) g ( x ) = g ( h − 1 ( h ( x ) + t ) ) = g ( f t ( x ) ) . {\displaystyle e^{t~{\frac {\partial ~~}{\partial h(x)}}}g(x)=g(h^{-1}(h(x)+t))=g(f_{t}(x)).} The initial flow velocity v suffices to determine the entire flow, given this exponential realization which automatically provides the general solution to the translation functional equation, f t ( f τ ( x ) ) = f t + τ ( x ) . {\displaystyle f_{t}(f_{\tau }(x))=f_{t+\tau }(x)~.} == See also == == Notes == == References == == External links == Gill, John (January 2017). "A Primer on the Elementary Theory of Infinite Compositions of Complex Functions". Colorado State University.
Wikipedia:Iterated logarithm#0
In computer science, the iterated logarithm of n {\displaystyle n} , written log* n {\displaystyle n} (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1 {\displaystyle 1} . The simplest formal definition is the result of this recurrence relation: log ∗ ⁡ n := { 0 if n ≤ 1 ; 1 + log ∗ ⁡ ( log ⁡ n ) if n > 1 {\displaystyle \log ^{*}n:={\begin{cases}0&{\mbox{if }}n\leq 1;\\1+\log ^{*}(\log n)&{\mbox{if }}n>1\end{cases}}} In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2 {\displaystyle 2} ) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than e 1 / e ≈ 1.444667 {\displaystyle e^{1/e}\approx 1.444667} , not only for base 2 {\displaystyle 2} and base e. The "super-logarithm" function s l o g b ( n ) {\displaystyle \mathrm {slog} _{b}(n)} is "essentially equivalent" to the base b {\displaystyle b} iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration. == Analysis of algorithms == The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as: Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time. Fürer's algorithm for integer multiplication: O(n log n · 2^(O(lg* n))). Finding an approximate maximum (element at least as large as the median): lg* n − 1 ± 3 parallel operations. Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds. The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or repeats of it.
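The recurrence translates directly into code. The sketch below implements the binary iterated logarithm lg*; math.log2 is exact on powers of two, so the small test values used here are safe from rounding:

```python
import math

def lg_star(n):
    """Binary iterated logarithm: the number of times log2 must be
    applied before the result is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
```

For example, `lg_star(65536) == 4` (65536 → 16 → 4 → 2 → 1) and `lg_star(2.0 ** 64) == 5`; the value stays at 5 all the way up to 2^65536.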
This is because tetration grows much faster than the iterated exponential: y b = b b ⋅ ⋅ b ⏟ y ≫ b b ⋅ ⋅ b y ⏟ n {\displaystyle {^{y}b}=\underbrace {b^{b^{\cdot ^{\cdot ^{b}}}}} _{y}\gg \underbrace {b^{b^{\cdot ^{\cdot ^{b^{y}}}}}} _{n}} so the inverse grows much slower: log b ∗ ⁡ x ≪ log b n ⁡ x {\displaystyle \log _{b}^{*}x\ll \log _{b}^{n}x} . For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5. Higher bases give smaller iterated logarithms. == Other applications == The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times someone must replace the number by the sum of its digits before reaching its digital root, is O ( log ∗ ⁡ n ) {\displaystyle O(\log ^{*}n)} . In computational complexity theory, Santhanam shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to n √(log* n) . {\displaystyle n{\sqrt {\log ^{*}n}}.} == See also == Inverse Ackermann function, an even more slowly growing function also used in computational complexity theory == References ==
Wikipedia:Ivan Cherednik#0
Ivan Cherednik (Иван Владимирович Чередник) is a Russian-American mathematician. He introduced double affine Hecke algebras, and used them to prove Macdonald's constant term conjecture in (Cherednik 1995). He has also dealt with algebraic geometry, number theory and Soliton equations. His research interests include representation theory, mathematical physics, and algebraic combinatorics. He is currently the Austin M. Carr Distinguished Professor of mathematics at the University of North Carolina at Chapel Hill. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. == See also == Dyson conjecture Macdonald polynomials Yangian == Publications == Cherednik, Ivan (1995), "Double Affine Hecke Algebras and Macdonald's Conjectures", Annals of Mathematics, Second Series, 141 (1), Annals of Mathematics: 191–216, doi:10.2307/2118632, ISSN 0003-486X, JSTOR 2118632 Cherednik, Ivan (2005), Double affine Hecke algebras, London Mathematical Society Lecture Note Series, vol. 319, Cambridge University Press, ISBN 978-0-521-60918-0, MR 2133033 Cherednik, Ivan (2023), Combinatorics, Modeling, Elementary Number Theory: From Basic to Advanced, World Scientific, 2023, ISBN 9811265402 == References == Ivan Cherednik at the Mathematics Genealogy Project University of North Carolina page about Ivan Cherednik Cherednik on Math-Net.Ru
Wikipedia:Ivan M. Niven#0
Ivan Morton Niven (October 25, 1915 – May 9, 1999) was a Canadian-American number theorist best remembered for his work on Waring's problem. He worked for many years as a professor at the University of Oregon, and was president of the Mathematical Association of America. He wrote several books on mathematics. == Life == Niven was born in Vancouver. He did his undergraduate studies at the University of British Columbia and was awarded his doctorate in 1938 from the University of Chicago. He was a member of the University of Oregon faculty from 1947 to his retirement in 1981. He was president of the Mathematical Association of America (MAA) from 1983 to 1984. He died in 1999 in Eugene, Oregon. == Research == Niven completed the solution of most of Waring's problem in 1944. This problem, based on a 1770 conjecture by Edward Waring, consists of finding the smallest number g ( n ) {\displaystyle g(n)} such that every positive integer is the sum of at most g ( n ) {\displaystyle g(n)} n {\displaystyle n} -th powers of positive integers. David Hilbert had proved the existence of such a g ( n ) {\displaystyle g(n)} in 1909; Niven's work established the value of g ( n ) {\displaystyle g(n)} for all but finitely many values of n {\displaystyle n} . Niven gave an elementary proof that π {\displaystyle \pi } is irrational in 1947. Niven numbers, Niven's constant, and Niven's theorem are named for Niven. He has an Erdős number of 1 because he coauthored a paper with Paul Erdős, on partial sums of the harmonic series. == Recognition == Niven received the University of Oregon's Charles E. Johnson Award in 1981. He received the MAA Distinguished Service Award in 1989. He won a Lester R. Ford Award in 1970. In 2000, the asteroid 12513 Niven, discovered in 1998, was named after him. == Books == Irrational Numbers. [Carus Mathematical Monographs]. The Mathematical Association of America. 1956. ISBN 0-88385-011-7. 
Niven, Ivan; Zuckerman, Herbert S.; Montgomery, Hugh L. (1991) [First published 1960]. An Introduction to the Theory of Numbers. New York: John Wiley & Sons. ISBN 978-81-265-1811-1. Calculus. Van Nostrand Reinhold Company. 1966. ISBN 978-0-442-06032-9. Numbers: Rational and Irrational. Anneli Lax New Mathematical Library. Vol. 1. Washington DC: The Mathematical Association of America. 2011 [First published 1961]. doi:10.5948/upo9780883859193. ISBN 978-0-88385-919-3. Diophantine Approximations. Mineola, N.Y: Dover Publications. 1 January 2008 [First published 1963]. ISBN 978-0-486-46267-7. Mathematics of Choice: How to Count without Counting. Washington, DC: Mathematical Association of America. 1965. ISBN 978-0-88385-615-4. Maxima and Minima Without Calculus. Washington, D.C.: Cambridge University Press. 1981. ISBN 978-0-88385-306-1. == External links == Donald Albers and G. L. Alexanderson. "A conversation with Ivan Niven", College Mathematics Journal, 22, 1991, pp. 371–402. == See also == Proof that π is irrational == References ==
Wikipedia:Ivan Melnikov (politician)#0
Ivan Ivanovich Melnikov (Russian: Ива́н Ива́нович Ме́льников; born 7 August 1950) is a Russian politician. He is the vice-chairman of the Communist Party of the Russian Federation (CPRF), and First Vice-chairman of the State Duma. He is also a professor at Moscow State University. == Early life and education == Melnikov was born on 7 August 1950 in Bogoroditsk, Tula Oblast. He attended the faculty of mathematics at Moscow State University, graduating in 1972. Following this, he worked as a mathematics teacher at Boarding School No. 18, later renamed the Kolmogorov Boarding School, a mathematics and physics oriented secondary school affiliated with the university. He completed a DPhil in mathematics in 1982, working as an instructor/lecturer and eventually an associate professor at his alma mater Moscow State University. In 1999, he obtained a DSc in pedagogy (education science), and in 2002 he was awarded a full professorship. == Political career == An active member of the Communist Party of the Soviet Union (CPSU), he was elected to the CPSU Central Committee at the 28th Party Congress in 1990. He briefly served as a Party Secretary before the party was banned on 26 August 1991. Melnikov helped found the Communist Party of the Russian Federation, the successor to the CPSU in Russia, in 1993. He first received a deputy mandate for the State Duma in 1995 as part of the CPRF electoral list. == Sanctions == He was sanctioned by the UK government in 2014 in relation to the Russo-Ukrainian War. Melnikov was sanctioned by the United States Department of the Treasury following the 2022 Russian invasion of Ukraine. == References == == External links == Ivan Melnikov on the CPRF's website
Wikipedia:Ivan Oseledets#0
Ivan Oseledets (Russian: Оселедец Иван Валерьевич; born July 6, 1983) is a Russian computer scientist and mathematician and professor at the Skolkovo Institute of Science and Technology. He is best known for the tensor train decomposition, which is more commonly called a matrix product state in the area of tensor networks. Oseledets joined the Skolkovo Institute of Science and Technology in 2013 and currently serves as the director of the centre for artificial intelligence technology. == Education == Oseledets was educated in Russia, receiving an M.Sc from the Moscow Institute of Physics and Technology in 2006, and a Ph.D. from the G.I. Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences in 2007. He received the Russian Doctor of Sciences in 2012, also from the G.I. Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences. == Honors and awards == On February 7, 2019, Russian President Vladimir Putin presented Oseledets with an award for "proposing breakthrough computational technology for solving multidimensional problems in physics, chemistry, biology, and data analysis based on tensor expansions". In April 2022, Oseledets was elected to the honorary title of Professor of the Russian Academy of Sciences. Oseledets received a Humboldt Prize from the Alexander von Humboldt Foundation, with a starting date of February 2022. == Early life and family == Ivan Oseledets comes from a family of mathematicians. His grandfather, Ivan Bezhaev, was an associate professor at Moscow State University and reached the rank of lieutenant general in the Soviet Union's Red Army, where he was responsible for various mathematical projects involving cryptography. His father, Valery Oseledets, proved the Oseledets theorem in ergodic systems theory. == References ==
Wikipedia:Ivan Paskvić#0
Ivan Paskvić (German: Johann Pasquich, Hungarian: János Pasquich, 3 January 1754 – 15 December 1829) was an astronomer, physicist and mathematician from the Austrian Empire. == Biography == Paskvić was born in Senj. He was educated in Zagreb, from 1778 in Graz and from 1782 in Buda. In Buda he was an adjunct professor of physics, professor of mathematics, Dean of the Faculty of Arts and director of Buda Observatory. His Slovak colleague Daniel M. Kmeth accused him in several scientific journals of forging observational data of Buda Observatory. After examining the data, many prominent scientists in Europe came to Paskvić's defense, among them Carl Friedrich Gauss, Friedrich Bessel, Johann Franz Encke, Heinrich Wilhelm Matthias Olbers and Heinrich Christian Schumacher. From 1824 he worked in Vienna, where he died. == Research and work == Paskvić dealt with astronomy, higher geodesy, mathematics, mechanics and the theory of machines. His scientific work is divided into two periods. The first period deals with mechanics, higher mathematics and their applications to the theory of machines. The second period deals with astronomy and higher geodesy. He derived the formula for the length of a mathematical seconds pendulum at any place on the Earth, compared it with that of Laplace, and corrected de Prony's formula for the length of a physical seconds pendulum. He determined the flattening of the Earth by finding formulas for 1) the radius of the circle that passes through a point on Earth's surface and is parallel to the equator, 2) the distance of the center of this circle from the center of the Earth, 3) the meridian radius of curvature at any point on Earth's surface, 4) the size of one meridian degree, 5) the angle between the radius of the Earth at the equator and at some other point on Earth's surface, 6) the length of the quarter meridian, 7) the length of a meridian arc, 8) the surface area of Earth's zone between any two parallels.
== Publications == == See also == 11191 Paskvić == Sources == Stipe Kutleša (1988). "O Senjaninu Ivanu Paskviću i njegovim radovima iz mehanike". Senjski zbornik. 15 (1): 149–155. Retrieved 13 March 2020. Patkós, Laszló (1988). "The Pasquich affair". Acta Historica Astronomiae. 24 (1): 182–187. Bibcode:2004AcHA...24..182P.
Wikipedia:Ivan Pervushin#0
Ivan Mikheevich Pervushin (Russian: Иван Михеевич Первушин, sometimes transliterated as Pervusin or Pervouchine) (15 January 1827 – 17 June 1900) was a Russian clergyman and mathematician of the second half of the 19th century, known for his achievements in number theory. He discovered the ninth perfect number and its odd prime factor, the ninth Mersenne prime. Also, he proved that two Fermat numbers, the 12th and 23rd, were composite. A contemporary of Pervushin's, the writer A. D. Nosilov, wrote: "... this is the modest unknown worker of science ... All of his spacious study is filled up with the different mathematical books, ... here are the books of famous mathematicians: Chebyshev, Legendre, Riemann; not including all modern mathematical publications, which were sent to him by Russian and foreign scientific and mathematical societies. It seemed I was not in a study of the village priest, but in a study of an old mathematics professor ... Besides being a mathematician, he is also a statistician, a meteorologist, and a correspondent". == Life == Ivan Pervushin was born on 27 January [O.S. 15 January] 1827 in Lysva, Permsky Uyezd, Perm Governorate, a district in the east of European Russia. He claimed his birthplace to be the town of Lysva (where his grandfather, John Pervushin, was a priest), but other sources suggest Pashii, in Gornozavodsk. However, according to recently found archival parish registers of 1827 from the Lysva church, he was born in Lysva. He graduated from Kazan clerical academy in 1852. Upon graduation, Pervushin was required to become a priest; he stayed for some time in Perm, then moved to the remote village of Zamaraevo, some 150 miles from Ekaterinburg, where he lived for 25 years. In Zamaraevo, Pervushin founded a rural school in 1859. He moved to the nearby town of Shadrinsk in 1883, where he published an article that ridiculed the local government. As a punishment, he was exiled to the village of Mehonskoe in 1887.
Ivan Pervushin died on 30 June [O.S. 17 June] 1900 in Mehonskoe at the age of 73. == Number theory == The priesthood provided Pervushin with a livelihood and left him plenty of free time to spend on mathematics. Pervushin was particularly interested in number theory. In 1877 and at the beginning of 1878 he presented two papers to the Russian Academy of Sciences. In these papers, he proved that the 12th and 23rd Fermat numbers are composite: 2 2 12 + 1 {\displaystyle 2^{2^{12}}+1} is divisible by 7 × 2 14 + 1 = 114689 {\displaystyle 7\times 2^{14}+1=114689} and 2 2 23 + 1 {\displaystyle 2^{2^{23}}+1} is divisible by 5 × 2 25 + 1 = 167772161. {\displaystyle 5\times 2^{25}+1=167772161.} In 1883 Pervushin demonstrated that the number 2 61 − 1 = 2305843009213693951 {\displaystyle 2^{61}-1=2305843009213693951} is a Mersenne prime, and that correspondingly 2 60 ( 2 61 − 1 ) = 2658455991569831744654692615953842176 {\displaystyle 2^{60}(2^{61}-1)=2658455991569831744654692615953842176} is a perfect number. At the time, these were the second largest known prime number, and the second largest known perfect number, after 2 127 − 1 {\displaystyle 2^{127}-1} and 2 126 ( 2 127 − 1 ) {\displaystyle 2^{126}(2^{127}-1)} , proved prime and perfect by Édouard Lucas seven years earlier. They remained the second largest until 1911, when Ralph Ernest Powers proved that 2 89 − 1 {\displaystyle 2^{89}-1} is prime and 2 88 ( 2 89 − 1 ) {\displaystyle 2^{88}(2^{89}-1)} is perfect. Pervushin was a contributor to the International Mathematical Congress of 1893, a part of the World's Columbian Exposition in Chicago that became a precursor to the later International Congresses of Mathematicians. However, he did not attend. == References ==
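Pervushin's arithmetical claims above can be checked mechanically. A minimal sketch in Python, using modular exponentiation and the Lucas–Lehmer test (standard modern tools, not Pervushin's own methods):

```python
def fermat_has_factor(n, d):
    """Check whether d divides the Fermat number F_n = 2^(2^n) + 1,
    without constructing the (possibly enormous) number itself."""
    return (pow(2, 2 ** n, d) + 1) % d == 0

def lucas_lehmer(p):
    """Lucas-Lehmer primality test for the Mersenne number 2^p - 1 (odd prime p)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

assert fermat_has_factor(12, 7 * 2 ** 14 + 1)    # 114689 divides F_12
assert fermat_has_factor(23, 5 * 2 ** 25 + 1)    # 167772161 divides F_23
assert lucas_lehmer(61)                          # 2^61 - 1 is prime
print(2 ** 60 * (2 ** 61 - 1))                   # the ninth perfect number
```

The Fermat check avoids constructing F_23, a number of roughly 2.5 million digits, by reducing modulo the candidate factor at every squaring.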
Wikipedia:Ivan Privalov#0
Ivan Vasilyevich Privalov (Russian: Ива́н Васи́льевич Прива́лов; 12 March 1902 – 26 January 1974) was a Ukrainian and Soviet football player. == Honours == Kharkiv FCC USSR Champion: 1924 Individual Ukrainian Footballer of the Year: 1922, 1923, 1925, 1926, 1927 == International career == Privalov made his debut for the USSR on 16 November 1924 in a friendly against Turkey, along with Oleksandr Shpakovsky from FC Sturm Kharkiv (the Soviet Union won 3:0). He also took part in six other unofficial games against Turkish amateur sides from 1925 to 1933. == External links == (in Russian) Profile
Wikipedia:Ivan Rival#0
Ivan Rival (March 15, 1947 – January 22, 2002 in Ottawa, Ontario, Canada) was a Canadian mathematician and computer scientist, a professor of mathematics at the University of Calgary and of computer science at the University of Ottawa. Rival's Ph.D. thesis concerned lattice theory. After moving to Calgary he began to work more generally with partially ordered sets, and to study fixed point theorems for partially ordered structures. He was a frequent organizer of conferences in order theory, and in 1984 he founded the journal Order. As a computer scientist at Ottawa, he shifted research topics, applying his expertise in order theory to the study of data structures, computational geometry, and graph drawing. Rival grew up in Hamilton, Ontario. He earned a bachelor's degree at McMaster University in 1969, and received his Ph.D. from the University of Manitoba in 1974 under the supervision of George Grätzer. After postdoctoral stints visiting Robert Dilworth at Caltech and Rudolf Wille at the Technische Hochschule Darmstadt, he took a faculty position at Calgary in 1975, and was promoted to full professor in 1981. In 1986, he moved to the University of Ottawa, where he became chair of the computer science department. Rival's doctoral students included Dwight Duffus, the Goodrich C. White Professor of Mathematics & Computer Science at Emory University. Duffus took over the editorship of Order after the retirement (as editor) of William T. Trotter, who had himself taken it over from Rival. == References == == External links == The Ivan Rival Memorial Website
Wikipedia:Ivan Stojmenović#0
Ivan Stojmenović (1957 – 3 November 2014) was a Serbian-Canadian mathematician and computer scientist well known for his contributions to communications networks and algorithms. He published over 300 articles in his field and edited four handbooks in the area of wireless sensor networks. == Biography == He studied mathematics, earning a B.Sc. (1979) and M.Sc. at the University of Novi Sad, and a Ph.D. (1985) at the University of Zagreb, where he continued as assistant professor (1985–87). He held the chair in Applied Computing in the School of Engineering at the University of Birmingham in the UK in 2007–08. After visiting appointments at Washington State University and the University of Miami, he joined the faculty of the University of Ottawa in 1988, where he was a professor. He was also editor-in-chief of IEEE Transactions on Parallel and Distributed Systems. == Books == Wireless Sensor and Actuator Networks: Algorithms and Protocols for Scalable Coordination and Data Communication (Wiley, 2010). Handbook of Applied Algorithms: Solving Scientific, Engineering and Practical Problems (Wiley, 2008). Handbook of Sensor Networks: Algorithms and Architectures (Wiley, 2005). Bundled with Crossbow Technology sensor kits. Mobile Ad Hoc Networking (Wiley, 2004). Handbook of Wireless Networks and Mobile Computing (Wiley, 2002) == Awards and honors == In 2008 he was named an IEEE Fellow "for contributions to data communication algorithms and protocols for wireless sensor and ad hoc networks". == Death in car accident == Stojmenović died on 3 November 2014 after he lost control of his car and slammed into an overpass on Highway 416. == References ==
Wikipedia:Ivan Vinogradov#0
Ivan Matveevich Vinogradov (Russian: Ива́н Матве́евич Виногра́дов, IPA: [ɪˈvan mɐtˈvʲejɪvʲɪtɕ vʲɪnɐˈɡradəf] ; 14 September 1891 – 20 March 1983) was a Soviet mathematician, who was one of the creators of modern analytic number theory, and also a dominant figure in mathematics in the USSR. He was born in the Velikiye Luki district, Pskov Oblast. He graduated from the University of St. Petersburg, where in 1920 he became a Professor. From 1934 he was a Director of the Steklov Institute of Mathematics, a position he held for the rest of his life, except for the five-year period (1941–1946) when the institute was directed by Academician Sergei Sobolev. In 1941 he was awarded the Stalin Prize. He was elected to the American Philosophical Society in 1942. In 1951 he became a foreign member of the Polish Academy of Sciences and Letters in Kraków. == Mathematical contributions == In analytic number theory, Vinogradov's method refers to his main problem-solving technique, applied to central questions involving the estimation of exponential sums. In its most basic form, it is used to estimate sums over prime numbers, or Weyl sums. It is a reduction from a complicated sum to a number of smaller sums which are then simplified. The canonical form for prime number sums is S = ∑ p ≤ P exp ⁡ ( 2 π i f ( p ) ) . {\displaystyle S=\sum _{p\leq P}\exp(2\pi if(p)).} With the help of this method, Vinogradov tackled questions such as the ternary Goldbach problem in 1937 (using Vinogradov's theorem), and the zero-free region for the Riemann zeta function. His own use of it was inimitable; in terms of later techniques, it is recognised as a prototype of the large sieve method in its application of bilinear forms, and also as an exploitation of combinatorial structure. In some cases his results resisted improvement for decades. He also used this technique on the Dirichlet divisor problem, allowing him to estimate the number of integer points under an arbitrary curve. 
This was an improvement on the work of Georgy Voronoy. In 1918 Vinogradov proved the Pólya–Vinogradov inequality for character sums. == Personality and career == Vinogradov served as director of the Mathematical Institute for 49 years. For his long service he was twice awarded the title of Hero of Socialist Labour. The house where he was born was converted into his memorial – a unique honour among Russian mathematicians. As the head of a leading mathematical institute, Vinogradov enjoyed significant influence in the Academy of Sciences and was regarded as an informal leader of Soviet mathematicians, not always in a positive way: his anti-Semitic feelings led him to hinder the careers of many prominent Soviet mathematicians. Although he was always faithful to the official line, he was never a member of the Communist Party and his overall mindset was nationalistic rather than communist. This can at least partly be attributed to his origins: his father was a priest of the Russian Orthodox Church. Vinogradov was enormously strong: in some recollections it is stated that he could lift a chair with a person sitting on it by holding the leg of the chair in his hands. He was never married and was very attached to his dacha in Abramtsevo, where he spent all his weekends and vacations (together with his sister Nadezhda, also unmarried) enjoying flower gardening. He had friendly relations with the president of the Russian Academy of Sciences Mstislav Keldysh and Mikhail Lavrentyev, both mathematicians whose careers started in his institute. == References == == Bibliography == Selected Works, Berlin; New York: Springer-Verlag, 1985, ISBN 0-387-12788-7. Vinogradov, I. M. Elements of Number Theory. Mineola, NY: Dover Publications, 2003, ISBN 0-486-49530-2. Vinogradov, I. M. Method of Trigonometrical Sums in the Theory of Numbers. Mineola, NY: Dover Publications, 2004, ISBN 0-486-43878-3. Vinogradov I. M. (Ed.) Matematicheskaya entsiklopediya. Moscow: Sov.
Entsiklopediya 1977. Now translated as the Encyclopaedia of Mathematics. == External links == Works by or about Ivan Vinogradov at the Internet Archive O'Connor, John J.; Robertson, Edmund F., "Ivan Vinogradov", MacTutor History of Mathematics Archive, University of St Andrews Vinogradov memorial (in Russian) Memoirs of colleagues Archived 5 March 2014 at the Wayback Machine (in Russian) DOC PDF Memoirs of his opponent academician Sergei Novikov Archived 15 May 2011 at the Wayback Machine Vinogradov in Abramtsevo, memoirs Archived 15 May 2011 at the Wayback Machine (in Russian)
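The canonical prime exponential sum quoted above is easy to explore numerically. A minimal sketch in Python; the phase function f(p) = p·√2 and the cutoff P are illustrative assumptions, not taken from Vinogradov's work:

```python
import cmath

def primes_up_to(limit):
    """Simple sieve of Eratosthenes returning all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [n for n, is_prime in enumerate(sieve) if is_prime]

def prime_exponential_sum(f, P):
    """S = sum over primes p <= P of exp(2*pi*i*f(p))."""
    return sum(cmath.exp(2j * cmath.pi * f(p)) for p in primes_up_to(P))

P = 10_000
S = prime_exponential_sum(lambda p: p * 2 ** 0.5, P)
print(abs(S), len(primes_up_to(P)))  # |S| versus the trivial bound pi(P)
```

For such an equidistributed phase, the computed |S| falls well below the trivial bound π(P), illustrating the cancellation that Vinogradov's method quantifies.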
Wikipedia:Ivan Vreman#0
Ivan Vreman (in some sources Ivan Ureman) (6 June 1583 – 22 April 1620) was a Croatian astronomer, physicist, mathematician, missionary, translator and Jesuit priest. His work in astronomy and mathematics complemented and improved on that of the Early Modern scientists who used a mathematical approach in the search for new insights and knowledge about reality. It also places his interests and contributions in the broader context of the conditions and opportunities for scientific research at the very beginning of the 17th century. As a Catholic missionary, he worked during the first Jesuit missions in the Far East, so his activities can also be viewed from the perspective of the exchange of knowledge and the wider work of the early intermediaries between Europe and countries such as China, Japan and India. == Biography == === Early years === Vreman was born on 6 June 1583 in Split, Croatia, at that time under Venetian rule. There is very little information about his early life, but he most likely attended one of the schools in Split, as that town, like other major towns on the Croatian coast, had organized education for the younger generation. Thus, Vreman received a good initial education, and in 1600 he was sent to Rome to the novitiate of the Society of Jesus. In 1602 he began his studies at the Collegium Romanum. He studied natural philosophy, mathematics and astronomy. In 1607 he completed his studies in philosophy and became involved in scientific work, beginning to conduct astronomical research. He was especially interested in the study of lunar eclipses, which occupied him throughout his life. His astronomical observations are preserved in letters he exchanged with his professors and other prominent scholars. From 1607 to 1609 he studied theology.
Since the telescope was not yet available, Vreman made his observations without one, carrying them out by devising special observational and methodological procedures. He made detailed descriptions of lunar eclipses, as evidenced, for instance, by a letter he sent on 31 January 1609 to the Italian astronomer, cartographer and mathematician Magini, in which he enclosed a description of his observations. In 1609 he left Rome for Portugal and Spain, intending to go to the Far East, but had to wait for the departure of a ship that had been preparing for that long voyage. While waiting, he worked until 1615 as a professor of mathematics in Lisbon and at the Colegio de los Jesuitas in the city of Oropesa in Spain. The only known work that has been preserved from that period is his mathematical manuscript Geometriae speculatiuae compendium (Handbook of Speculative Geometry). From this manuscript, which he probably wrote for teaching purposes, it is possible to see his mathematical interests and attitudes. === Missionary work === In 1615 he set out on a voyage to the Jesuit mission in Goa, India, where he stayed for nine months. After that, he continued his voyage to another Jesuit mission, situated in the Portuguese colony of Macau. That colony was an important location on the southern borders of China, because it was open to travellers, unlike China itself, where access was very difficult. From 1616 Vreman stayed in Macau for the next couple of years, doing scientific work. There he also taught mathematics, studied Chinese astronomy and translated works of missionaries residing in Japan. It is known that in his work he studied Euclid's "Elements", written around 300 BC with the aim of laying the foundations for the construction of geometry.
Besides, Vreman's special interest was astronomy, for which he needed mathematical knowledge in theoretical and practical work, as well as the Euclidean methodology for conducting astronomical proofs. Mathematics is always present in Vreman's work in the fields of astronomy and cartography. In his research, he solved astronomical problems with planned observations, and then applied mathematics in the analysis of his results. Among the areas that interested him, Vreman investigated magnetic declination. At that time, an explanation of the nature of both declination and inclination was being sought. Related to magnetic declination was Vreman's work on determining geographical coordinates. From his observations, he determined the differences between Asian and European time and defined the positions of towns such as Goa and Macau. He contributed to the precise determination of the latitude of Macau by mastering one of the methods of determining latitude. Since he defined the geographical coordinates of places, he can be considered one of the forerunners of Croatian cartography. === Death in China === In 1619 Vreman managed to enter inland China in secret. He continued to teach mathematics and to study Chinese astronomy, wishing to expand his knowledge of it, compare it with European astronomy and contribute to the transfer of knowledge from one tradition to another. At that time, he translated into Italian and Latin the reports of Portuguese missionaries in Catholic missions in Japan. His translations echoed in Europe and aroused such great interest that they were reprinted several times in three other European languages. In China he lived and worked in difficult conditions; like many other missionaries in the Far East, he fell ill, gradually became exhausted and emaciated, and finally died on 22 April 1620 in Nanchang, at the age of thirty-six.
He was buried in Nanjing, 500 kilometres away, in a cemetery where deceased priests were already buried. == See also == List of Catholic clergy scientists List of Catholic missionaries to China List of Jesuits List of Jesuit sites == References == == Sources == Borić, Marijana (2021). "Ivan Ureman — posrednik između kineske i europske znanstvene tradicije" [Ivan Ureman — a Mediator between Chinese and European Scientific Tradition]. Renewed Life: Journal of Philosophy and Religious Studies (in Croatian). 76 (4). Zagreb, Croatia: Institute of Philosophy and Theology of Society of Jesus: 499–512. doi:10.31337/oz.76.4.5. ISSN 1849-0182. Ruiz de Medina, Juan (2000) [1990]. "Ivan Vreman, Split 1583 - Nanchang 1620, a Croat among the Jesuit missionaries of Japan and China". In Pozaić, Valentin (ed.). Jesuits among the Croats: Proceedings of the international symposium 'Jesuits in the religious, scientific and cultural life among the Croats' (Zagreb, October 8-11, 1990). Zagreb: Institute of Philosophy and Theology - Croatian Historical Institute. Retrieved 2024-12-17 – via arts.kuleuven. Peng, Yuchao (2024) [2022-04-14]. "The First Croatian to Arrive in China, Jesuit Ivan Vreman (1583–1620)". Chinese Journal of Slavic Studies. 4 (1). Berlin: de Gruyter: 138–151. doi:10.1515/cjss-2024-0007. ISSN 2747-7487. Retrieved 2024-12-17 – via digital.zlb.
Wikipedia:Ivar Ekeland#0
Ivar I. Ekeland (born 2 July 1944, Paris) is a French mathematician of Norwegian descent. Ekeland has written influential monographs and textbooks on nonlinear functional analysis, the calculus of variations, and mathematical economics, as well as popular books on mathematics, which have been published in French, English, and other languages. Ekeland is known as the author of Ekeland's variational principle and for his use of the Shapley–Folkman lemma in optimization theory. He has contributed to the periodic solutions of Hamiltonian systems and particularly to the theory of Kreĭn indices for linear systems (Floquet theory). Ekeland is cited in the credits of Steven Spielberg's 1993 movie Jurassic Park as an inspiration of the fictional chaos theory specialist Ian Malcolm appearing in Michael Crichton's 1990 novel Jurassic Park. == Biography == Ekeland studied at the École Normale Supérieure (1963–1967). He is a senior research fellow at the French National Centre for Scientific Research (CNRS). He obtained his doctorate in 1970. He teaches mathematics and economics at the Paris Dauphine University, the École Polytechnique, the École Spéciale Militaire de Saint-Cyr, and the University of British Columbia in Vancouver. He was the chairman of Paris-Dauphine University from 1989 to 1994. Ekeland is a recipient of the D'Alembert Prize and the Jean Rostand prize. He is also a member of the Norwegian Academy of Science and Letters. == Popular science: Jurassic Park by Crichton and Spielberg == Ekeland has written several books on popular science, in which he has explained parts of dynamical systems, chaos theory, and probability theory. These books were first written in French and then translated into English and other languages, where they received praise for their mathematical accuracy as well as their value as literature and as entertainment. Through these writings, Ekeland had an influence on Jurassic Park, on both the novel and film. 
Ekeland's Mathematics and the unexpected and James Gleick's Chaos inspired the discussions of chaos theory in the novel Jurassic Park by Michael Crichton. When the novel was adapted for the film Jurassic Park by Steven Spielberg, Ekeland and Gleick were consulted by the actor Jeff Goldblum as he prepared to play the mathematician specializing in chaos theory. == Research == Ekeland has contributed to mathematical analysis, particularly to variational calculus and mathematical optimization. === Variational principle === In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exists a nearly optimal solution to a class of optimization problems. Ekeland's variational principle can be used when the lower level set of a minimization problem is not compact, so that the Bolzano–Weierstrass theorem cannot be applied. Ekeland's principle relies on the completeness of the metric space, and it leads to a quick proof of the Caristi fixed point theorem. Ekeland was associated with the University of Paris when he proposed this theorem. === Variational theory of Hamiltonian systems === Ivar Ekeland is an expert on variational analysis, which studies mathematical optimization of spaces of functions. His research on periodic solutions of Hamiltonian systems, and particularly on the theory of Kreĭn indices for linear systems (Floquet theory), was described in his monograph. === Additive optimization problems === Ekeland explained the success of methods of convex minimization on large problems that appeared to be non-convex. In many optimization problems, the objective function f is separable, that is, the sum of many summand-functions, each with its own argument: f ( x ) = f ( x 1 , … , x N ) = ∑ n f n ( x n ) . {\displaystyle f(x)=f(x_{1},\dots ,x_{N})=\sum _{n}f_{n}(x_{n}).} For example, problems of linear optimization are separable.
For a separable problem, we consider an optimal solution x min = ( x 1 , … , x N ) min {\displaystyle x_{\min }=(x_{1},\dots ,x_{N})_{\min }} with the minimum value f(xmin). For a separable problem, we consider an optimal solution (xmin, f(xmin)) to the "convexified problem", where convex hulls are taken of the graphs of the summand functions. Such an optimal solution is the limit of a sequence of points in the convexified problem ( x j , f ( x j ) ) ∈ C o n v ( G r a p h ( f n ) ) . {\displaystyle (x_{j},f(x_{j}))\in \mathrm {Conv} (\mathrm {Graph} (f_{n})).\,} An application of the Shapley–Folkman lemma represents the given optimal-point as a sum of points in the graphs of the original summands and of a small number of convexified summands. This analysis was published by Ivar Ekeland in 1974 to explain the apparent convexity of separable problems with many summands, despite the non-convexity of the summand problems. In 1973, the young mathematician Claude Lemaréchal was surprised by his success with convex minimization methods on problems that were known to be non-convex. Ekeland's analysis explained the success of methods of convex minimization on large and separable problems, despite the non-convexities of the summand functions. The Shapley–Folkman lemma has encouraged the use of methods of convex minimization on other applications with sums of many functions. == Bibliography == === Research === Ekeland, Ivar; Temam, Roger (1999). Convex analysis and variational problems. Classics in applied mathematics. Vol. 28. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-450-0. MR 1727362. (Corrected reprinting of the 1976 North-Holland (MR463993) ed.) The book is cited over 500 times in MathSciNet. Ekeland, Ivar (1979). "Nonconvex minimization problems". Bulletin of the American Mathematical Society. New Series. 1 (3): 443–474. doi:10.1090/S0273-0979-1979-14595-6. MR 0526967. Ekeland, Ivar (1990). 
Convexity methods in Hamiltonian mechanics. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]. Vol. 19. Berlin: Springer-Verlag. pp. x+247. ISBN 978-3-540-50613-3. MR 1051888. Aubin, Jean-Pierre; Ekeland, Ivar (2006). Applied nonlinear analysis. Mineola, NY: Dover Publications, Inc. pp. x+518. ISBN 978-0-486-45324-8. MR 2303896. (Reprint of the 1984 Wiley (MR749753) ed.) === Exposition for a popular audience === Ekeland, Ivar (1988). Mathematics and the unexpected (Translated by Ekeland from his French ed.). Chicago, IL: University Of Chicago Press. pp. xiv+146. ISBN 978-0-226-19989-4. MR 0945956. Ekeland, Ivar (1993). The broken dice, and other mathematical tales of chance (Translated by Carol Volk from the 1991 French ed.). Chicago, IL: University of Chicago Press. pp. iv+183. ISBN 978-0-226-19991-7. MR 1243636. Ekeland, Ivar (2006). The best of all possible worlds: Mathematics and destiny (Translated from the 2000 French ed.). Chicago, IL: University of Chicago Press. pp. iv+207. ISBN 978-0-226-19994-8. MR 2259005. == See also == Jonathan M. Borwein ("smooth" variational principle) Robert R. Phelps (a "grandfather" of variational principles) David Preiss ("smooth" variational principle) == Notes == == External links == Ivar Ekeland at the Mathematics Genealogy Project Ekeland's webpage at CEREMADE Ekeland's Curriculum vitae
Wikipedia:Ivars Peterson#0
Ivars Peterson (born 4 December 1948) is a Canadian mathematics writer. == Early life == Peterson received a B.Sc. in Physics and Chemistry and a B.Ed. in Education from the University of Toronto. Peterson received an M.A. in Journalism from the University of Missouri-Columbia. == Career == Peterson worked as a high school science and mathematics teacher. Peterson has been a columnist and online editor at Science News and Science News for Kids, and has been a columnist for the children's magazine Muse. He wrote the weekly online column Ivars Peterson's MathTrek. Peterson is the author of a number of popular mathematics and related books. Peterson has been a weekly mathematics columnist for MAA Online. Peterson received the Joint Policy Board for Mathematics Communications Award in 1991 for "exceptional skill in communicating mathematics to the general public over the last decade". For the spring 2008 semester, he accepted the Wayne G. Basler Chair of Excellence for the Integration of the Arts, Rhetoric and Science at East Tennessee State University. He gave four lectures on how math is integral to our society and our universe. He also taught a course entitled "Communicating Mathematics". In 2007, Peterson was named Director of Publications for Journals and Communications at the Mathematical Association of America. == Bibliography == Mathematical Treks: From Surreal Numbers to Magic Circles (2002) Mathematical Association of America ISBN 0-88385-537-2 Fragments of Infinity: A Kaleidoscope of Math and Art (2000) John Wiley & Sons ISBN 0-471-16558-1 The Jungles of Randomness: A Mathematical Safari (1997) John Wiley & Sons ISBN 0-471-16449-6 Fatal Defect: Chasing Killer Computer Bugs (1995) Times Books ISBN 0-8129-2023-6 Newton's Clock: Chaos in the Solar System (1993) W.H.
Freeman ISBN 0-7167-2724-2 Islands of Truth: Mathematical Mystery Cruise (1990) W.H. Freeman ISBN 0-7167-2148-1 The Mathematical Tourist: Snapshots of Modern Mathematics (1988) W.H. Freeman ISBN 0-8050-7159-8 == References == == External links == Quotations related to Ivars Peterson at Wikiquote Ivars Peterson homepage - googlepages Ivars Peterson The Mathematical Tourist - blogspot Ivars Peterson The Mathematical Tourist - Mathematical Association of America via: archive.org Ivars Peterson MathTrek archives Mathematical Association of America JPBM Communications Award.
Wikipedia:Iván Gutman#0
Iván Gutman (born in 1947) is a Serbian chemist and mathematician. == Life and work == Gutman was born in Sombor, Yugoslavia, into a Bunjevac family. In 1970 he graduated in chemistry from the University of Belgrade, where he worked for a short time as an assistant at the chemistry department. From 1971 until 1976 he worked as a research assistant and senior research assistant at the Ruđer Bošković Institute in Zagreb, in the department of physical chemistry. In 1973 he received an M.Sc. degree from the University of Zagreb, in the area of theoretical organic chemistry. In the same year he received a doctorate degree in chemistry from the University of Zagreb. His supervisor was Nenad Trinajstić. From 1977 he worked at the University of Kragujevac, eventually becoming a full research professor in 1982. In 1981 he received a doctorate degree in mathematics from the University of Belgrade. Since 2012 he has been a professor emeritus at the University of Kragujevac. His research interests are theoretical organic chemistry, physical chemistry, mathematical chemistry, graph theory, spectral graph theory and discrete mathematics. Gutman is known for his work in chemical graph theory and topological descriptors. In mathematics he introduced the notion of graph energy, a concept originating from theoretical chemistry. With Chris Godsil he worked on the theory of the matching polynomial. He has been a full member of the Serbian Academy of Sciences and Arts since 1997. Other memberships include the International Academy of Mathematical Chemistry, the Academy of Nonlinear Sciences (Moscow) and Academia Europaea. Gutman is a collaborator on the Lexicon of Danube Croats for the Croatian Academic Society 'HAD' in Subotica. == See also == Graph energy Wiener index Caterpillar tree Szeged index Aleksandar Despić Pavle Simić Milan Vukcevich Bogdan Đuričić Ljubisav Rakić Sima Lozanić Marko Leko Mihailo Rašković Zivojin Jocic Aleksandar M.
Leko Milivoje Lozanić Dejan Popović Jekić Panta Tutundžić Vukić Mićović Persida Ilić Svetozar Lj. Jovanović Djordje K. Stefanović == References == == Selected publications == A. Graovac; I. Gutman; N. Trinajstić; T. Živković (1972). "Graph theory and molecular orbitals: Application of Sachs theorem". Theor. Chim. Acta. 26 (1): 67–78. doi:10.1007/bf00527654. S2CID 101611868. I. Gutman; N. Trinajstić (1972). "Graph theory and molecular orbitals. Total π-electron energy of alternant hydrocarbons". Chemical Physics Letters. 17 (4): 535–538. Bibcode:1972CPL....17..535G. doi:10.1016/0009-2614(72)85099-1. I. Gutman; M. Milun; N. Trinajstić (1975). "Topological Definition of Resonance Energy". MATCH Commun. Math. Computer Chem. 1: 171–175. I. Gutman; B. Ruscic; N. Trinajstić; C.F. Wilcox Jr. (1975). "Graph theory and molecular orbitals. XII. Acyclic polyenes". J. Chem. Phys. 62 (9): 3399–3405. Bibcode:1975JChPh..62.3399G. doi:10.1063/1.430994. I. Gutman; M. Milun; N. Trinajstić (1977). "Graph Theory and Molecular Orbitals. XIX. Non–Parametric Resonance Energies of Arbitrary Conjugated Systems". J. Am. Chem. Soc. 99 (6): 1692–1704. doi:10.1021/ja00448a002. I. Gutman; O.E. Polansky (1986). Mathematical Concepts in Organic Chemistry. Springer-Verlag. I. Gutman (1994). "A formula for the Wiener number of trees and its extension to graphs containing cycles". Graph Theory Notes NY. 27: 9–15. I. Gutman (1994). "Selected properties of the Schultz molecular topological index". J. Chem. Inf. Comput. Sci. 34 (5): 1087–1089. doi:10.1021/ci00021a009. I. Gutman; B. Zhou (2006). "Laplacian energy of a graph". Linear Algebra Appl. 414 (1): 29–37. doi:10.1016/j.laa.2005.09.008.
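Graph energy, the concept Gutman introduced, is defined as the sum of the absolute values of the eigenvalues of a graph's adjacency matrix. A minimal sketch in Python with NumPy; the complete graph K4 is an illustrative example, not taken from Gutman's papers:

```python
import numpy as np

def graph_energy(adjacency):
    """Energy of a simple graph: the sum of the absolute values of the
    eigenvalues of its (symmetric 0/1) adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    return float(np.abs(eigenvalues).sum())

# The complete graph K_n has spectrum {n-1, -1 (with multiplicity n-1)},
# so its energy is 2(n-1); for K_4 that is 6.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(round(graph_energy(K4), 9))  # 6.0
```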