Wikipedia:Claire Mathieu#0
|
Claire Mathieu (formerly Kenyon, born 1965) is a French computer scientist and mathematician, known for her research on approximation algorithms, online algorithms, and auction theory. She works as a director of research at the Centre national de la recherche scientifique. Mathieu earned her Ph.D. in 1988 from the University of Paris-Sud, under the supervision of Claude Puech. She worked at CNRS and ENS Lyon from 1991 to 1997, at Paris-Sud from 1997 to 2002, at the École Polytechnique from 2002 to 2004, and at Brown University from 2004 to 2011 before returning to CNRS in 2012. She was an invited speaker at the 2014 International Colloquium on Automata, Languages and Programming and at the 2015 Symposium on Discrete Algorithms. She won the CNRS Silver Medal in 2019. In 2020, she became a Chevalier of the Légion d'honneur. == References ==
|
Wikipedia:Claire Postlethwaite#0
|
Claire Maria Postlethwaite is an applied mathematician based in New Zealand, where she is a professor in applied mathematics at the University of Auckland and a principal investigator for Te Pūnaha Matatini. Her research involves heteroclinic networks in dynamical systems and delay differential equations, and their varied applications including neuroscience, evolutionary robotics, animal migration, and climate modelling. == Education and career == Postlethwaite was a mathematics student at the University of Cambridge in England. After earning a bachelor's degree in 2001 and taking Part III of the Mathematical Tripos in 2002, she completed her PhD at Cambridge in 2006. Her dissertation, Robust Heteroclinic Cycles and Networks, was supervised by J. H. P. Dawes. She became a postdoctoral researcher in the US, at Northwestern University and the University of Houston. In 2008, she took a lecturer position at the University of Auckland. She was named as a senior lecturer in 2011 and associate professor in 2017; she is now a full professor there. == Recognition == In 2011, the New Zealand Mathematical Society gave Postlethwaite their Early Career Research Award, in recognition of her "enormous progress in applying mathematics to the study of animal movement, and for her development of fundamental ideas in applied dynamical systems". Postlethwaite was the 2018 recipient of the JH Michell Medal of ANZIAM. She is a Fellow of the New Zealand Mathematical Society. == References == == External links == Home page Claire Postlethwaite publications indexed by Google Scholar
|
Wikipedia:Clare Parnell#0
|
Clare Elizabeth Parnell (born 1970) is a British astrophysicist and applied mathematician who studies the mathematics of the Sun and of magnetic fields, including the Solar corona and the Sun's magnetic carpet, magnetic reconnection in plasma, and the null points of magnetic fields. She is a professor of mathematics at the University of St Andrews, and the former head of the Division of Applied Mathematics at St Andrews. == Education and career == Parnell was born in Essex and educated at The Ridgeway School and Swindon Technical College. As a child, she found mathematics to be her easiest subject. She entered the University of Wales College Cardiff in 1988, originally intending to study both chemistry and mathematics, but after a year switched to mathematics only. In 1991 she completed a bachelor's degree with first class honours in mathematics at Cardiff. She then came to the University of St Andrews as a doctoral student, finishing her Ph.D. in theoretical solar physics in 1994. She remained at St Andrews as a postdoctoral researcher (interrupted by research at Stanford University in 1996–1997), became a lecturer in 2002, and was promoted to professor in 2011. From 2009 to 2013 she was head of the Division of Applied Mathematics at St Andrews. == Recognition == In 2006, Parnell won the Fowler Prize for Early Achievement in Astronomy and Geophysics of the Royal Astronomical Society for her research on how the Solar corona is heated. In 2007 she won a Philip Leverhulme Prize for her work on solar physics. == Personal == Parnell is an avid mountaineer and chose the University of Wales in part for its nearby mountains. In her three years as a doctoral student at St Andrews, she climbed all 277 peaks then listed as Munros. She has two children. == References ==
|
Wikipedia:Classification of Fatou components#0
|
In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou. == Rational case == If f = P(z)/Q(z) is a rational function defined in the extended complex plane, and if it is a nonlinear function (degree > 1), i.e. d(f) = max(deg(P), deg(Q)) ≥ 2, then for a periodic component U of the Fatou set, exactly one of the following holds: U contains an attracting periodic point; U is parabolic; U is a Siegel disc: a simply connected Fatou component on which f(z) is analytically conjugate to a Euclidean rotation of the unit disc onto itself by an irrational rotation angle; U is a Herman ring: a doubly connected Fatou component (an annulus) on which f(z) is analytically conjugate to a Euclidean rotation of a round annulus, again by an irrational rotation angle. === Attracting periodic point === The components of the map f(z) = z − (z^3 − 1)/(3z^2) contain the attracting points that are the solutions of z^3 = 1. This is because it is the map used by the Newton–Raphson method for solving the equation z^3 = 1, whose solutions must naturally be attracting fixed points. === Herman ring === The map f(z) = e^(2πit) z^2 (z − 4)/(1 − 4z) with t = 0.6151732... produces a Herman ring. Shishikura showed that the degree of such a map must be at least 3, as in this example. 
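The Newton-map example lends itself to a quick numerical check. A minimal sketch (the helper names newton_map and limit_root are ours, not from the article): iterating z ↦ z − (z^3 − 1)/(3z^2) sends a generic starting point to one of the three cube roots of unity, each sitting in its own attracting Fatou component.

```python
import cmath

def newton_map(z: complex) -> complex:
    """One Newton-Raphson step for z**3 - 1 = 0."""
    return z - (z**3 - 1) / (3 * z**2)

def limit_root(z: complex, steps: int = 60) -> complex:
    """Iterate the Newton map; quadratic convergence makes 60 steps ample."""
    for _ in range(steps):
        z = newton_map(z)
    return z

# The three attracting fixed points: the cube roots of unity.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

for start in (1 + 0.1j, -0.5 + 0.9j, -0.5 - 0.9j):
    z = limit_root(start)
    nearest = min(roots, key=lambda r: abs(z - r))
    print(start, "->", nearest)
```

Each sample point converges to the root of unity nearest to it; points near the boundaries between the three basins behave far more delicately.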
=== More than one type of component === If the degree d is greater than 2, then there is more than one critical point, and there can be more than one type of component. == Transcendental case == === Baker domain === In the case of transcendental functions there is another type of periodic Fatou component, called a Baker domain: these are "domains on which the iterates tend to an essential singularity (not possible for polynomials and rational functions)". One example of such a function is f(z) = z − 1 + (1 − 2z)e^z. === Wandering domain === Transcendental maps may have wandering domains: these are Fatou components that are not eventually periodic. == See also == No-wandering-domain theorem Montel's theorem John domains Basins of attraction == References == Lennart Carleson and Theodore W. Gamelin, Complex Dynamics, Springer, 1993. Alan F. Beardon, Iteration of Rational Functions, Springer, 1991.
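The Baker-domain example f(z) = z − 1 + (1 − 2z)e^z can also be explored numerically. A minimal sketch, under the assumption (ours) that the chosen real starting point lies in the Baker domain: the orbit drifts toward the essential singularity at infinity, the defining behaviour of a Baker domain.

```python
import cmath

def f(z: complex) -> complex:
    """The transcendental map from the Baker-domain example."""
    return z - 1 + (1 - 2 * z) * cmath.exp(z)

z = -2 + 0j          # sample starting point in the left half-plane (assumption)
orbit = [z]
for _ in range(50):
    z = f(z)
    orbit.append(z)

# For large negative Re(z) the exp(z) term is negligible, so f(z) ~ z - 1
# and the orbit marches left by roughly 1 per step, escaping to infinity.
print(orbit[-1])
```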
|
Wikipedia:Classification of discontinuities#0
|
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a limit point (also called "accumulation point" or "cluster point") of its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. The oscillation of a function at a point quantifies these discontinuities as follows: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); in an essential discontinuity (a.k.a. infinite discontinuity), oscillation measures the failure of a limit to exist. A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity). == Classification == For each of the following, consider a real valued function f {\displaystyle f} of a real variable x , {\displaystyle x,} defined in a neighborhood of the point x 0 {\displaystyle x_{0}} at which f {\displaystyle f} is discontinuous. === Removable discontinuity === Consider the piecewise function f ( x ) = { x 2 for x < 1 0 for x = 1 2 − x for x > 1 {\displaystyle f(x)={\begin{cases}x^{2}&{\text{ for }}x<1\\0&{\text{ for }}x=1\\2-x&{\text{ for }}x>1\end{cases}}} The point x 0 = 1 {\displaystyle x_{0}=1} is a removable discontinuity. 
For this kind of discontinuity: The one-sided limit from the negative direction: L − = lim x → x 0 − f ( x ) {\displaystyle L^{-}=\lim _{x\to x_{0}^{-}}f(x)} and the one-sided limit from the positive direction: L + = lim x → x 0 + f ( x ) {\displaystyle L^{+}=\lim _{x\to x_{0}^{+}}f(x)} at x 0 {\displaystyle x_{0}} both exist, are finite, and are equal to L = L − = L + . {\displaystyle L=L^{-}=L^{+}.} In other words, since the two one-sided limits exist and are equal, the limit L {\displaystyle L} of f ( x ) {\displaystyle f(x)} as x {\displaystyle x} approaches x 0 {\displaystyle x_{0}} exists and is equal to this same value. If the actual value of f ( x 0 ) {\displaystyle f\left(x_{0}\right)} is not equal to L , {\displaystyle L,} then x 0 {\displaystyle x_{0}} is called a removable discontinuity. This discontinuity can be removed to make f {\displaystyle f} continuous at x 0 , {\displaystyle x_{0},} or more precisely, the function g ( x ) = { f ( x ) x ≠ x 0 L x = x 0 {\displaystyle g(x)={\begin{cases}f(x)&x\neq x_{0}\\L&x=x_{0}\end{cases}}} is continuous at x = x 0 . {\displaystyle x=x_{0}.} The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point x 0 . {\displaystyle x_{0}.} This use is an abuse of terminology because continuity and discontinuity of a function are concepts defined only for points in the function's domain. === Jump discontinuity === Consider the function f ( x ) = { x 2 for x < 1 0 for x = 1 2 − ( x − 1 ) 2 for x > 1 {\displaystyle f(x)={\begin{cases}x^{2}&{\mbox{ for }}x<1\\0&{\mbox{ for }}x=1\\2-(x-1)^{2}&{\mbox{ for }}x>1\end{cases}}} Then, the point x 0 = 1 {\displaystyle x_{0}=1} is a jump discontinuity. 
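Both piecewise examples can be checked numerically. A minimal sketch (the helper names are ours): estimating the one-sided limits near x0 = 1 distinguishes the removable case (equal one-sided limits, wrong value at the point) from the jump case (unequal one-sided limits).

```python
def f_removable(x: float) -> float:
    """Removable discontinuity at x = 1: both one-sided limits equal 1, f(1) = 0."""
    if x < 1:
        return x**2
    if x == 1:
        return 0.0
    return 2 - x

def f_jump(x: float) -> float:
    """Jump discontinuity at x = 1: left limit 1, right limit 2."""
    if x < 1:
        return x**2
    if x == 1:
        return 0.0
    return 2 - (x - 1)**2

def one_sided_limits(f, x0: float, h: float = 1e-7):
    """Crude estimates of lim_{x->x0-} f(x) and lim_{x->x0+} f(x)."""
    return f(x0 - h), f(x0 + h)

print(one_sided_limits(f_removable, 1.0))  # both near 1: removable
print(one_sided_limits(f_jump, 1.0))       # near 1 and 2: jump of size 1
```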
In this case, a single limit does not exist because the one-sided limits, L − {\displaystyle L^{-}} and L + {\displaystyle L^{+}} exist and are finite, but are not equal: since, L − ≠ L + , {\displaystyle L^{-}\neq L^{+},} the limit L {\displaystyle L} does not exist. Then, x 0 {\displaystyle x_{0}} is called a jump discontinuity, step discontinuity, or discontinuity of the first kind. For this type of discontinuity, the function f {\displaystyle f} may have any value at x 0 . {\displaystyle x_{0}.} === Essential discontinuity === For an essential discontinuity, at least one of the two one-sided limits does not exist in R {\displaystyle \mathbb {R} } . (Notice that one or both one-sided limits can be ± ∞ {\displaystyle \pm \infty } ). Consider the function f ( x ) = { sin 5 x − 1 for x < 1 0 for x = 1 1 x − 1 for x > 1. {\displaystyle f(x)={\begin{cases}\sin {\frac {5}{x-1}}&{\text{ for }}x<1\\0&{\text{ for }}x=1\\{\frac {1}{x-1}}&{\text{ for }}x>1.\end{cases}}} Then, the point x 0 = 1 {\displaystyle x_{0}=1} is an essential discontinuity. In this example, both L − {\displaystyle L^{-}} and L + {\displaystyle L^{+}} do not exist in R {\displaystyle \mathbb {R} } , thus satisfying the condition of essential discontinuity. So x 0 {\displaystyle x_{0}} is an essential discontinuity, infinite discontinuity, or discontinuity of the second kind. (This is distinct from an essential singularity, which is often used when studying functions of complex variables). == Counting discontinuities of a function == Supposing that f {\displaystyle f} is a function defined on an interval I ⊆ R , {\displaystyle I\subseteq \mathbb {R} ,} we will denote by D {\displaystyle D} the set of all discontinuities of f {\displaystyle f} on I . {\displaystyle I.} By R {\displaystyle R} we will mean the set of all x 0 ∈ I {\displaystyle x_{0}\in I} such that f {\displaystyle f} has a removable discontinuity at x 0 . 
Analogously, by J we denote the set of all x0 ∈ I such that f has a jump discontinuity at x0, and by E the set of all x0 ∈ I such that f has an essential discontinuity at x0. Of course, then D = R ∪ J ∪ E. The following two properties of the set D are relevant in the literature: The set D is an Fσ set. The set of points at which a function is continuous is always a Gδ set (see). If f is monotone on the interval I, then D is at most countable and D = J. This is Froda's theorem. Tom Apostol partially follows the classification above by considering only removable and jump discontinuities. His objective is to study the discontinuities of monotone functions, mainly to prove Froda's theorem. With the same purpose, Walter Rudin and Karl R. Stromberg also study removable and jump discontinuities, using different terminologies. Further, both authors state that R ∪ J is always a countable set (see). The term essential discontinuity has evidence of use in a mathematical context as early as 1889. However, the earliest use of the term alongside a mathematical definition seems to have been in the work of John Klippert. 
Therein, Klippert also classified essential discontinuities themselves, by subdividing the set E into the following three sets: E1 = {x0 ∈ I : lim x→x0− f(x) and lim x→x0+ f(x) do not exist in R}, E2 = {x0 ∈ I : lim x→x0− f(x) exists in R and lim x→x0+ f(x) does not exist in R}, E3 = {x0 ∈ I : lim x→x0− f(x) does not exist in R and lim x→x0+ f(x) exists in R}. Of course, E = E1 ∪ E2 ∪ E3. Whenever x0 ∈ E1, x0 is called an essential discontinuity of the first kind. Any x0 ∈ E2 ∪ E3 is said to be an essential discontinuity of the second kind. Hence he enlarges the set R ∪ J without losing its characteristic of being countable, by stating the following: The set R ∪ J ∪ E2 ∪ E3 is countable. == Rewriting Lebesgue's theorem == When I = [a, b] and f is a bounded function, the importance of the set D with regard to the Riemann integrability of f is well known. 
In fact, Lebesgue's theorem (also called the Lebesgue–Vitali theorem) states that f is Riemann integrable on I = [a, b] if and only if D is a set of Lebesgue measure zero. From this theorem it might seem that all types of discontinuities carry the same weight in obstructing the Riemann integrability of a bounded function f on [a, b]. Since countable sets are sets of Lebesgue measure zero, and a countable union of sets of Lebesgue measure zero is still a set of Lebesgue measure zero, we now see that this is not the case. In fact, the discontinuities in the set R ∪ J ∪ E2 ∪ E3 are absolutely neutral with regard to the Riemann integrability of f. The main discontinuities for that purpose are the essential discontinuities of the first kind, and consequently the Lebesgue–Vitali theorem can be rewritten as follows: A bounded function f is Riemann integrable on [a, b] if and only if the corresponding set E1 of all essential discontinuities of the first kind of f has Lebesgue measure zero. The case E1 = ∅ corresponds to the following well-known classical complementary situations of Riemann integrability of a bounded function f : [a, b] → R: If f has a right-hand limit at each point of [a, b[, then f is Riemann integrable on [a, b] (see). If f has a left-hand limit at each point of ]a, b], then f is Riemann integrable on [a, b]. 
{\displaystyle [a,b].} If f {\displaystyle f} is a regulated function on [ a , b ] {\displaystyle [a,b]} then f {\displaystyle f} is Riemann integrable on [ a , b ] . {\displaystyle [a,b].} === Examples === Thomae's function is discontinuous at every non-zero rational point, but continuous at every irrational point. One easily sees that those discontinuities are all removable. By the first paragraph, there does not exist a function that is continuous at every rational point, but discontinuous at every irrational point. The indicator function of the rationals, also known as the Dirichlet function, is discontinuous everywhere. These discontinuities are all essential of the first kind too. Consider now the ternary Cantor set C ⊂ [ 0 , 1 ] {\displaystyle {\mathcal {C}}\subset [0,1]} and its indicator (or characteristic) function 1 C ( x ) = { 1 x ∈ C 0 x ∈ [ 0 , 1 ] ∖ C . {\displaystyle \mathbf {1} _{\mathcal {C}}(x)={\begin{cases}1&x\in {\mathcal {C}}\\0&x\in [0,1]\setminus {\mathcal {C}}.\end{cases}}} One way to construct the Cantor set C {\displaystyle {\mathcal {C}}} is given by C := ⋂ n = 0 ∞ C n {\textstyle {\mathcal {C}}:=\bigcap _{n=0}^{\infty }C_{n}} where the sets C n {\displaystyle C_{n}} are obtained by recurrence according to C n = C n − 1 3 ∪ ( 2 3 + C n − 1 3 ) for n ≥ 1 , and C 0 = [ 0 , 1 ] . {\displaystyle C_{n}={\frac {C_{n-1}}{3}}\cup \left({\frac {2}{3}}+{\frac {C_{n-1}}{3}}\right){\text{ for }}n\geq 1,{\text{ and }}C_{0}=[0,1].} In view of the discontinuities of the function 1 C ( x ) , {\displaystyle \mathbf {1} _{\mathcal {C}}(x),} let's assume a point x 0 ∉ C . {\displaystyle x_{0}\not \in {\mathcal {C}}.} Therefore there exists a set C n , {\displaystyle C_{n},} used in the formulation of C {\displaystyle {\mathcal {C}}} , which does not contain x 0 . {\displaystyle x_{0}.} That is, x 0 {\displaystyle x_{0}} belongs to one of the open intervals which were removed in the construction of C n . 
This way, x0 has a neighbourhood with no points of C. (In another way, the same conclusion follows by taking into account that C is a closed set, so its complement with respect to [0, 1] is open.) Therefore 1_C only assumes the value zero in some neighbourhood of x0, and hence 1_C is continuous at x0. This means that the set D of all discontinuities of 1_C on the interval [0, 1] is a subset of C. Since C is an uncountable set with null Lebesgue measure, D is also a set of null Lebesgue measure, so with regard to the Lebesgue–Vitali theorem 1_C is a Riemann integrable function. More precisely, one has D = C. In fact, since C is a nowhere dense set, if x0 ∈ C then no neighbourhood (x0 − ε, x0 + ε) of x0 can be contained in C. This way, any neighbourhood of x0 ∈ C contains points of C and points which are not in C. In terms of the function 1_C this means that both lim x→x0− 1_C(x) and lim x→x0+ 1_C(x) do not exist. 
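The measure-zero claim can be illustrated numerically. A minimal sketch (the helper name cantor_level is ours), following the recurrence for C_n given above: C_n consists of 2^n intervals of length 3^(−n), so its total length is (2/3)^n, which tends to 0 — this is why the indicator of the Cantor set is Riemann integrable despite being discontinuous on the uncountable set C.

```python
from fractions import Fraction

def cantor_level(n: int):
    """Intervals of C_n, starting from C_0 = [0, 1] and keeping the
    left and right thirds of each interval at every step."""
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        next_intervals = []
        for a, b in intervals:
            third = (b - a) / 3
            next_intervals.append((a, a + third))   # left third
            next_intervals.append((b - third, b))   # right third
        intervals = next_intervals
    return intervals

n = 10
total = sum(b - a for a, b in cantor_level(n))
print(total, float(total))  # exactly (2/3)**10
```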
That is, D = E1, where by E1, as before, we denote the set of all essential discontinuities of the first kind of the function 1_C. Clearly ∫01 1_C(x) dx = 0. == Discontinuities of derivatives == Let I ⊆ R be an open interval, let F : I → R be differentiable on I, and let f : I → R be the derivative of F. That is, F′(x) = f(x) for every x ∈ I. According to Darboux's theorem, the derivative function f satisfies the intermediate value property. The function f can, of course, be continuous on the interval I, in which case Bolzano's theorem also applies. Recall that Bolzano's theorem asserts that every continuous function satisfies the intermediate value property. On the other hand, the converse is false: Darboux's theorem does not assume f to be continuous, and the intermediate value property does not imply that f is continuous on I. Darboux's theorem does, however, have an immediate consequence for the type of discontinuities that f can have. In fact, if x0 ∈ I is a point of discontinuity of f, then necessarily x0 is an essential discontinuity of f. 
This means in particular that the following two situations cannot occur: Furthermore, two other situations have to be excluded (see John Klippert): Observe that whenever one of the conditions (i), (ii), (iii), or (iv) is fulfilled for some x 0 ∈ I {\displaystyle x_{0}\in I} one can conclude that f {\displaystyle f} fails to possess an antiderivative, F {\displaystyle F} , on the interval I {\displaystyle I} . On the other hand, a new type of discontinuity with respect to any function f : I → R {\displaystyle f:I\to \mathbb {R} } can be introduced: an essential discontinuity, x 0 ∈ I {\displaystyle x_{0}\in I} , of the function f {\displaystyle f} , is said to be a fundamental essential discontinuity of f {\displaystyle f} if lim x → x 0 − f ( x ) ≠ ± ∞ {\displaystyle \lim _{x\to x_{0}^{-}}f(x)\neq \pm \infty } and lim x → x 0 + f ( x ) ≠ ± ∞ . {\displaystyle \lim _{x\to x_{0}^{+}}f(x)\neq \pm \infty .} Therefore if x 0 ∈ I {\displaystyle x_{0}\in I} is a discontinuity of a derivative function f : I → R {\displaystyle f:I\to \mathbb {R} } , then necessarily x 0 {\displaystyle x_{0}} is a fundamental essential discontinuity of f {\displaystyle f} . Notice also that when I = [ a , b ] {\displaystyle I=[a,b]} and f : I → R {\displaystyle f:I\to \mathbb {R} } is a bounded function, as in the assumptions of Lebesgue's theorem, we have for all x 0 ∈ ( a , b ) {\displaystyle x_{0}\in (a,b)} : lim x → x 0 ± f ( x ) ≠ ± ∞ , {\displaystyle \lim _{x\to x_{0}^{\pm }}f(x)\neq \pm \infty ,} lim x → a + f ( x ) ≠ ± ∞ , {\displaystyle \lim _{x\to a^{+}}f(x)\neq \pm \infty ,} and lim x → b − f ( x ) ≠ ± ∞ . {\displaystyle \lim _{x\to b^{-}}f(x)\neq \pm \infty .} Therefore any essential discontinuity of f {\displaystyle f} is a fundamental one. 
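A classical textbook example (our illustration, not taken from the article) exhibits such a fundamental essential discontinuity of a derivative: F(x) = x^2 sin(1/x) with F(0) = 0 is differentiable everywhere, and its derivative f(x) = 2x sin(1/x) − cos(1/x) (with f(0) = F′(0) = 0) is bounded near 0 yet has no one-sided limits there, since the cos(1/x) term oscillates between −1 and 1.

```python
import math

def F(x: float) -> float:
    """F(x) = x**2 * sin(1/x), F(0) = 0; differentiable on all of R."""
    return x**2 * math.sin(1 / x) if x != 0 else 0.0

def f(x: float) -> float:
    """Derivative of F, computed by hand for x != 0; f(0) = F'(0) = 0."""
    if x == 0:
        return 0.0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Along x_k = 1/(k*pi) the cos term alternates sign, so f takes values
# near -1 and +1 arbitrarily close to 0 while staying bounded: neither
# one-sided limit at 0 exists, and neither is +-infinity.
for k in (100, 101, 1000, 1001):
    print(k, f(1 / (k * math.pi)))
```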
== See also == Removable singularity – Undefined point on a holomorphic function which can be made regular Mathematical singularity – Point where a function, a curve or another mathematical object does not behave regularlyPages displaying short descriptions of redirect targets Extension by continuity – Property of topological space Smoothness – Number of derivatives of a function (mathematics) Geometric continuity – Number of derivatives of a function (mathematics)Pages displaying short descriptions of redirect targets Parametric continuity – Number of derivatives of a function (mathematics)Pages displaying short descriptions of redirect targets == Notes == == References == == Sources == Malik, S.C.; Arora, Savita (1992). Mathematical Analysis (2nd ed.). New York: Wiley. ISBN 0-470-21858-4.{{cite book}}: CS1 maint: publisher location (link) == External links == "Discontinuous". PlanetMath. "Discontinuity" by Ed Pegg, Jr., The Wolfram Demonstrations Project, 2007. Weisstein, Eric W. "Discontinuity". MathWorld. Kudryavtsev, L.D. (2001) [1994]. "Discontinuity point". Encyclopedia of Mathematics. EMS Press.
|
Wikipedia:Classification of finite simple groups#0
|
In mathematics, the classification of finite simple groups (popularly called the enormous theorem) is a result of group theory stating that every finite simple group is either cyclic, or alternating, or belongs to a broad infinite class called the groups of Lie type, or else it is one of twenty-six exceptions, called sporadic (the Tits group is sometimes regarded as a sporadic group because it is not strictly a group of Lie type, in which case there would be 27 sporadic groups). The proof consists of tens of thousands of pages in several hundred journal articles written by about 100 authors, published mostly between 1955 and 2004. Simple groups can be seen as the basic building blocks of all finite groups, reminiscent of the way the prime numbers are the basic building blocks of the natural numbers. The Jordan–Hölder theorem is a more precise way of stating this fact about finite groups. However, a significant difference from integer factorization is that such "building blocks" do not necessarily determine a unique group, since there might be many non-isomorphic groups with the same composition series or, put in another way, the extension problem does not have a unique solution. Daniel Gorenstein (1923–1992), Richard Lyons, and Ronald Solomon are gradually publishing a simplified and revised version of the proof. == Statement of the classification theorem == The classification theorem has applications in many branches of mathematics, as questions about the structure of finite groups (and their action on other mathematical objects) can sometimes be reduced to questions about finite simple groups. Thanks to the classification theorem, such questions can sometimes be answered by checking each family of simple groups and each sporadic group. Daniel Gorenstein announced in 1983 that the finite simple groups had all been classified, but this was premature as he had been misinformed about the proof of the classification of quasithin groups. 
The completed proof of the classification was announced by Aschbacher (2004) after Aschbacher and Smith published a 1221-page proof for the missing quasithin case. == Overview of the proof of the classification theorem == Gorenstein (1982, 1983) wrote two volumes outlining the low rank and odd characteristic part of the proof, and Michael Aschbacher, Richard Lyons, and Stephen D. Smith et al. (2011) wrote a 3rd volume covering the remaining characteristic 2 case. The proof can be broken up into several major pieces as follows: === Groups of small 2-rank === The simple groups of low 2-rank are mostly groups of Lie type of small rank over fields of odd characteristic, together with five alternating and seven characteristic 2 type and nine sporadic groups. The simple groups of small 2-rank include: Groups of 2-rank 0, in other words groups of odd order, which are all solvable by the Feit–Thompson theorem. Groups of 2-rank 1. The Sylow 2-subgroups are either cyclic, which is easy to handle using the transfer map, or generalized quaternion, which are handled with the Brauer–Suzuki theorem: in particular there are no simple groups of 2-rank 1 except for the cyclic group of order two. Groups of 2-rank 2. Alperin showed that the Sylow subgroup must be dihedral, quasidihedral, wreathed, or a Sylow 2-subgroup of U3(4). The first case was done by the Gorenstein–Walter theorem which showed that the only simple groups are isomorphic to L2(q) for q odd or A7, the second and third cases were done by the Alperin–Brauer–Gorenstein theorem which implies that the only simple groups are isomorphic to L3(q) or U3(q) for q odd or M11, and the last case was done by Lyons who showed that U3(4) is the only simple possibility. Groups of sectional 2-rank at most 4, classified by the Gorenstein–Harada theorem. 
The classification of groups of small 2-rank, especially ranks at most 2, makes heavy use of ordinary and modular character theory, which is almost never directly used elsewhere in the classification. All groups not of small 2 rank can be split into two major classes: groups of component type and groups of characteristic 2 type. This is because if a group has sectional 2-rank at least 5 then MacWilliams showed that its Sylow 2-subgroups are connected, and the balance theorem implies that any simple group with connected Sylow 2-subgroups is either of component type or characteristic 2 type. (For groups of low 2-rank the proof of this breaks down, because theorems such as the signalizer functor theorem only work for groups with elementary abelian subgroups of rank at least 3.) === Groups of component type === A group is said to be of component type if for some centralizer C of an involution, C/O(C) has a component (where O(C) is the core of C, the maximal normal subgroup of odd order). These are more or less the groups of Lie type of odd characteristic of large rank, and alternating groups, together with some sporadic groups. A major step in this case is to eliminate the obstruction of the core of an involution. This is accomplished by the B-theorem, which states that every component of C/O(C) is the image of a component of C. The idea is that these groups have a centralizer of an involution with a component that is a smaller quasisimple group, which can be assumed to be already known by induction. So to classify these groups one takes every central extension of every known finite simple group, and finds all simple groups with a centralizer of involution with this as a component. 
This gives a rather large number of different cases to check: there are not only 26 sporadic groups and 16 families of groups of Lie type and the alternating groups, but also many of the groups of small rank or over small fields behave differently from the general case and have to be treated separately, and the groups of Lie type of even and odd characteristic are also quite different. === Groups of characteristic 2 type === A group is of characteristic 2 type if the generalized Fitting subgroup F*(Y) of every 2-local subgroup Y is a 2-group. As the name suggests these are roughly the groups of Lie type over fields of characteristic 2, plus a handful of others that are alternating or sporadic or of odd characteristic. Their classification is divided into the small and large rank cases, where the rank is the largest rank of an odd abelian subgroup normalizing a nontrivial 2-subgroup, which is often (but not always) the same as the rank of a Cartan subalgebra when the group is a group of Lie type in characteristic 2. The rank 1 groups are the thin groups, classified by Aschbacher, and the rank 2 ones are the notorious quasithin groups, classified by Aschbacher and Smith. These correspond roughly to groups of Lie type of ranks 1 or 2 over fields of characteristic 2. Groups of rank at least 3 are further subdivided into 3 classes by the trichotomy theorem, proved by Aschbacher for rank 3 and by Gorenstein and Lyons for rank at least 4. The three classes are groups of GF(2) type (classified mainly by Timmesfeld), groups of "standard type" for some odd prime (classified by the Gilman–Griess theorem and work by several others), and groups of uniqueness type, where a result of Aschbacher implies that there are no simple groups. The general higher rank case consists mostly of the groups of Lie type over fields of characteristic 2 of rank at least 3 or 4. 
=== Existence and uniqueness of the simple groups === The main part of the classification produces a characterization of each simple group. It is then necessary to check that there exists a simple group for each characterization and that it is unique. This gives a large number of separate problems; for example, the original proofs of existence and uniqueness of the monster group totaled about 200 pages, and the identification of the Ree groups by Thompson and Bombieri was one of the hardest parts of the classification. Many of the existence proofs and some of the uniqueness proofs for the sporadic groups originally used computer calculations, most of which have since been replaced by shorter hand proofs. == History of the proof == === Gorenstein's program === In 1972 Gorenstein (1979, Appendix) announced a program for completing the classification of finite simple groups, consisting of the following 16 steps: Groups of low 2-rank. This was essentially done by Gorenstein and Harada, who classified the groups with sectional 2-rank at most 4. Most of the cases of 2-rank at most 2 had been done by the time Gorenstein announced his program. The semisimplicity of 2-layers. The problem is to prove that the 2-layer of the centralizer of an involution in a simple group is semisimple. Standard form in odd characteristic. If a group has an involution with a 2-component that is a group of Lie type of odd characteristic, the goal is to show that it has a centralizer of involution in "standard form" meaning that a centralizer of involution has a component that is of Lie type in odd characteristic and also has a centralizer of 2-rank 1. Classification of groups of odd type. The problem is to show that if a group has a centralizer of involution in "standard form" then it is a group of Lie type of odd characteristic. This was solved by Aschbacher's classical involution theorem. Quasi-standard form Central involutions Classification of alternating groups. 
Some sporadic groups Thin groups. The simple thin finite groups, those with 2-local p-rank at most 1 for odd primes p, were classified by Aschbacher in 1978. Groups with a strongly p-embedded subgroup for p odd The signalizer functor method for odd primes. The main problem is to prove a signalizer functor theorem for nonsolvable signalizer functors. This was solved by McBride in 1982. Groups of characteristic p type. This is the problem of groups with a strongly p-embedded 2-local subgroup with p odd, which was handled by Aschbacher. Quasithin groups. A quasithin group is one whose 2-local subgroups have p-rank at most 2 for all odd primes p, and the problem is to classify the simple ones of characteristic 2 type. This was completed by Aschbacher and Smith in 2004. Groups of low 2-local 3-rank. This was essentially solved by Aschbacher's trichotomy theorem for groups with e(G)=3. The main change is that 2-local 3-rank is replaced by 2-local p-rank for odd primes. Centralizers of 3-elements in standard form. This was essentially done by the Trichotomy theorem. Classification of simple groups of characteristic 2 type. This was handled by the Gilman–Griess theorem, with 3-elements replaced by p-elements for odd primes. === Timeline of the proof === Many of the items in the table below are taken from Solomon (2001). The date given is usually the publication date of the complete proof of a result, which is sometimes several years later than the proof or first announcement of the result, so some of the items appear in the "wrong" order. == Second-generation classification == The proof of the theorem, as it stood around 1985 or so, can be called first generation. Because of the extreme length of the first generation proof, much effort has been devoted to finding a simpler proof, called a second-generation classification proof. This effort, called "revisionism", was originally led by Daniel Gorenstein and has been carried on by Richard Lyons and Ronald Solomon. 
As of 2023, ten volumes of the second generation proof have been published (Gorenstein, Lyons & Solomon 1994, 1996, 1998, 1999, 2002, 2005, 2018a, 2018b; & Inna Capdeboscq, 2021, 2023). In 2012 Solomon estimated that the project would need another 5 volumes, but said that progress on them was slow. It is estimated that the new proof will eventually fill approximately 5,000 pages. (This length stems in part from the second generation proof being written in a more relaxed style.) However, with the publication of volume 9 of the GLS series, and including the Aschbacher–Smith contribution, this estimate was already reached, with several more volumes still in preparation (the rest of what was originally intended for volume 9, plus projected volumes 10 and 11). Aschbacher and Smith wrote their two volumes devoted to the quasithin case in such a way that those volumes can be part of the second generation proof. Gorenstein and his collaborators have given several reasons why a simpler proof is possible. The most important thing is that the correct, final statement of the theorem is now known. Simpler techniques can be applied that are known to be adequate for the types of groups we know to be finite simple. In contrast, those who worked on the first generation proof did not know how many sporadic groups there were, and in fact some of the sporadic groups (e.g., the Janko groups) were discovered while proving other cases of the classification theorem. As a result, many of the pieces of the theorem were proved using techniques that were overly general. Because the conclusion was unknown, the first generation proof consists of many stand-alone theorems, dealing with important special cases. Much of the work of proving these theorems was devoted to the analysis of numerous special cases. Given a larger, orchestrated proof, dealing with many of these special cases can be postponed until the most powerful assumptions can be applied. 
The price paid under this revised strategy is that these first generation theorems no longer have comparatively short proofs, but instead rely on the complete classification. Many first generation theorems overlap, and so divide the possible cases in inefficient ways. As a result, families and subfamilies of finite simple groups were identified multiple times. The revised proof eliminates these redundancies by relying on a different subdivision of cases. Finite group theorists have more experience at this sort of exercise, and have new techniques at their disposal. Aschbacher (2004) has called the work on the classification problem by Ulrich Meierfrankenfeld, Bernd Stellmacher, Gernot Stroth, and a few others, a third generation program. One goal of this is to treat all groups in characteristic 2 uniformly using the amalgam method. === Length of proof === Gorenstein has discussed some of the reasons why there might not be a short proof of the classification similar to the classification of compact Lie groups. The most obvious reason is that the list of simple groups is quite complicated: with 26 sporadic groups there are likely to be many special cases that have to be considered in any proof. So far no one has yet found a clean uniform description of the finite simple groups similar to the parameterization of the compact Lie groups by Dynkin diagrams. Atiyah and others have suggested that the classification ought to be simplified by constructing some geometric object that the groups act on and then classifying these geometric structures. The problem is that no one has been able to suggest an easy way to find such a geometric structure associated with a simple group. In some sense, the classification does work by finding geometric structures such as BN-pairs, but this only comes at the end of a very long and difficult analysis of the structure of a finite simple group. Another suggestion for simplifying the proof is to make greater use of representation theory. 
The problem here is that representation theory seems to require very tight control over the subgroups of a group in order to work well. For groups of small rank, one has such control and representation theory works very well, but for groups of larger rank no one has succeeded in using it to simplify the classification. In the early days of the classification, there was a considerable effort made to use representation theory, but this never achieved much success in the higher rank case. == Consequences of the classification == This section lists some results that have been proved using the classification of finite simple groups. The Schreier conjecture The Signalizer functor theorem The B conjecture The Schur–Zassenhaus theorem for all groups (though this only uses the Feit–Thompson theorem). A transitive permutation group on a finite set with more than 1 element has a fixed-point-free element of prime power order. The classification of 2-transitive permutation groups. The classification of rank 3 permutation groups. The Sims conjecture Frobenius's conjecture on the number of solutions of x^n = 1. == See also == O'Nan–Scott theorem == Citations == == References == Aschbacher, Michael (2004). "The Status of the Classification of the Finite Simple Groups" (PDF). Notices of the American Mathematical Society. Vol. 51, no. 7. pp. 736–740. Aschbacher, Michael; Lyons, Richard; Smith, Stephen D.; Solomon, Ronald (2011), The Classification of Finite Simple Groups: Groups of Characteristic 2 Type, Mathematical Surveys and Monographs, vol. 172, ISBN 978-0-8218-5336-8 Conway, John Horton; Curtis, Robert Turner; Norton, Simon Phillips; Parker, Richard A; Wilson, Robert Arnott (1985), Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups, Oxford University Press, ISBN 978-0-19-853199-9 Gorenstein, D. (1979), "The classification of finite simple groups. I. 
Simple groups and local analysis", Bulletin of the American Mathematical Society, New Series, 1 (1): 43–199, doi:10.1090/S0273-0979-1979-14551-8, ISSN 0002-9904, MR 0513750 Gorenstein, D. (1982), Finite simple groups, University Series in Mathematics, New York: Plenum Publishing Corp., ISBN 978-0-306-40779-6, MR 0698782 Gorenstein, D. (1983), The classification of finite simple groups. Vol. 1. Groups of noncharacteristic 2 type, The University Series in Mathematics, Plenum Press, ISBN 978-0-306-41305-6, MR 0746470 Daniel Gorenstein (1985), "The Enormous Theorem", Scientific American, December 1, 1985, vol. 253, no. 6, pp. 104–115. Gorenstein, D. (1986), "Classifying the finite simple groups", Bulletin of the American Mathematical Society, New Series, 14 (1): 1–98, doi:10.1090/S0273-0979-1986-15392-9, ISSN 0002-9904, MR 0818060 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (1994), The classification of the finite simple groups, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0334-9, MR 1303592 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (1996), The classification of the finite simple groups, Number 2, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0390-5, MR 1358135 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (1998), The classification of the finite simple groups, Number 3, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0391-2, MR 1490581 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (1999), The classification of the finite simple groups, Number 4. Part II, Chapters 1-4: Uniqueness Theorems, Mathematical Surveys and Monographs, vol. 
40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1379-9, MR 1675976 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (2002), The classification of the finite simple groups, Number 5, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2776-5, MR 1923000 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (2005), The classification of the finite simple groups, Number 6: Part IV: The Special Odd Case, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2777-2, MR 2104668 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (2018), The classification of the finite simple groups, Number 7: Part III, Chapters 7–11: The Generic Case, Stages 3b and 4a, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-4069-6, MR 3752626 Gorenstein, D.; Lyons, Richard; Solomon, Ronald (2018), The Classification of the Finite Simple Groups, Number 8: Part III, Chapters 12–17: The Generic Case, Completed, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-1-4704-4189-0, MR 3887657 Capdeboscq, Inna; Gorenstein, D.; Lyons, Richard; Solomon, Ronald (2021), The Classification of the Finite Simple Groups, Number 9: Part V, Chapters 1-8: Theorem C5 and Theorem C6, Stage 1, Mathematical Surveys and Monographs, vol. 40, Providence, R.I.: American Mathematical Society, ISBN 978-1-4704-6437-0, MR 4244365 Capdeboscq, Inna; Gorenstein, D.; Lyons, Richard; Solomon, Ronald (2023), The Classification of the Finite Simple Groups, Number 10: Part V, Chapters 9-17: Theorem C6 and Theorem C4*, Case A, Mathematical Surveys and Monographs, vol. 
40, Providence, R.I.: American Mathematical Society, ISBN 978-1-4704-7553-6, MR 4656413 Mark Ronan, Symmetry and the Monster, ISBN 978-0-19-280723-6, Oxford University Press, 2006. (Concise introduction for lay reader) Marcus du Sautoy, Finding Moonshine, Fourth Estate, 2008, ISBN 978-0-00-721461-7 (another introduction for the lay reader. American edition published in 2009 as Symmetry: A Journey into the Patterns of Nature) Ron Solomon (1995) "On Finite Simple Groups and their Classification," Notices of the American Mathematical Society. (Not too technical and good on history.) Solomon, Ronald (2001), "A brief history of the classification of the finite simple groups" (PDF), Bulletin of the American Mathematical Society, New Series, 38 (3): 315–352, doi:10.1090/S0273-0979-01-00909-0, ISSN 0002-9904, MR 1824893, archived (PDF) from the original on 2001-06-15 – article won Levi L. Conant prize for exposition Thompson, John G. (1984), "Finite nonsolvable groups", in Gruenberg, K. W.; Roseblade, J. E. (eds.), Group theory. Essays for Philip Hall, Boston, MA: Academic Press, pp. 1–12, ISBN 978-0-12-304880-6, MR 0780566 Wilson, Robert A. (2009), The finite simple groups, Graduate Texts in Mathematics, vol. 251, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-84800-988-2, ISBN 978-1-84800-987-5, Zbl 1203.20012 == External links == ATLAS of Finite Group Representations. Searchable database of representations and other data for many finite simple groups. Elwes, Richard, "An enormous theorem: the classification of finite simple groups," Plus Magazine, Issue 41, December 2006. For laypeople. Madore, David (2003) Orders of nonabelian simple groups. Archived 2005-04-04 at the Wayback Machine Includes a list of all nonabelian simple groups up to order 10^10. In what sense is the classification of all finite groups “impossible”? Ornes, Stephen (2015). 
"Researchers Race to Rescue the Enormous Theorem before Its Giant Proof Vanishes". Scientific American. 313 (1): 68–75. doi:10.1038/scientificamerican0715-68. PMID 26204718. "Where are the second- (and third-)generation proofs of the classification of finite simple groups up to?". MathOverflow. (Last updated in February 2024)
Wikipedia:Claude Dechales#0
Claude François Milliet Dechales (1621 – 28 March 1678) was a French Jesuit priest and mathematician. He published a treatise on mathematics and a translation of the works of Euclid. == Biography == Born in Chambéry, Savoy, Claude Dechales (De Challes) was the son of Hector Milliet de Challes (1568–1642), first president of the Sovereign Senate of Savoy. He entered the Jesuits at the age of fifteen on 21 September 1636. He participated in the French Jesuit mission to the Ottoman Empire and taught literature in the schools of his order for nine years. Back in France, Louis XIV had him appointed professor of hydrography in Marseille, where he taught navigation and military engineering. He then moved to the Collège de la Trinité in Lyon in 1674, where he simultaneously taught philosophy (4 years), mathematics (6 years) and theology (5 years). He published in Lyon his famous Cursus seu Mundus Mathematicus. At the end of his life, Dechales taught mathematics in a college in Turin in Piedmont, where he died on 28 March 1678. == Publications == 1660–1672 : Huict livres des Elemens d'Euclide rendus plus faciles par le R.P. Claude François Milliet Dechales, de la Compagnie de Jésus (B. Coral, Lyon). 1674 : a second edition of Euclid, Elementorum Euclidis libri octo, ad faciliorem captum accommodati (Lyon, Anisson). 1674 : Cursus seu Mundus Mathematicus, Ex officina anissonina (Anisson). 1677 : L'art de fortifier, de défendre et d'attaquer les places, suivant les méthodes françoises, hollandoises, italiennes et espagnoles (Paris), and L'art de naviger demontré par principes et confirmé par plusieurs observations tirées de l'experience (Paris). 1682 : Traité du mouvement local et du ressort dans lequel, leur nature, & leurs causes, sont curieusement recherchées, & ou les loix qu'ils observent dans l'acceleration & les pendules, & encore dans la percussion & la reflexion des corps, sont solidement establies, à Lyon chez Anisson et Posuel. 
1685 : Euclid translated into English under the title The elements of Euclid explain'd, in a new, but most easie method : together with the use of every proposition through all parts of the mathematicks. == References == == Sources == MacDonnell, Joseph (1989). "Publications in Geometry". Jesuit geometers : a study of fifty-six prominent Jesuit geometers during the first two centuries of Jesuit history. Saint Louis: Institute of Jesuit Sources. ISBN 0-912422-94-7. Archived from the original on 19 August 2018. Retrieved 12 March 2017. Smith, David Eugene (1958). History of Mathematics. Courier Corporation. p. 386. ISBN 978-0-486-20429-1. Nardi, Antonio, "An eccentric adherent of Galileo. The jesuit François Milliet Dechales between Galileo and Newton", Archives internationales d'histoire des sciences 49 (142), January 1999 Le scholasticon Archived 2 November 2013 at the Wayback Machine, by Jacob Schmutz. Vincent Jullien : les Eléments de géométrie de Gilles Personne de Roberval,
Wikipedia:Claude Gaspar Bachet de Méziriac#0
Claude Gaspar Bachet Sieur de Méziriac (9 October 1581 – 26 February 1638) was a French mathematician and poet born in Bourg-en-Bresse, at that time belonging to the Duchy of Savoy. He wrote Problèmes plaisans et délectables qui se font par les nombres, Les éléments arithmétiques, and a Latin translation of the Arithmetica of Diophantus (the very translation where Fermat wrote a margin note about Fermat's Last Theorem). He also discovered means of solving indeterminate equations using continued fractions, a method of constructing magic squares, and a proof of Bézout's identity. == Biography == Claude Gaspar Bachet de Méziriac was born in Bourg-en-Bresse on 9 October 1581. By the time he reached the age of six, both his mother (Marie de Chavanes) and his father (Jean Bachet) had died. He was then looked after by the Jesuit Order. For a year in 1601, Bachet was a member of the Jesuit Order (he left due to an illness). Bachet lived a comfortable life in Bourg-en-Bresse. He married Philiberte de Chabeu in 1620 and had seven children. Bachet was a pupil of the Jesuit mathematician Jacques de Billy at the Jesuit College in Rheims. They became close friends. Bachet wrote the Problèmes plaisans et délectables qui se font par les nombres, of which the first edition was issued in 1612; a second and enlarged edition was brought out in 1624. This contains an interesting collection of arithmetical tricks and questions, many of which are quoted in W. W. Rouse Ball's Mathematical Recreations and Essays. He also wrote Les éléments arithmétiques, which exists in manuscript; and a translation, from Greek to Latin, of the Arithmetica of Diophantus (1621). It was this very translation in which Fermat wrote his famous margin note claiming that he had a proof of Fermat's Last Theorem. The same text renders Diophantus' term παρισότης as adaequalitas, which became Fermat's technique of adequality, a pioneering method of infinitesimal calculus. 
Bachet was the earliest writer who discussed the solution of indeterminate equations by means of continued fractions. He also did work in number theory and found a method of constructing magic squares. In the second edition of his Problèmes plaisants (1624) he gives a proof of Bézout's identity (as proposition XVIII), 142 years before Bézout published it. He was elected a member of the Académie française in 1635. == Notes == == References == == Further reading == Schaaf, William (1970). "Bachet de Méziriac, Claude-Gaspar". Dictionary of Scientific Biography. Vol. 1. New York: Charles Scribner's Sons. pp. 367–368. ISBN 0-684-10114-9. Ad Meskens (2010), Travelling Mathematics: The Fate of Diophantos' Arithmetic (Science Networks. Historical Studies Book 41). == External links == Diophantus Alexandrinus, Pierre de Fermat, Claude Gaspard Bachet de Meziriac, Diophanti Alexandrini Arithmeticorum libri 6, et De numeris multangulis liber unus. Cum comm. C(laude) G(aspar) Bacheti et observationibus P(ierre) de Fermat. Acc. doctrinae analyticae inventum novum, coll. ex variis eiu. Tolosae 1670, doi:10.3931/e-rara-9423. Problèmes plaisans et délectables, qui se font par les nombres – digital copy at the Library of Congress
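Bézout's identity, which Bachet proved long before Bézout published it, is constructive: for integers a and b, the extended Euclidean algorithm produces x and y with ax + by = gcd(a, b), and the same recursion underlies the continued-fraction treatment of linear indeterminate (Diophantine) equations. A minimal modern sketch (the function name and sample values are our own, not Bachet's notation):

```python
def extended_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b): Bézout's identity.
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # Back-substitute: b*x + (a % b)*y == g  =>  a*y + b*(x - (a // b)*y) == g.
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g, x, y)  # 2 -9 47, since 240*(-9) + 46*47 == 2
```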
Wikipedia:Claude Mylon#0
Claude Mylon (1618–1660) was a French mathematician and member of the Académie Parisienne and the Académie des Sciences. == References ==
Wikipedia:Claude Sabbah#0
Claude Sabbah (born 30 October 1954) is a French mathematician and researcher at École Polytechnique. == Education == Sabbah received his doctoral degree from Paris Diderot University in 1976 under the supervision of Lê Dũng Tráng. == Selected publications == === Books === Introduction to Stokes Structures, Springer Verlag, 2012, ISBN 978-3-642-31694-4 Polarizable twistor D-modules, Société Mathématique de France, 2005 ISBN 2-856-29174-0 With Jean-Michel Bony, Bernard Malgrange and Laurent Schwartz : Distributions. Dans le sillage de Laurent Schwartz, Éditions de l'École polytechnique, 2003, ISBN 2-7302-1095-4 Déformations isomonodromiques et variétés de Frobenius, EDP Sciences, 2002, ISBN 2-86883-534-1 English translation : Isomonodromic Deformations and Frobenius Manifolds, Springer Verlag, 2008, ISBN 1-84800-053-7 Équations différentielles à points singuliers irréguliers et phénomène de Stokes en dimension 2, Société mathématique de France, 2000, ISBN 2-856-29085-X == References == == External links == Personal website Claude Sabbah on Google Scholar
Wikipedia:Claudia Cenedese#0
Claudia Cenedese (born 1971) is an Italian physical oceanographer and applied mathematician whose research focuses on the circulation and flow of water in the ocean, and on the theoretical fluid dynamics needed to model these flows, including phenomena such as mesoscale vortices, buoyancy-driven flow, coastal currents, dense overflows, and the melting patterns of icebergs. She is a senior scientist at the Woods Hole Oceanographic Institution. == Education and career == Cenedese's father, Antonio Cenedese, is also a fluid dynamics researcher at Sapienza University of Rome, and as a child, she became fascinated by the motion of water in his experimental tanks. She earned a laurea in environmental engineering from Sapienza University in 1995, and completed a Ph.D. in applied mathematics and theoretical physics at the University of Cambridge in 1998, under the supervision of Paul Linden. She came to the Woods Hole Oceanographic Institution in 1998 as a postdoctoral scholar working with John A. Whitehead. She remained there for the rest of her career, becoming an assistant scientist in 2000 and obtaining a permanent research staff position in 2004. She was promoted to senior scientist in 2015. At Woods Hole, she established an exchange program for Italian students to visit, and has been active in mentoring women in oceanography. Since 2015, she has also held an adjunct faculty position in the Department of Civil and Natural Resources Engineering of the University of Canterbury in New Zealand. == Recognition == In 2018, Cenedese was elected as a Fellow of the American Physical Society (APS), after a nomination from the APS Division of Fluid Dynamics, "for fundamental contributions to the understanding of fluid-dynamical processes in the world's oceans, particularly turbulent entrainment into overflows and the melting of glaciers and icebergs, obtained through elegant and physically insightful laboratory experiments". 
== References == == External links == Home page Claudia Cenedese publications indexed by Google Scholar
Wikipedia:Claudia Malvenuto#0
Claudia Malvenuto (born 1965) is an Italian mathematician, one of the namesakes of the Malvenuto–Poirier–Reutenauer Hopf algebra of permutations. She is an associate professor of mathematics at Sapienza University of Rome. == Education == Malvenuto was born in Turin. After earning a laurea in mathematics from Sapienza University of Rome in 1988, she went to Canada for doctoral study at Queen's University, but soon transferred to the Université du Québec à Montréal, where she completed her Ph.D. in 1994. Her dissertation, Produits et coproduits des fonctions quasisymétriques et de l'algèbre des descentes [Products and co-products of quasisymmetric functions and of the algebra of descents], was supervised by Christophe Reutenauer. She won the Governor General's Academic Medal in Gold for 1994, for the best doctoral thesis in the sciences in Canada for that year. == Career == After completing her doctorate, she worked as a high school mathematics teacher in Rome from 1994 to 2000, while also holding a postdoctoral research position at Roma Tre University from 1995 to 1997. In 2000 she obtained an academic position as an assistant professor of computer science at Sapienza University, moving to an assistant professorship in mathematics there in 2012 and becoming an associate professor of mathematics in 2016. == References == == External links == Claudia Malvenuto publications indexed by Google Scholar
Wikipedia:Claudia Polini#0
Claudia Polini is an Italian mathematician specializing in commutative algebra. She is the Glynn Family Honors Collegiate Professor of Mathematics at the University of Notre Dame, and directs the Center of Mathematics at Notre Dame. == Education and career == Polini's mother was a school teacher, and before Polini reached school age herself she was already solving the mathematics problems in her mother's lessons. She graduated from the University of Padua in 1990, and completed her Ph.D. at Rutgers University in 1995, with Wolmer Vasconcelos as her doctoral advisor. Her dissertation was Studies on Singularities. After postdoctoral research at Michigan State University, she became an assistant professor at Hope College in Michigan in 1998, then moved to the University of Oregon in 2000 and to Notre Dame in 2001. == Contributions == Polini is currently an Associate Editor for the Journal of Commutative Algebra, Proceedings of the American Mathematical Society, and Journal of Algebra. She co-authored the research monograph A Study of Singularities on Rational Curves Via Syzygies with David Cox, Andrew R. Kustin, and Bernd Ulrich. == Recognition == At Notre Dame, Polini became the Rev. John Cardinal O'Hara, C.S.C Professor of Mathematics in 2010, and the Glynn Family Honors Professor in 2018. She was included in the 2019 class of fellows of the American Mathematical Society "for contributions to commutative algebra and for service to the profession". == References == == External links == Home page
Wikipedia:Claudia Sagastizábal#0
Claudia Alejandra Sagastizábal is an applied mathematician known for her research in convex optimization and energy management, and for her co-authorship of the book Numerical Optimization: Theoretical and Practical Aspects. She is a researcher at the University of Campinas in Brazil. Since 2015 she has been editor-in-chief of the journal Set-Valued and Variational Analysis. == Education and career == Sagastizábal earned a degree in mathematics, astronomy and physics from the National University of Córdoba in Argentina in 1984. She completed a PhD in 1993 at Pantheon-Sorbonne University in France; her dissertation, Quelques méthodes numériques d'optimisation : application en gestion de stocks, was supervised by Claude Lemaréchal. While in France, she worked with Électricité de France on optimization problems involving electricity generation, a topic that has continued in her research since that time. She moved to Brazil in 1997. Before joining the University of Campinas in 2017, she was also affiliated with the Instituto Nacional de Matemática Pura e Aplicada and the French Institute for Research in Computer Science and Automation, among other institutions. == Recognition == Sagastizábal was an invited speaker at the 8th International Congress on Industrial and Applied Mathematics in 2015. She was also an invited speaker on control theory and mathematical optimization at the 2018 International Congress of Mathematicians. She is a SIAM Fellow, in the 2024 class of fellows, elected "for contributions to non-smooth optimization and applications to engineering, and numerical methods for optimization". == References == == External links == Official website "Claudia Sagastizábal's Home Page". 11 June 2007. Archived from the original on 17 September 2019. Retrieved 11 November 2023.
Wikipedia:Claudio Baiocchi#0
Claudio Baiocchi (August 20, 1940 – December 14, 2020) was an Italian mathematician. He was a professor at the University of Pavia and, from the 1990s, a professor of higher mathematical analysis at the Sapienza University of Rome. He worked on partial differential equations and the calculus of variations. In 1971 he applied his mathematical methods to a free boundary problem in the filtration of liquids through porous media, with applications in civil engineering (by using the Baiocchi transform). His later research dealt with, among other topics, the Collatz problem, cellular automata and Turing machines. In 1970 Baiocchi received the Caccioppoli Prize. In 1974 he was an invited speaker at the International Congress of Mathematicians in Vancouver. He was elected to the Accademia dei XL and the Accademia dei Lincei. == Selected publications == with V. Comincioli, E. Magenes & G. A. Pozzi: "Free boundary problems in the theory of fluid flow through porous media: existence and uniqueness theorems", Annali di Matematica Pura ed Applicata, 97 (1), 1973: 1–82, doi:10.1007/bf02414909, S2CID 119934268. "Free boundary problems in the theory of fluid flow through porous media", Proc. Intern. Congress of Math., 2, 1974. with Antonio Capelo: Variational and quasivariational inequalities. Applications to free boundary problems. Chichester/New York, Wiley 1984. with G. Buttazzo, F. Gastaldi & F. Tomarelli: "General existence theorems for unilateral problems in continuum mechanics", Archive for Rational Mechanics and Analysis, 100 (2), 1988: 149–189, doi:10.1007/bf00282202, S2CID 121906311. with F. Brezzi & L. D. Marini: Stabilization of Galerkin methods and applications to domain decomposition, pp. 343–355, Berlin/Heidelberg: Springer, 1992. 
as editor with Jacques-Louis Lions: Boundary value problems for partial differential equations and applications. Dedicated to Enrico Magenes, Elsevier-Masson, 1993. with F. Brezzi & L. P. Franca: "Virtual bubbles and Galerkin-least-squares type methods (Ga.L.S.)". Computer Methods in Applied Mechanics and Engineering. 105 (1), 1993: 125–141. doi:10.1016/0045-7825(93)90119-i. == References == == External links == Claudio Baiocchi at the Mathematics Genealogy Project
|
Wikipedia:Claudio Procesi#0
|
Claudio Procesi (born 31 March 1941 in Rome) is an Italian mathematician, known for his works in algebra and representation theory. == Career == Procesi studied at the Sapienza University of Rome, where he received his degree (laurea) in 1963. In 1966 he graduated from the University of Chicago, advised by Israel Herstein, with a thesis titled "On rings with polynomial identities". From 1966 he was an assistant professor at the University of Rome, from 1970 an associate professor at the University of Lecce, and from 1971 at the University of Pisa. From 1973 he was a full professor in Pisa, and in 1975 he became an ordinary professor at the Sapienza University of Rome. He was a visiting scientist at Columbia University (1969–1970), the University of California, Los Angeles (1973/74), the Instituto Nacional de Matemática Pura e Aplicada, the Massachusetts Institute of Technology (1991), the University of Grenoble, Brandeis University (1981/82), the University of Texas at Austin (1984), the Institute for Advanced Study (1994), the Mathematical Sciences Research Institute (1992, among other years), the International Centre for Theoretical Physics in Trieste, and the École Normale Supérieure. Procesi studies noncommutative algebra, algebraic groups, invariant theory, enumerative geometry, infinite-dimensional algebras and quantum groups, polytopes, braid groups, cyclic homology, the geometry of orbits of compact groups, and arrangements of subspaces and tori. Procesi proved that the polynomial invariants of $n\times n$ matrices over a field $K$ all come from the Cayley–Hamilton theorem, which says that a square matrix satisfies its own characteristic polynomial. In 1981 he was awarded the Medal of the Accademia dei Lincei, of which he has been a member since 1987. In 1986 he received the Feltrinelli Prize in mathematics. In 1978 he was an invited speaker at the International Congress of Mathematicians (ICM) in Helsinki. 
From 2007 to 2010 he was a vice-president of the International Mathematical Union. He was an editor of the Duke Mathematical Journal, the Journal of Algebra, Communications in Algebra, and Advances in Mathematics. Furthermore, he was on the committee of the Abel Prize and the algebra committee for the ICM from 1986 to 1994. == Personal life == Procesi is the father of Michela Procesi, also a mathematician. == Works == === Articles === De Mari, Filippo; Procesi, Claudio; Shayman, Mark A. (1992). "Hessenberg varieties". Transactions of the American Mathematical Society. 332 (2): 529–534. doi:10.1090/S0002-9947-1992-1043857-6. MR 1043857. De Concini, Corrado; Kac, Victor G.; Procesi, Claudio (1992). "Quantum coadjoint action". Journal of the American Mathematical Society. 5 (1): 151–189. doi:10.1090/S0894-0347-1992-1124981-X. MR 1124981. Le Bruyn, Lieven; Procesi, Claudio (1990). "Semisimple representations of quivers" (PDF). Transactions of the American Mathematical Society. 317 (2): 585–598. doi:10.1090/S0002-9947-1990-0958897-0. De Concini, C.; Lusztig, G.; Procesi, C. (1988). "Homology of the zero-set of a nilpotent vector field on a flag manifold". Journal of the American Mathematical Society. 1: 15–34. doi:10.1090/S0894-0347-1988-0924700-2. Procesi, Claudio (1976). "The invariants of $n\times n$ matrices". Bulletin of the American Mathematical Society. 82 (6): 891–892. doi:10.1090/S0002-9904-1976-14196-1. 
=== Books === 2017: (with Corrado de Concini) The Invariant Theory of Matrices, American Mathematical Society 2010: (with Corrado de Concini) Topics in Hyperplane Arrangements, Polytopes and Box-Splines, Springer MR2722776 2006: Lie groups: An approach through invariants and representations, Springer, Universitext 1996: (with Hanspeter Kraft) Classical Invariant Theory 1993: (with Corrado de Concini) Quantum groups, Lecture Notes in Mathematics, Springer MR1288995 1993: Rings with polynomial identities, Dekker MR0366968 1983: A primer on invariant theory, Brandeis University == See also == Hessenberg variety Hodge algebra Wonderful compactification Procesi bundle == References == The original article was a translation (Google) of the corresponding German article. == External links == Homepage Claudio Procesi at the Mathematics Genealogy Project
|
Wikipedia:Clearing denominators#0
|
In mathematics, the method of clearing denominators, also called clearing fractions, is a technique for simplifying an equation that equates two expressions, each of which is a sum of rational expressions – including simple fractions. == Example == Consider the equation $\frac{x}{6}+\frac{y}{15z}=1$. The smallest common multiple of the two denominators 6 and 15z is 30z, so one multiplies both sides by 30z: $5xz+2y=30z$. The result is an equation with no fractions. The simplified equation is not entirely equivalent to the original: when we substitute y = 0 and z = 0 in the last equation, both sides simplify to 0, so we get 0 = 0, a mathematical truth. But the same substitution applied to the original equation results in x/6 + 0/0 = 1, which is mathematically meaningless. == Description == Without loss of generality, we may assume that the right-hand side of the equation is 0, since an equation E1 = E2 may equivalently be rewritten in the form E1 − E2 = 0. So let the equation have the form $\sum_{i=1}^{n}\frac{P_i}{Q_i}=0$. The first step is to determine a common denominator D of these fractions – preferably the least common denominator, which is the least common multiple of the Qi. This means that each Qi is a factor of D, so D = RiQi for some expression Ri that is not a fraction. Then $\frac{P_i}{Q_i}=\frac{R_iP_i}{R_iQ_i}=\frac{R_iP_i}{D}$, provided that RiQi does not assume the value 0 – in which case D also equals 0. So we now have $\sum_{i=1}^{n}\frac{P_i}{Q_i}=\sum_{i=1}^{n}\frac{R_iP_i}{D}=\frac{1}{D}\sum_{i=1}^{n}R_iP_i=0.$ 
Provided that D does not assume the value 0, the latter equation is equivalent to $\sum_{i=1}^{n}R_iP_i=0$, in which the denominators have vanished. As shown by the provisos, care has to be taken not to introduce zeros of D – viewed as a function of the unknowns of the equation – as spurious solutions. == Example 2 == Consider the equation $\frac{1}{x(x+1)}+\frac{1}{x(x+2)}-\frac{1}{(x+1)(x+2)}=0$. The least common denominator is x(x + 1)(x + 2). Following the method described above results in $(x+2)+(x+1)-x=0$. Simplifying this further gives the solution x = −3. It is easily checked that none of the zeros of x(x + 1)(x + 2) – namely x = 0, x = −1, and x = −2 – is a solution of the final equation, so no spurious solutions were introduced. == References == Richard N. Aufmann; Joanne Lockwood (2012). Algebra: Beginning and Intermediate (3rd ed.). Cengage Learning. p. 88. ISBN 978-1-133-70939-8.
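Example 2 above can be replayed computationally. The following plain-Python sketch is our own illustration (the helper names `original` and `cleared` are not from the article); it checks the solution of the cleared equation and shows why the zeros of the common denominator must be excluded:

```python
from fractions import Fraction

def original(x):
    # Left-hand side of Example 2; raises ZeroDivisionError at the
    # zeros of the common denominator x(x + 1)(x + 2).
    return (Fraction(1, x * (x + 1)) + Fraction(1, x * (x + 2))
            - Fraction(1, (x + 1) * (x + 2)))

def cleared(x):
    # After multiplying through by x(x + 1)(x + 2), the equation
    # becomes (x + 2) + (x + 1) - x = 0, i.e. x + 3 = 0.
    return (x + 2) + (x + 1) - x

# x = -3 satisfies both the cleared and the original equation.
assert cleared(-3) == 0
assert original(-3) == 0

# The zeros of the denominator are outside the domain of the original
# equation, so they could never be accepted as solutions even if they
# happened to satisfy the cleared form.
try:
    original(0)
except ZeroDivisionError:
    print("x = 0 is not in the domain of the original equation")
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point ambiguity in the check.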
|
Wikipedia:Clemency Montelle#0
|
Clemency Montelle (born 8 July 1977) is a New Zealand historian of mathematics known for her research on Indian mathematics and Indian astronomy. She is a professor of mathematics at the University of Canterbury, and a fellow of the New Zealand India Research Institute of the Victoria University of Wellington. == Education == Montelle is originally from Christchurch. She earned first class honours in mathematics and classical studies at the University of Canterbury in 1999, and completed a master's degree there in 2000. It was not until the fourth year of her studies that, finding a copy of Euclid in the original Greek, she realized that she could reconcile her two interests by working in the history of mathematics. She became a Fulbright Scholar at Brown University, where she learned cuneiform, Sanskrit, and Arabic. She completed a Ph.D. in the history of mathematics there in 2005; at Brown, her faculty mentors included David Pingree, Alice Slotsky, and Kim Plofker. == Service == Montelle was vice president of the Commission for the History of Ancient and Medieval Astronomy for the 2017–2021 term. == Books == Montelle is the author of the book Chasing Shadows: Mathematics, Astronomy, and the Early History of Eclipse Reckoning (Johns Hopkins University Press, 2011). With Kim Plofker, she is the coauthor of Sanskrit Astronomical Tables (Springer, 2019). == References == == External links == Home page
|
Wikipedia:Clement W. H. Lam#0
|
Clement Wing Hong Lam (Chinese: 林永康) is a Canadian mathematician, specializing in combinatorics. He is famous for the computer proof, with Larry Thiel and S. Swiercz, of the nonexistence of a finite projective plane of order 10. Lam earned his PhD in 1974 under Herbert Ryser at Caltech with the thesis Rational G-Circulants Satisfying the Matrix Equation $A^{2}=dI+\lambda J$. He is a professor at Concordia University in Montreal. In 2006 he received the Euler Medal. In 1992 he received the Lester Randolph Ford Award for the article The search for a finite projective plane of order 10. The eponymous Lam's problem is equivalent to finding a finite projective plane of order 10 or finding 9 orthogonal Latin squares of order 10. == See also == Experimental mathematics == References == == External links == Homepage search on author CWH Lam from Google Scholar
|
Wikipedia:Cleo (mathematician)#0
|
Cleo was the pseudonym of an anonymous mathematician active on the mathematics Stack Exchange from 2013 to 2015, who became known for providing precise answers to complex mathematical integration problems without showing any intermediate steps. Due to the extraordinary accuracy and speed of the provided solutions, mathematicians debated whether Cleo was an individual genius, a collective pseudonym, or even an early artificial intelligence system. During the poster's active period, Cleo posted 39 answers to advanced mathematical questions, primarily focusing on complex integration problems that had stumped other users. Cleo's answers were characterized by being consistently correct while providing no explanation of methodology, often appearing within hours of the original posts. The account claimed to be limited in interaction due to an unspecified medical condition. The mystery surrounding Cleo's identity and mathematical abilities generated significant interest in the mathematical community, with users attempting to analyze solution patterns and writing style for clues. Some compared Cleo to historical mathematical figures like Srinivasa Ramanujan, known for providing solutions without conventional proofs. In 2025, Cleo was revealed to be Vladimir Reshetnikov, a software developer originally from Uzbekistan. == History == === Background === According to Cleo's mathematics Stack Exchange (also known as Math.SE) profile, she presented herself as a female mathematician with an undisclosed medical condition that limited her ability to engage in extended discussions or provide detailed explanations. Her profile bio stated: "My real name is Cleo, I'm female. I have a medical condition that makes it very difficult for me to engage in conversations, or post long answers, sorry for that. I like math and do my best to be useful at this site, although I realize my answers might be not useful for everyone." 
=== Activity on Stack Exchange === Between November 2013 and December 2015, Cleo posted 39 answers to advanced mathematical problems, primarily involving complicated integration. Cleo's first notable contribution came on 11 November 2013, solving a particularly difficult integral that had stumped other users. The solution was simply stated by Cleo four and a half hours later as $I=4\pi \operatorname{arccot}\sqrt{\varphi}$, where $\varphi$ is the golden ratio. The answer included only a hyperlink defining the golden ratio, with no supporting work. The Math.SE community questioned the value of answers without proofs, which contradicted the platform's goal of building a library of high-quality, educational content. Two days later, Ron Gordon, a patent agent and former physicist, provided a comprehensive proof validating Cleo's solution. His approach involved reducing an eighth-degree polynomial to a quadratic equation through symmetry analysis, deriving the golden ratio from the simplified expression, thereby confirming Cleo's original answer. Gordon's detailed solution gained significant recognition, earning over 1,000 upvotes on Stack Exchange and later recognition on the subreddit /r/math as the "Master of Integration". While Cleo's answers were mathematically correct, they were controversial within the community for failing to meet the platform's standards for educational value. Cleo's self-presentation on Stack Exchange evolved over time. In her earlier profile from 2013, she quoted Srinivasa Ramanujan's famous description of receiving mathematical insights through dreams: "While asleep, I had an unusual experience. There was a red screen formed by flowing blood, as it were. I was observing it. Suddenly a hand began to write on the screen. I became all attention. That hand wrote a number of elliptic integrals. They stuck to my mind. As soon as I woke up, I committed them to writing." 
Following this quote, Cleo added her own philosophical perspective: "Remember, you are not locked into a single axiom system. You may invent your own, whenever you wish—just use your intuition and imagination." By the time the account went dormant, the profile had changed to the straightforward biographical statement mentioned above. == Identity == Some speculated that Cleo was a famous mathematician, like Terence Tao (though Tao himself denied this in an email correspondence), Grigori Perelman, Stephen Hawking, or Maryam Mirzakhani. Allison Parshall of Scientific American compared Cleo's posts to the work of Indian mathematician Srinivasa Ramanujan, who similarly came up with complicated mathematical formulas without explanation. Some suspected that Cleo was an artificial intelligence that specialized in solving complicated integrals. In late 2023, a Reddit user known as "evilscientist311" conducted a comprehensive analysis of Cleo's activity patterns and interactions with other Stack Exchange accounts. This investigation identified several suspicious profiles that frequently interacted with Cleo, including those of Vladimir Reshetnikov and Laila Podlesny. Notably, Laila Podlesny had posted the question about a complex integral that first brought Cleo widespread attention in November 2013. Despite these connections, the investigator concluded that Cleo was likely operated by a group of university friends who collaborated on mathematical problems and abandoned the account after graduation. In January 2025, YouTuber Joe McCann released a video investigating Cleo's identity, building on earlier Reddit investigations that had noted connections between several Stack Exchange accounts, but concluded there was insufficient evidence to identify Cleo with certainty. 
A viewer of McCann's video discovered a link between Cleo and other accounts through email recovery information: When attempting password recovery on Laila Podlesny's Gmail account, they discovered that the backup email address matched the beginning of Vladimir Reshetnikov's email. When McCann contacted Reshetnikov with this evidence, Reshetnikov confirmed his identity as Cleo. Vladimir Reshetnikov (born 1979) studied theoretical physics at the National University of Uzbekistan in the late 1990s. He worked as a software developer in Tashkent before moving to the United States, where he was employed by Microsoft for several years. He is an active contributor to the Mathematics Stack Exchange. On 8 February 2025, Reshetnikov posted a Base64-encoded message on his Stack Exchange profile that, when decoded, read "Creator of Cleo." Similar encoded messages appeared on related accounts, including those of "Laila Podlesny" and "Oksana Gimmel," confirming these were also alternate accounts created by Reshetnikov. Reshetnikov explained that he created the Cleo persona to generate interest in mathematical problems that received little attention on the forum. According to him, the mysterious nature of the account and its terse solutions were intended to encourage other users to develop their own problem-solving approaches. == See also == 15 (programmer) Nicolas Bourbaki Satoshi Nakamoto Cunningham's Law == References == == External links == Cleo’s profile page at Mathematics Stack Exchange
|
Wikipedia:Cleota Gage Fry#0
|
Cleota Gage Fry (December 30, 1910 – July 1, 2001) was an American mathematician, physicist and university professor. She was one of the few women to earn a PhD in mathematics before World War II and was only the second person at Purdue to earn a doctorate in mathematics. == Biography == Fry was born December 30, 1910 in Shoshone, Idaho and was the oldest of four children of Coral Gage and Holmes L. Fry. She attended elementary school in Portland, Oregon, and graduated from Roosevelt High School in 1929. She borrowed funding from a lawyer friend to attend Reed College and graduated with a bachelor's degree in physics in 1933 with an undergraduate thesis titled, Analysis of Textile Fibers and Fabrics. Following graduation, she boarded a train for Chicago to meet Vivian Annabelle Johnson, a friend from Reed College who had become a member of the physics department at Purdue University. Fry and Johnson settled in Lafayette, Indiana. For her first part-time job at Purdue, she joined Dr. R. B. Abbott to work on his research on violins. She said later, "I registered for 10 hours of course work and thereby launched my graduate study. I continued helping on the research program to find out what makes a violin good." In 1936 Fry received a master's degree from Purdue, and decided to study for her PhD in mathematics because "there was nothing else to do." She completed her doctoral degree with a minor in physics in 1939 supervised by Howard Kibble Hughes. The title of her dissertation was: Asymptotic Developments of Certain Integral Functions. Her PhD in 1939 was only the second one awarded in mathematics by Purdue. From 1939 until her retirement, she taught mathematics and physics at various times at Purdue University in West Lafayette, Indiana. She was an assistant teacher in mathematics until 1940, a lecturer in physics until 1945, an instructor until 1947, an assistant professor until 1955 and an associate professor until 1977. 
She was also assistant dean at the School of Science from 1952 to 1961. She was described by a neighbor as: "not even five feet tall, with curly hair and delicate features, a 'raving beauty.' [The neighbor] said Fry was sweet, happy, and laughed a lot." Fry died July 1, 2001, in Lafayette at age 90 following a heart attack. In her will, she bequeathed funds for the need-based Cleota Gage Fry Scholarship, which is still available to students attending Reed College. == Memberships == According to Green, Fry was active in several organizations. American Mathematical Society Mathematical Association of America American Physical Society Sigma Xi == Selected publications == 1942: with H. K. Hughes: Asymptotic developments of certain integral functions. Duke Math. J. 9. 1952: with W. L. Ayres and H. F. S. Jonah: General College Mathematics. New York, Toronto, and London: McGraw-Hill Book Co. 1943: with H. K. Hughes: Asymptotic developments of certain integral functions. Bull. Amer. Math. Soc. 49:45 #33. Presented by title to cancelled meeting of the AMS, New York City, 27–28 Dec 1942. 1946: with H. K. Hughes: Asymptotic developments of types of generalized Bessel functions. Bull. Amer. Math. Soc. 52:818 #297. Presented by H. K. Hughes to the AMS, Ithaca, New York, 22 Aug 1946. == References ==
|
Wikipedia:Clive Selwyn Davis#0
|
Clive Selwyn Davis (15 April 1916 – 29 October 2009) was a Professor in Mathematics at the University of Queensland and a veteran of World War II. He took his PhD in mathematics at the University of Cambridge, and upon his return to Australia worked to improve the study of mathematics at the University of Queensland over the next 30 years. == Education == Clive Selwyn Davis was born in Sydney, Australia. He and his family of four siblings lived in the Strathfield area. He attended Summer Hill and Homebush primary schools and then went to Sydney Technical High School. He enrolled at the University of Sydney, at first being attached to Engineering, before transferring to Science. He took a First Class Honours degree in Mathematics in 1937, followed by First Class Honours in Physics in 1938 and an M.Sc. in 1939. After World War II, he obtained a scholarship to Trinity College, Cambridge and was awarded his PhD in number theory in 1949, under the supervision of Louis Mordell. == Military == At the University of Sydney, Davis joined the Air Squadron there in 1936. He trained as a pilot in the Royal Australian Air Force Reserve. After completing his M.Sc. he travelled to England on a CSIR post-graduate scholarship to conduct research on aircraft instruments and aerodynamics at the Royal Aircraft Establishment in Farnborough, intending to return to Australia and work at the Aeronautical Research Laboratories. When World War II began, he volunteered in the RAF, served with distinction in two tours of the Middle East and Europe, and was awarded a Distinguished Flying Cross in 1941. As a scientist and pilot he was involved in developing and deploying radar. He worked for the Air Ministry in a new special unit which monitored and oversaw developments deployed in Fighter and Bomber Commands. He later transferred to the RAAF, and returned to Australia. 
He would serve on the Air Staff of the RAAF, and was mentioned in the official war history for his role in the development of operations research. In the last part of the war, he commanded 103 Squadron of Liberator heavy bombers in their operations from North Queensland and Papua New Guinea. He retired with the rank of Wing Commander. == Career == After the war, Davis took his PhD at Cambridge and resumed his career in mathematics. He lectured at the University of Bristol for seven years and was then appointed to the chair of Mathematics at the University of Queensland in 1956, following on from Henry James Priestley and Eugene Francis Simonds. He led the Mathematics Department there for 27 years, retiring in 1983. Davis campaigned for his new Department, finding that it was at least 20 years behind other universities he had attended. He fought for a new building for the Department, improving teaching with the introduction of a separate Honours stream and tutorial classes, and recruiting new staff. He fought for a separate Mathematics Library, which, when it was formed, was named in his honour. (It was later absorbed by the Dorothy Hill Engineering and Sciences Library at UQ in 1997. His name remains attached to the collection, as do those of Dorothy Hill and Thomas Parnell, whose Geology and Physics libraries respectively were absorbed by the new library.) He was a member of the University Senate for 10 years, and was active in setting up the FSSU superannuation scheme. He served on the University Research Committee for seven years, three as Chairman. He was President of the Royal Society of Queensland in 1966. He was Treasurer of the Australian Mathematical Society from 1956 to 1968, and its President from 1968 to 1970. He was a founding member of the Great Barrier Reef Committee and the Wildlife Preservation Society of Queensland. He published papers from 1945 until 1951, with work from his PhD featured in Acta Mathematica. He returned to scholarly publishing in 1978. 
Davis married Antoinette "Toni" Lowing in Melbourne in 1943; they had three children, and later adopted a son. Clive Davis died in 2009. == References ==
|
Wikipedia:Closed-form expression#0
|
In mathematics, an expression or equation is in closed form if it is formed with constants, variables, and a set of functions considered as basic and connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the basic functions that are allowed in closed forms are the nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. For example, if one adds polynomial roots to the basic functions, the functions that have a closed form are called elementary functions. The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series, and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression for this object; that is, an expression of this object in terms of previous ways of specifying it. == Example: roots of polynomials == The quadratic formula $x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$ is a closed form of the solutions to the general quadratic equation $ax^{2}+bx+c=0$. More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth roots and field operations $(+,-,\times ,/)$. In fact, field theory allows one to show that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it also has a closed form that does not involve these functions. There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). The size of these expressions increases significantly with the degree, limiting their usefulness. 
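The quadratic closed form can be evaluated directly in code. The following is a minimal Python sketch of ours (not part of the article), assuming real coefficients and a nonnegative discriminant; `quadratic_roots` is an illustrative name:

```python
import math

def quadratic_roots(a, b, c):
    """Evaluate the quadratic formula x = (-b ± sqrt(b² - 4ac)) / (2a)."""
    d = math.sqrt(b * b - 4 * a * c)  # assumes the discriminant is >= 0
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# x² - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 3 and 2.
print(quadratic_roots(1, -5, 6))  # (3.0, 2.0)
```

In contrast to a numerical root finder, this is an exact symbolic recipe: the same formula works for every choice of coefficients.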
In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and thus have no closed forms. A simple example is the equation $x^{5}-x-1=0$. Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals. == Symbolic integration == Symbolic integration consists essentially of the search for closed forms for antiderivatives of functions that are specified by closed-form expressions. In this context, the basic functions used for defining closed forms are commonly logarithms, the exponential function and polynomial roots. Functions that have a closed form in these basic functions are called elementary functions and include trigonometric functions, inverse trigonometric functions, hyperbolic functions, and inverse hyperbolic functions. The fundamental problem of symbolic integration is thus, given an elementary function specified by a closed-form expression, to decide whether its antiderivative is an elementary function, and, if it is, to find a closed-form expression for this antiderivative. For rational functions – that is, fractions of two polynomial functions – antiderivatives are not always rational fractions, but are always elementary functions that may involve logarithms and polynomial roots. This is usually proved with partial fraction decomposition. The need for logarithms and polynomial roots is illustrated by the formula $\int \frac{f(x)}{g(x)}\,dx=\sum _{\alpha \in \operatorname{Roots}(g(x))}\frac{f(\alpha )}{g'(\alpha )}\ln(x-\alpha ),$ which is valid if $f$ and $g$ are coprime polynomials such that $g$ is square-free and $\deg f<\deg g$. 
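As a worked instance of the formula above (our own illustration), take $f(x)=1$ and $g(x)=x^{2}-1$, with roots $\alpha =\pm 1$ and $g'(x)=2x$:

```latex
\int \frac{dx}{x^{2}-1}
  = \frac{f(1)}{g'(1)}\ln(x-1) + \frac{f(-1)}{g'(-1)}\ln(x+1)
  = \frac{1}{2}\ln(x-1) - \frac{1}{2}\ln(x+1).
```

Differentiating the right-hand side gives $\tfrac{1}{2}\bigl(\tfrac{1}{x-1}-\tfrac{1}{x+1}\bigr)=\tfrac{1}{x^{2}-1}$, confirming the antiderivative.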
== Alternative definitions == Changing the basic functions to include additional functions can change the set of equations with closed-form solutions. Many cumulative distribution functions cannot be expressed in closed form, unless one considers special functions such as the error function or gamma function to be basic. It is possible to solve the quintic equation if general hypergeometric functions are included, although the solution is far too complicated algebraically to be useful. For many practical computer applications, it is entirely reasonable to assume that the gamma function and other special functions are basic, since numerical implementations are widely available. == Analytic expression == This term is sometimes understood as a synonym for closed form (see "Wolfram MathWorld"), but this usage is contested (see "Math Stackexchange"). It is unclear to what extent the term is genuinely in use, as opposed to being a result of uncited earlier versions of this article. == Comparison of different classes of expressions == Closed-form expressions do not include infinite series or continued fractions; neither includes integrals or limits. Indeed, by the Stone–Weierstrass theorem, any continuous function on the unit interval can be expressed as a limit of polynomials, so any class of functions containing the polynomials and closed under limits will necessarily include all continuous functions. Similarly, an equation or system of equations is said to have a closed-form solution if and only if at least one solution can be expressed as a closed-form expression; and it is said to have an analytic solution if and only if at least one solution can be expressed as an analytic expression. There is a subtle distinction between a "closed-form function" and a "closed-form number" in the discussion of a "closed-form solution", discussed in (Chow 1999) and below. 
A closed-form or analytic solution is sometimes referred to as an explicit solution. == Dealing with non-closed-form expressions == === Transformation into closed-form expressions === The expression $f(x)=\sum _{n=0}^{\infty }\frac{x}{2^{n}}$ is not in closed form because the summation entails an infinite number of elementary operations. However, by summing a geometric series this expression can be expressed in the closed form $f(x)=2x$. === Differential Galois theory === The integral of a closed-form expression may or may not itself be expressible as a closed-form expression. This study is referred to as differential Galois theory, by analogy with algebraic Galois theory. The basic theorem of differential Galois theory is due to Joseph Liouville in the 1830s and 1840s and is hence referred to as Liouville's theorem. A standard example of an elementary function whose antiderivative does not have a closed-form expression is $e^{-x^{2}}$, whose one antiderivative is (up to a multiplicative constant) the error function $\operatorname{erf}(x)=\frac{2}{\sqrt{\pi }}\int _{0}^{x}e^{-t^{2}}\,dt$. === Mathematical modelling and computer simulation === Equations or systems too complex for closed-form or analytic solutions can often be analysed by mathematical modelling and computer simulation (for an example in physics, see). == Closed-form number == Three subfields of the complex numbers C have been suggested as encoding the notion of a "closed-form number"; in increasing order of generality, these are the Liouvillian numbers (not to be confused with Liouville numbers in the sense of rational approximation), EL numbers and elementary numbers. 
The Liouvillian numbers, denoted L, form the smallest algebraically closed subfield of C closed under exponentiation and logarithm (formally, intersection of all such subfields)—that is, numbers which involve explicit exponentiation and logarithms, but allow explicit and implicit polynomials (roots of polynomials); this is defined in (Ritt 1948, p. 60). L was originally referred to as elementary numbers, but this term is now used more broadly to refer to numbers defined explicitly or implicitly in terms of algebraic operations, exponentials, and logarithms. A narrower definition proposed in (Chow 1999, pp. 441–442), denoted E, and referred to as EL numbers, is the smallest subfield of C closed under exponentiation and logarithm—this need not be algebraically closed, and corresponds to explicit algebraic, exponential, and logarithmic operations. "EL" stands both for "exponential–logarithmic" and as an abbreviation for "elementary". Whether a number is a closed-form number is related to whether a number is transcendental. Formally, Liouvillian numbers and elementary numbers contain the algebraic numbers, and they include some but not all transcendental numbers. In contrast, EL numbers do not contain all algebraic numbers, but do include some transcendental numbers. Closed-form numbers can be studied via transcendental number theory, in which a major result is the Gelfond–Schneider theorem, and a major open question is Schanuel's conjecture. == Numerical computations == For purposes of numeric computations, being in closed form is not in general necessary, as many limits and integrals can be efficiently computed. Some equations have no closed form solution, such as those that represent the Three-body problem or the Hodgkin–Huxley model. Therefore, the future states of these systems must be computed numerically. 
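As the paragraph above notes, a closed form is not required for numerical work: the error-function integral, whose integrand has no elementary antiderivative, is easy to evaluate by quadrature. A minimal sketch (the function name `erf_numeric` and the choice of the trapezoidal rule are illustrative, not part of the article):

```python
import math

def erf_numeric(x, n=1000):
    """Approximate erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt
    with the composite trapezoidal rule over n subintervals."""
    h = x / n
    total = 0.5 * (1.0 + math.exp(-x * x))  # endpoint terms
    for k in range(1, n):
        t = k * h
        total += math.exp(-t * t)
    return 2.0 / math.sqrt(math.pi) * h * total

# No elementary closed form exists for the antiderivative of exp(-x^2),
# but the numerical value is cheap to obtain and matches math.erf:
print(erf_numeric(1.0), math.erf(1.0))
```

For higher accuracy one would use an adaptive quadrature routine, but even this crude rule agrees with the library value to several digits.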
== Conversion from numerical forms == There is software that attempts to find closed-form expressions for numerical values, including RIES, identify in Maple and SymPy, Plouffe's Inverter, and the Inverse Symbolic Calculator. == See also == Algebraic solution – Solution in radicals of a polynomial equation Computer simulation – Process of mathematical modelling, performed on a computer Elementary function – A kind of mathematical function Finitary operation – Addition, multiplication, division, ... Numerical solution – Methods for numerical approximations Liouvillian function – Elementary functions and their finitely iterated integrals Symbolic regression – Type of regression analysis Tarski's high school algebra problem – Mathematical problem Term (logic) – Components of a mathematical or logical formula Tupper's self-referential formula – Formula that visually represents itself when graphed == Notes == == References == == Further reading == Ritt, J. F. (1948), Integration in finite terms Chow, Timothy Y. (May 1999), "What is a Closed-Form Number?", American Mathematical Monthly, 106 (5): 440–448, arXiv:math/9805045, doi:10.2307/2589148, JSTOR 2589148 Jonathan M. Borwein and Richard E. Crandall (January 2013), "Closed Forms: What They Are and Why We Care", Notices of the American Mathematical Society, 60 (1): 50–65, doi:10.1090/noti936 == External links == Weisstein, Eric W. "Closed-Form Solution". MathWorld. Closed-form continuous-time neural networks
|
Wikipedia:Closure (mathematics)#0
|
In mathematics, a subset of a given set is closed under an operation on the larger set if performing that operation on members of the subset always produces a member of that subset. For example, the natural numbers are closed under addition, but not under subtraction: 1 − 2 is not a natural number, although both 1 and 2 are. Similarly, a subset is said to be closed under a collection of operations if it is closed under each of the operations individually. The closure of a subset is the result of a closure operator applied to the subset. The closure of a subset under some operations is the smallest superset that is closed under these operations. It is often called the span (for example linear span) or the generated set. == Definitions == Let S be a set equipped with one or several methods for producing elements of S from other elements of S. A subset X of S is said to be closed under these methods if an input of purely elements of X always results in an element still in X. Sometimes, one may also say that X has the closure property. The main property of closed sets, which results immediately from the definition, is that every intersection of closed sets is a closed set. It follows that for every subset Y of S, there is a smallest closed subset X of S such that Y ⊆ X {\displaystyle Y\subseteq X} (it is the intersection of all closed subsets that contain Y). Depending on the context, X is called the closure of Y or the set generated or spanned by Y. The concepts of closed sets and closure are often extended to any property of subsets that are stable under intersection; that is, every intersection of subsets that have the property has also the property. For example, in C n , {\displaystyle \mathbb {C} ^{n},} a Zariski-closed set, also known as an algebraic set, is the set of the common zeros of a family of polynomials, and the Zariski closure of a set V of points is the smallest algebraic set that contains V. 
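For a finite structure with finitely many operations, the closure of a subset (the smallest closed superset described above) can be computed by iterating the operations to a fixpoint. A sketch under that finiteness assumption (function and variable names are illustrative):

```python
def closure(seed, operations):
    """Smallest superset of `seed` closed under the given binary
    operations, computed by iterating to a fixpoint (assumes the
    closure is finite)."""
    closed = set(seed)
    growing = True
    while growing:
        new = {op(a, b) for op in operations for a in closed for b in closed}
        growing = not new <= closed
        closed |= new
    return closed

# Closure of {2} in Z/8Z under addition mod 8: the subgroup {0, 2, 4, 6}.
add_mod8 = lambda a, b: (a + b) % 8
print(sorted(closure({2}, [add_mod8])))  # [0, 2, 4, 6]
```

Because each pass only adds elements and the ambient set is finite, the loop terminates with exactly the intersection of all closed supersets.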
=== In algebraic structures === An algebraic structure is a set equipped with operations that satisfy some axioms. These axioms may be identities. Some axioms may contain existential quantifiers ∃ ; {\displaystyle \exists ;} in this case it is worthwhile to add some auxiliary operations so that all axioms become identities or purely universally quantified formulas. See Algebraic structure for details. A set with a single binary operation that is closed is called a magma. In this context, given an algebraic structure S, a substructure of S is a subset that is closed under all operations of S, including the auxiliary operations that are needed to avoid existential quantifiers. A substructure is an algebraic structure of the same type as S. It follows that, in a specific example, once closure is proved, there is no need to check the axioms for proving that a substructure is a structure of the same type. Given a subset X of an algebraic structure S, the closure of X is the smallest substructure of S that is closed under all operations of S. In the context of algebraic structures, this closure is generally called the substructure generated or spanned by X, and one says that X is a generating set of the substructure. For example, a group is a set with an associative operation, often called multiplication, with an identity element, such that every element has an inverse element. Here, the auxiliary operations are the nullary operation that results in the identity element and the unary operation of inversion. A subset of a group that is closed under multiplication and inversion is also closed under the nullary operation (that is, it contains the identity) if and only if it is non-empty. So, a non-empty subset of a group that is closed under multiplication and inversion is a group that is called a subgroup. The subgroup generated by a single element, that is, the closure of this element, is called a cyclic group.
In linear algebra, the closure of a non-empty subset of a vector space (under vector-space operations, that is, addition and scalar multiplication) is the linear span of this subset. It is a vector space by the preceding general result, and it can be proved easily that it is the set of linear combinations of elements of the subset. Similar examples can be given for almost every algebraic structure, sometimes with specific terminology. For example, in a commutative ring, the closure of a single element under ideal operations is called a principal ideal. == Binary relations == A binary relation R {\displaystyle R} on a set A {\displaystyle A} is a subset of A × A {\displaystyle A\times A} , which is the set of all ordered pairs on A {\displaystyle A} . The infix notation x R y {\displaystyle xRy} is commonly used for ( x , y ) ∈ R {\displaystyle (x,y)\in R} . We can define different kinds of closures of R {\displaystyle R} on A {\displaystyle A} by its properties and operations. For example: Reflexivity As every intersection of reflexive relations is reflexive, we define the reflexive closure of R {\displaystyle R} on A {\displaystyle A} as the smallest reflexive relation on A {\displaystyle A} that contains R {\displaystyle R} . Symmetry As we can define a unary operation on A × A {\displaystyle A\times A} that maps ( x , y ) {\displaystyle (x,y)} to ( y , x ) {\displaystyle (y,x)} , we define the symmetric closure of R {\displaystyle R} on A {\displaystyle A} as the smallest relation on A {\displaystyle A} that contains R {\displaystyle R} and is closed under this unary operation.
Transitivity As we can define a partial binary operation on A × A {\displaystyle A\times A} that maps ( x , y ) {\displaystyle (x,y)} and ( y , z ) {\displaystyle (y,z)} to ( x , z ) {\displaystyle (x,z)} , we define the transitive closure of R {\displaystyle R} on A {\displaystyle A} as the smallest relation on A {\displaystyle A} that contains R {\displaystyle R} and is closed under this partial binary operation. A preorder is a relation that is reflexive and transitive. It follows that the reflexive transitive closure of a relation is the smallest preorder containing it. Similarly, the reflexive transitive symmetric closure or equivalence closure of a relation is the smallest equivalence relation that contains it. == Other examples == In matroid theory, the closure of X is the largest superset of X that has the same rank as X. The transitive closure of a set. The algebraic closure of a field. The integral closure of an integral domain in a field that contains it. The radical of an ideal in a commutative ring. In geometry, the convex hull of a set S of points is the smallest convex set of which S is a subset. In formal languages, the Kleene closure of a language can be described as the set of strings that can be made by concatenating zero or more strings from that language. In group theory, the conjugate closure or normal closure of a set of group elements is the smallest normal subgroup containing the set. In mathematical analysis and in probability theory, the closure of a collection of subsets of X under countably many set operations is called the σ-algebra generated by the collection. == Closure operator == In the preceding sections, closures are considered for subsets of a given set. The subsets of a set form a partially ordered set (poset) for inclusion. Closure operators allow generalizing the concept of closure to any partially ordered set.
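For finite relations, the reflexive, symmetric, and transitive closures described above can be computed directly as sets of pairs; a minimal sketch (function names are illustrative):

```python
def reflexive_closure(r, universe):
    # add every loop (x, x) over the underlying set
    return r | {(x, x) for x in universe}

def symmetric_closure(r):
    # add the reverse of every pair
    return r | {(y, x) for (x, y) in r}

def transitive_closure(r):
    # iterate the rule (x, y), (y, z) -> (x, z) to a fixpoint
    closure = set(r)
    while True:
        new = {(x, z) for (x, y) in closure for (w, z) in closure if y == w}
        if new <= closure:
            return closure
        closure |= new

r = {(1, 2), (2, 3)}
print(transitive_closure(r))  # {(1, 2), (2, 3), (1, 3)}
```

Each function returns the smallest relation containing `r` with the stated property, matching the "smallest closed superset" characterisation in the text.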
Given a poset S whose partial order is denoted with ≤, a closure operator on S is a function C : S → S {\displaystyle C:S\to S} that is increasing ( x ≤ C ( x ) {\displaystyle x\leq C(x)} for all x ∈ S {\displaystyle x\in S} ), idempotent ( C ( C ( x ) ) = C ( x ) {\displaystyle C(C(x))=C(x)} ), and monotonic ( x ≤ y ⟹ C ( x ) ≤ C ( y ) {\displaystyle x\leq y\implies C(x)\leq C(y)} ). Equivalently, a function from S to S is a closure operator if x ≤ C ( y ) ⟺ C ( x ) ≤ C ( y ) {\displaystyle x\leq C(y)\iff C(x)\leq C(y)} for all x , y ∈ S . {\displaystyle x,y\in S.} An element of S is closed if it is its own closure, that is, if x = C ( x ) . {\displaystyle x=C(x).} By idempotency, an element is closed if and only if it is the closure of some element of S. An example is the topological closure operator; in Kuratowski's characterization, axioms K2, K3, K4' correspond to the above defining properties. An example not operating on subsets is the ceiling function, which maps every real number x to the smallest integer that is not smaller than x. === Closure operator vs. closed sets === A closure on the subsets of a given set may be defined either by a closure operator or by a set of closed sets that is stable under intersection and includes the given set. These two definitions are equivalent. Indeed, the defining properties of a closure operator C implies that an intersection of closed sets is closed: if X = ⋂ X i {\textstyle X=\bigcap X_{i}} is an intersection of closed sets, then C ( X ) {\displaystyle C(X)} must contain X and be contained in every X i . {\displaystyle X_{i}.} This implies C ( X ) = X {\displaystyle C(X)=X} by definition of the intersection. Conversely, if closed sets are given and every intersection of closed sets is closed, then one can define a closure operator C such that C ( X ) {\displaystyle C(X)} is the intersection of the closed sets containing X. 
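The ceiling-function example can be spot-checked against the three defining axioms of a closure operator on a finite sample of rationals (a sanity check on a sample, not a proof):

```python
import math
from itertools import product

# Sample of rationals and the candidate closure operator C = ceiling.
xs = [k / 4 for k in range(-8, 9)]
C = math.ceil

for x in xs:
    assert x <= C(x)             # increasing (extensive)
    assert C(C(x)) == C(x)       # idempotent
for x, y in product(xs, xs):
    if x <= y:
        assert C(x) <= C(y)      # monotonic
print("ceiling satisfies the closure-operator axioms on the sample")
```

The closed elements here are exactly the integers, the fixed points of C, as the definition of a closed element predicts.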
This equivalence remains true for partially ordered sets with the greatest-lower-bound property, if one replaces "closed sets" by "closed elements" and "intersection" by "greatest lower bound". == Notes == == References == Weisstein, Eric W. "Algebraic Closure". MathWorld.
|
Wikipedia:Closure with a twist#0
|
Closure with a twist is a property of subsets of an algebraic structure. A subset Y {\displaystyle Y} of an algebraic structure X {\displaystyle X} is said to exhibit closure with a twist if for every two elements y 1 , y 2 ∈ Y {\displaystyle y_{1},y_{2}\in Y} there exists an automorphism ϕ {\displaystyle \phi } of X {\displaystyle X} and an element y 3 ∈ Y {\displaystyle y_{3}\in Y} such that y 1 ⋅ y 2 = ϕ ( y 3 ) {\displaystyle y_{1}\cdot y_{2}=\phi (y_{3})} where " ⋅ {\displaystyle \cdot } " is notation for an operation on X {\displaystyle X} preserved by ϕ {\displaystyle \phi } . Two examples of algebraic structures which exhibit closure with a twist are the cwatset and the generalized cwatset, or GC-set. == Cwatset == In mathematics, a cwatset is a set of bitstrings, all of the same length, which is closed with a twist. If each string in a cwatset, C, say, is of length n, then C will be a subset of Z 2 n {\displaystyle \mathbb {Z} _{2}^{n}} . Thus, two strings in C are added by adding the bits in the strings modulo 2 (that is, addition without carry, or exclusive disjunction). The symmetric group on n letters, Sym ( n ) {\displaystyle {\text{Sym}}(n)} , acts on Z 2 n {\displaystyle \mathbb {Z} _{2}^{n}} by bit permutation: p ( ( c 1 , … , c n ) ) = ( c p ( 1 ) , … , c p ( n ) ) , {\displaystyle p((c_{1},\ldots ,c_{n}))=(c_{p(1)},\ldots ,c_{p(n)}),} where c = ( c 1 , … , c n ) {\displaystyle c=(c_{1},\ldots ,c_{n})} is an element of Z 2 n {\displaystyle \mathbb {Z} _{2}^{n}} and p is an element of Sym ( n ) {\displaystyle {\text{Sym}}(n)} . Closure with a twist now means that for each element c in C, there exists some permutation p c {\displaystyle p_{c}} such that, when you add c to an arbitrary element e in the cwatset and then apply the permutation, the result will also be an element of C. That is, denoting addition without carry by + {\displaystyle +} , C will be a cwatset if and only if ∀ c ∈ C : ∃ p c ∈ Sym ( n ) : ∀ e ∈ C : p c ( e + c ) ∈ C . 
{\displaystyle \forall c\in C:\exists p_{c}\in {\text{Sym}}(n):\forall e\in C:p_{c}(e+c)\in C.} This condition can also be written as ∀ c ∈ C : ∃ p c ∈ Sym ( n ) : p c ( C + c ) = C . {\displaystyle \forall c\in C:\exists p_{c}\in {\text{Sym}}(n):p_{c}(C+c)=C.} === Examples === All subgroups of Z 2 n {\displaystyle \mathbb {Z} _{2}^{n}} — that is, nonempty subsets of Z 2 n {\displaystyle \mathbb {Z} _{2}^{n}} which are closed under addition-without-carry — are trivially cwatsets, since we can choose each permutation pc to be the identity permutation. An example of a cwatset which is not a group is F = {000,110,101}. To demonstrate that F is a cwatset, observe that F + 000 = F. F + 110 = {110,000,011}, which is F with the first two bits of each string transposed. F + 101 = {101,011,000}, which is the same as F after exchanging the first and third bits in each string. A matrix representation of a cwatset is formed by writing its words as the rows of a 0-1 matrix. For instance a matrix representation of F is given by F = [ 0 0 0 1 1 0 1 0 1 ] . {\displaystyle F={\begin{bmatrix}0&0&0\\1&1&0\\1&0&1\end{bmatrix}}.} To see that F is a cwatset using this notation, note that F + 000 = [ 0 0 0 1 1 0 1 0 1 ] = F i d = F ( 2 , 3 ) R ( 2 , 3 ) C . {\displaystyle F+000={\begin{bmatrix}0&0&0\\1&1&0\\1&0&1\end{bmatrix}}=F^{id}=F^{(2,3)_{R}(2,3)_{C}}.} F + 110 = [ 1 1 0 0 0 0 0 1 1 ] = F ( 1 , 2 ) R ( 1 , 2 ) C = F ( 1 , 2 , 3 ) R ( 1 , 2 , 3 ) C . {\displaystyle F+110={\begin{bmatrix}1&1&0\\0&0&0\\0&1&1\end{bmatrix}}=F^{(1,2)_{R}(1,2)_{C}}=F^{(1,2,3)_{R}(1,2,3)_{C}}.} F + 101 = [ 1 0 1 0 1 1 0 0 0 ] = F ( 1 , 3 ) R ( 1 , 3 ) C = F ( 1 , 3 , 2 ) R ( 1 , 3 , 2 ) C . {\displaystyle F+101={\begin{bmatrix}1&0&1\\0&1&1\\0&0&0\end{bmatrix}}=F^{(1,3)_{R}(1,3)_{C}}=F^{(1,3,2)_{R}(1,3,2)_{C}}.} where π R {\displaystyle \pi _{R}} and σ C {\displaystyle \sigma _{C}} denote permutations of the rows and columns of the matrix, respectively, expressed in cycle notation. 
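The verification that F is a cwatset can also be done by brute force over all bit permutations; a sketch (function names are illustrative):

```python
from itertools import permutations

# Verify that F = {000, 110, 101} is a cwatset: for every word c in F
# there must exist a bit permutation p with p(F + c) = F.
F = {(0, 0, 0), (1, 1, 0), (1, 0, 1)}

def add(u, v):
    # bitwise addition without carry (XOR)
    return tuple(a ^ b for a, b in zip(u, v))

def apply_perm(p, w):
    # permute the bits of w
    return tuple(w[i] for i in p)

def is_cwatset(C):
    n = len(next(iter(C)))
    return all(
        any({apply_perm(p, add(e, c)) for e in C} == C
            for p in permutations(range(n)))
        for c in C)

print(is_cwatset(F))  # True
```

For c = 110 the search finds the transposition of the first two bits, and for c = 101 the transposition of the first and third bits, exactly the permutations exhibited in the text.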
For any n ≥ 3 {\displaystyle n\geq 3} another example of a cwatset is K n {\displaystyle K_{n}} , which has n {\displaystyle n} -by- n {\displaystyle n} matrix representation K n = [ 0 0 0 ⋯ 0 0 1 1 0 ⋯ 0 0 1 0 1 ⋯ 0 0 ⋮ 1 0 0 ⋯ 1 0 1 0 0 ⋯ 0 1 ] . {\displaystyle K_{n}={\begin{bmatrix}0&0&0&\cdots &0&0\\1&1&0&\cdots &0&0\\1&0&1&\cdots &0&0\\&&&\vdots &&\\1&0&0&\cdots &1&0\\1&0&0&\cdots &0&1\end{bmatrix}}.} Note that for n = 3 {\displaystyle n=3} , K 3 = F {\displaystyle K_{3}=F} . An example of a nongroup cwatset with a rectangular matrix representation is W = [ 0 0 0 1 0 0 1 1 0 1 1 1 0 1 1 0 0 1 ] . {\displaystyle W={\begin{bmatrix}0&0&0\\1&0&0\\1&1&0\\1&1&1\\0&1&1\\0&0&1\end{bmatrix}}.} === Properties === Let C ⊂ Z 2 n {\displaystyle C\subset \mathbb {Z} _{2}^{n}} be a cwatset. The degree of C is equal to the exponent n. The order of C, denoted by |C|, is the set cardinality of C. There is a necessary condition on the order of a cwatset in terms of its degree, which is analogous to Lagrange's Theorem in group theory. To wit, Theorem. If C is a cwatset of degree n and order m, then m divides 2 n ! {\displaystyle 2^{n}!} . The divisibility condition is necessary but not sufficient. For example, there does not exist a cwatset of degree 5 and order 15. == Generalized cwatset == In mathematics, a generalized cwatset (GC-set) is an algebraic structure generalizing the notion of closure with a twist, the defining characteristic of the cwatset. === Definitions === A subset H of a group G is a GC-set if for each h ∈ H {\displaystyle h\in H} , there exists a ϕ h ∈ Aut ( G ) {\displaystyle \phi _{h}\in {\text{Aut}}(G)} such that ϕ h ( h ) ⋅ H = ϕ h ( H ) {\displaystyle \phi _{h}(h)\cdot H=\phi _{h}(H)} . Furthermore, a GC-set H ⊆ G is a cyclic GC-set if there exists an h ∈ H {\displaystyle h\in H} and a ϕ ∈ Aut ( G ) {\displaystyle \phi \in {\text{Aut}}(G)} such that H = h 1 , h 2 , . . . 
{\displaystyle H={h_{1},h_{2},...}} where h 1 = h {\displaystyle h_{1}=h} and h n = h 1 ⋅ ϕ ( h n − 1 ) {\displaystyle h_{n}=h_{1}\cdot \phi (h_{n-1})} for all n > 1 {\displaystyle n>1} . === Examples === Any cwatset is a GC-set, since C + c = π ( C ) {\displaystyle C+c=\pi (C)} implies that π − 1 ( c ) + C = π − 1 ( C ) {\displaystyle \pi ^{-1}(c)+C=\pi ^{-1}(C)} . Any group is a GC-set, satisfying the definition with the identity automorphism. A non-trivial example of a GC-set is H = 0 , 2 {\displaystyle H={0,2}} where G = Z 10 {\displaystyle G=Z_{10}} . A nonexample showing that the definition is not trivial for subsets of Z 2 n {\displaystyle Z_{2}^{n}} is H = 000 , 100 , 010 , 001 , 110 {\displaystyle H={000,100,010,001,110}} . === Properties === A GC-set H ⊆ G always contains the identity element of G. The direct product of GC-sets is again a GC-set. A subset H ⊆ G is a GC-set if and only if it is the projection of a subgroup of Aut(G)⋉G, the semi-direct product of Aut(G) and G. As a consequence of the previous property, GC-sets have an analogue of Lagrange's Theorem: The order of a GC-set divides the order of Aut(G)⋉G. If a GC-set H has the same order as the subgroup of Aut(G)⋉G of which it is the projection then for each prime power p q {\displaystyle p^{q}} which divides the order of H, H contains sub-GC-sets of orders p, p 2 {\displaystyle p^{2}} ,..., p q {\displaystyle p^{q}} . (Analogue of the first Sylow Theorem) A GC-set is cyclic if and only if it is the projection of a cyclic subgroup of Aut(G)⋉G. == References == Sherman, Gary J.; Wattenberg, Martin (1994), "Introducing … cwatsets!", Mathematics Magazine, 67 (2): 109–117, doi:10.2307/2690684, JSTOR 2690684. The Cwatset of a Graph, Nancy-Elizabeth Bush and Paul A. Isihara, Mathematics Magazine 74, #1 (February 2001), pp. 41–47. On the symmetry groups of hypergraphs of perfect cwatsets, Daniel K. Biss, Ars Combinatorica 56 (2000), pp. 271–288. 
Automorphic Subsets of the n-dimensional Cube, Gareth Jones, Mikhail Klin, and Felix Lazebnik, Beiträge zur Algebra und Geometrie 41 (2000), #2, pp. 303–323. Daniel C. Smith (2003), RHIT-UMJ, RHIT [1]
|
Wikipedia:Clustering coefficient#0
|
In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971; Watts and Strogatz, 1998). Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the extent of "clustering" of a single node. == Local clustering coefficient == The local clustering coefficient of a vertex (node) in a graph quantifies how close its neighbours are to being a clique (complete graph). Duncan J. Watts and Steven Strogatz introduced the measure in 1998 to determine whether a graph is a small-world network. A graph G = ( V , E ) {\displaystyle G=(V,E)} formally consists of a set of vertices V {\displaystyle V} and a set of edges E {\displaystyle E} between them. An edge e i j {\displaystyle e_{ij}} connects vertex v i {\displaystyle v_{i}} with vertex v j {\displaystyle v_{j}} . The neighborhood N i {\displaystyle N_{i}} for a vertex v i {\displaystyle v_{i}} is defined as its immediately connected neighbours as follows: N i = { v j : e i j ∈ E ∨ e j i ∈ E } . {\displaystyle N_{i}=\{v_{j}:e_{ij}\in E\lor e_{ji}\in E\}.} We define k i {\displaystyle k_{i}} as the number of vertices, | N i | {\displaystyle |N_{i}|} , in the neighbourhood, N i {\displaystyle N_{i}} , of vertex v i {\displaystyle v_{i}} . The local clustering coefficient C i {\displaystyle C_{i}} for a vertex v i {\displaystyle v_{i}} is then given by a proportion of the number of links between the vertices within its neighbourhood divided by the number of links that could possibly exist between them. 
For a directed graph, e i j {\displaystyle e_{ij}} is distinct from e j i {\displaystyle e_{ji}} , and therefore for each neighbourhood N i {\displaystyle N_{i}} there are k i ( k i − 1 ) {\displaystyle k_{i}(k_{i}-1)} links that could exist among the vertices within the neighbourhood ( k i {\displaystyle k_{i}} is the number of neighbours of a vertex). Thus, the local clustering coefficient for directed graphs is given as C i = | { e j k : v j , v k ∈ N i , e j k ∈ E } | k i ( k i − 1 ) . {\displaystyle C_{i}={\frac {|\{e_{jk}:v_{j},v_{k}\in N_{i},e_{jk}\in E\}|}{k_{i}(k_{i}-1)}}.} An undirected graph has the property that e i j {\displaystyle e_{ij}} and e j i {\displaystyle e_{ji}} are considered identical. Therefore, if a vertex v i {\displaystyle v_{i}} has k i {\displaystyle k_{i}} neighbours, k i ( k i − 1 ) 2 {\displaystyle {\frac {k_{i}(k_{i}-1)}{2}}} edges could exist among the vertices within the neighbourhood. Thus, the local clustering coefficient for undirected graphs can be defined as C i = 2 | { e j k : v j , v k ∈ N i , e j k ∈ E } | k i ( k i − 1 ) . {\displaystyle C_{i}={\frac {2|\{e_{jk}:v_{j},v_{k}\in N_{i},e_{jk}\in E\}|}{k_{i}(k_{i}-1)}}.} Let λ G ( v ) {\displaystyle \lambda _{G}(v)} be the number of triangles on v ∈ V ( G ) {\displaystyle v\in V(G)} for undirected graph G {\displaystyle G} . That is, λ G ( v ) {\displaystyle \lambda _{G}(v)} is the number of subgraphs of G {\displaystyle G} with 3 edges and 3 vertices, one of which is v {\displaystyle v} . Let τ G ( v ) {\displaystyle \tau _{G}(v)} be the number of triples on v ∈ G {\displaystyle v\in G} . That is, τ G ( v ) {\displaystyle \tau _{G}(v)} is the number of subgraphs (not necessarily induced) with 2 edges and 3 vertices, one of which is v {\displaystyle v} and such that v {\displaystyle v} is incident to both edges. Then we can also define the clustering coefficient as C i = λ G ( v ) τ G ( v ) . 
{\displaystyle C_{i}={\frac {\lambda _{G}(v)}{\tau _{G}(v)}}.} It is simple to show that the two preceding definitions are the same, since τ G ( v ) = C ( k i , 2 ) = 1 2 k i ( k i − 1 ) . {\displaystyle \tau _{G}(v)=C({k_{i}},2)={\frac {1}{2}}k_{i}(k_{i}-1).} These measures are 1 if every neighbour connected to v i {\displaystyle v_{i}} is also connected to every other vertex within the neighbourhood, and 0 if no vertex that is connected to v i {\displaystyle v_{i}} connects to any other vertex that is connected to v i {\displaystyle v_{i}} . Since any graph is fully specified by its adjacency matrix A, the local clustering coefficient for a simple undirected graph can be expressed in terms of A as: C i = 1 k i ( k i − 1 ) ∑ j , k A i j A j k A k i {\displaystyle C_{i}={\frac {1}{k_{i}(k_{i}-1)}}\sum _{j,k}A_{ij}A_{jk}A_{ki}} where: k i = ∑ j A i j {\displaystyle k_{i}=\sum _{j}A_{ij}} and C_i = 0 when k_i is zero or one. In the above expression, the numerator counts twice the number of complete triangles that vertex i is involved in. In the denominator, expanding k_i(k_i − 1) = k_i² − k_i, the term k_i² counts the number of edge pairs that vertex i is involved in plus the number of single edges traversed twice. k_i is the number of edges connected to vertex i, and subtracting k_i then removes the latter, leaving only the set of edge pairs that could conceivably be connected into triangles. For every such edge pair, there will be another edge pair which could form the same triangle, so the denominator counts twice the number of conceivable triangles that vertex i could be involved in. == Global clustering coefficient == The global clustering coefficient is based on triplets of nodes. A triplet is three nodes that are connected by either two (open triplet) or three (closed triplet) undirected ties. A triangle graph therefore includes three closed triplets, one centred on each of the nodes (n.b. this means the three triplets in a triangle come from overlapping selections of nodes).
The global clustering coefficient is the number of closed triplets (or 3 x triangles) over the total number of triplets (both open and closed). The first attempt to measure it was made by Luce and Perry (1949). This measure gives an indication of the clustering in the whole network (global), and can be applied to both undirected and directed networks (often called transitivity, see Wasserman and Faust, 1994, page 243). The global clustering coefficient is defined as: C = number of closed triplets number of all triplets (open and closed) {\displaystyle C={\frac {\mbox{number of closed triplets}}{\mbox{number of all triplets (open and closed)}}}} . The number of closed triplets has also been referred to as 3 × triangles in the literature, so: C = 3 × number of triangles number of all triplets {\displaystyle C={\frac {3\times {\mbox{number of triangles}}}{\mbox{number of all triplets}}}} . A generalisation to weighted networks was proposed by Opsahl and Panzarasa (2009), and a redefinition to two-mode networks (both binary and weighted) by Opsahl (2009). Since any simple graph is fully specified by its adjacency matrix A, the global clustering coefficient for an undirected graph can be expressed in terms of A as: C = ∑ i , j , k A i j A j k A k i 1 2 ∑ i k i ( k i − 1 ) {\displaystyle C={\frac {\sum _{i,j,k}A_{ij}A_{jk}A_{ki}}{{\frac {1}{2}}\sum _{i}k_{i}(k_{i}-1)}}} where: k i = ∑ j A i j {\displaystyle k_{i}=\sum _{j}A_{ij}} and C=0 when the denominator is zero. === Network average clustering coefficient === As an alternative to the global clustering coefficient, the overall level of clustering in a network is measured by Watts and Strogatz as the average of the local clustering coefficients of all the vertices n {\displaystyle n} : C ¯ = 1 n ∑ i = 1 n C i . {\displaystyle {\bar {C}}={\frac {1}{n}}\sum _{i=1}^{n}C_{i}.} This metric places more weight on the low degree nodes, while the transitivity ratio places more weight on the high degree nodes. 
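The local and global (transitivity) definitions above can be sketched for a small undirected graph stored as an adjacency dict (function names are illustrative):

```python
def local_clustering(adj, i):
    """Local clustering coefficient of node i for an undirected graph
    given as an adjacency dict {node: set of neighbours}."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0  # convention: C_i = 0 when k_i is zero or one
    # links actually present among the neighbours of i
    links = sum(1 for j in nbrs for h in nbrs if j < h and h in adj[j])
    return 2.0 * links / (k * (k - 1))

def global_clustering(adj):
    """Transitivity: 3 x (number of triangles) / (number of triples)."""
    triangles = sum(
        1 for i in adj for j in adj[i] for h in adj[j]
        if i < j < h and h in adj[i])
    triples = sum(k * (k - 1) // 2 for k in map(len, adj.values()))
    return 3.0 * triangles / triples if triples else 0.0

# A triangle 1-2-3 with a pendant vertex 4 attached to node 3:
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(local_clustering(adj, 3))   # 1/3: one link among three neighbours
print(global_clustering(adj))     # 3*1 / (1+1+3+0) = 0.6
```

The example also shows why the network average (here (1 + 1 + 1/3 + 0)/4 ≈ 0.583) and the transitivity (0.6) need not coincide: they weight nodes of different degree differently, as the text notes.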
A generalisation to weighted networks was proposed by Barrat et al. (2004), and a redefinition to bipartite graphs (also called two-mode networks) by Latapy et al. (2008) and Opsahl (2009). Alternative generalisations to weighted and directed graphs have been provided by Fagiolo (2007) and Clemente and Grassi (2018). This formula is not, by default, defined for graphs with isolated vertices; see Kaiser (2008) and Barmpoutis et al. The networks with the largest possible average clustering coefficient are found to have a modular structure, and at the same time, they have the smallest possible average distance among the different nodes. == Percolation of clustered networks == For a random tree-like network without degree-degree correlation, it can be shown that such a network can have a giant component, and the percolation threshold (transmission probability) is given by p c = 1 g 1 ′ ( 1 ) {\displaystyle p_{c}={\frac {1}{g_{1}'(1)}}} , where g 1 ( z ) {\displaystyle g_{1}(z)} is the generating function corresponding to the excess degree distribution. In networks with low clustering, 0 < C ≪ 1 {\displaystyle 0<C\ll 1} , the critical point gets scaled by ( 1 − C ) − 1 {\displaystyle (1-C)^{-1}} such that: p c = 1 1 − C 1 g 1 ′ ( 1 ) . {\displaystyle p_{c}={\frac {1}{1-C}}{\frac {1}{g_{1}'(1)}}.} This indicates that for a given degree distribution, the clustering leads to a larger percolation threshold, mainly because for a fixed number of links, the clustering structure reinforces the core of the network at the price of diluting the global connections. For networks with high clustering, strong clustering could induce the core–periphery structure, in which the core and periphery might percolate at different critical points, and the above approximate treatment is not applicable. For studying the robustness of clustered networks, a percolation approach has been developed.
== See also == Directed graph Graph theory Network theory Network science Percolation theory Scale free network Small world == References == == External links == Media related to Clustering coefficient at Wikimedia Commons
|
Wikipedia:Clément Mouhot#0
|
Clément Mouhot (French: [muo]; born 19 August 1978) is a French mathematician and academic. He is Professor of Mathematical Sciences at the University of Cambridge. His research is primarily in partial differential equations and mathematical physics (statistical mechanics, Boltzmann equation, Vlasov equation). == Biography == Mouhot obtained his PhD in 2004 under the supervision of Cédric Villani at the École normale supérieure de Lyon. Since 2011, he has been an associate editor of Acta Applicandae Mathematicae and of the Journal of Statistical Physics. Since 2012, he has been co-editor-in-chief of the ESAIM Proceedings. Since 2014, he has been an associate editor of Communications in Mathematical Physics. His work "On Landau damping" with Villani (published in 2011) was quoted in the Fields Medal laudation of Villani in 2010. In 2013, his work "Kac's program in kinetic theory" with Mischler was the subject of a Séminaire Bourbaki. Mouhot was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro in 2018, and at the conference Dynamics, Equations and Applications in Kraków in 2019. In 2014 he was awarded the Whitehead Prize and the Grand Prix Madame Victor Noury of the French Académie des sciences. He won the 2015/2016 Adams Prize, awarded on the subject of applied analysis. In 2018, Mouhot helped organise a letter protesting Noah Carl's appointment to a fellowship at St Edmund's College, with Mouhot and other signatories describing Carl's work on genetics and race as pseudoscience. == References ==
|
Wikipedia:Clément Servais#0
|
Clément Joseph Servais (16 October 1862, Huy – 9 October 1935, Brussels) was a Belgian mathematician, specializing in geometry. Servais attended secondary school at the Athénée royal de Huy. In 1881 he matriculated at the Normal School of Sciences of Ghent University. In 1884 he graduated there and passed the agrégation for teaching upper secondary classes. He then became a teacher in Ypres, and in the following year he taught at the Athénée royal de Bruxelles, but after a short time he was appointed a docent for teaching mathematical courses at the School of Civil Engineering of Ghent University. In 1886 he received his Ph.D. at Ghent University. At the Faculty of Sciences of Ghent University he became a docent in 1887, a professor extraordinarius in 1890, and a professor ordinarius in 1894. In 1932 he retired there as a professor emeritus. On 15 December 1919 he was elected to the Royal Academies for Science and the Arts of Belgium. He was an invited speaker at the ICM in 1924 in Toronto. == Selected publications == "Sur la courbure des biquadratiques gauches de première espèce." Nouvelles annales de mathématiques: journal des candidats aux écoles polytechnique et normale 11 (1911): 289–302. "Extension des théorèmes de Frégier aux courbes et aux surfaces algébriques." Nouvelles annales de mathématiques: journal des candidats aux écoles polytechnique et normale 12 (1912): 145–156. "Sur les axes de l'indicatrice et les centres de courbure principaux en un point d'une surface du second ordre." Nouvelles annales de mathématiques: journal des candidats aux écoles polytechnique et normale 14 (1914): 193–218. "Sur les surfaces tétraédrales symétriques." Nouvelles annales de mathématiques: journal des candidats aux écoles polytechnique et normale 19 (1919): 456–468. "Un théorème général sur les complexes." Nouvelles annales de mathématiques: journal des candidats aux écoles polytechnique et normale 20 (1920): 347–355. == Bibliography == Lembrechts, A. 
"Clément Servais (1862–1935)" (PDF). ugent.be. (list of publications by C. Servais) == References ==
|
Wikipedia:Coastline paradox#0
|
The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve-like properties of coastlines; i.e., the fact that a coastline typically has a fractal dimension. Although the "paradox of length" was previously noted by Hugo Steinhaus, the first systematic study of this phenomenon was by Lewis Fry Richardson, and it was expanded upon by Benoit Mandelbrot. The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size. The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount—that is, to measure it within a certain degree of uncertainty. The more precise the measurement device, the closer results will be to the true length of the edge. With a coastline, however, measuring in finer and finer detail does not improve the accuracy; it merely adds to the total. Unlike with the metal bar, it is impossible even in theory to obtain an exact value for the length of a coastline. In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces, whereby the area of a surface varies depending on the measurement resolution. 
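The contrast drawn above can be sketched numerically. The example below (an illustration, not from the article) compares chord approximations of a smooth quarter circle, whose estimates converge to the true length π/2, with the measured length of a Koch-style fractal curve, which grows without bound as the measurement step shrinks.

```python
import math

def quarter_circle_estimate(n):
    """Sum of n chords through equally spaced points on a unit quarter
    circle; the true arc length is pi/2."""
    pts = [(math.cos(k * math.pi / (2 * n)), math.sin(k * math.pi / (2 * n)))
           for k in range(n + 1)]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def koch_length(n):
    """Koch curve: each refinement replaces every segment by 4 segments
    of 1/3 the length, so the length after n steps is (4/3)**n."""
    return (4.0 / 3.0) ** n

for n in (1, 4, 16, 64):
    print(n, quarter_circle_estimate(n), koch_length(n))
# The first column increases toward pi/2 ~ 1.5708 and stops there;
# the second keeps growing, as a coastline's measured length does.
```

The smooth curve behaves like the idealized metal bar: finer measurement pins down a single value. The fractal refinement has no such limit.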
== Discovery == Shortly before 1951, Lewis Fry Richardson, in researching the possible effect of border lengths on the probability of war, noticed that the Portuguese reported their measured border with Spain to be 987 km (613 mi), but the Spanish reported it as 1,214 km (754 mi). This was the beginning of the coastline problem, which is a mathematical uncertainty inherent in the measurement of boundaries that are irregular. The prevailing method of estimating the length of a border (or coastline) was to lay out n equal straight-line segments of length l with dividers on a map or aerial photograph. Each end of the segment must be on the boundary. Investigating the discrepancies in border estimation, Richardson discovered what is now termed the "Richardson effect": the sum of the segments monotonically increases when the common length of the segments decreases. In effect, the shorter the ruler, the longer the measured border; the Spanish and Portuguese geographers were simply using different-length rulers. The result most astounding to Richardson is that, under certain circumstances, as l approaches zero, the length of the coastline approaches infinity. Richardson had believed, based on Euclidean geometry, that a coastline would approach a fixed length, as do similar estimations of regular geometric figures. For example, the perimeter of a regular polygon inscribed in a circle approaches the circumference with increasing numbers of sides (and decrease in the length of one side). In geometric measure theory such a smooth curve as the circle that can be approximated by small straight segments with a definite limit is termed a rectifiable curve. Benoit Mandelbrot devised an alternative measure of length for coastlines, the Hausdorff dimension, and showed that it does not depend on the length l in the same way. == Mathematical aspects == The basic concept of length originates from Euclidean distance. 
In Euclidean geometry, a straight line represents the shortest distance between two points. This line has only one length. On the surface of a sphere, this is replaced by the geodesic length (also called the great circle length), which is measured along the surface curve that exists in the plane containing both endpoints and the center of the sphere. The length of basic curves is more complicated but can also be calculated. Measuring with rulers, one can approximate the length of a curve by adding the sum of the straight lines which connect the points: Using a few straight lines to approximate the length of a curve will produce an estimate lower than the true length; when increasingly short (and thus more numerous) lines are used, the sum approaches the curve's true length, and that length is the least upper bound or supremum of all such approximations. A precise value for this length can be found using calculus, the branch of mathematics enabling the calculation of infinitesimally small distances. The following animation illustrates how a smooth curve can be meaningfully assigned a precise length: Not all curves can be measured in this way. A fractal is, by definition, a curve whose perceived complexity does not decrease with measurement scale. Whereas approximations of a smooth curve tend to a single value as measurement precision increases, the measured value for a fractal does not converge. As the length of a fractal curve always diverges to infinity, if one were to measure a coastline with infinite or near-infinite resolution, the length of the infinitely short kinks in the coastline would add up to infinity. However, this figure relies on the assumption that space can be subdivided into infinitesimal sections. 
The truth value of this assumption—which underlies Euclidean geometry and serves as a useful model in everyday measurement—is a matter of philosophical speculation, and may or may not reflect the changing realities of "space" and "distance" on the atomic level (approximately the scale of a nanometer). Coastlines are less definite in their construction than idealized fractals such as the Mandelbrot set because they are formed by various natural events that create patterns in statistically random ways, whereas idealized fractals are formed through repeated iterations of simple, formulaic sequences. === Measuring a coastline === More than a decade after Richardson completed his work, Benoit Mandelbrot developed a new branch of mathematics, fractal geometry, to describe just such non-rectifiable complexes in nature as the infinite coastline. His own definition of the new figure serving as the basis for his study is: I coined fractal from the Latin adjective fractus. The corresponding Latin verb frangere means "to break:" to create irregular fragments. It is therefore sensible ... that, in addition to "fragmented" ... fractus should also mean "irregular". In "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", published on 5 May 1967, Mandelbrot discusses self-similar curves that have Hausdorff dimension between 1 and 2. These curves are examples of fractals, although Mandelbrot does not use this term in the paper, as he did not coin it until 1975. The paper is one of Mandelbrot's first publications on the topic of fractals. Empirical evidence suggests that the smaller the increment of measurement, the longer the measured length becomes. If one were to measure a stretch of coastline with a yardstick, one would get a shorter result than if the same stretch were measured with a 1-foot (30 cm) ruler. This is because one would be laying the ruler along a more curvilinear route than that followed by the yardstick. 
The empirical evidence suggests a rule which, if extrapolated, shows that the measured length increases without limit as the measurement scale decreases towards zero. This discussion implies that it is meaningless to talk about the length of a coastline; some other means of quantifying coastlines are needed. Mandelbrot then describes various mathematical curves, related to the Koch snowflake, which are defined in such a way that they are strictly self-similar. Mandelbrot shows how to calculate the Hausdorff dimension of each of these curves, each of which has a dimension D between 1 and 2 (he also mentions but does not give a construction for the space-filling Peano curve, which has a dimension exactly 2). The paper does not claim that any coastline or geographic border actually has fractional dimension. Instead, it notes that Richardson's empirical law is compatible with the idea that geographic curves, such as coastlines, can be modelled by random self-similar figures of fractional dimension. Near the end of the paper Mandelbrot briefly discusses how one might approach the study of fractal-like objects in nature that look random rather than regular. For this he defines statistically self-similar figures and says that these are encountered in nature. The paper is important because it is a "turning point" in Mandelbrot's early thinking on fractals. It is an example of the linking of mathematical objects with natural forms that was a theme of much of his later work. A key property of some fractals is self-similarity; that is, at any scale the same general configuration appears. A coastline is perceived as bays alternating with promontories. In the hypothetical situation that a given coastline has this property of self-similarity, then no matter how great any one small section of coastline is magnified, a similar pattern of smaller bays and promontories superimposed on larger bays and promontories appears, right down to the grains of sand. 
At that scale the coastline appears as a momentarily shifting, potentially infinitely long thread with a stochastic arrangement of bays and promontories formed from the small objects at hand. In such an environment (as opposed to smooth curves) Mandelbrot asserts "coastline length turns out to be an elusive notion that slips between the fingers of those who want to grasp it". There are different kinds of fractals. A coastline with the stated property is in "a first category of fractals, namely curves whose fractal dimension is greater than 1". That last statement represents an extension by Mandelbrot of Richardson's thought. Mandelbrot's statement of the Richardson effect is: where L, coastline length, a function of the measurement unit ε, is approximated by the expression. F is a constant, and D is a parameter that Richardson found depended on the coastline approximated by L. He gave no theoretical explanation, but Mandelbrot identified D with a non-integer form of the Hausdorff dimension, later the fractal dimension. Rearranging the expression yields where Fε−D must be the number of units ε required to obtain L. The broken line measuring a coast does not extend in one direction nor does it represent an area, but is intermediate between the two and can be thought of as a band of width 2ε. D is its fractal dimension, ranging between 1 and 2 (and typically less than 1.5). More broken coastlines have greater D, and therefore L is longer for the same ε. D is approximately 1.02 for the coastline of South Africa, and approximately 1.25 for the west coast of Great Britain. For lake shorelines, the typical value of D is 1.28. 
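Mandelbrot's form of the Richardson law, L(ε) = F ε^(1−D), can be sketched as follows. F is an arbitrary illustrative constant here, and the D values are the empirical dimensions quoted above; the point is only how sensitively L depends on the ruler size ε for different D.

```python
def richardson_length(eps, D, F=1.0):
    """Measured coastline length L = F * eps**(1 - D) for ruler size
    eps and fractal dimension D; F is an illustrative constant."""
    return F * eps ** (1.0 - D)

for eps in (100.0, 10.0, 1.0, 0.1):
    smooth = richardson_length(eps, D=1.02)  # ~ South African coastline
    rough = richardson_length(eps, D=1.25)   # ~ west coast of Great Britain
    print(f"eps={eps:6}: D=1.02 -> {smooth:8.3f}   D=1.25 -> {rough:8.3f}")
# When D is near 1 the measured length barely changes with eps; for
# D = 1.25 it grows without bound as eps -> 0.
```

This reproduces the qualitative claim in the text: more broken coastlines (greater D) yield longer L for the same ε.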
== Solutions == The coastline paradox describes a problem with real-world applications, including trivial matters such as which river, beach, border, or coastline is the longest, with the first two records a matter of fierce debate; furthermore, the problem extends to demarcating territorial boundaries, property rights, erosion monitoring, and the theoretical implications of our geometric modelling. To resolve this problem, several solutions have been proposed. These solutions address the practical difficulties by setting a definition of "coastline," establishing the practical physical limits of a coastline, and calculating the length within those limits to a meaningful level of precision. Such solutions can resolve the problem for all practical applications, while it persists as a theoretical/mathematical concept within our models. == Criticisms and misunderstandings == The coastline paradox is often criticized because coastlines are inherently finite, real features in space, and, therefore, there is a quantifiable answer to their length. The comparison to fractals, while useful as a metaphor to explain the problem, is criticized as not fully accurate, as coastlines are not self-repeating and are fundamentally finite. The source of the paradox lies in the way reality is measured and is most relevant when attempting to use those measurements to create cartographic models of coasts. Modern technology, such as LiDAR, Global Positioning Systems and Geographic Information Systems, has made addressing the paradox much easier; however, the limitations of survey measurements and the vector software persist. Critics argue that these problems are more theoretical than practical considerations for planners. 
Alternatively, the concept of a coast "line" is in itself a human construct that depends on the assignment of a tidal datum, which is not flat relative to any vertical datum, and thus any line constructed between land and sea somewhere in the intertidal zone is semi-arbitrary and in constant flux. Thus a wide range of "shorelines" may be constructed for varied analytical purposes using different data sources and methodologies, each with a different length. This may complicate the quantification of ecosystem services using methods that depend on shoreline length. == See also == Alaska boundary dispute – Alaskan and Canadian claims to the Alaskan Panhandle differed greatly, based on competing interpretations of the ambiguous phrase setting the border at "a line parallel to the windings of the coast", applied to the fjord-dense region. Fractal dimension Gabriel's horn, a geometric figure with infinite surface area but finite volume List of countries by length of coastline Scale (geography) Paradox of the heap Staircase paradox, similar paradox where a straight segment approximation converges to a different value Zeno's paradoxes List of longest beaches List of river systems by length List of countries and territories by number of land borders == References == === Citations === === Sources === == External links == "Coastlines" at Fractal Geometry (ed. Michael Frame, Benoit Mandelbrot, and Nial Neger; maintained for Math 190a at Yale University) The Atlas of Canada – Coastline and Shoreline NOAA GeoZone Blog on Digital Coast What Is The Coastline Paradox? – YouTube video by Veritasium
|
Wikipedia:Coates graph#0
|
In mathematics, the Coates graph or Coates flow graph, named after C.L. Coates, is a graph associated with the Coates' method for the solution of a system of linear equations. The Coates graph Gc(A) associated with an n × n matrix A is an n-node, weighted, labeled, directed graph. The nodes, labeled 1 through n, are each associated with the corresponding row/column of A. If entry aji ≠ 0 then there is a directed edge from node i to node j with weight aji. In other words, the Coates graph for matrix A is the one whose adjacency matrix is the transpose of A. == See also == Flow graph (mathematics) Mason graph == References ==
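The definition above can be illustrated with a short sketch (not part of Coates' method itself): building the weighted edge list of Gc(A) from a square matrix A, so that the graph's adjacency matrix is the transpose of A.

```python
def coates_edges(A):
    """Edge list of the Coates graph of an n x n matrix A (nested lists).
    Entry a_ji != 0 yields a directed edge i -> j with weight a_ji;
    nodes are labeled 1 through n, matching the article's convention."""
    n = len(A)
    return [(i + 1, j + 1, A[j][i])
            for i in range(n) for j in range(n)
            if A[j][i] != 0]

A = [[2, 0],
     [5, 3]]
# a_11 = 2 gives a self-loop at node 1, a_21 = 5 gives edge 1 -> 2,
# a_22 = 3 gives a self-loop at node 2; a_12 = 0 gives no edge.
print(coates_edges(A))  # [(1, 1, 2), (1, 2, 5), (2, 2, 3)]
```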
|
Wikipedia:Cocker's Arithmetick#0
|
Cocker's Arithmetick, also known by its full title "Cocker's Arithmetick: Being a Plain and Familiar Method Suitable to the Meanest Capacity for the Full Understanding of That Incomparable Art, As It Is Now Taught by the Ablest School-Masters in City and Country", is a grammar school mathematics textbook written by the English engraver and teacher Edward Cocker (1631–1676) and published posthumously by John Hawkins in 1678. Arithmetick, along with its companion volume Decimal Arithmetick (published in 1684), was used to teach mathematics in schools in the United Kingdom for more than 150 years. == Disputed authorship == Some controversy exists over the authorship of the book. Augustus De Morgan claimed the work was written by Hawkins, who merely used Cocker's name to lend the authority of his reputation to the book. Ruth Wallis, in 1997, wrote an article in Annals of Science, claiming De Morgan's analysis was flawed and Cocker was the real author. == Popularity and impact == The popularity of Arithmetick is attested by its more than 130 editions, and its place in the fabric of the popular culture of the time is evidenced by the phrase "according to Cocker", meaning "absolutely correct" or "according to the rules". Such noted figures of history as Benjamin Franklin and Thomas Simpson are documented as having used the book. Over 100 years after its publication, Samuel Johnson carried a copy of Arithmetick on his tour of Scotland, and mentions it in his letters: In the afternoon tea was made by a very decent girl in a printed linen; she engaged me so much, that I made her a present of Cocker's Arithmetick. == Writing style == Though popular, like most texts of its time, Arithmetick's style is formal, stiff, and difficult to follow, as illustrated in its explanation of the "rule of three". 
Again, observe, that of the three given numbers, those two that are of the same kind, one of them must be the first, and the other the third, and that which is of the same kind with the number sought, must be the second number in the rule of three; and that you may know which of the said numbers to make your first, and which your third, know this, that to one of those two numbers there is always affixed a demand, and that number upon which the demand lieth must always be reckoned the third number As well as the rule of three, Arithmetick contains instructions on alligation and the rule of false position. Following the common practice of textbooks at the time, each rule is illustrated with numerous examples of commercial transactions involving the exchange of wheat, rye and other seeds; calculation of costs for the erection of houses and other structures; and the rotation of gears on a shaft. The text contains the earliest known use of the term lowest terms. == References == == Further reading == On-line text of Cocker's decimal arithmetic at Internet Archive Numeracy and Popular Culture: Cocker’s Arithmetick and the Market for Cheap Arithmetical Books, 1678–1787
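Stripped of its period prose, Cocker's arrangement of the rule of three amounts to a single proportion. The sketch below uses an invented commercial example in the spirit of the book's exercises, not one taken from it.

```python
def rule_of_three(first, second, third):
    """Cocker's arrangement: the first and third terms are of one kind,
    the second is of the kind of the answer, and the demand lies on the
    third; the answer is then second * third / first."""
    return second * third / first

# Invented example in the book's commercial style: if 4 bushels of
# wheat cost 12 shillings, then 7 bushels cost 12 * 7 / 4 = 21 shillings.
print(rule_of_three(4, 12, 7))  # 21.0
```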
|
Wikipedia:Cocker's Decimal Arithmetick#0
|
Cocker's Decimal Arithmetick is a grammar school mathematics textbook written by the English engraver and teacher Edward Cocker (1631–1676) and published posthumously by John Hawkins in 1684. Decimal Arithmetick, along with the companion volume Cocker's Arithmetick (published in 1677), was used in schools in the United Kingdom for more than 150 years. The concept of decimal fractions and the advantages of using them in calculations were well known, but a wide variety of different notations were in use. After surveying various notations, Decimal Arithmetick recommends the decimal point notation introduced by John Napier: A decimal fraction being written ... by having a point or prick prefixed before it ... being written according to the first direction, I conceive they may be most fit for calculation. Decimal Arithmetick gives instructions for calculations involving decimals, methods of extracting roots, and an overview of the concept of logarithms. There are many worked examples, some of which involve solid geometry or the calculation of interest. == References ==
|
Wikipedia:Codimension#0
|
In mathematics, codimension is a basic geometric idea that applies to subspaces in vector spaces, to submanifolds in manifolds, and suitable subsets of algebraic varieties. For affine and projective algebraic varieties, the codimension equals the height of the defining ideal. For this reason, the height of an ideal is often called its codimension. The dual concept is relative dimension. == Definition == Codimension is a relative concept: it is only defined for one object inside another. There is no “codimension of a vector space (in isolation)”, only the codimension of a vector subspace. If W is a linear subspace of a finite-dimensional vector space V, then the codimension of W in V is the difference between the dimensions: codim ( W ) = dim ( V ) − dim ( W ) . {\displaystyle \operatorname {codim} (W)=\dim(V)-\dim(W).} It is the complement of the dimension of W, in that, with the dimension of W, it adds up to the dimension of the ambient space V: dim ( W ) + codim ( W ) = dim ( V ) . {\displaystyle \dim(W)+\operatorname {codim} (W)=\dim(V).} Similarly, if N is a submanifold or subvariety in M, then the codimension of N in M is codim ( N ) = dim ( M ) − dim ( N ) . {\displaystyle \operatorname {codim} (N)=\dim(M)-\dim(N).} Just as the dimension of a submanifold is the dimension of the tangent bundle (the number of dimensions that you can move on the submanifold), the codimension is the dimension of the normal bundle (the number of dimensions you can move off the submanifold). More generally, if W is a linear subspace of a (possibly infinite dimensional) vector space V then the codimension of W in V is the dimension (possibly infinite) of the quotient space V/W, which is more abstractly known as the cokernel of the inclusion. 
For finite-dimensional vector spaces, this agrees with the previous definition codim ( W ) = dim ( V / W ) = dim coker ( W → V ) = dim ( V ) − dim ( W ) , {\displaystyle \operatorname {codim} (W)=\dim(V/W)=\dim \operatorname {coker} (W\to V)=\dim(V)-\dim(W),} and is dual to the relative dimension as the dimension of the kernel. Finite-codimensional subspaces of infinite-dimensional spaces are often useful in the study of topological vector spaces. == Additivity of codimension and dimension counting == The fundamental property of codimension lies in its relation to intersection: if W1 has codimension k1, and W2 has codimension k2, then if U is their intersection with codimension j we have max (k1, k2) ≤ j ≤ k1 + k2. In fact j may take any integer value in this range. This statement is more perspicuous than the translation in terms of dimensions, because the RHS is just the sum of the codimensions. In words codimensions (at most) add. If the subspaces or submanifolds intersect transversally (which occurs generically), codimensions add exactly. This statement is called dimension counting, particularly in intersection theory. == Dual interpretation == In terms of the dual space, it is quite evident why dimensions add. The subspaces can be defined by the vanishing of a certain number of linear functionals, which if we take to be linearly independent, their number is the codimension. Therefore, we see that U is defined by taking the union of the sets of linear functionals defining the Wi. That union may introduce some degree of linear dependence: the possible values of j express that dependence, with the RHS sum being the case where there is no dependence. This definition of codimension in terms of the number of functions needed to cut out a subspace extends to situations in which both the ambient space and subspace are infinite dimensional. In other language, which is basic for any kind of intersection theory, we are taking the union of a certain number of constraints. 
We have two phenomena to look out for: the two sets of constraints may not be independent; the two sets of constraints may not be compatible. The first of these is often expressed as the principle of counting constraints: if we have a number N of parameters to adjust (i.e. we have N degrees of freedom), and a constraint means we have to 'consume' a parameter to satisfy it, then the codimension of the solution set is at most the number of constraints. We do not expect to be able to find a solution if the predicted codimension, i.e. the number of independent constraints, exceeds N (in the linear algebra case, there is always a trivial, null vector solution, which is therefore discounted). The second is a matter of geometry, on the model of parallel lines; it is something that can be discussed for linear problems by methods of linear algebra, and for non-linear problems in projective space, over the complex number field. == In geometric topology == Codimension also has some clear meaning in geometric topology: on a manifold, codimension 1 is the dimension of topological disconnection by a submanifold, while codimension 2 is the dimension of ramification and knot theory. In fact, the theory of high-dimensional manifolds, which starts in dimension 5 and above, can alternatively be said to start in codimension 3, because higher codimensions avoid the phenomenon of knots. Since surgery theory requires working up to the middle dimension, once one is in dimension 5, the middle dimension has codimension greater than 2, and hence one avoids knots. This quip is not vacuous: the study of embeddings in codimension 2 is knot theory, and difficult, while the study of embeddings in codimension 3 or more is amenable to the tools of high-dimensional geometric topology, and hence considerably easier. 
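The additivity bound max(k1, k2) ≤ j ≤ k1 + k2 from the dimension-counting discussion above can be checked numerically. The sketch below is an illustration with randomly chosen (hence generic, transversal) subspaces of R⁷, using the identity dim(W1 ∩ W2) = dim W1 + dim W2 − dim(W1 + W2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
W1 = rng.standard_normal((n, 4))  # columns span a generic 4-dim subspace
W2 = rng.standard_normal((n, 5))  # columns span a generic 5-dim subspace

k1 = n - np.linalg.matrix_rank(W1)                    # codim(W1) = 3
k2 = n - np.linalg.matrix_rank(W2)                    # codim(W2) = 2
dim_sum = np.linalg.matrix_rank(np.hstack([W1, W2]))  # dim(W1 + W2)
dim_int = (np.linalg.matrix_rank(W1)
           + np.linalg.matrix_rank(W2) - dim_sum)     # dim(W1 ∩ W2)
j = n - dim_int                                       # codim(W1 ∩ W2)

assert max(k1, k2) <= j <= k1 + k2
# Generic subspaces intersect transversally, so codimensions add exactly:
print(k1, k2, j)  # 3 2 5
```

Non-generic choices of W1 and W2 realise the other values of j in the allowed range, down to max(k1, k2) when one subspace contains the other's orthogonal complement of the intersection.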
== See also == Glossary of differential geometry and topology == References == "Codimension", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, ISBN 978-0-387-72828-5
|
Wikipedia:Codomain#0
|
In mathematics, a codomain, counter-domain, or set of destination of a function is a set into which all of the output of the function is constrained to fall. It is the set Y in the notation f: X → Y. The term range is sometimes ambiguously used to refer to either the codomain or the image of a function. A codomain is part of a function f if f is defined as a triple (X, Y, G) where X is called the domain of f, Y its codomain, and G its graph. The set of all elements of the form f(x), where x ranges over the elements of the domain X, is called the image of f. The image of a function is a subset of its codomain so it might not coincide with it. Namely, a function that is not surjective has elements y in its codomain for which the equation f(x) = y does not have a solution. A codomain is not part of a function f if f is defined as just a graph. For example in set theory it is desirable to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition functions do not have a codomain, although some authors still use it informally after introducing a function in the form f: X → Y. == Examples == For a function f : R → R {\displaystyle f\colon \mathbb {R} \rightarrow \mathbb {R} } defined by f : x ↦ x 2 , {\displaystyle f\colon \,x\mapsto x^{2},} or equivalently f ( x ) = x 2 , {\displaystyle f(x)\ =\ x^{2},} the codomain of f is R {\displaystyle \textstyle \mathbb {R} } , but f does not map to any negative number. Thus the image of f is the set R 0 + {\displaystyle \textstyle \mathbb {R} _{0}^{+}} ; i.e., the interval [0, ∞). An alternative function g is defined thus: g : R → R 0 + {\displaystyle g\colon \mathbb {R} \rightarrow \mathbb {R} _{0}^{+}} g : x ↦ x 2 . {\displaystyle g\colon \,x\mapsto x^{2}.} While f and g map a given x to the same number, they are not, in this view, the same function because they have different codomains. 
A third function h can be defined to demonstrate why: h : x ↦ x . {\displaystyle h\colon \,x\mapsto {\sqrt {x}}.} The domain of h cannot be R {\displaystyle \textstyle \mathbb {R} } but can be defined to be R 0 + {\displaystyle \textstyle \mathbb {R} _{0}^{+}} : h : R 0 + → R . {\displaystyle h\colon \mathbb {R} _{0}^{+}\rightarrow \mathbb {R} .} The compositions are denoted h ∘ f , {\displaystyle h\circ f,} h ∘ g . {\displaystyle h\circ g.} On inspection, h ∘ f is not useful. It is true, unless defined otherwise, that the image of f is not known; it is only known that it is a subset of R {\displaystyle \textstyle \mathbb {R} } . For this reason, it is possible that h, when composed with f, might receive an argument for which no output is defined – negative numbers are not elements of the domain of h, which is the square root function. Function composition therefore is a useful notion only when the codomain of the function on the right side of a composition (not its image, which is a consequence of the function and could be unknown at the level of the composition) is a subset of the domain of the function on the left side. The codomain affects whether a function is a surjection, in that the function is surjective if and only if its codomain equals its image. In the example, g is a surjection while f is not. The codomain does not affect whether a function is an injection. A second example of the difference between codomain and image is demonstrated by the linear transformations between two vector spaces – in particular, all the linear transformations from R 2 {\displaystyle \textstyle \mathbb {R} ^{2}} to itself, which can be represented by the 2×2 matrices with real coefficients. Each matrix represents a map with the domain R 2 {\displaystyle \textstyle \mathbb {R} ^{2}} and codomain R 2 {\displaystyle \textstyle \mathbb {R} ^{2}} . However, the image is uncertain. 
Some transformations may have image equal to the whole codomain (in this case the matrices with rank 2) but many do not, instead mapping into some smaller subspace (the matrices with rank 1 or 0). Take for example the matrix T given by T = ( 1 0 1 0 ) {\displaystyle T={\begin{pmatrix}1&0\\1&0\end{pmatrix}}} which represents a linear transformation that maps the point (x, y) to (x, x). The point (2, 3) is not in the image of T, but is still in the codomain since linear transformations from R 2 {\displaystyle \textstyle \mathbb {R} ^{2}} to R 2 {\displaystyle \textstyle \mathbb {R} ^{2}} are of explicit relevance. Just like all 2×2 matrices, T represents a member of that set. Examining the differences between the image and codomain can often be useful for discovering properties of the function in question. For example, it can be concluded that T does not have full rank since its image is smaller than the whole codomain. == See also == Bijection – One-to-one correspondence Morphism § Codomain Endofunction – Function with the same domain and codomain == Notes == == References == Bourbaki, Nicolas (1970). Théorie des ensembles. Éléments de mathématique. Springer. ISBN 9783540340348. Eccles, Peter J. (1997), An Introduction to Mathematical Reasoning: Numbers, Sets, and Functions, Cambridge University Press, ISBN 978-0-521-59718-0 Forster, Thomas (2003), Logic, Induction and Sets, Cambridge University Press, ISBN 978-0-521-53361-4 Mac Lane, Saunders (1998), Categories for the working mathematician (2nd ed.), Springer, ISBN 978-0-387-98403-2 Scott, Dana S.; Jech, Thomas J. (1967), Axiomatic set theory, Symposium in Pure Mathematics, American Mathematical Society, ISBN 978-0-8218-0245-8 Sharma, A.K. (2004), Introduction To Set Theory, Discovery Publishing House, ISBN 978-81-7141-877-0 Stewart, Ian; Tall, David Orme (1977), The foundations of mathematics, Oxford University Press, ISBN 978-0-19-853165-4
|
Wikipedia:Coefficient#0
|
In mathematics, a coefficient is a multiplicative factor involved in some term of a polynomial, a series, or any other type of expression. It may be a number without units, in which case it is known as a numerical factor. It may also be a constant with units of measurement, in which case it is known as a constant multiplier. In general, coefficients may be any expression (including variables such as a, b and c). When the combination of variables and constants is not necessarily involved in a product, it may be called a parameter. For example, the polynomial 2 x 2 − x + 3 {\displaystyle 2x^{2}-x+3} has coefficients 2, −1, and 3, and the powers of the variable x {\displaystyle x} in the polynomial a x 2 + b x + c {\displaystyle ax^{2}+bx+c} have coefficient parameters a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} . A constant coefficient, also known as constant term or simply constant, is a quantity either implicitly attached to the zeroth power of a variable or not attached to other variables in an expression; for example, the constant coefficients of the expressions above are the number 3 and the parameter c, each implicitly attached to x0 (3 = 3 ⋅ x0 and c = c ⋅ x0). The coefficient attached to the highest degree of the variable in a polynomial of one variable is referred to as the leading coefficient; for example, in the example expressions above, the leading coefficients are 2 and a, respectively. In the context of differential equations, these equations can often be written in terms of polynomials in one or more unknown functions and their derivatives. In such cases, the coefficients of the differential equation are the coefficients of this polynomial, and these may be non-constant functions. A coefficient is a constant coefficient when it is a constant function. For avoiding confusion, in this context a coefficient that is not attached to unknown functions or their derivatives is generally called a constant term rather than a constant coefficient. 
In particular, in a linear differential equation with constant coefficients, the constant term is generally not assumed to be a constant function. == Terminology and definition == In mathematics, a coefficient is a multiplicative factor in some term of a polynomial, a series, or any expression. For example, in the polynomial 7 x 2 − 3 x y + 1.5 + y , {\displaystyle 7x^{2}-3xy+1.5+y,} with variables x {\displaystyle x} and y {\displaystyle y} , the first two terms have the coefficients 7 and −3. The third term 1.5 is the constant coefficient. In the final term, the coefficient is 1 and is not explicitly written. In many scenarios, coefficients are numbers (as is the case for each term of the previous example), although they could be parameters of the problem—or any expression in these parameters. In such a case, one must clearly distinguish between symbols representing variables and symbols representing parameters. Following René Descartes, the variables are often denoted by x, y, ..., and the parameters by a, b, c, ..., but this is not always the case. For example, if y is considered a parameter in the above expression, then the coefficient of x would be −3y, and the constant coefficient (with respect to x) would be 1.5 + y. When one writes a x 2 + b x + c , {\displaystyle ax^{2}+bx+c,} it is generally assumed that x is the only variable, and that a, b and c are parameters; thus the constant coefficient is c in this case. Any polynomial in a single variable x can be written as a k x k + ⋯ + a 1 x 1 + a 0 {\displaystyle a_{k}x^{k}+\dotsb +a_{1}x^{1}+a_{0}} for some nonnegative integer k {\displaystyle k} , where a k , … , a 1 , a 0 {\displaystyle a_{k},\dotsc ,a_{1},a_{0}} are the coefficients. This includes the possibility that some terms have coefficient 0; for example, in x 3 − 2 x + 1 {\displaystyle x^{3}-2x+1} , the coefficient of x 2 {\displaystyle x^{2}} is 0, and the term 0 x 2 {\displaystyle 0x^{2}} does not appear explicitly. 
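The coefficient-list view of a single-variable polynomial can be sketched in a few lines (an illustrative sketch, not part of the article), using the article's example x³ − 2x + 1 with its zero x² coefficient stored explicitly:

```python
# Illustrative sketch (not part of the article): a polynomial
# a_k x^k + ... + a_0 stored as the list [a_0, a_1, ..., a_k].
# Here p represents x^3 - 2x + 1; the x^2 coefficient is stored as 0.
p = [1, -2, 0, 1]

def leading_coefficient(coeffs):
    """Largest-index nonzero entry; None for the zero polynomial."""
    for a in reversed(coeffs):
        if a != 0:
            return a
    return None

def evaluate(coeffs, x):
    """Horner evaluation of sum_i a_i x^i."""
    acc = 0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

print(p[2])                    # 0, the coefficient of x^2
print(leading_coefficient(p))  # 1
print(evaluate(p, 2))          # 2^3 - 2*2 + 1 = 5
```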
For the largest i {\displaystyle i} such that a i ≠ 0 {\displaystyle a_{i}\neq 0} (if any), a i {\displaystyle a_{i}} is called the leading coefficient of the polynomial. For example, the leading coefficient of the polynomial 4 x 5 + x 3 + 2 x 2 {\displaystyle 4x^{5}+x^{3}+2x^{2}} is 4. This can be generalised to multivariate polynomials with respect to a monomial order; see Gröbner basis § Leading term, coefficient and monomial. == Linear algebra == In linear algebra, a system of linear equations is frequently represented by its coefficient matrix. For example, for the system of equations { 2 x + 3 y = 0 5 x − 4 y = 0 , {\displaystyle {\begin{cases}2x+3y=0\\5x-4y=0\end{cases}},} the associated coefficient matrix is ( 2 3 5 − 4 ) . {\displaystyle {\begin{pmatrix}2&3\\5&-4\end{pmatrix}}.} Coefficient matrices are used in algorithms such as Gaussian elimination and Cramer's rule to find solutions to the system. The leading entry (sometimes leading coefficient) of a row in a matrix is the first nonzero entry in that row. So, for example, in the matrix ( 1 2 0 6 0 2 9 4 0 0 0 4 0 0 0 0 ) , {\displaystyle {\begin{pmatrix}1&2&0&6\\0&2&9&4\\0&0&0&4\\0&0&0&0\end{pmatrix}},} the leading coefficient of the first row is 1; that of the second row is 2; that of the third row is 4, while the last row does not have a leading coefficient. Though coefficients are frequently viewed as constants in elementary algebra, they can also be viewed as variables as the context broadens. For example, the coordinates ( x 1 , x 2 , … , x n ) {\displaystyle (x_{1},x_{2},\dotsc ,x_{n})} of a vector v {\displaystyle v} in a vector space with basis { e 1 , e 2 , … , e n } {\displaystyle \lbrace e_{1},e_{2},\dotsc ,e_{n}\rbrace } are the coefficients of the basis vectors in the expression v = x 1 e 1 + x 2 e 2 + ⋯ + x n e n . 
{\displaystyle v=x_{1}e_{1}+x_{2}e_{2}+\dotsb +x_{n}e_{n}.} == See also == Correlation coefficient Degree of a polynomial Monic polynomial Binomial coefficient == References == == Further reading == Sabah Al-hadad and C.H. Scott (1979) College Algebra with Applications, page 42, Winthrop Publishers, Cambridge Massachusetts ISBN 0-87626-140-3 . Gordon Fuller, Walter L Wilson, Henry C Miller, (1982) College Algebra, 5th edition, page 24, Brooks/Cole Publishing, Monterey California ISBN 0-534-01138-1 .
|
Wikipedia:Coefficient matrix#0
|
In linear algebra, a coefficient matrix is a matrix consisting of the coefficients of the variables in a set of linear equations. The matrix is used in solving systems of linear equations. == Coefficient matrix == In general, a system with m linear equations and n unknowns can be written as a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m {\displaystyle {\begin{aligned}a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}&=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}&=b_{2}\\&\;\;\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}&=b_{m}\end{aligned}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} are the unknowns and the numbers a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\ldots ,a_{mn}} are the coefficients of the system. The coefficient matrix is the m × n matrix with the coefficient aij as the (i, j)th entry: [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] {\displaystyle {\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}}} Then the above set of equations can be expressed more succinctly as A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is the coefficient matrix and b is the column vector of constant terms. == Relation of its properties to properties of the equation system == By the Rouché–Capelli theorem, the system of equations is inconsistent, meaning it has no solutions, if the rank of the augmented matrix (the coefficient matrix augmented with an additional column consisting of the vector b) is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank r equals the number n of variables. 
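The Rouché–Capelli rank test in the preceding paragraph can be checked numerically; the following sketch (the matrices are chosen for illustration and are not from the article) compares the rank of A with the rank of the augmented matrix [A | b]:

```python
import numpy as np

# Illustrative check of the rank criterion: the system is inconsistent
# exactly when rank([A | b]) exceeds rank(A), and has a unique solution
# exactly when the common rank equals the number of unknowns n.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # rank 1: rows are proportional

b_consistent = np.array([3.0, 6.0])    # in the column space of A
b_inconsistent = np.array([3.0, 5.0])  # not in the column space of A

def classify(A, b):
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rAb > rA:
        return "inconsistent"
    return "unique" if rA == A.shape[1] else "infinitely many"

print(classify(A, b_consistent))    # infinitely many (r = 1 < n = 2)
print(classify(A, b_inconsistent))  # inconsistent
```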
Otherwise the general solution has n – r free parameters; hence in such a case there are infinitely many solutions, which can be found by imposing arbitrary values on n – r of the variables and solving the resulting system for its unique solution; different choices of which variables to fix, and different fixed values of them, give different system solutions. == Dynamic equations == A first-order matrix difference equation with constant term can be written as y t + 1 = A y t + c , {\displaystyle \mathbf {y} _{t+1}=A\mathbf {y} _{t}+\mathbf {c} ,} where A is n × n and y and c are n × 1. This system converges to its steady-state level of y if and only if the absolute values of all n eigenvalues of A are less than 1. A first-order matrix differential equation with constant term can be written as d y d t = A y ( t ) + c . {\displaystyle {\frac {d\mathbf {y} }{dt}}=A\mathbf {y} (t)+\mathbf {c} .} This system is stable if and only if all n eigenvalues of A have negative real parts. == References ==
|
Wikipedia:Cohn's irreducibility criterion#0
|
Cohn's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in Z [ x ] {\displaystyle \mathbb {Z} [x]} —that is, for it to be unfactorable into the product of lower-degree polynomials with integer coefficients. == Statement == The criterion is often stated as follows: If a prime number p {\displaystyle p} is expressed in base 10 as p = a m 10 m + a m − 1 10 m − 1 + ⋯ + a 1 10 + a 0 {\displaystyle p=a_{m}10^{m}+a_{m-1}10^{m-1}+\cdots +a_{1}10+a_{0}} (where 0 ≤ a i ≤ 9 {\displaystyle 0\leq a_{i}\leq 9} ) then the polynomial f ( x ) = a m x m + a m − 1 x m − 1 + ⋯ + a 1 x + a 0 {\displaystyle f(x)=a_{m}x^{m}+a_{m-1}x^{m-1}+\cdots +a_{1}x+a_{0}} is irreducible in Z [ x ] {\displaystyle \mathbb {Z} [x]} . The theorem can be generalized to other bases as follows: Assume that b ≥ 2 {\displaystyle b\geq 2} is a natural number and p ( x ) = a k x k + a k − 1 x k − 1 + ⋯ + a 1 x + a 0 {\displaystyle p(x)=a_{k}x^{k}+a_{k-1}x^{k-1}+\cdots +a_{1}x+a_{0}} is a polynomial such that 0 ≤ a i ≤ b − 1 {\displaystyle 0\leq a_{i}\leq b-1} . If p ( b ) {\displaystyle p(b)} is a prime number then p ( x ) {\displaystyle p(x)} is irreducible in Z [ x ] {\displaystyle \mathbb {Z} [x]} . == History and extensions == The base 10 version of the theorem is attributed to Cohn by Pólya and Szegő in Problems and Theorems in Analysis while the generalization to any base b is due to Brillhart, Filaseta, and Odlyzko. It is clear from context that the "A. Cohn" mentioned by Polya and Szegő is Arthur Cohn (1894–1940), a student of Issai Schur who was awarded his doctorate from Frederick William University in 1921. A further generalization of the theorem allowing coefficients larger than digits was given by Filaseta and Gross. In particular, let f ( x ) {\displaystyle f(x)} be a polynomial with non-negative integer coefficients such that f ( 10 ) {\displaystyle f(10)} is prime. 
If all coefficients are ≤ {\displaystyle \leq } 49598666989151226098104244512918, then f ( x ) {\displaystyle f(x)} is irreducible over Z [ x ] {\displaystyle \mathbb {Z} [x]} . Moreover, they proved that this bound is also sharp. In other words, coefficients larger than 49598666989151226098104244512918 do not guarantee irreducibility. The method of Filaseta and Gross was also generalized to provide similar sharp bounds for some other bases by Cole, Dunn, and Filaseta. An analogue of the theorem also holds for algebraic function fields over finite fields. == Converse == The converse of this criterion is that, if p is an irreducible polynomial with integer coefficients that have greatest common divisor 1, then there exists a base such that the coefficients of p form the representation of a prime number in that base. This is the Bunyakovsky conjecture and its truth or falsity remains an open question. == See also == Eisenstein's criterion Perron's irreducibility criterion == References == == External links == "A. Cohn's irreducibility criterion". PlanetMath.
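The base-10 statement can be illustrated with a small script (the prime 1289 is this sketch's choice, not the article's). For a monic cubic, irreducibility over the integers can be verified directly: a reducible cubic over the rationals must have a rational root, and by the rational root theorem any rational root of a monic integer polynomial is an integer dividing the constant term.

```python
# Illustrative check of Cohn's criterion in base 10: 1289 is prime, so
# f(x) = x^3 + 2x^2 + 8x + 9 should be irreducible over Z.  We verify
# that f has no integer root among the divisors of the constant term 9,
# which for a monic cubic already rules out any nontrivial factorization.
coeffs = [1, 2, 8, 9]      # base-10 digits of 1289, highest power first

def f(x):
    acc = 0
    for a in coeffs:       # Horner's rule
        acc = acc * x + a
    return acc

print(f(10))               # 1289, the prime we started from

candidates = [d for a in (1, 3, 9) for d in (a, -a)]  # divisors of 9
print(any(f(r) == 0 for r in candidates))             # False: no root
```

By Gauss's lemma, irreducibility over the rationals then gives irreducibility in Z[x], in agreement with the criterion.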
|
Wikipedia:Coincidence point#0
|
In mathematics, a coincidence point (or simply coincidence) of two functions is a point in their common domain having the same image. Formally, given two functions f , g : X → Y {\displaystyle f,g\colon X\rightarrow Y} we say that a point x in X is a coincidence point of f and g if f(x) = g(x). Coincidence theory (the study of coincidence points) is, in most settings, a generalization of fixed point theory, the study of points x with f(x) = x. Fixed point theory is the special case obtained from the above by letting X = Y and taking g to be the identity function. Just as fixed point theory has its fixed-point theorems, there are theorems that guarantee the existence of coincidence points for pairs of functions. Notable among them, in the setting of manifolds, is the Lefschetz coincidence theorem, which is typically known only in its special case formulation for fixed points. Coincidence points, like fixed points, are today studied using many tools from mathematical analysis and topology. An equaliser is a generalization of the coincidence set. == See also == Incidence (geometry) Intersection (geometry) == References ==
|
Wikipedia:Cointerpretability#0
|
In mathematical logic, cointerpretability is a binary relation on formal theories: a formal theory T is cointerpretable in another such theory S, when the language of S can be translated into the language of T in such a way that S proves every formula whose translation is a theorem of T. The "translation" here is required to preserve the logical structure of formulas. This concept, in a sense dual to interpretability, was introduced by Japaridze (1993), who also proved that, for theories of Peano arithmetic and any stronger theories with effective axiomatizations, cointerpretability is equivalent to Σ 1 {\displaystyle \Sigma _{1}} -conservativity. == See also == Cotolerance Interpretability logic Tolerance (in logic) == References == Japaridze, Giorgi (1993), "A generalized notion of weak interpretability and the corresponding modal logic", Annals of Pure and Applied Logic, 61 (1–2): 113–160, doi:10.1016/0168-0072(93)90201-N, MR 1218658. Japaridze, Giorgi; de Jongh, Dick (1998), "The logic of provability", in Buss, Samuel R. (ed.), Handbook of Proof Theory, Studies in Logic and the Foundations of Mathematics, vol. 137, Amsterdam: North-Holland, pp. 475–546, doi:10.1016/S0049-237X(98)80022-0, MR 1640331.
|
Wikipedia:Cokernel#0
|
The cokernel of a linear mapping of vector spaces f : X → Y is the quotient space Y / im(f) of the codomain of f by the image of f. The dimension of the cokernel is called the corank of f. Cokernels are dual to the kernels of category theory, hence the name: the kernel is a subobject of the domain (it maps to the domain), while the cokernel is a quotient object of the codomain (it maps from the codomain). Intuitively, given an equation f(x) = y that one is seeking to solve, the cokernel measures the constraints that y must satisfy for this equation to have a solution – the obstructions to a solution – while the kernel measures the degrees of freedom in a solution, if one exists. This is elaborated in intuition, below. More generally, the cokernel of a morphism f : X → Y in some category (e.g. a homomorphism between groups or a bounded linear operator between Hilbert spaces) is an object Q and a morphism q : Y → Q such that the composition q f is the zero morphism of the category, and furthermore q is universal with respect to this property. Often the map q is understood, and Q itself is called the cokernel of f. In many situations in abstract algebra, such as for abelian groups, vector spaces or modules, the cokernel of the homomorphism f : X → Y is the quotient of Y by the image of f. In topological settings, such as with bounded linear operators between Hilbert spaces, one typically has to take the closure of the image before passing to the quotient. == Formal definition == One can define the cokernel in the general framework of category theory. In order for the definition to make sense the category in question must have zero morphisms. The cokernel of a morphism f : X → Y is defined as the coequalizer of f and the zero morphism 0XY : X → Y. Explicitly, this means the following. The cokernel of f : X → Y is an object Q together with a morphism q : Y → Q such that the diagram commutes. Moreover, the morphism q must be universal for this diagram, i.e. 
any other such q′ : Y → Q′ can be obtained by composing q with a unique morphism u : Q → Q′: As with all universal constructions the cokernel, if it exists, is unique up to a unique isomorphism, or more precisely: if q : Y → Q and q′ : Y → Q′ are two cokernels of f : X → Y, then there exists a unique isomorphism u : Q → Q′ with q' = u q. Like all coequalizers, the cokernel q : Y → Q is necessarily an epimorphism. Conversely an epimorphism is called normal (or conormal) if it is the cokernel of some morphism. A category is called conormal if every epimorphism is normal (e.g. the category of groups is conormal). === Examples === In the category of groups, the cokernel of a group homomorphism f : G → H is the quotient of H by the normal closure of the image of f. In the case of abelian groups, since every subgroup is normal, the cokernel is just H modulo the image of f: coker ( f ) = H / im ( f ) . {\displaystyle \operatorname {coker} (f)=H/\operatorname {im} (f).} === Special cases === In a preadditive category, it makes sense to add and subtract morphisms. In such a category, the coequalizer of two morphisms f and g (if it exists) is just the cokernel of their difference: coeq ( f , g ) = coker ( g − f ) . {\displaystyle \operatorname {coeq} (f,g)=\operatorname {coker} (g-f).} In an abelian category (a special kind of preadditive category) the image and coimage of a morphism f are given by im ( f ) = ker ( coker f ) , coim ( f ) = coker ( ker f ) . {\displaystyle {\begin{aligned}\operatorname {im} (f)&=\ker(\operatorname {coker} f),\\\operatorname {coim} (f)&=\operatorname {coker} (\ker f).\end{aligned}}} In particular, every abelian category is normal (and conormal as well). That is, every monomorphism m can be written as the kernel of some morphism. 
Specifically, m is the kernel of its own cokernel: m = ker ( coker ( m ) ) {\displaystyle m=\ker(\operatorname {coker} (m))} == Intuition == The cokernel can be thought of as the space of constraints that an equation must satisfy, as the space of obstructions, just as the kernel is the space of solutions. Formally, one may connect the kernel and the cokernel of a map T: V → W by the exact sequence 0 → ker T → V ⟶ T W → coker T → 0. {\displaystyle 0\to \ker T\to V{\overset {T}{\longrightarrow }}W\to \operatorname {coker} T\to 0.} These can be interpreted thus: given a linear equation T(v) = w to solve, the kernel is the space of solutions to the homogeneous equation T(v) = 0, and its dimension is the number of degrees of freedom in solutions to T(v) = w, if they exist; the cokernel is the space of constraints on w that must be satisfied if the equation is to have a solution, and its dimension is the number of independent constraints that must be satisfied for the equation to have a solution. The dimension of the cokernel plus the dimension of the image (the rank) add up to the dimension of the target space, as the dimension of the quotient space W / T(V) is simply the dimension of the space minus the dimension of the image. As a simple example, consider the map T: R2 → R2, given by T(x, y) = (0, y). Then for an equation T(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b), or equivalently, (0, b) + (x, 0), (one degree of freedom). The kernel may be expressed as the subspace (x, 0) ⊆ V: the value of x is the freedom in a solution. The cokernel may be expressed via the real valued map W: (a, b) → (a): given a vector (a, b), the value of a is the obstruction to there being a solution. Additionally, the cokernel can be thought of as something that "detects" surjections in the same way that the kernel "detects" injections. 
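The dimension count for the example T(x, y) = (0, y) can be reproduced with a short numerical sketch (using NumPy's rank computation; illustrative, not from the article):

```python
import numpy as np

# The example map T(x, y) = (0, y) from the text, written as a matrix.
# The rank-nullity bookkeeping reproduces the dimension counts above:
# dim ker = one degree of freedom (x), dim coker = one constraint (a = 0).
T = np.array([[0.0, 0.0],
              [0.0, 1.0]])

n = T.shape[1]                    # dimension of the domain V
m = T.shape[0]                    # dimension of the codomain W
rank = np.linalg.matrix_rank(T)   # dimension of the image

dim_ker = n - rank
dim_coker = m - rank
print(dim_ker, dim_coker)         # 1 1
```

The four numbers satisfy dim ker − dim V + dim W − dim coker = 0, as the exact sequence above requires.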
A map is injective if and only if its kernel is trivial, and a map is surjective if and only if its cokernel is trivial, or in other words, if W = im(T). == References ==
|
Wikipedia:Cole Prize#0
|
The Frank Nelson Cole Prize, or Cole Prize for short, is one of twenty-two prizes awarded to mathematicians by the American Mathematical Society. It is given in two categories: one for an outstanding contribution to algebra, and the other for an outstanding contribution to number theory. The prize is named after Frank Nelson Cole, who served the Society for 25 years. The Cole Prize in algebra was funded by Cole himself, from funds given to him as a retirement gift; the prize fund was later augmented by his son, leading to the double award. Each prize is given every three years and recognizes a notable research work in algebra or number theory that has appeared in the last six years. The work must be published in a recognized, peer-reviewed venue. The first award for algebra was made in 1928 to L. E. Dickson, while the first award for number theory was made in 1931 to H. S. Vandiver. == Frank Nelson Cole Prize in Algebra == == Frank Nelson Cole Prize in Number Theory == For full citations, see external links. == See also == List of mathematics awards == References == == External links == Frank Nelson Cole Prize in Algebra Frank Nelson Cole Prize in Number Theory
|
Wikipedia:Colette Guillopé#0
|
Colette Guillopé (born 1951) is a French mathematician specializing in partial differential equations and fluid mechanics. She is a professor emerita at Paris 12 Val de Marne University, where she is also the gender officer for the university. == Early life == Guillopé's parents were both professors. She studied at the École normale supérieure de Fontenay-aux-Roses. == Education == Guillopé earned a diplôme d'études approfondies. In 1977, Guillopé earned her doctorate at the Centre national de la recherche scientifique. She completed a thèse d'état in 1983 from the University of Paris-Sud under the supervision of Roger Temam. == Career == Guillopé was a founding member of L'association femmes et mathématiques in 1987, and was its president from 1996 to 1998. She also led the association femmes & sciences from 2004 to 2008. In 2016 she became an officer in the Legion of Honour after already being a knight in the Legion. She was president of the French association of women mathematicians. == References ==
|
Wikipedia:Colin W. Clark#0
|
Colin Whitcomb Clark (18 June 1931 – 12 April 2024) was a Canadian mathematician and behavioral ecologist who contributed to the economics of natural resources. Clark specialized in behavioral ecology and the economics of natural resources, specifically in the management of commercial fisheries. Clark was named a Fellow of the International Institute of Fisheries Economics & Trade (IIFET) in 2016 for his contributions to bioeconomics. Clark's impact upon fisheries economics through his scholarly work is encapsulated in Mathematical Bioeconomics: The Mathematics of Conservation, which is considered to be a classic contribution in environmental economic theory. == Background == Clark was born in Vancouver, Canada, on 18 June 1931. He completed his PhD in 1958 at the University of Washington. He was appointed to the University of British Columbia's mathematics department in 1960, working on partial differential equations, spectral theory, and functional analysis, before pivoting to mathematical biology. He married Janet Clark, with whom he had three children. As a result of his work in mathematical biology, he became a member of the Vancouver Natural History Society, and a prolific birdwatcher. He died on 12 April 2024, at the age of 92. == Honours and awards == 1997 Elected Fellow of the Royal Society == Books == Math Overboard! (Basic Math for Adults): Part 2. 2013. Dog Ear Publishing. Math Overboard! (Basic Math for Adults): Part 1. 2012. Dog Ear Publishing. Mathematical Bioeconomics: The Mathematics of Conservation. 3rd Edition. 2010. Wiley Interscience (New York, NY). The Worldwide Crisis in Fisheries: Economic Models and Human Behaviour. 2006. Cambridge University Press (Cambridge, UK; New York, NY). Dynamic State Variable Models in Ecology: Methods and Applications (with Marc Mangel). 2000. Oxford University Press (Oxford, UK: New York, NY). Dynamic Models in Behavioral Ecology (with Marc Mangel). 1988. Princeton University Press (Princeton, NJ). 
Natural Resource Economics: Notes and Problems (with Jon Conrad). 1997. Cambridge University Press (Cambridge, UK: New York, NY). == References ==
|
Wikipedia:Collage theorem#0
|
In mathematics, the collage theorem characterises an iterated function system whose attractor is close, relative to the Hausdorff metric, to a given set. The IFS described is composed of contractions whose images, as a collage or union when mapping the given set, are arbitrarily close to the given set. It is typically used in fractal compression. == Statement == Let X {\displaystyle \mathbb {X} } be a complete metric space. Suppose L {\displaystyle L} is a nonempty, compact subset of X {\displaystyle \mathbb {X} } and let ϵ > 0 {\displaystyle \epsilon >0} be given. Choose an iterated function system (IFS) { X ; w 1 , w 2 , … , w N } {\displaystyle \{\mathbb {X} ;w_{1},w_{2},\dots ,w_{N}\}} with contractivity factor s , {\displaystyle s,} where 0 ≤ s < 1 {\displaystyle 0\leq s<1} (the contractivity factor s {\displaystyle s} of the IFS is the maximum of the contractivity factors of the maps w i {\displaystyle w_{i}} ). Suppose h ( L , ⋃ n = 1 N w n ( L ) ) ≤ ε , {\displaystyle h\left(L,\bigcup _{n=1}^{N}w_{n}(L)\right)\leq \varepsilon ,} where h ( ⋅ , ⋅ ) {\displaystyle h(\cdot ,\cdot )} is the Hausdorff metric. Then h ( L , A ) ≤ ε 1 − s {\displaystyle h(L,A)\leq {\frac {\varepsilon }{1-s}}} where A is the attractor of the IFS. Equivalently, h ( L , A ) ≤ ( 1 − s ) − 1 h ( L , ∪ n = 1 N w n ( L ) ) {\displaystyle h(L,A)\leq (1-s)^{-1}h\left(L,\cup _{n=1}^{N}w_{n}(L)\right)\quad } , for all nonempty, compact subsets L of X {\displaystyle \mathbb {X} } . Informally, If L {\displaystyle L} is close to being stabilized by the IFS, then L {\displaystyle L} is also close to being the attractor of the IFS. == See also == Michael Barnsley Barnsley fern == References == Barnsley, Michael. (1988). Fractals Everywhere. Academic Press, Inc. ISBN 0-12-079062-9. == External links == A description of the collage theorem and interactive Java applet at cut-the-knot. Notes on designing IFSs to approximate real images. Expository Paper on Fractals and Collage theorem
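The collage bound can be watched numerically in one dimension. The IFS {w₁(x) = x/3, w₂(x) = x/3 + 2/3} is a standard example (not from the article): it has contractivity s = 1/3 and the Cantor set as attractor, and the collage distance of a finite point set shrinks by the factor s with each collage step.

```python
# Illustrative sketch on the real line: the collage distance of the
# endpoint set L = {0, 1} is 1/3, and one collage step shrinks it by the
# contractivity factor s = 1/3, consistent with h(L, A) <= eps / (1 - s).
def apply_ifs(points):
    """One collage step: the union w1(L) ∪ w2(L)."""
    return sorted({x / 3 for x in points} | {x / 3 + 2 / 3 for x in points})

def hausdorff(A, B):
    d = lambda P, Q: max(min(abs(p - q) for q in Q) for p in P)
    return max(d(A, B), d(B, A))

L = [0.0, 1.0]                       # endpoints of [0, 1]
e1 = hausdorff(L, apply_ifs(L))      # collage distance of L: 1/3
L2 = apply_ifs(L)
e2 = hausdorff(L2, apply_ifs(L2))    # one step later: 1/9
print(e1, e2)
```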
|
Wikipedia:Collapsing algebra#0
|
In mathematics, a collapsing algebra is a type of Boolean algebra sometimes used in forcing to reduce ("collapse") the size of cardinals. The posets used to generate collapsing algebras were introduced by Azriel Lévy in 1963. The collapsing algebra of λω is a complete Boolean algebra with at least λ elements but generated by a countable number of elements. As the size of countably generated complete Boolean algebras is unbounded, this shows that there is no free complete Boolean algebra on a countable number of elements. == Definition == There are several slightly different sorts of collapsing algebras. If κ and λ are cardinals, then the Boolean algebra of regular open sets of the product space κλ is a collapsing algebra. Here κ and λ are both given the discrete topology. There are several different options for the topology of κλ. The simplest option is to take the usual product topology. Another option is to take the topology generated by open sets consisting of functions whose value is specified on less than λ elements of λ. == References == Bell, J. L. (1985). Boolean-Valued Models and Independence Proofs in Set Theory. Oxford Logic Guides. Vol. 12 (2nd ed.). Oxford: Oxford University Press (Clarendon Press). ISBN 0-19-853241-5. Zbl 0585.03021. Jech, Thomas (2003). Set theory (third millennium (revised and expanded) ed.). Springer-Verlag. ISBN 3-540-44085-2. OCLC 174929965. Zbl 1007.03002. Lévy, Azriel (1963). "Independence results in set theory by Cohen's method. IV". Notices Amer. Math. Soc. 10.
|
Wikipedia:Colloquium Lectures (AMS)#0
|
The Colloquium Lecture of the American Mathematical Society is a special annual session of lectures. == History == The origins of the Colloquium Lectures date back to the 1893 International Congress of Mathematics, held in connection with the Chicago World's Fair, where the German mathematician Felix Klein gave the opening address. After the Congress, Klein was invited by one of its organisers, his former student Henry Seely White, to deliver a two-week-long series of lectures at Northwestern University in Evanston. In February 1896, White proposed in a letter to Thomas Fiske to repeat the experience of the Evanston lectures, by organising a series of longer talks "for increasing the utility of the American Mathematical Society". The two of them, together with E. H. Moore, William Osgood, Frank Cole, Alexander Ziwet, and Frank Morley, later wrote an open letter to the AMS, asking the society to sponsor an annual week-long series of Colloquium lectures focussing on a specific mathematical area, in order to complement the traditional shorter talks. The first official Colloquium Lectures were held in September 1896, after the AMS Summer Meetings in Buffalo, New York, and consisted of two independent series of lectures given by James Pierpont and Maxime Bôcher. A synopsis of their lectures was published in the Bulletin of the AMS; starting from the second Colloquium in 1898, the lectures have been published entirely in book form in the AMS Colloquium Publications series. == List of Colloquium Lectures == 1896 James Pierpont (Yale University): Galois's theory of equations. 1896 Maxime Bôcher (Harvard University): Linear differential equations and their applications. 1898 William Fogg Osgood (Harvard University): Selected topics in the theory of functions. 1898 Arthur Gordon Webster (Clark University): The partial differential equations of wave propagation. 1901 Oskar Bolza (University of Chicago): The simplest type of problems in the calculus of variations. 
1901 Ernest William Brown (Haverford College): Modern methods of treating dynamical problems, and in particular the problem of three bodies. 1903 Henry Seely White (Northwestern University): Linear systems of curves on algebraic surfaces. 1903 Frederick S. Woods (Massachusetts Institute of Technology): Forms of non-euclidean space. 1903 Edward Burr Van Vleck (Wesleyan University): Selected topics in the theory of divergent series and continued fractions. 1906 E. H. Moore (University of Chicago): On the theory of bilinear functional operations. 1906 Ernest Julius Wilczynski (University of California, Berkeley): Projective differential geometry. 1906 Max Mason (Yale University): Selected topics in the theory of boundary value problems of differential equations. 1909 Gilbert Ames Bliss (University of Chicago): Fundamental existence theorems. 1909 Edward Kasner (Columbia University): Differential-geometric aspects of dynamics. 1913 Leonard E. Dickson (University of Chicago): On invariants and the theory of numbers. 1913 William Fogg Osgood (Harvard University): Topics in the theory of functions of several complex variables. 1916 Griffith C. Evans (Rice University): Functionals and their applications, selected topics including integral equations. 1916 Oswald Veblen (Princeton University): Analysis situs. 1920 George David Birkhoff (Harvard University): Dynamical systems. 1920 Forest Ray Moulton (University of Chicago): Topics from the theory of functions of infinitely many variables. 1925 Luther P. Eisenhart (Princeton University): Non-Riemannian geometry. 1925 Dunham Jackson (University of Minnesota): The Theory of Approximations. 1927 Eric Temple Bell (California Institute of Technology): Algebraic arithmetic. 1927 Anna Pell Wheeler (Bryn Mawr College): The theory of quadratic forms in infinitely many variables and applications. 1928 Arthur Byron Coble (University of Illinois): The determination of the tritangent planes of the space sextic of genus four. 
1929 Robert Lee Moore (University of Texas): Foundations of point set theory. 1930 Solomon Lefschetz (Princeton University): Topology. 1931 Marston Morse (Harvard University): The calculus of variations in the large. 1932 Joseph Ritt (Columbia University): Differential equations from the algebraic standpoint. 1934 Raymond Paley (Trinity College, Cambridge University), who died in 1933 and was replaced by Norbert Wiener (Massachusetts Institute of Technology): Fourier transforms in the complex domain. 1935 Harry Vandiver (University of Texas): Fermat's last theorem and related topics in number theory. 1936 Edward W. Chittenden (University of Iowa): Topics in general analysis. 1937 John von Neumann (Institute for Advanced Study): Continuous geometry. 1939 Abraham Adrian Albert (University of Chicago): Structure of algebras. 1939 Marshall Stone (Harvard University): Convex bodies. 1940 Gordon Thomas Whyburn (University of Virginia): Analytic topology. 1941 Øystein Ore (Yale University): Mathematical relations and structures. 1942 Raymond Louis Wilder (University of Michigan): Topology of manifolds. 1943 Edward James McShane (University of Virginia): Existence theorems in the calculus of variations. 1944 Einar Hille (Yale University): Selected topics in the theory of semi-groups. 1945 Tibor Radó (Ohio State University): Length and area. 1946 Hassler Whitney (Harvard University): Topology of smooth manifolds. 1947 Oscar Zariski (Harvard University): Abstract algebraic geometry. 1948 Richard Brauer (University of Toronto): Representation of groups and rings. 1949 Gustav Hedlund (Yale University): Topological Dynamics. 1951 Deane Montgomery (Institute for Advanced Study): Topological transformation groups. 1952 Alfred Tarski (University of California, Berkeley): Arithmetical classes and types of algebraic systems. 1953 Antoni Zygmund (University of Chicago): On the existence and properties of certain singular integrals. 1955 Nathan Jacobson (Yale University): Jordan algebras. 
1956 Salomon Bochner (Princeton University): Harmonic analysis and probability. 1957 Norman Steenrod (Princeton University): Cohomology operations. 1959 Joseph L. Doob (University of Illinois, Urbana-Champaign): The first boundary value problem. 1960 Shiing-Shen Chern (University of California, Berkeley): Geometrical structures on manifolds. 1961 George Mackey (Harvard University): Infinite dimensional group representations. 1963 Saunders Mac Lane (University of Chicago): Categorical algebra. 1964 Charles Morrey (University of California, Berkeley): Multiple integrals in the calculus of variations. 1965 Alberto Calderón (University of Chicago): Singular integrals. 1967 Samuel Eilenberg (Columbia University): Universal algebras and the theory of automata. 1968 Donald Spencer (Stanford University): Overdetermined systems of partial differential equations. 1968 John Willard Milnor (Princeton University and University of California, Los Angeles): Uses of the fundamental group. 1969 Raoul Bott (Harvard University): On the periodicity theorem of the classical groups and its applications. 1969 Harish-Chandra (Institute for Advanced Study): Harmonic analysis of semisimple Lie groups. 1970 R. H. Bing (University of Wisconsin, Madison): Topology of 3-manifolds. 1971 Lipman Bers (Columbia University): Uniformization, moduli, and Kleinian groups. 1971 Armand Borel (Institute for Advanced Study): Algebraic groups and arithmetic groups. 1972 Stephen Smale (University of California, Berkeley): Applications of global analysis to biology, economics, electrical circuits, and celestial mechanics. 1972 John T. Tate (Harvard University): The arithmetic of elliptic curves. 1973 Michael Francis Atiyah (Institute for Advanced Study): The index of elliptic operators. 1973 Felix Browder (University of Chicago): Nonlinear functional analysis and its applications to nonlinear partial differential and integral equations. 
1974 Errett Bishop (University of California, San Diego): Schizophrenia in contemporary mathematics. 1974 Louis Nirenberg (Courant Institute): Selected topics in partial differential equations. 1974 John Griggs Thompson (University of Cambridge): Finite simple groups. 1975 Howard Jerome Keisler (University of Wisconsin): New directions in model theory. 1975 Ellis Kolchin (Columbia University): Differential algebraic groups. 1975 Elias Stein (Princeton University): Singular integrals, old and new. 1976 Isadore M. Singer (Massachusetts Institute of Technology): Connections between analysis, geometry and topology. 1976 Jürgen Moser (Courant Institute): Recent progress in dynamical systems. 1977 William Browder (Princeton University): Differential topology of higher dimensional manifolds. 1977 Herbert Federer (Brown University): Geometric measure theory. 1978 Hyman Bass (Columbia University): Algebraic K-theory. 1979 Phillip Griffiths (Harvard University): Complex analysis and algebraic geometry. 1979 George Mostow (Yale University): Discrete subgroups of Lie groups. 1980 Wolfgang M. Schmidt (University of Colorado, Boulder): Various methods in number theory. 1980 Julia Robinson (University of California, Berkeley): Between logic and arithmetic. 1981 Mark Kac (Rockefeller University): Some mathematical problems suggested by questions in physics. 1981 Serge Lang (Yale University): Units and class numbers in algebraic geometry and number theory. 1982 Dennis Sullivan (CUNY, Graduate School and University Center): Geometry, iteration, and group theory. 1982 Morris Hirsch (University of California, Berkeley): Convergence in ordinary and partial differential equations. 1983 Charles Fefferman (Princeton University): The uncertainty principle. 1983 Bertram Kostant (Massachusetts Institute of Technology): On the Coxeter element and the structure of the exceptional Lie groups. 1984 Barry Mazur (Harvard University): On the arithmetic of curves. 
1984 Paul Rabinowitz (University of Wisconsin, Madison): Minimax methods in critical point theory and applications to differential equations. 1985 Daniel Gorenstein (Rutgers University): The classification of the finite simple groups. 1985 Karen Uhlenbeck (University of Chicago): Mathematical gauge field theory. 1986 Shing-Tung Yau (University of California, San Diego): Nonlinear analysis. 1987 Peter Lax (Courant Institute): Uses of the non-Euclidean wave equation. 1987 Edward Witten (Princeton University): Mathematical applications of quantum field theory. 1988 Victor Guillemin (Massachusetts Institute of Technology): Spectral properties of Riemannian manifolds. 1989 Nicholas Katz (Princeton University): Exponential sums and differential equations. 1989 William Thurston (Princeton University): Geometry, groups, and self-similar tilings. 1990 Shlomo Sternberg (Harvard University): Some thoughts on the interaction between group theory and physics. 1991 Robert MacPherson (Massachusetts Institute of Technology): Intersection homology and perverse sheaves. 1992 Robert Langlands (Institute for Advanced Study): Automorphic forms and Hasse-Weil zeta-functions and Finite models for percolation. 1993 Luis Caffarelli (Institute for Advanced Study): Nonlinear differential equations and Lagrangian coordinates. 1993 Sergiu Klainerman (Princeton University): On the regularity properties of gauge theories in Minkowski space-time. 1994 Jean Bourgain (IHES and the University of Illinois, Urbana-Champaign): Harmonic analysis and nonlinear evolution equations. 1995 Clifford Taubes (Harvard University): Mysteries in three and four dimensions. 1996 Andrew Wiles (Princeton University): Modular forms, elliptic curves and Galois representations. 1997 Daniel Stroock (Massachusetts Institute of Technology): Analysis on spaces of paths. 
1998 Gian-Carlo Rota (Massachusetts Institute of Technology): Introduction to geometric probability; Invariant theory old and new; and Combinatorial snapshots. 1999 Helmut Hofer (Courant Institute, New York University): Symplectic geometry from a dynamical systems point of view. 2000 Curtis McMullen (Harvard University): Riemann surfaces in dynamics, topology, and arithmetic. 2001 János Kollár (Princeton University): Large rationally connected varieties. 2002 Lawrence C. Evans (University of California, Berkeley): Entropy methods for partial differential equations. 2003 Peter Sarnak (Courant Institute and Princeton University): Spectra of hyperbolic surfaces and applications. 2004 Sun-Yung Alice Chang (Princeton University): Conformal invariants and partial differential equations. 2005 Robert Lazarsfeld (University of Michigan): How polynomials vanish: Singularities, integrals, and ideals. 2006 Hendrik Lenstra (Universiteit Leiden): Entangled radicals. 2007 Andrei Okounkov (Princeton University): Limit shapes, real and imagined. 2008 Wendelin Werner (University of Paris-Sud): Random conformally invariant pictures. 2009 Grigori Aleksandrovich Margulis (Yale University): Homogeneous dynamics and number theory. 2010 Richard P. Stanley (Massachusetts Institute of Technology): Permutations: 1) Increasing and decreasing subsequences; 2) Alternating permutations; 3) Reduced decompositions. 2011 Alexander Lubotzky (The Hebrew University of Jerusalem): Expander graphs in pure and applied mathematics. 2012 Edward Frenkel (University of California, Berkeley): Langlands program, trace formulas, and their geometrization. 2013 Alice Guionnet (École Normale Supérieure de Lyon): Free probability and random matrices. 2014 Dusa McDuff (Columbia University): Symplectic topology today. 2015 Michael J. Hopkins (Harvard University): 1) Algebraic topology: New and old directions; 2) The Kervaire invariant problem; 3) Chern-Weil theory and abstract homotopy theory. 2016 Timothy A. 
Gowers (University of Cambridge): Generalizations of Fourier analysis, and how to apply them. 2017 Carlos Kenig (University of Chicago): The focusing energy critical wave equation: the radial case in 3 space dimensions. 2018 Avi Wigderson (Institute for Advanced Study): 1) Alternate Minimization and Scaling algorithms: theory, applications and connections across mathematics and computer science; 2) Proving algebraic identities; 3) Proving analytic inequalities. 2019 Benedict Gross (Harvard University): Complex multiplication: past, present, future. 2020 Ingrid Daubechies (Duke University): Mathematical Frameworks for Signal and Image Analysis. 2021 not awarded 2022 Karen E. Smith (University of Michigan): Understanding and measuring singularities in algebraic geometry. 2023 Camillo De Lellis (Princeton University): Flows of nonsmooth vector fields. == See also == Josiah Willard Gibbs Lectureship == External links == Official website == References ==
Wikipedia:Colva Roney-Dougal#0
Colva Mary Roney-Dougal is a British mathematician specializing in group theory and computational algebra. She is Professor of Pure Mathematics at the University of St Andrews, and the Director of the Centre for Interdisciplinary Research in Computational Algebra at St Andrews. She is also known for her popularization of mathematics on BBC radio shows, including appearances on In Our Time about the mathematics of Emmy Noether and Pierre-Simon Laplace and on The Infinite Monkey Cage about the nature of infinity and numbers in the real world. == Education == Roney-Dougal completed her PhD at the University of London in 2001. Her dissertation, Permutation Groups with a Unique Non-diagonal Self-paired Orbital, was supervised by Peter Cameron. == Book == With John Bray and Derek Holt, Roney-Dougal is the co-author of the book The Maximal Subgroups of the Low-Dimensional Finite Classical Groups (London Mathematical Society and Cambridge University Press, 2013). == Recognition == In 2015 she was given the inaugural Cheryl E. Praeger Visiting Research Fellowship, funding her to visit the University of Western Australia. Roney-Dougal was appointed Officer of the Order of the British Empire (OBE) in the 2024 New Year Honours for services to education and mathematics. == References ==
Wikipedia:Combinatorial matrix theory#0
Combinatorial matrix theory is a branch of linear algebra and combinatorics that studies matrices in terms of the patterns of nonzeros and of positive and negative values in their coefficients. Concepts and topics studied within combinatorial matrix theory include: (0,1)-matrix, a matrix whose coefficients are all 0 or 1 Permutation matrix, a (0,1)-matrix with exactly one nonzero in each row and each column The Gale–Ryser theorem, on the existence of (0,1)-matrices with given row and column sums Hadamard matrix, a square matrix of 1 and −1 coefficients with each pair of rows having matching coefficients in exactly half of their columns Alternating sign matrix, a matrix of 0, 1, and −1 coefficients with the nonzeros in each row or column alternating between 1 and −1 and summing to 1 Sparse matrix, a matrix with few nonzero elements, and sparse matrices of special form such as diagonal matrices and band matrices Sylvester's law of inertia, on the invariance of the number of negative diagonal elements of a matrix under changes of basis Researchers in combinatorial matrix theory include Richard A. Brualdi and Pauline van den Driessche. == References ==
Wikipedia:Commensurability (mathematics)#0
In mathematics, two non-zero real numbers a and b are said to be commensurable if their ratio a/b is a rational number; otherwise a and b are called incommensurable. (Recall that a rational number is one that is equivalent to the ratio of two integers.) There is a more general notion of commensurability in group theory. For example, the numbers 3 and 2 are commensurable because their ratio, 3/2, is a rational number. The numbers √3 and 2√3 are also commensurable because their ratio, √3/(2√3) = 1/2, is a rational number. However, the numbers √3 and 2 are incommensurable because their ratio, √3/2, is an irrational number. More generally, it is immediate from the definition that if a and b are any two non-zero rational numbers, then a and b are commensurable; it is also immediate that if a is any irrational number and b is any non-zero rational number, then a and b are incommensurable. On the other hand, if both a and b are irrational numbers, then a and b may or may not be commensurable. == History of the concept == The Pythagoreans are credited with the proof of the existence of irrational numbers. When the ratio of the lengths of two line segments is irrational, the line segments themselves (not just their lengths) are also described as being incommensurable. A separate, more general and circuitous ancient Greek doctrine of proportionality for geometric magnitude was developed in Book V of Euclid's Elements in order to allow proofs involving incommensurable lengths, thus avoiding arguments which applied only to a historically restricted definition of number. 
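When two lengths are given as exact rationals they are automatically commensurable, and the largest common measure (the coarsest "ruler" unit that measures both) can be found by the Euclidean algorithm, the ancient procedure of repeatedly measuring off the smaller segment against the larger. A minimal stdlib sketch (the function name is my own, not a standard one):

```python
from fractions import Fraction

def common_measure(a: Fraction, b: Fraction) -> Fraction:
    """Largest c such that a and b are both whole-number multiples of c.
    The loop terminates precisely because a/b is rational."""
    while b:
        a, b = b, a % b  # repeatedly 'measure off' the smaller segment
    return a

# common_measure(Fraction(3, 2), Fraction(1)) == Fraction(1, 2):
# 3/2 is three half-units and 1 is two half-units.
```

For incommensurable lengths such as √3 and 2 the analogous process never terminates, which is one classical way of seeing their incommensurability.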
Euclid's notion of commensurability is anticipated in passing in the discussion between Socrates and the slave boy in Plato's dialogue entitled Meno, in which Socrates uses the boy's own inherent capabilities to solve a complex geometric problem through the Socratic Method. He develops a proof which is, for all intents and purposes, very Euclidean in nature and speaks to the concept of incommensurability. The usage primarily comes from translations of Euclid's Elements, in which two line segments a and b are called commensurable precisely if there is some third segment c that can be laid end-to-end a whole number of times to produce a segment congruent to a, and also, with a different whole number, a segment congruent to b. Euclid did not use any concept of real number, but he used a notion of congruence of line segments, and of one such segment being longer or shorter than another. That a/b is rational is a necessary and sufficient condition for the existence of some real number c, and integers m and n, such that a = mc and b = nc. Assuming for simplicity that a and b are positive, one can say that a ruler, marked off in units of length c, could be used to measure out both a line segment of length a, and one of length b. That is, there is a common unit of length in terms of which a and b can both be measured; this is the origin of the term. Otherwise the pair a and b are incommensurable. == In group theory == In group theory, two subgroups Γ1 and Γ2 of a group G are said to be commensurable if the intersection Γ1 ∩ Γ2 is of finite index in both Γ1 and Γ2. Example: Let a and b be nonzero real numbers. Then the subgroup of the real numbers R generated by a is commensurable with the subgroup generated by b if and only if the real numbers a and b are commensurable, in the sense that a/b is rational. Thus the group-theoretic notion of commensurability generalizes the concept for real numbers. 
There is a similar notion for two groups which are not given as subgroups of the same group. Two groups G1 and G2 are (abstractly) commensurable if there are subgroups H1 ⊂ G1 and H2 ⊂ G2 of finite index such that H1 is isomorphic to H2. == In topology == Two path-connected topological spaces are sometimes said to be commensurable if they have homeomorphic finite-sheeted covering spaces. Depending on the type of space under consideration, one might want to use homotopy equivalences or diffeomorphisms instead of homeomorphisms in the definition. If two spaces are commensurable, then their fundamental groups are commensurable. Example: any two closed surfaces of genus at least 2 are commensurable with each other. == References ==
Wikipedia:Common fixed point problem#0
In mathematics, the common fixed point problem is the conjecture that, for any two continuous functions that map the unit interval into itself and commute under functional composition, there must be a point that is a fixed point of both functions. In other words, if the functions f and g are continuous, and f(g(x)) = g(f(x)) for all x in the unit interval, then there must be some x in the unit interval for which f(x) = x = g(x). First posed in 1954, the problem remained unsolved for more than a decade, during which several mathematicians made incremental progress toward an affirmative answer. In 1967, William M. Boyce and John P. Huneke independently proved the conjecture to be false by providing examples of commuting functions on a closed interval that do not have a common fixed point. == History == A 1951 paper by H. D. Block and H. P. Thielman sparked interest in the subject of fixed points of commuting functions. Building on earlier work by J. F. Ritt and A. G. Walker, Block and Thielman identified sets of pairwise commuting polynomials and studied their properties. They proved, for each of these sets, that any two polynomials would share a common fixed point. Block and Thielman's paper led other mathematicians to wonder whether having a common fixed point was a universal property of commuting functions. In 1954, Eldon Dyer asked whether, if f and g are two continuous functions that map a closed interval on the real line into itself and commute, they must have a common fixed point. The same question was raised independently by Allen Shields in 1955 and again by Lester Dubins in 1956. John R. Isbell also raised the question in a more general form in 1957. 
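Although the conjecture fails in general, specific commuting pairs are easy to explore numerically. Iterates of a single map always commute; for instance f(x) = x² and g(x) = x³ satisfy f(g(x)) = g(f(x)) = x⁶, and a grid scan locates their common fixed points. A minimal sketch (function names and tolerances are my own):

```python
def commute(f, g, n=1000, tol=1e-9):
    """Check f(g(x)) == g(f(x)) on an evenly spaced grid in [0, 1]."""
    return all(abs(f(g(i / n)) - g(f(i / n))) < tol for i in range(n + 1))

def common_fixed_points(f, g, n=100000, tol=1e-6):
    """Scan [0, 1] for grid points numerically fixed by both f and g."""
    return [
        round(i / n, 6)
        for i in range(n + 1)
        if abs(f(i / n) - i / n) < tol and abs(g(i / n) - i / n) < tol
    ]

# For f(x) = x**2 and g(x) = x**3, the common fixed points in [0, 1]
# are 0 and 1.
```

The counterexamples of Boyce and Huneke show that no such common point need exist for a general commuting pair, which is what makes them remarkable.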
During the 1960s, mathematicians were able to prove that the commuting function conjecture held when certain assumptions were made about f and g. In 1963, Ralph DeMarr showed that if f and g are both Lipschitz continuous, and if the Lipschitz constant of both is ≥ 1, then f and g will have a common fixed point. Gerald Jungck refined DeMarr's conditions, showing that they need not be Lipschitz continuous, but instead satisfy similar but less restrictive criteria. Taking a different approach, Haskell Cohen showed in 1964 that f and g will have a common fixed point if both are continuous and open. Later, both Jon H. Folkman and James T. Joichi, working independently, extended Cohen's work, showing that it is only necessary for one of the two functions to be open. John Maxfield and W. J. Mourant, in 1965, proved that commuting functions on the unit interval have a common fixed point if one of the functions has no period-2 points (i.e., f(f(x)) = x implies f(x) = x). The following year, Sherwood Chu and R. D. Moyer found that the conjecture holds when there is a subinterval in which one of the functions has a fixed point and the other has no period-2 points. == Boyce's counterexample == William M. Boyce earned his Ph.D. from Tulane University in 1967. In his thesis, Boyce identified a pair of functions that commute under composition, but do not have a common fixed point, proving the fixed point conjecture to be false. In 1963, Glenn Baxter and Joichi published a paper about the fixed points of the composite function h(x) = f(g(x)) = g(f(x)). It was known that the functions f and g permute the fixed points of h. 
Baxter and Joichi noted that at each fixed point, the graph of h must either cross the diagonal going up (an "up-crossing"), or going down (a "down-crossing"), or touch the diagonal and then move away in the opposite direction. In an independent paper, Baxter proved that the permutations must preserve the type of each fixed point (up-crossing, down-crossing, touching) and that only certain orderings are allowed. Boyce wrote a computer program to generate permutations that followed Baxter's rules, which he named "Baxter permutations." His program carefully screened out those that could be trivially shown to have fixed points or were analytically equivalent to other cases. After eliminating more than 97% of the possible permutations through this process, Boyce constructed pairs of commuting functions from the remaining candidates and was able to prove that one such pair, based on a Baxter permutation with 13 points of crossing on the diagonal, had no common fixed point. Boyce's paper is one of the earliest examples of a computer-assisted proof. It was uncommon in the 1960s for mathematicians to rely on computers for research, but Boyce, then serving in the Army, had access to computers at MIT Lincoln Laboratory. Boyce published a separate paper describing his process for generating Baxter permutations, including the FORTRAN source code of his program. == Huneke's counterexample == John P. Huneke also investigated the common fixed point problem for his Ph.D. at Wesleyan University, which he also received in 1967. In his thesis, Huneke provides two examples of function pairs that commute but have no common fixed points, using two different strategies. The first of Huneke's examples is essentially identical to Boyce's, though Huneke arrived at it through a different process. 
Huneke's solution is based on the mountain climbing problem, which states that two climbers, climbing separate mountains of equal height, will be able to climb in such a way that they will always be at the same elevation at each point in time. Huneke used this principle to construct sequences of functions that converge to the counterexample to the common fixed point problem. == Later research == Although the discovery of counterexamples by Boyce and Huneke meant that the decade-long pursuit of a proof of the commuting function conjecture had been in vain, it enabled researchers to focus their efforts on investigating under what conditions, in addition to the ones already discovered, the conjecture might still hold true. Boyce extended the work of Maxfield/Mourant and Chu/Moyer in 1971, showing weaker conditions that allow both of the commuting functions to have period-2 points but still imply that they must have a common fixed point. His work was later extended by Theodore Mitchell, Julio Cano, and Jacek R. Jachymski. Over 25 years after the publication of his first paper, Jungck defined additional conditions under which f and g will have a common fixed point, based on the notions of periodic points and the coincidence set of the functions, that is, the values for which f(x) = g(x). Baxter permutations have become a subject of research in their own right and have been applied to other problems beyond the common fixed point problem. == References ==
|
Wikipedia:Common knowledge (logic)#0
|
Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum. It can be denoted as C_G p. The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper. It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists developed an interest in the subject of epistemic logic in general, and of common knowledge in particular, starting in the 1980s. There are numerous puzzles based upon the concept which have been extensively investigated by mathematicians such as John Conway. The philosopher Stephen Schiffer, in his 1972 book Meaning, independently developed a notion he called "mutual knowledge" (E_G p) which functions quite similarly to Lewis's and Friedell's 1969 "common knowledge". If a trustworthy announcement is made in public, then it becomes common knowledge; however, if it is transmitted to each agent in private, it becomes mutual knowledge but not common knowledge. Even if the fact that "every agent in the group knows p" (E_G p) is transmitted to each agent in private, it is still not common knowledge: E_G E_G p ⇏ C_G p. But, if any agent a publicly announces their knowledge of p, then it becomes common knowledge that they know p (viz. C_G K_a p). If every agent publicly announces their knowledge of p, p becomes common knowledge: C_G E_G p ⇒ C_G p. 
== Example == === Puzzle === The idea of common knowledge is often introduced by some variant of induction puzzles (e.g. the muddy children puzzle): On an island, there are k people who have blue eyes, and the rest of the people have green eyes. At the start of the puzzle, no one on the island ever knows their own eye color. By rule, if a person on the island ever discovers they have blue eyes, that person must leave the island at dawn; anyone not making such a discovery always sleeps until after dawn. On the island, each person knows every other person's eye color, there are no reflective surfaces, and there is no communication of eye color. At some point, an outsider comes to the island, calls together all the people on the island, and makes the following public announcement: "At least one of you has blue eyes". The outsider, furthermore, is known by all to be truthful, and all know that all know this, and so on: it is common knowledge that they are truthful, and thus it becomes common knowledge that there is at least one islander who has blue eyes (C_G[∃x∈G(Bl_x)]). The problem: finding the eventual outcome, assuming all persons on the island are completely logical (every participant's knowledge obeys the axiom schemata for epistemic logic) and that this too is common knowledge. === Solution === The answer is that, on the kth dawn after the announcement, all the blue-eyed people will leave the island. ==== Proof ==== The solution can be seen with an inductive argument. If k = 1 (that is, there is exactly one blue-eyed person), the person will recognize that they alone have blue eyes (by seeing only green eyes in the others) and leave at the first dawn. 
If k = 2, no one will leave at the first dawn, and the inaction (and the implied lack of knowledge for every agent) is observed by everyone, which then becomes common knowledge as well (C_G[∀x∈G(¬K_x Bl_x)]). The two blue-eyed people, seeing only one person with blue eyes, and that no one left on the first dawn (and thus that k > 1; and also that the other blue-eyed person does not think that everyone except themself is not blue-eyed, ¬K_a[∀x∈(G−a)(¬Bl_x)], so there is another blue-eyed person, ∃x∈(G−a)(Bl_x)), will leave on the second dawn. Inductively, it can be reasoned that no one will leave at the first k − 1 dawns if and only if there are at least k blue-eyed people. Those with blue eyes, seeing k − 1 blue-eyed people among the others and knowing there must be at least k, will reason that they must have blue eyes and leave. For k > 1, the outsider is only telling the island citizens what they already know: that there are blue-eyed people among them. However, before this fact is announced, the fact is not common knowledge, but instead mutual knowledge. For k = 2, it is merely "first-order" knowledge (E_G[∃x∈G(Bl_x)]). Each blue-eyed person knows that there is someone with blue eyes, but each blue-eyed person does not know that the other blue-eyed person has this same knowledge. For k = 3, it is "second-order" knowledge (E_G E_G[∃x∈G(Bl_x)] = E_G²[∃x∈G(Bl_x)]). Each blue-eyed person knows that a second blue-eyed person knows that a third person has blue eyes, but no one knows that there is a third blue-eyed person with that knowledge, until the outsider makes their statement. 
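The day-counting in this inductive argument can be mirrored by a small simulation. After the announcement, "at least one islander is blue-eyed" is common knowledge; each uneventful dawn raises that common lower bound by one, and a blue-eyed islander (who sees k − 1 blue-eyed others) deduces their own eye color as soon as the bound exceeds what they see. A sketch of this bookkeeping (the modeling choice and names are my own):

```python
def departure_dawn(k):
    """Dawn on which the k >= 1 blue-eyed islanders leave."""
    seen = k - 1       # blue-eyed people each blue-eyed islander sees
    lower_bound = 1    # common-knowledge bound set by the announcement
    dawn = 1
    while lower_bound <= seen:  # own eye color still undetermined
        lower_bound += 1        # no one left, so at least one more
        dawn += 1
    return dawn                 # bound == seen + 1: they must be blue-eyed

# departure_dawn(1) == 1, departure_dawn(4) == 4
```

The simulation returns k for every k, matching the claim that all blue-eyed islanders leave on the kth dawn.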
In general: For k > 1, it is "(k − 1)th-order" knowledge (E_G^(k−1)[∃x∈G(Bl_x)]). Each blue-eyed person knows that a second blue-eyed person knows that a third blue-eyed person knows that... (repeat for a total of k − 1 levels) a kth person has blue eyes, but no one knows that there is a "kth" blue-eyed person with that knowledge, until the outsider makes their statement. The notion of common knowledge therefore has a palpable effect. Knowing that everyone knows does make a difference. When the outsider's public announcement (a fact already known to all, unless k = 1, in which case the one person with blue eyes would not know it until the announcement) becomes common knowledge, the blue-eyed people on this island eventually deduce their status, and leave. In particular: E_G^i[|G(Bl_x)| ≥ j] is free (i.e. known prior to the outsider's statement) iff i + j ≤ k. E_G^i[|G(Bl_x)| ≥ j], with a passing day where no one leaves, implies the next day E_G^(i−1)[|G(Bl_x)| ≥ j + 1]. E_G^i[|G(Bl_x)| ≥ j] for j ≥ k is thus reached iff it is reached for i + j > k. The outsider gives E_G^i[|G(Bl_x)| ≥ j] for i = ∞, j = 1. == Formalization == === Modal logic (syntactic characterization) === Common knowledge can be given a logical definition in multi-modal logic systems in which the modal operators are interpreted epistemically. At the propositional level, such systems are extensions of propositional logic. 
The extension consists of the introduction of a group G of agents, and of n modal operators Ki (with i = 1, ..., n) with the intended meaning that "agent i knows." Thus Ki φ {\displaystyle \varphi } (where φ {\displaystyle \varphi } is a formula of the logical calculus) is read "agent i knows φ {\displaystyle \varphi } ." We can define an operator EG with the intended meaning of "everyone in group G knows" by defining it with the axiom E G φ ⇔ ⋀ i ∈ G K i φ , {\displaystyle E_{G}\varphi \Leftrightarrow \bigwedge _{i\in G}K_{i}\varphi ,} By abbreviating the expression E G E G n − 1 φ {\displaystyle E_{G}E_{G}^{n-1}\varphi } with E G n φ {\displaystyle E_{G}^{n}\varphi } and defining E G 0 φ = φ {\displaystyle E_{G}^{0}\varphi =\varphi } , common knowledge could then be defined with the axiom C φ ⇔ ⋀ i = 0 ∞ E i φ {\displaystyle C\varphi \Leftrightarrow \bigwedge _{i=0}^{\infty }E^{i}\varphi } There is, however, a complication. The languages of epistemic logic are usually finitary, whereas the axiom above defines common knowledge as an infinite conjunction of formulas, hence not a well-formed formula of the language. To overcome this difficulty, a fixed-point definition of common knowledge can be given. Intuitively, common knowledge is thought of as the fixed point of the "equation" C G φ = [ φ ∧ E G ( C G φ ) ] = E G ℵ 0 φ {\displaystyle C_{G}\varphi =[\varphi \wedge E_{G}(C_{G}\varphi )]=E_{G}^{\aleph _{0}}\varphi } . Here, ℵ 0 {\displaystyle \aleph _{0}} is the Aleph-naught. In this way, it is possible to find a formula ψ {\displaystyle \psi } implying E G ( φ ∧ C G φ ) {\displaystyle E_{G}(\varphi \wedge C_{G}\varphi )} from which, in the limit, we can infer common knowledge of φ {\displaystyle \varphi } . From this definition it can be seen that if E G φ {\displaystyle E_{G}\varphi } is common knowledge, then φ {\displaystyle \varphi } is also common knowledge ( C G E G φ ⇒ C G φ {\displaystyle C_{G}E_{G}\varphi \Rightarrow C_{G}\varphi } ). 
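On a finite state space, the fixed-point characterization can be computed directly by shrinking from the set of all states. The sketch below is illustrative only (representing a formula by the set of states where it holds, and all names and the tiny example model, are assumptions of this sketch):

```python
def common_knowledge(states, accessibility, event):
    """Greatest fixed point of  C = event ∩ E(C),  where E(X) holds at s
    iff every state any agent considers possible from s lies in X."""
    def everyone_knows(X):
        return {s for s in states
                if all(t in X
                       for R in accessibility.values()
                       for (u, t) in R if u == s)}

    C = set(states)                        # start from the top and shrink
    while True:
        nxt = set(event) & everyone_knows(C)
        if nxt == C:                       # fixed point reached
            return C
        C = nxt

# Two states; agent "a" can tell them apart, agent "b" cannot.
S = {1, 2}
R = {"a": {(1, 1), (2, 2)},
     "b": {(1, 1), (1, 2), (2, 1), (2, 2)}}
assert common_knowledge(S, R, {1, 2}) == {1, 2}  # tautologies are common knowledge
assert common_knowledge(S, R, {1}) == set()      # "b" never rules out state 2
```

The iteration terminates because the candidate set only shrinks on a finite state space.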
This syntactic characterization is given semantic content through so-called Kripke structures. A Kripke structure is given by a set of states (or possible worlds) S, n accessibility relations R 1 , … , R n {\displaystyle R_{1},\dots ,R_{n}} , defined on S × S {\displaystyle S\times S} , intuitively representing what states agent i considers possible from any given state, and a valuation function π {\displaystyle \pi } assigning a truth value, in each state, to each primitive proposition in the language. The Kripke semantics for the knowledge operator is given by stipulating that K i φ {\displaystyle K_{i}\varphi } is true at state s iff φ {\displaystyle \varphi } is true at all states t such that ( s , t ) ∈ R i {\displaystyle (s,t)\in R_{i}} . The semantics for the common knowledge operator, then, is given by taking, for each group of agents G, the reflexive (modal axiom T) and transitive closure (modal axiom 4) of the R i {\displaystyle R_{i}} , for all agents i in G, call such a relation R G {\displaystyle R_{G}} , and stipulating that C G φ {\displaystyle C_{G}\varphi } is true at state s iff φ {\displaystyle \varphi } is true at all states t such that ( s , t ) ∈ R G {\displaystyle (s,t)\in R_{G}} . === Set theoretic (semantic characterization) === Alternatively (yet equivalently) common knowledge can be formalized using set theory (this was the path taken by the Nobel laureate Robert Aumann in his seminal 1976 paper). Starting with a set of states S, an event E can be defined as a subset of S. For each agent i, define a partition on S, Pi. This partition represents the state of knowledge of an agent in a state. Intuitively, if two states s1 and s2 are elements of the same part of an agent's partition, it means that s1 and s2 are indistinguishable to that agent. In general, in state s, agent i knows that one of the states in Pi(s) obtains, but not which one. (Here Pi(s) denotes the unique element of Pi containing s.
This model excludes cases in which agents know things that are not true.) A knowledge function K can now be defined in the following way: K i ( e ) = { s ∈ S ∣ P i ( s ) ⊂ e } {\displaystyle K_{i}(e)=\{s\in S\mid P_{i}(s)\subset e\}} That is, Ki(e) is the set of states where the agent will know that event e obtains. It is a subset of e. Similar to the modal logic formulation above, an operator for the idea that "everyone knows e" can be defined: E ( e ) = ⋂ i K i ( e ) {\displaystyle E(e)=\bigcap _{i}K_{i}(e)} As with the modal operator, we will iterate the E function, E 1 ( e ) = E ( e ) {\displaystyle E^{1}(e)=E(e)} and E n + 1 ( e ) = E ( E n ( e ) ) {\displaystyle E^{n+1}(e)=E(E^{n}(e))} . Using this we can then define a common knowledge function, C ( e ) = ⋂ n = 1 ∞ E n ( e ) . {\displaystyle C(e)=\bigcap _{n=1}^{\infty }E^{n}(e).} The equivalence with the syntactic approach sketched above can easily be seen: consider an Aumann structure such as the one just defined. We can define a corresponding Kripke structure by taking the same space S, accessibility relations R i {\displaystyle R_{i}} that define the equivalence classes corresponding to the partitions P i {\displaystyle P_{i}} , and a valuation function such that it yields value true to the primitive proposition p in all and only the states s such that s ∈ E p {\displaystyle s\in E^{p}} , where E p {\displaystyle E^{p}} is the event of the Aumann structure corresponding to the primitive proposition p. It is not difficult to see that the common knowledge accessibility function R G {\displaystyle R_{G}} defined in the previous section corresponds to the finest common coarsening of the partitions P i {\displaystyle P_{i}} for all i ∈ G {\displaystyle i\in G} , which is the finitary characterization of common knowledge also given by Aumann in the 1976 article. == Applications == Common knowledge was used by David Lewis in his pioneering game-theoretical account of convention.
In this sense, common knowledge is a concept still central for linguists and philosophers of language (see Clark 1996) maintaining a Lewisian, conventionalist account of language. Robert Aumann introduced a set theoretical formulation of common knowledge (theoretically equivalent to the one given above) and proved the so-called agreement theorem through which: if two agents have common prior probability over a certain event, and the posterior probabilities are common knowledge, then such posterior probabilities are equal. A result based on the agreement theorem and proven by Milgrom shows that, given certain conditions on market efficiency and information, speculative trade is impossible. The concept of common knowledge is central in game theory. For several years it has been thought that the assumption of common knowledge of rationality for the players in the game was fundamental. It turns out (Aumann and Brandenburger 1995) that, in two-player games, common knowledge of rationality is not needed as an epistemic condition for Nash equilibrium strategies. Computer scientists use languages incorporating epistemic logics (and common knowledge) to reason about distributed systems. Such systems can be based on logics more complicated than simple propositional epistemic logic, see Wooldridge Reasoning about Artificial Agents, 2000 (in which he uses a first-order logic incorporating epistemic and temporal operators) or van der Hoek et al. "Alternating Time Epistemic Logic". In his 2007 book, The Stuff of Thought: Language as a Window into Human Nature, Steven Pinker uses the notion of common knowledge to analyze the kind of indirect speech involved in innuendoes. == In popular culture == The comedy movie Hot Lead and Cold Feet has an example of a chain of logic that is collapsed by common knowledge. 
The Denver Kid tells his allies that Rattlesnake is in town, but that he [the Kid] has “the edge”: “He's here and I know he's here, and he knows I know he's here, but he doesn't know I know he knows I know he's here.” So both protagonists know the main fact (Rattlesnake is here), but it is not “common knowledge”. Note that this is true even if the Kid is wrong: maybe Rattlesnake does know that the Kid knows that he knows that he knows, but the chain still breaks because the Kid doesn't know that. Moments later, Rattlesnake confronts the Kid. We see the Kid realizing that his carefully constructed “edge” has collapsed into common knowledge. == See also == Global game Mutual knowledge (logic) Pluralistic ignorance Stag hunt Stephen Schiffer Two Generals' Problem for the impossibility of establishing common knowledge over an unreliable channel == Notes == ^ See the textbooks Reasoning about knowledge by Fagin, Halpern, Moses and Vardi (1995), and Epistemic Logic for computer science by Meyer and van der Hoek (1995). ^ A structurally identical problem is provided by Herbert Gintis (2000); he calls it "The Women of Sevitan". == References == == Further reading == Aumann, Robert (1976) "Agreeing to Disagree" Annals of Statistics 4(6): 1236–1239. Aumann, Robert and Adam Brandenburger (1995) "Epistemic Conditions for Nash Equilibrium" Econometrica 63(5): 1161–1180. Clark, Herbert (1996) Using Language, Cambridge University Press ISBN 0-521-56745-9 Fagin, Ronald; Halpern, Joseph; Moses, Yoram; Vardi, Moshe (2003). Reasoning about Knowledge. Cambridge: MIT Press. ISBN 978-0-262-56200-3. Lewis, David (1969) Convention: A Philosophical Study Oxford: Blackwell. ISBN 0-631-23257-5 J-J Ch. Meyer and W van der Hoek Epistemic Logic for Computer Science and Artificial Intelligence, volume 41, Cambridge Tracts in Theoretical Computer Science, Cambridge University Press, 1995. ISBN 0-521-46014-X Rescher, Nicholas (2005). Epistemic Logic: A Survey Of the Logic Of Knowledge.
University of Pittsburgh Press. ISBN 978-0-8229-4246-7. See Chapter 3. Shoham, Yoav; Leyton-Brown, Kevin (2009). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. New York: Cambridge University Press. ISBN 978-0-521-89943-7. See Section 13.4; downloadable free online. Gintis, Herbert (2000) Game Theory Evolving Princeton University Press. ISBN 0-691-14051-0 Gintis, Herbert (2009) The Bounds of Reason Princeton University Press. ISBN 0-691-14052-9 Halpern, J. Y.; Moses, Y. (1990). "Knowledge and Common Knowledge in a Distributed Environment". Journal of the ACM. 37 (3): 549–587. arXiv:cs/0006009. doi:10.1145/79147.79161. S2CID 52151232. == External links == Vanderschraaf, Peter; Sillari, Giacomo. "Common Knowledge". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Prof. Terence Tao's blog post (Feb 2008) Carr, Kareem. "In the Long Run We Are All Dead", "In the Long Run We Are All Dead II" at The Twofold Gaze. Detailed description of the blue-eyed islander problem, with solution. physics.harvard.edu "Green-eyed Dragons Problem" Archived 2014-12-01 at the Wayback Machine, "Green-eyed Dragons Solution" Archived 2014-12-01 at the Wayback Machine (Sept 2002)
Wikipedia:Communications in Algebra#0
Communications in Algebra is a monthly peer-reviewed scientific journal covering algebra, including commutative algebra, ring theory, module theory, non-associative algebra (including Lie algebras and Jordan algebras), group theory, and algebraic geometry. It was established in 1974 and is published by Taylor & Francis. The editor-in-chief is Scott Chapman (Sam Houston State University). Earl J. Taft (Rutgers University) was the founding editor. == Abstracting and indexing == The journal is abstracted and indexed in CompuMath Citation Index, Current Contents/Chemical, Earth, and Physical Sciences, Mathematical Reviews, MathSciNet, Science Citation Index Expanded (SCIE), and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.7. == References == == External links == Official website
Wikipedia:Commutation matrix#0
In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K(m,n) is the nm × mn permutation matrix which, for any m × n matrix A, transforms vec(A) into vec(AT): K(m,n) vec(A) = vec(AT) . Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another: vec ( A ) = [ A 1 , 1 , … , A m , 1 , A 1 , 2 , … , A m , 2 , … , A 1 , n , … , A m , n ] T {\displaystyle \operatorname {vec} (\mathbf {A} )=[\mathbf {A} _{1,1},\ldots ,\mathbf {A} _{m,1},\mathbf {A} _{1,2},\ldots ,\mathbf {A} _{m,2},\ldots ,\mathbf {A} _{1,n},\ldots ,\mathbf {A} _{m,n}]^{\mathrm {T} }} where A = [Ai,j]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(AT) is the vector obtained by vectorizing A in row-major order. The cycles and other properties of this permutation have been heavily studied for in-place matrix transposition algorithms. In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator. == Properties == The commutation matrix is a special type of permutation matrix, and is therefore orthogonal. In particular, K(m,n) is equal to P π {\displaystyle \mathbf {P} _{\pi }} , where π {\displaystyle \pi } is the permutation over { 1 , … , m n } {\displaystyle \{1,\dots ,mn\}} for which π ( i + m ( j − 1 ) ) = j + n ( i − 1 ) , i = 1 , … , m , j = 1 , … , n . {\displaystyle \pi (i+m(j-1))=j+n(i-1),\quad i=1,\dots ,m,\quad j=1,\dots ,n.} The determinant of K(m,n) is ( − 1 ) 1 4 n ( n − 1 ) m ( m − 1 ) {\displaystyle (-1)^{{\frac {1}{4}}n(n-1)m(m-1)}} . Replacing A with AT in the definition of the commutation matrix shows that K(m,n) = (K(n,m))T. Therefore, in the special case of m = n the commutation matrix is an involution and symmetric.
The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B, K ( r , m ) ( A ⊗ B ) K ( n , q ) = B ⊗ A . {\displaystyle \mathbf {K} ^{(r,m)}(\mathbf {A} \otimes \mathbf {B} )\mathbf {K} ^{(n,q)}=\mathbf {B} \otimes \mathbf {A} .} This property is often used in developing the higher order statistics of Wishart covariance matrices. The case of n=q=1 for the above equation states that for any column vectors v,w of sizes m,r respectively, K ( r , m ) ( v ⊗ w ) = w ⊗ v . {\displaystyle \mathbf {K} ^{(r,m)}(\mathbf {v} \otimes \mathbf {w} )=\mathbf {w} \otimes \mathbf {v} .} This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory. Two explicit forms for the commutation matrix are as follows: if er,j denotes the j-th canonical vector of dimension r (i.e. the vector with 1 in the j-th coordinate and 0 elsewhere) then K ( r , m ) = ∑ i = 1 r ∑ j = 1 m ( e r , i e m , j T ) ⊗ ( e m , j e r , i T ) = ∑ i = 1 r ∑ j = 1 m ( e r , i ⊗ e m , j ) ( e m , j ⊗ e r , i ) T . {\displaystyle \mathbf {K} ^{(r,m)}=\sum _{i=1}^{r}\sum _{j=1}^{m}\left(\mathbf {e} _{r,i}{\mathbf {e} _{m,j}}^{\mathrm {T} }\right)\otimes \left(\mathbf {e} _{m,j}{\mathbf {e} _{r,i}}^{\mathrm {T} }\right)=\sum _{i=1}^{r}\sum _{j=1}^{m}\left(\mathbf {e} _{r,i}\otimes \mathbf {e} _{m,j}\right)\left(\mathbf {e} _{m,j}\otimes \mathbf {e} _{r,i}\right)^{\mathrm {T} }.} The commutation matrix may be expressed as the following block matrix: K ( m , n ) = [ K 1 , 1 ⋯ K 1 , n ⋮ ⋱ ⋮ K m , 1 ⋯ K m , n ] , {\displaystyle \mathbf {K} ^{(m,n)}={\begin{bmatrix}\mathbf {K} _{1,1}&\cdots &\mathbf {K} _{1,n}\\\vdots &\ddots &\vdots \\\mathbf {K} _{m,1}&\cdots &\mathbf {K} _{m,n}\end{bmatrix}},} where the (p, q) entry of the n × m block matrix Ki,j is given by K i j ( p , q ) = { 1 i = q and j = p , 0 otherwise .
{\displaystyle \mathbf {K} _{ij}(p,q)={\begin{cases}1&i=q{\text{ and }}j=p,\\0&{\text{otherwise}}.\end{cases}}} For example, K ( 3 , 4 ) = [ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 ] . {\displaystyle \mathbf {K} ^{(3,4)}=\left[{\begin{array}{ccc|ccc|ccc|ccc}1&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&1&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&1&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&1&0&0\\\hline 0&1&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&1&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&1&0\\\hline 0&0&1&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&1&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&1&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&1\end{array}}\right].} == Code == For both square and rectangular matrices of m rows and n columns, the commutation matrix can be generated by the code below. === Python === Alternatively, a version without imports: === MATLAB === === R === == Example == Let A {\displaystyle A} denote the following 3 × 2 {\displaystyle 3\times 2} matrix: A = [ 1 4 2 5 3 6 ] . {\displaystyle A={\begin{bmatrix}1&4\\2&5\\3&6\\\end{bmatrix}}.} A {\displaystyle A} has the following column-major and row-major vectorizations (respectively): v col = vec ( A ) = [ 1 2 3 4 5 6 ] , v row = vec ( A T ) = [ 1 4 2 5 3 6 ] . 
{\displaystyle \mathbf {v} _{\text{col}}=\operatorname {vec} (A)={\begin{bmatrix}1\\2\\3\\4\\5\\6\\\end{bmatrix}},\quad \mathbf {v} _{\text{row}}=\operatorname {vec} (A^{\mathrm {T} })={\begin{bmatrix}1\\4\\2\\5\\3\\6\\\end{bmatrix}}.} The associated commutation matrix is K = K ( 3 , 2 ) = [ 1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 1 ⋅ ⋅ ⋅ 1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 1 ⋅ ⋅ ⋅ 1 ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ 1 ] , {\displaystyle K=\mathbf {K} ^{(3,2)}={\begin{bmatrix}1&\cdot &\cdot &\cdot &\cdot &\cdot \\\cdot &\cdot &\cdot &1&\cdot &\cdot \\\cdot &1&\cdot &\cdot &\cdot &\cdot \\\cdot &\cdot &\cdot &\cdot &1&\cdot \\\cdot &\cdot &1&\cdot &\cdot &\cdot \\\cdot &\cdot &\cdot &\cdot &\cdot &1\\\end{bmatrix}},} (where each ⋅ {\displaystyle \cdot } denotes a zero). As expected, the following holds: K T K = K K T = I 6 {\displaystyle K^{\mathrm {T} }K=KK^{\mathrm {T} }=\mathbf {I} _{6}} K v col = v row {\displaystyle K\mathbf {v} _{\text{col}}=\mathbf {v} _{\text{row}}} == References == Jan R. Magnus and Heinz Neudecker (1988), Matrix Differential Calculus with Applications in Statistics and Econometrics, Wiley.
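A generator of the kind described in the Code section can be sketched in Python (assuming NumPy is available; the function name is illustrative). It is checked against the 3 × 2 example above and the Kronecker-product identity from the Properties section:

```python
import numpy as np

def commutation_matrix(m, n):
    """Build K(m,n), the mn-by-mn permutation matrix satisfying
    K(m,n) @ vec(A) = vec(A.T) for every m-by-n matrix A,
    where vec stacks the columns of A (column-major order)."""
    K = np.zeros((m * n, m * n), dtype=int)
    for i in range(m):
        for j in range(n):
            # entry A[i, j] sits at position i + m*j of vec(A)
            # and at position j + n*i of vec(A.T)  (0-based indices)
            K[j + n * i, i + m * j] = 1
    return K

def vec(M):
    return M.flatten(order="F")            # column-major vectorization

A = np.arange(1, 7).reshape((3, 2), order="F")   # the 3-by-2 example matrix
K = commutation_matrix(3, 2)
assert np.array_equal(K @ vec(A), vec(A.T))
assert np.array_equal(K.T @ K, np.eye(6, dtype=int))   # K is orthogonal

# K(r,m) (A kron B) K(n,q) = B kron A   for A m-by-n, B r-by-q
B = np.arange(1, 13).reshape(4, 3)
lhs = commutation_matrix(4, 3) @ np.kron(A, B) @ commutation_matrix(2, 3)
assert np.array_equal(lhs, np.kron(B, A))
```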
Wikipedia:Commutative property#0
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Perhaps most familiar as a property of arithmetic, e.g. "3 + 4 = 4 + 3" or "2 × 5 = 5 × 2", the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, "3 − 5 ≠ 5 − 3"); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many centuries implicitly assumed. Thus, this property was not named until the 19th century, when new algebraic structures started to be studied. == Definition == A binary operation ∗ {\displaystyle *} on a set S is commutative if x ∗ y = y ∗ x {\displaystyle x*y=y*x} for all x , y ∈ S {\displaystyle x,y\in S} . An operation that is not commutative is said to be noncommutative. One says that x commutes with y or that x and y commute under ∗ {\displaystyle *} if x ∗ y = y ∗ x . {\displaystyle x*y=y*x.} So, an operation is commutative if every two elements commute. An operation is noncommutative if there are two elements such that x ∗ y ≠ y ∗ x . {\displaystyle x*y\neq y*x.} This does not exclude the possibility that some pairs of elements commute. == Examples == === Commutative operations === Addition and multiplication are commutative in most number systems, and, in particular, between natural numbers, integers, rational numbers, real numbers and complex numbers. This is also true in every field. Addition is commutative in every vector space and in every algebra. Union and intersection are commutative operations on sets. "And" and "or" are commutative logical operations. 
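Over a finite sample of operands, commutativity can be checked exhaustively. A small illustrative script (the helper name is an invention of this sketch; a passing check over a sample is evidence for the sample only, not a proof for the whole domain):

```python
from itertools import product

def is_commutative(op, elements):
    """Test op(x, y) == op(y, x) for every ordered pair from a finite sample."""
    return all(op(x, y) == op(y, x) for x, y in product(elements, repeat=2))

sample = range(-5, 6)
assert is_commutative(lambda x, y: x + y, sample)      # addition commutes
assert is_commutative(lambda x, y: x * y, sample)      # multiplication commutes
assert not is_commutative(lambda x, y: x - y, sample)  # subtraction does not
```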
=== Noncommutative operations === Division is noncommutative, since 1 ÷ 2 ≠ 2 ÷ 1 {\displaystyle 1\div 2\neq 2\div 1} . Subtraction is noncommutative, since 0 − 1 ≠ 1 − 0 {\displaystyle 0-1\neq 1-0} . However, it is classified more precisely as anti-commutative, since x − y = − ( y − x ) {\displaystyle x-y=-(y-x)} for every x {\displaystyle x} and y {\displaystyle y} . Exponentiation is noncommutative, since 2 3 ≠ 3 2 {\displaystyle 2^{3}\neq 3^{2}} (see Equation xy = yx). Some truth functions are noncommutative, since their truth tables are different when one changes the order of the operands. For example, the truth tables for (A ⇒ B) = (¬A ∨ B) and (B ⇒ A) = (A ∨ ¬B) differ. Function composition is generally noncommutative. For example, if f ( x ) = 2 x + 1 {\displaystyle f(x)=2x+1} and g ( x ) = 3 x + 7 {\displaystyle g(x)=3x+7} , then ( f ∘ g ) ( x ) = f ( g ( x ) ) = 2 ( 3 x + 7 ) + 1 = 6 x + 15 {\displaystyle (f\circ g)(x)=f(g(x))=2(3x+7)+1=6x+15} and ( g ∘ f ) ( x ) = g ( f ( x ) ) = 3 ( 2 x + 1 ) + 7 = 6 x + 10. {\displaystyle (g\circ f)(x)=g(f(x))=3(2x+1)+7=6x+10.} Matrix multiplication of square matrices of a given dimension is a noncommutative operation, except for 1 × 1 {\displaystyle 1\times 1} matrices. For example: [ 0 2 0 1 ] = [ 1 1 0 1 ] [ 0 1 0 1 ] ≠ [ 0 1 0 1 ] [ 1 1 0 1 ] = [ 0 1 0 1 ] {\displaystyle {\begin{bmatrix}0&2\\0&1\end{bmatrix}}={\begin{bmatrix}1&1\\0&1\end{bmatrix}}{\begin{bmatrix}0&1\\0&1\end{bmatrix}}\neq {\begin{bmatrix}0&1\\0&1\end{bmatrix}}{\begin{bmatrix}1&1\\0&1\end{bmatrix}}={\begin{bmatrix}0&1\\0&1\end{bmatrix}}} The vector product (or cross product) of two vectors in three dimensions is anti-commutative; i.e., b × a = − ( a × b ) {\displaystyle \mathbf {b} \times \mathbf {a} =-(\mathbf {a} \times \mathbf {b} )} . == Commutative structures == Some types of algebraic structures involve an operation that does not require commutativity.
If this operation is commutative for a specific structure, the structure is often said to be commutative. So, a commutative semigroup is a semigroup whose operation is commutative; a commutative monoid is a monoid whose operation is commutative; a commutative group or abelian group is a group whose operation is commutative; a commutative ring is a ring whose multiplication is commutative. (Addition in a ring is always commutative.) However, in the case of algebras, the phrase "commutative algebra" refers only to associative algebras that have a commutative multiplication. == History and etymology == Records of the implicit use of the commutative property go back to ancient times. The Egyptians used the commutative property of multiplication to simplify computing products. Euclid is known to have assumed the commutative property of multiplication in his book Elements. Formal uses of the commutative property arose in the late 18th and early 19th centuries when mathematicians began to work on a theory of functions. Nowadays, the commutative property is a well-known and basic property used in most branches of mathematics. The first recorded use of the term commutative was in a memoir by François Servois in 1814, which used the word commutatives when describing functions that have what is now called the commutative property. Commutative is the feminine form of the French adjective commutatif, which is derived from the French noun commutation and the French verb commuter, meaning "to exchange" or "to switch", a cognate of to commute. The term then appeared in English in 1838 in Duncan Gregory's article entitled "On the real nature of symbolical algebra" published in 1840 in the Transactions of the Royal Society of Edinburgh.
== See also == Anticommutative property Canonical commutation relation (in quantum mechanics) Centralizer and normalizer (also called a commutant) Commutative diagram Commutative (neurophysiology) Commutator Particle statistics (for commutativity in physics) Quasi-commutative property Trace monoid Commuting probability == Notes == == References ==
Wikipedia:Complementary series representation#0
In mathematics, complementary series representations of reductive real or p-adic Lie groups are certain irreducible unitary representations that are not tempered and do not appear in the decomposition of the regular representation into irreducible representations. They are rather mysterious: they do not turn up very often, and seem to exist by accident. They were sometimes overlooked, in fact, in some earlier claims to have classified the irreducible unitary representations of certain groups. Several conjectures in mathematics, such as the Selberg conjecture, are equivalent to saying that certain representations are not complementary. For examples, see the representation theory of SL2(R). Elias M. Stein (1970) constructed some families of them for higher rank groups using analytic continuation, sometimes called the Stein complementary series. == References == A.I. Shtern (2001) [1994], "Complementary series (of representations)", Encyclopedia of Mathematics, EMS Press Stein, Elias M. (April 1970), "Analytic Continuation of Group Representations", Advances in Mathematics, 4 (2): 172–207, doi:10.1016/0001-8708(70)90022-8, also reprinted as ISBN 0-300-01428-7
Wikipedia:Complete homogeneous symmetric polynomial#0
In mathematics, specifically in algebraic combinatorics and commutative algebra, the complete homogeneous symmetric polynomials are a specific kind of symmetric polynomials. Every symmetric polynomial can be expressed as a polynomial expression in complete homogeneous symmetric polynomials. == Definition == The complete homogeneous symmetric polynomial of degree k in n variables X1, ..., Xn, written hk for k = 0, 1, 2, ..., is the sum of all monomials of total degree k in the variables. Formally, h k ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ i 1 ≤ i 2 ≤ ⋯ ≤ i k ≤ n X i 1 X i 2 ⋯ X i k . {\displaystyle h_{k}(X_{1},X_{2},\dots ,X_{n})=\sum _{1\leq i_{1}\leq i_{2}\leq \cdots \leq i_{k}\leq n}X_{i_{1}}X_{i_{2}}\cdots X_{i_{k}}.} The formula can also be written as: h k ( X 1 , X 2 , … , X n ) = ∑ l 1 + l 2 + ⋯ + l n = k l i ≥ 0 X 1 l 1 X 2 l 2 ⋯ X n l n . {\displaystyle h_{k}(X_{1},X_{2},\dots ,X_{n})=\sum _{l_{1}+l_{2}+\cdots +l_{n}=k \atop l_{i}\geq 0}X_{1}^{l_{1}}X_{2}^{l_{2}}\cdots X_{n}^{l_{n}}.} Indeed, lp is just the multiplicity of p in the sequence ik. The first few of these polynomials are h 0 ( X 1 , X 2 , … , X n ) = 1 , h 1 ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ j ≤ n X j , h 2 ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ j ≤ k ≤ n X j X k , h 3 ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ j ≤ k ≤ l ≤ n X j X k X l . {\displaystyle {\begin{aligned}h_{0}(X_{1},X_{2},\dots ,X_{n})&=1,\\[10px]h_{1}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq j\leq n}X_{j},\\h_{2}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq j\leq k\leq n}X_{j}X_{k},\\h_{3}(X_{1},X_{2},\dots ,X_{n})&=\sum _{1\leq j\leq k\leq l\leq n}X_{j}X_{k}X_{l}.\end{aligned}}} Thus, for each nonnegative integer k, there exists exactly one complete homogeneous symmetric polynomial of degree k in n variables. Another way of rewriting the definition is to take summation over all sequences ik, without condition of ordering ip ≤ ip + 1: h k ( X 1 , X 2 , … , X n ) = ∑ 1 ≤ i 1 , i 2 , ⋯ , i k ≤ n m 1 ! m 2 ! ⋯ m n ! k ! 
X i 1 X i 2 ⋯ X i k , {\displaystyle h_{k}(X_{1},X_{2},\dots ,X_{n})=\sum _{1\leq i_{1},i_{2},\cdots ,i_{k}\leq n}{\frac {m_{1}!m_{2}!\cdots m_{n}!}{k!}}X_{i_{1}}X_{i_{2}}\cdots X_{i_{k}},} here mp is the multiplicity of number p in the sequence ik. For example h 2 ( X 1 , X 2 ) = 2 ! 0 ! 2 ! X 1 2 + 1 ! 1 ! 2 ! X 1 X 2 + 1 ! 1 ! 2 ! X 2 X 1 + 0 ! 2 ! 2 ! X 2 2 = X 1 2 + X 1 X 2 + X 2 2 . {\displaystyle h_{2}(X_{1},X_{2})={\frac {2!0!}{2!}}X_{1}^{2}+{\frac {1!1!}{2!}}X_{1}X_{2}+{\frac {1!1!}{2!}}X_{2}X_{1}+{\frac {0!2!}{2!}}X_{2}^{2}=X_{1}^{2}+X_{1}X_{2}+X_{2}^{2}.} The polynomial ring formed by taking all integral linear combinations of products of the complete homogeneous symmetric polynomials is a commutative ring. == Examples == The following lists the n basic (as explained below) complete homogeneous symmetric polynomials for the first three positive values of n. For n = 1: h 1 ( X 1 ) = X 1 . {\displaystyle h_{1}(X_{1})=X_{1}\,.} For n = 2: h 1 ( X 1 , X 2 ) = X 1 + X 2 h 2 ( X 1 , X 2 ) = X 1 2 + X 1 X 2 + X 2 2 . {\displaystyle {\begin{aligned}h_{1}(X_{1},X_{2})&=X_{1}+X_{2}\\h_{2}(X_{1},X_{2})&=X_{1}^{2}+X_{1}X_{2}+X_{2}^{2}.\end{aligned}}} For n = 3: h 1 ( X 1 , X 2 , X 3 ) = X 1 + X 2 + X 3 h 2 ( X 1 , X 2 , X 3 ) = X 1 2 + X 2 2 + X 3 2 + X 1 X 2 + X 1 X 3 + X 2 X 3 h 3 ( X 1 , X 2 , X 3 ) = X 1 3 + X 2 3 + X 3 3 + X 1 2 X 2 + X 1 2 X 3 + X 2 2 X 1 + X 2 2 X 3 + X 3 2 X 1 + X 3 2 X 2 + X 1 X 2 X 3 . 
{\displaystyle {\begin{aligned}h_{1}(X_{1},X_{2},X_{3})&=X_{1}+X_{2}+X_{3}\\h_{2}(X_{1},X_{2},X_{3})&=X_{1}^{2}+X_{2}^{2}+X_{3}^{2}+X_{1}X_{2}+X_{1}X_{3}+X_{2}X_{3}\\h_{3}(X_{1},X_{2},X_{3})&=X_{1}^{3}+X_{2}^{3}+X_{3}^{3}+X_{1}^{2}X_{2}+X_{1}^{2}X_{3}+X_{2}^{2}X_{1}+X_{2}^{2}X_{3}+X_{3}^{2}X_{1}+X_{3}^{2}X_{2}+X_{1}X_{2}X_{3}.\end{aligned}}} == Properties == === Generating function === The complete homogeneous symmetric polynomials are characterized by the following identity of formal power series in t: ∑ k = 0 ∞ h k ( X 1 , … , X n ) t k = ∏ i = 1 n ∑ j = 0 ∞ ( X i t ) j = ∏ i = 1 n 1 1 − X i t {\displaystyle \sum _{k=0}^{\infty }h_{k}(X_{1},\ldots ,X_{n})t^{k}=\prod _{i=1}^{n}\sum _{j=0}^{\infty }(X_{i}t)^{j}=\prod _{i=1}^{n}{\frac {1}{1-X_{i}t}}} (this is called the generating function, or generating series, for the complete homogeneous symmetric polynomials). Here each fraction in the final expression is the usual way to represent the formal geometric series that is a factor in the middle expression. The identity can be justified by considering how the product of those geometric series is formed: each factor in the product is obtained by multiplying together one term chosen from each geometric series, and every monomial in the variables Xi is obtained for exactly one such choice of terms, and comes multiplied by a power of t equal to the degree of the monomial. The formula above can be seen as a special case of the MacMahon master theorem. The right hand side can be interpreted as 1 / det ( 1 − t M ) {\displaystyle 1/\!\det(1-tM)} where t ∈ R {\displaystyle t\in \mathbb {R} } and M = diag ( X 1 , … , X N ) {\displaystyle M={\text{diag}}(X_{1},\ldots ,X_{N})} . On the left hand side, one can identify the complete homogeneous symmetric polynomials as special cases of the multinomial coefficient that appears in the MacMahon expression. 
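Both the definition and the generating-function identity can be tested numerically at chosen values of the variables. A sketch (the function names are illustrative):

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """h_k evaluated at xs: sum of all monomials of total degree k,
    one per weakly increasing index sequence."""
    return sum(prod(m) for m in combinations_with_replacement(xs, k))

def gf_coeffs(xs, N):
    """Coefficients of t^0..t^N in prod_i 1/(1 - x_i t), obtained by
    multiplying out the truncated geometric series 1 + x_i t + x_i^2 t^2 + ..."""
    c = [1] + [0] * N
    for x in xs:
        c = [sum(c[a] * x ** (n - a) for a in range(n + 1))
             for n in range(N + 1)]
    return c

xs = [2, 3, 5]
assert h(2, [2, 3]) == 2*2 + 2*3 + 3*3          # X1^2 + X1 X2 + X2^2 at (2, 3)
assert gf_coeffs(xs, 4) == [h(k, xs) for k in range(5)]
```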
Performing some standard computations, we can also write the generating function as ∑ k = 0 ∞ h k ( X 1 , … , X n ) t k = exp ( ∑ j = 1 ∞ ( X 1 j + ⋯ + X n j ) t j j ) {\displaystyle \sum _{k=0}^{\infty }h_{k}(X_{1},\ldots ,X_{n})\,t^{k}=\exp \left(\sum _{j=1}^{\infty }(X_{1}^{j}+\cdots +X_{n}^{j}){\frac {t^{j}}{j}}\right)} which is the power series expansion of the plethystic exponential of ( X 1 + ⋯ + X n ) t {\displaystyle (X_{1}+\cdots +X_{n})t} (and note that p j := X 1 j + ⋯ + X n j {\displaystyle p_{j}:=X_{1}^{j}+\cdots +X_{n}^{j}} is precisely the j-th power sum symmetric polynomial). === Relation with the elementary symmetric polynomials === There is a fundamental relation between the elementary symmetric polynomials and the complete homogeneous ones: ∑ i = 0 m ( − 1 ) i e i ( X 1 , … , X n ) h m − i ( X 1 , … , X n ) = 0 , {\displaystyle \sum _{i=0}^{m}(-1)^{i}e_{i}(X_{1},\ldots ,X_{n})h_{m-i}(X_{1},\ldots ,X_{n})=0,} which is valid for all m > 0, and any number of variables n. The easiest way to see that it holds is from an identity of formal power series in t for the elementary symmetric polynomials, analogous to the one given above for the complete homogeneous ones, which can also be written in terms of plethystic exponentials as: ∑ k = 0 ∞ e k ( X 1 , … , X n ) ( − t ) k = ∏ i = 1 n ( 1 − X i t ) = P E [ − ( X 1 + ⋯ + X n ) t ] {\displaystyle \sum _{k=0}^{\infty }e_{k}(X_{1},\ldots ,X_{n})(-t)^{k}=\prod _{i=1}^{n}(1-X_{i}t)=PE[-(X_{1}+\cdots +X_{n})t]} (this is actually an identity of polynomials in t, because after en(X1, ..., Xn) the elementary symmetric polynomials become zero). Multiplying this by the generating function for the complete homogeneous symmetric polynomials, one obtains the constant series 1 (equivalently, plethystic exponentials satisfy the usual properties of an exponential), and the relation between the elementary and complete homogeneous polynomials follows from comparing coefficients of tm. 
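The relation with the elementary symmetric polynomials can likewise be verified numerically for small cases. The sketch below is illustrative and defines both polynomial evaluators so that it stands alone:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k at xs: squarefree degree-k monomials."""
    return sum(prod(m) for m in combinations(xs, k))

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k at xs: all degree-k monomials."""
    return sum(prod(m) for m in combinations_with_replacement(xs, k))

xs = [2, 3, 5, 7]
# sum_{i=0}^{m} (-1)^i e_i h_{m-i} = 0 for all m > 0 (e_i = 0 for i > n)
for m in range(1, 7):
    assert sum((-1) ** i * e(i, xs) * h(m - i, xs) for i in range(m + 1)) == 0
```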
A somewhat more direct way to understand that relation is to consider the contributions in the summation involving a fixed monomial Xα of degree m. For any subset S of the variables appearing with nonzero exponent in the monomial, there is a contribution involving the product XS of those variables as a term from es(X1, ..., Xn), where s = #S, and the monomial Xα/XS from hm − s(X1, ..., Xn); this contribution has coefficient (−1)s. The relation then follows from the fact that ∑ s = 0 l ( l s ) ( − 1 ) s = ( 1 − 1 ) l = 0 for l > 0 , {\displaystyle \sum _{s=0}^{l}{\binom {l}{s}}(-1)^{s}=(1-1)^{l}=0\quad {\mbox{for }}l>0,} by the binomial formula, where l < m denotes the number of distinct variables occurring (with nonzero exponent) in Xα. Since e0(X1, ..., Xn) and h0(X1, ..., Xn) are both equal to 1, one can isolate from the relation either the first or the last terms of the summation. The former gives a sequence of equations: h 1 ( X 1 , … , X n ) = e 1 ( X 1 , … , X n ) , h 2 ( X 1 , … , X n ) = h 1 ( X 1 , … , X n ) e 1 ( X 1 , … , X n ) − e 2 ( X 1 , … , X n ) , h 3 ( X 1 , … , X n ) = h 2 ( X 1 , … , X n ) e 1 ( X 1 , … , X n ) − h 1 ( X 1 , … , X n ) e 2 ( X 1 , … , X n ) + e 3 ( X 1 , … , X n ) , {\displaystyle {\begin{aligned}h_{1}(X_{1},\ldots ,X_{n})&=e_{1}(X_{1},\ldots ,X_{n}),\\h_{2}(X_{1},\ldots ,X_{n})&=h_{1}(X_{1},\ldots ,X_{n})e_{1}(X_{1},\ldots ,X_{n})-e_{2}(X_{1},\ldots ,X_{n}),\\h_{3}(X_{1},\ldots ,X_{n})&=h_{2}(X_{1},\ldots ,X_{n})e_{1}(X_{1},\ldots ,X_{n})-h_{1}(X_{1},\ldots ,X_{n})e_{2}(X_{1},\ldots ,X_{n})+e_{3}(X_{1},\ldots ,X_{n}),\\\end{aligned}}} and so on, which allow one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials; the latter gives a set of equations e 1 ( X 1 , … , X n ) = h 1 ( X 1 , … , X n ) , e 2 ( X 1 , … , X n ) = h 1 ( X 1 , … , X n ) e 1 ( X 1 , … , X n ) − h 2 ( X 1 , … , X n ) , e 3 ( X 1 , … , X n ) = h 1 ( X 1 , … , X n ) e 2 ( X 1 , … , X n ) − h
2 ( X 1 , … , X n ) e 1 ( X 1 , … , X n ) + h 3 ( X 1 , … , X n ) , {\displaystyle {\begin{aligned}e_{1}(X_{1},\ldots ,X_{n})&=h_{1}(X_{1},\ldots ,X_{n}),\\e_{2}(X_{1},\ldots ,X_{n})&=h_{1}(X_{1},\ldots ,X_{n})e_{1}(X_{1},\ldots ,X_{n})-h_{2}(X_{1},\ldots ,X_{n}),\\e_{3}(X_{1},\ldots ,X_{n})&=h_{1}(X_{1},\ldots ,X_{n})e_{2}(X_{1},\ldots ,X_{n})-h_{2}(X_{1},\ldots ,X_{n})e_{1}(X_{1},\ldots ,X_{n})+h_{3}(X_{1},\ldots ,X_{n}),\\\end{aligned}}} and so forth, which allow one to do the inverse. The first n elementary and complete homogeneous symmetric polynomials play perfectly similar roles in these relations, even though the former polynomials become zero beyond degree n, whereas the latter do not. This phenomenon can be understood in the setting of the ring of symmetric functions. It has a ring automorphism that interchanges the sequences of the n elementary and first n complete homogeneous symmetric functions. The set of complete homogeneous symmetric polynomials of degree 1 to n in n variables generates the ring of symmetric polynomials in n variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring Z [ h 1 ( X 1 , … , X n ) , … , h n ( X 1 , … , X n ) ] . {\displaystyle \mathbb {Z} {\big [}h_{1}(X_{1},\ldots ,X_{n}),\ldots ,h_{n}(X_{1},\ldots ,X_{n}){\big ]}.} This can be formulated by saying that h 1 ( X 1 , … , X n ) , … , h n ( X 1 , … , X n ) {\displaystyle h_{1}(X_{1},\ldots ,X_{n}),\ldots ,h_{n}(X_{1},\ldots ,X_{n})} form a transcendence basis of the ring of symmetric polynomials in X1, ..., Xn with integral coefficients (as is also true for the elementary symmetric polynomials). The same is true with the ring Z {\displaystyle \mathbb {Z} } of integers replaced by any other commutative ring. These statements follow from analogous statements for the elementary symmetric polynomials, due to the indicated possibility of expressing either kind of symmetric polynomials in terms of the other kind.
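The triangular systems of equations above amount to the recursion hm = ∑i=1..m (−1)i−1 ei hm−i. A small sketch (helper names ours) checking the recursively computed values against the direct definition:

```python
# Recursive computation of h_m from e_1, ..., e_m, checked at X = (1, 2, 3, 4).
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    return sum(prod(c) for c in combinations(xs, k))

def h_direct(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def h_from_e(m, xs, memo=None):
    """h_m via the recursion h_m = sum_{i=1}^m (-1)^(i-1) e_i h_{m-i}."""
    memo = {0: 1} if memo is None else memo
    if m not in memo:
        memo[m] = sum((-1) ** (i - 1) * e(i, xs) * h_from_e(m - i, xs, memo)
                      for i in range(1, m + 1))
    return memo[m]

xs = (1, 2, 3, 4)
assert all(h_from_e(m, xs) == h_direct(m, xs) for m in range(7))
```

The converse recursion, expressing em through the hi, works the same way with the roles of e and h exchanged.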
=== Relation with the Stirling numbers === The evaluation at integers of complete homogeneous polynomials and elementary symmetric polynomials is related to Stirling numbers: h n ( 1 , 2 , … , k ) = { n + k k } e n ( 1 , 2 , … , k ) = [ k + 1 k + 1 − n ] {\displaystyle {\begin{aligned}h_{n}(1,2,\ldots ,k)&=\left\{{\begin{matrix}n+k\\k\end{matrix}}\right\}\\e_{n}(1,2,\ldots ,k)&=\left[{k+1 \atop k+1-n}\right]\\\end{aligned}}} === Relation with the monomial symmetric polynomials === The polynomial hk(X1, ..., Xn) is also the sum of all distinct monomial symmetric polynomials of degree k in X1, ..., Xn, for instance h 3 ( X 1 , X 2 , X 3 ) = m ( 3 ) ( X 1 , X 2 , X 3 ) + m ( 2 , 1 ) ( X 1 , X 2 , X 3 ) + m ( 1 , 1 , 1 ) ( X 1 , X 2 , X 3 ) = ( X 1 3 + X 2 3 + X 3 3 ) + ( X 1 2 X 2 + X 1 2 X 3 + X 1 X 2 2 + X 1 X 3 2 + X 2 2 X 3 + X 2 X 3 2 ) + ( X 1 X 2 X 3 ) . {\displaystyle {\begin{aligned}h_{3}(X_{1},X_{2},X_{3})&=m_{(3)}(X_{1},X_{2},X_{3})+m_{(2,1)}(X_{1},X_{2},X_{3})+m_{(1,1,1)}(X_{1},X_{2},X_{3})\\&=\left(X_{1}^{3}+X_{2}^{3}+X_{3}^{3}\right)+\left(X_{1}^{2}X_{2}+X_{1}^{2}X_{3}+X_{1}X_{2}^{2}+X_{1}X_{3}^{2}+X_{2}^{2}X_{3}+X_{2}X_{3}^{2}\right)+(X_{1}X_{2}X_{3}).\\\end{aligned}}} === Relation with power sums === Newton's identities for homogeneous symmetric polynomials give the simple recursive formula k h k = ∑ i = 1 k h k − i p i , {\displaystyle kh_{k}=\sum _{i=1}^{k}h_{k-i}p_{i},} where h k = h k ( X 1 , … , X n ) {\displaystyle h_{k}=h_{k}(X_{1},\dots ,X_{n})} and pk is the k-th power sum symmetric polynomial: p k ( X 1 , … , X n ) = ∑ i = 1 n x i k = X 1 k + ⋯ + X n k {\displaystyle p_{k}(X_{1},\ldots ,X_{n})=\sum \nolimits _{i=1}^{n}x_{i}^{k}=X_{1}^{k}+\cdots +X_{n}^{k}} , as above. For small k {\displaystyle k} we have h 1 = p 1 , 2 h 2 = h 1 p 1 + p 2 , 3 h 3 = h 2 p 1 + h 1 p 2 + p 3 . 
{\displaystyle {\begin{aligned}h_{1}&=p_{1},\\2h_{2}&=h_{1}p_{1}+p_{2},\\3h_{3}&=h_{2}p_{1}+h_{1}p_{2}+p_{3}.\\\end{aligned}}} === Relation with symmetric tensors === Consider an n-dimensional vector space V and a linear operator M : V → V with eigenvalues X1, X2, ..., Xn. Denote by Symk(V) its kth symmetric tensor power and MSym(k) the induced operator Symk(V) → Symk(V). Proposition: Trace Sym k ( V ) ( M Sym ( k ) ) = h k ( X 1 , X 2 , … , X n ) . {\displaystyle \operatorname {Trace} _{\operatorname {Sym} ^{k}(V)}\left(M^{\operatorname {Sym} (k)}\right)=h_{k}(X_{1},X_{2},\ldots ,X_{n}).} The proof is easy: consider an eigenbasis ei for M. The basis in Symk(V) can be indexed by sequences i1 ≤ i2 ≤ ... ≤ ik, indeed, consider the symmetrizations of e i 1 ⊗ e i 2 ⊗ … ⊗ e i k {\displaystyle e_{i_{1}}\otimes \,e_{i_{2}}\otimes \ldots \otimes \,e_{i_{k}}} . All such vectors are eigenvectors for MSym(k) with eigenvalues X i 1 X i 2 ⋯ X i k , {\displaystyle X_{i_{1}}X_{i_{2}}\cdots X_{i_{k}},} hence this proposition is true. Similarly one can express elementary symmetric polynomials via traces over antisymmetric tensor powers. Both expressions are subsumed in expressions of Schur polynomials as traces over Schur functors, which can be seen as the Weyl character formula for GL(V). === Complete homogeneous symmetric polynomial with variables shifted by 1 === If we replace the variables X i {\displaystyle X_{i}} for 1 + X i {\displaystyle 1+X_{i}} , the symmetric polynomial h k ( 1 + X 1 , … , 1 + X n ) {\displaystyle h_{k}(1+X_{1},\ldots ,1+X_{n})} can be written as a linear combination of the h j ( X 1 , … , X n ) {\displaystyle h_{j}(X_{1},\ldots ,X_{n})} , for 0 ≤ j ≤ k {\displaystyle 0\leq j\leq k} , h k ( 1 + X 1 , … , 1 + X n ) = ∑ j = 0 k ( n + k − 1 k − j ) h j ( X 1 , … , X n ) . 
{\displaystyle h_{k}(1+X_{1},\ldots ,1+X_{n})=\sum _{j=0}^{k}{\binom {n+k-1}{k-j}}h_{j}(X_{1},\ldots ,X_{n}).} The proof, as found in Lemma 3.5 of, relies on the combinatorial properties of increasing k {\displaystyle k} -tuples ( i 1 , … , i k ) {\displaystyle (i_{1},\ldots ,i_{k})} where 1 ≤ i 1 ≤ ⋯ ≤ i k ≤ n {\displaystyle 1\leq i_{1}\leq \cdots \leq i_{k}\leq n} . == See also == Symmetric polynomial Elementary symmetric polynomial Schur polynomial Newton's identities MacMahon Master theorem Ring of symmetric functions Representation theory == References == Cornelius, E.F., Jr. (2011), Identities for complete homogeneous symmetric polynomials, JP J. Algebra, Number Theory & Applications, Vol. 21, No. 1, 109-116. Macdonald, I.G. (1979), Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs. Oxford: Clarendon Press. Macdonald, I.G. (1995), Symmetric Functions and Hall Polynomials, second ed. Oxford: Clarendon Press. ISBN 0-19-850450-0 (paperback, 1998). Richard P. Stanley (1999), Enumerative Combinatorics, Vol. 2. Cambridge: Cambridge University Press. ISBN 0-521-56069-1
|
Wikipedia:Completing the square#0
|
In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form a x 2 + b x + c {\displaystyle \textstyle ax^{2}+bx+c} to the form a ( x − h ) 2 + k {\displaystyle \textstyle a(x-h)^{2}+k} for some values of h {\displaystyle h} and k {\displaystyle k} . In terms of a new quantity x − h {\displaystyle x-h} , this expression is a quadratic polynomial with no linear term. By subsequently isolating ( x − h ) 2 {\displaystyle \textstyle (x-h)^{2}} and taking the square root, a quadratic problem can be reduced to a linear problem. The name completing the square comes from a geometrical picture in which x {\displaystyle x} represents an unknown length. Then the quantity x 2 {\displaystyle \textstyle x^{2}} represents the area of a square of side x {\displaystyle x} and the quantity b a x {\displaystyle {\tfrac {b}{a}}x} represents the area of a pair of congruent rectangles with sides x {\displaystyle x} and b 2 a {\displaystyle {\tfrac {b}{2a}}} . To this square and pair of rectangles one more square is added, of side length b 2 a {\displaystyle {\tfrac {b}{2a}}} . This crucial step completes a larger square of side length x + b 2 a {\displaystyle x+{\tfrac {b}{2a}}} . Completing the square is the oldest method of solving general quadratic equations, used in Old Babylonian clay tablets dating from 1800–1600 BCE, and is still taught in elementary algebra courses today. It is also used for graphing quadratic functions, deriving the quadratic formula, and more generally in computations involving quadratic polynomials, for example in calculus evaluating Gaussian integrals with a linear term in the exponent, and finding Laplace transforms. == History == The technique of completing the square was known in the Old Babylonian Empire. Muhammad ibn Musa Al-Khwarizmi, a famous polymath who wrote the early algebraic treatise Al-Jabr, used the technique of completing the square to solve quadratic equations. 
== Overview == === Background === The formula in elementary algebra for computing the square of a binomial is: ( x + p ) 2 = x 2 + 2 p x + p 2 . {\displaystyle (x+p)^{2}\,=\,x^{2}+2px+p^{2}.} For example: ( x + 3 ) 2 = x 2 + 6 x + 9 ( p = 3 ) ( x − 5 ) 2 = x 2 − 10 x + 25 ( p = − 5 ) . {\displaystyle {\begin{alignedat}{2}(x+3)^{2}\,&=\,x^{2}+6x+9&&(p=3)\\[3pt](x-5)^{2}\,&=\,x^{2}-10x+25\qquad &&(p=-5).\end{alignedat}}} In any perfect square, the coefficient of x is twice the number p, and the constant term is equal to p2. === Basic example === Consider the following quadratic polynomial: x 2 + 10 x + 28. {\displaystyle x^{2}+10x+28.} This quadratic is not a perfect square, since 28 is not the square of 5: ( x + 5 ) 2 = x 2 + 10 x + 25. {\displaystyle (x+5)^{2}\,=\,x^{2}+10x+25.} However, it is possible to write the original quadratic as the sum of this square and a constant: x 2 + 10 x + 28 = ( x + 5 ) 2 + 3. {\displaystyle x^{2}+10x+28\,=\,(x+5)^{2}+3.} This is called completing the square. === General description === Given any monic quadratic x 2 + b x + c , {\displaystyle x^{2}+bx+c,} it is possible to form a square that has the same first two terms: ( x + 1 2 b ) 2 = x 2 + b x + 1 4 b 2 . {\displaystyle \left(x+{\tfrac {1}{2}}b\right)^{2}\,=\,x^{2}+bx+{\tfrac {1}{4}}b^{2}.} This square differs from the original quadratic only in the value of the constant term. Therefore, we can write x 2 + b x + c = ( x + 1 2 b ) 2 + k , {\displaystyle x^{2}+bx+c\,=\,\left(x+{\tfrac {1}{2}}b\right)^{2}+k,} where k = c − b 2 4 {\displaystyle k=c-{\frac {b^{2}}{4}}} . This operation is known as completing the square. For example: x 2 + 6 x + 11 = ( x + 3 ) 2 + 2 x 2 + 14 x + 30 = ( x + 7 ) 2 − 19 x 2 − 2 x + 7 = ( x − 1 ) 2 + 6. 
{\displaystyle {\begin{alignedat}{1}x^{2}+6x+11\,&=\,(x+3)^{2}+2\\[3pt]x^{2}+14x+30\,&=\,(x+7)^{2}-19\\[3pt]x^{2}-2x+7\,&=\,(x-1)^{2}+6.\end{alignedat}}} === Non-monic case === Given a quadratic polynomial of the form a x 2 + b x + c {\displaystyle ax^{2}+bx+c} it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial. Example: 3 x 2 + 12 x + 27 = 3 [ x 2 + 4 x + 9 ] = 3 [ ( x + 2 ) 2 + 5 ] = 3 ( x + 2 ) 2 + 3 ( 5 ) = 3 ( x + 2 ) 2 + 15 {\displaystyle {\begin{aligned}3x^{2}+12x+27&=3[x^{2}+4x+9]\\&{}=3\left[(x+2)^{2}+5\right]\\&{}=3(x+2)^{2}+3(5)\\&{}=3(x+2)^{2}+15\end{aligned}}} This process of factoring out the coefficient a can be simplified further by factoring it out of only the first two terms; the constant term at the end of the polynomial does not have to be included. Example: 3 x 2 + 12 x + 27 = 3 [ x 2 + 4 x ] + 27 = 3 [ ( x + 2 ) 2 − 4 ] + 27 = 3 ( x + 2 ) 2 + 3 ( − 4 ) + 27 = 3 ( x + 2 ) 2 − 12 + 27 = 3 ( x + 2 ) 2 + 15 {\displaystyle {\begin{aligned}3x^{2}+12x+27&=3\left[x^{2}+4x\right]+27\\[1ex]&{}=3\left[(x+2)^{2}-4\right]+27\\[1ex]&{}=3(x+2)^{2}+3(-4)+27\\[1ex]&{}=3(x+2)^{2}-12+27\\[1ex]&{}=3(x+2)^{2}+15\end{aligned}}} This allows any quadratic polynomial to be written in the form a ( x − h ) 2 + k . {\displaystyle a(x-h)^{2}+k.} === Formula === ==== Scalar case ==== The result of completing the square may be written as a formula. In the general case, one has a x 2 + b x + c = a ( x − h ) 2 + k , {\displaystyle ax^{2}+bx+c=a(x-h)^{2}+k,} with h = − b 2 a and k = c − a h 2 = c − b 2 4 a . {\displaystyle h=-{\frac {b}{2a}}\quad {\text{and}}\quad k=c-ah^{2}=c-{\frac {b^{2}}{4a}}.} In particular, when a = 1, one has x 2 + b x + c = ( x − h ) 2 + k , {\displaystyle x^{2}+bx+c=(x-h)^{2}+k,} with h = − b 2 and k = c − h 2 = c − b 2 4 . 
{\displaystyle h=-{\frac {b}{2}}\quad {\text{and}}\quad k=c-h^{2}=c-{\frac {b^{2}}{4}}.} By solving the equation a ( x − h ) 2 + k = 0 {\displaystyle a(x-h)^{2}+k=0} in terms of x − h , {\displaystyle x-h,} and reorganizing the resulting expression, one gets the quadratic formula for the roots of the quadratic equation: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} ==== Matrix case ==== The matrix case looks very similar: x T A x + x T b + c = ( x − h ) T A ( x − h ) + k {\displaystyle x^{\mathrm {T} }Ax+x^{\mathrm {T} }b+c=(x-h)^{\mathrm {T} }A(x-h)+k} where h = − 1 2 A − 1 b {\textstyle h=-{\frac {1}{2}}A^{-1}b} and k = c − 1 4 b T A − 1 b {\textstyle k=c-{\frac {1}{4}}b^{\mathrm {T} }A^{-1}b} . Note that A {\displaystyle A} has to be symmetric. If A {\displaystyle A} is not symmetric the formulae for h {\displaystyle h} and k {\displaystyle k} have to be generalized to: h = − ( A + A T ) − 1 b and k = c − h T A h = c − b T ( A + A T ) − 1 A ( A + A T ) − 1 b {\displaystyle h=-(A+A^{\mathrm {T} })^{-1}b\quad {\text{and}}\quad k=c-h^{\mathrm {T} }Ah=c-b^{\mathrm {T} }(A+A^{\mathrm {T} })^{-1}A(A+A^{\mathrm {T} })^{-1}b} == Relation to the graph == In analytic geometry, the graph of any quadratic function is a parabola in the xy-plane. Given a quadratic polynomial of the form a ( x − h ) 2 + k {\displaystyle a(x-h)^{2}+k} the numbers h and k may be interpreted as the Cartesian coordinates of the vertex (or stationary point) of the parabola. That is, h is the x-coordinate of the axis of symmetry (i.e. the axis of symmetry has equation x = h), and k is the minimum value (or maximum value, if a < 0) of the quadratic function. One way to see this is to note that the graph of the function f(x) = x2 is a parabola whose vertex is at the origin (0, 0). Therefore, the graph of the function f(x − h) = (x − h)2 is a parabola shifted to the right by h whose vertex is at (h, 0), as shown in the top figure. 
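The scalar-case formulas h = −b/(2a) and k = c − b²/(4a), together with the reading of (h, k) as the vertex, can be sanity-checked numerically. A small Python sketch (the function name is ours), using exact rational arithmetic:

```python
# Complete the square numerically: a x^2 + b x + c == a (x - h)^2 + k.
from fractions import Fraction

def complete_square(a, b, c):
    """Return (h, k) with a*x**2 + b*x + c == a*(x - h)**2 + k."""
    a, b, c = map(Fraction, (a, b, c))
    return -b / (2 * a), c - b * b / (4 * a)

a, b, c = 3, 12, 27                      # 3x^2 + 12x + 27 = 3(x + 2)^2 + 15
h, k = complete_square(a, b, c)
assert (h, k) == (-2, 15)

f = lambda x: a * x**2 + b * x + c
for x in range(-10, 11):
    assert f(x) == a * (x - h) ** 2 + k  # the rewritten form agrees everywhere
    assert f(x) >= k                     # for a > 0, k is the minimum value
assert f(h) == k                         # attained at the vertex x = h
```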
In contrast, the graph of the function f(x) + k = x2 + k is a parabola shifted upward by k whose vertex is at (0, k), as shown in the center figure. Combining both horizontal and vertical shifts yields f(x − h) + k = (x − h)2 + k, a parabola shifted to the right by h and upward by k, whose vertex is at (h, k), as shown in the bottom figure. == Solving quadratic equations == Completing the square may be used to solve any quadratic equation. For example: x 2 + 6 x + 5 = 0. {\displaystyle x^{2}+6x+5=0.} The first step is to complete the square: ( x + 3 ) 2 − 4 = 0. {\displaystyle (x+3)^{2}-4=0.} Next we solve for the squared term: ( x + 3 ) 2 = 4. {\displaystyle (x+3)^{2}=4.} Then either x + 3 = − 2 or x + 3 = 2 , {\displaystyle x+3=-2\quad {\text{or}}\quad x+3=2,} and therefore x = − 5 or x = − 1. {\displaystyle x=-5\quad {\text{or}}\quad x=-1.} This can be applied to any quadratic equation. When the x2 term has a coefficient other than 1, the first step is to divide the equation by this coefficient: for an example see the non-monic case below. === Irrational and complex roots === Unlike methods involving factoring the equation, which are reliable only if the roots are rational, completing the square will find the roots of a quadratic equation even when those roots are irrational or complex. For example, consider the equation x 2 − 10 x + 18 = 0. {\displaystyle x^{2}-10x+18=0.} Completing the square gives ( x − 5 ) 2 − 7 = 0 , {\displaystyle (x-5)^{2}-7=0,} so ( x − 5 ) 2 = 7. {\displaystyle (x-5)^{2}=7.} Then either x − 5 = − 7 or x − 5 = 7 . {\displaystyle x-5=-{\sqrt {7}}\quad {\text{or}}\quad x-5={\sqrt {7}}.} In terser language: x − 5 = ± 7 , {\displaystyle x-5=\pm {\sqrt {7}},} so x = 5 ± 7 . {\displaystyle x=5\pm {\sqrt {7}}.} Equations with complex roots can be handled in the same way. For example: x 2 + 4 x + 5 = 0 ( x + 2 ) 2 + 1 = 0 ( x + 2 ) 2 = − 1 x + 2 = ± i x = − 2 ± i . 
{\displaystyle {\begin{aligned}x^{2}+4x+5&=0\\[6pt](x+2)^{2}+1&=0\\[6pt](x+2)^{2}&=-1\\[6pt]x+2&=\pm i\\[6pt]x&=-2\pm i.\end{aligned}}} === Non-monic case === For an equation involving a non-monic quadratic, the first step in solving it is to divide through by the coefficient of x2. For example: 2 x 2 + 7 x + 6 = 0 x 2 + 7 2 x + 3 = 0 ( x + 7 4 ) 2 − 1 16 = 0 ( x + 7 4 ) 2 = 1 16 x + 7 4 = 1 4 or x + 7 4 = − 1 4 x = − 3 2 or x = − 2. {\displaystyle {\begin{array}{c}2x^{2}+7x+6\,=\,0\\[6pt]x^{2}+{\tfrac {7}{2}}x+3\,=\,0\\[6pt]\left(x+{\tfrac {7}{4}}\right)^{2}-{\tfrac {1}{16}}\,=\,0\\[6pt]\left(x+{\tfrac {7}{4}}\right)^{2}\,=\,{\tfrac {1}{16}}\\[6pt]x+{\tfrac {7}{4}}={\tfrac {1}{4}}\quad {\text{or}}\quad x+{\tfrac {7}{4}}=-{\tfrac {1}{4}}\\[6pt]x=-{\tfrac {3}{2}}\quad {\text{or}}\quad x=-2.\end{array}}} Applying this procedure to the general form of a quadratic equation leads to the quadratic formula. == Other applications == === Integration === Completing the square may be used to evaluate any integral of the form ∫ d x a x 2 + b x + c {\displaystyle \int {\frac {dx}{ax^{2}+bx+c}}} using the basic integrals ∫ d x x 2 − a 2 = 1 2 a ln | x − a x + a | + C and ∫ d x x 2 + a 2 = 1 a arctan ( x a ) + C . {\displaystyle \int {\frac {dx}{x^{2}-a^{2}}}={\frac {1}{2a}}\ln \left|{\frac {x-a}{x+a}}\right|+C\quad {\text{and}}\quad \int {\frac {dx}{x^{2}+a^{2}}}={\frac {1}{a}}\arctan \left({\frac {x}{a}}\right)+C.} For example, consider the integral ∫ d x x 2 + 6 x + 13 . {\displaystyle \int {\frac {dx}{x^{2}+6x+13}}.} Completing the square in the denominator gives: ∫ d x ( x + 3 ) 2 + 4 = ∫ d x ( x + 3 ) 2 + 2 2 . {\displaystyle \int {\frac {dx}{(x+3)^{2}+4}}\,=\,\int {\frac {dx}{(x+3)^{2}+2^{2}}}.} This can now be evaluated by using the substitution u = x + 3, which yields ∫ d x ( x + 3 ) 2 + 4 = 1 2 arctan ( x + 3 2 ) + C . 
{\displaystyle \int {\frac {dx}{(x+3)^{2}+4}}\,=\,{\frac {1}{2}}\arctan \left({\frac {x+3}{2}}\right)+C.} === Complex numbers === Consider the expression | z | 2 − b ∗ z − b z ∗ + c , {\displaystyle |z|^{2}-b^{*}z-bz^{*}+c,} where z and b are complex numbers, z* and b* are the complex conjugates of z and b, respectively, and c is a real number. Using the identity |u|2 = uu* we can rewrite this as | z − b | 2 − | b | 2 + c , {\displaystyle |z-b|^{2}-|b|^{2}+c,} which is clearly a real quantity. This is because | z − b | 2 = ( z − b ) ( z − b ) ∗ = ( z − b ) ( z ∗ − b ∗ ) = z z ∗ − z b ∗ − b z ∗ + b b ∗ = | z | 2 − z b ∗ − b z ∗ + | b | 2 . {\displaystyle {\begin{aligned}|z-b|^{2}&{}=(z-b)(z-b)^{*}\\&{}=(z-b)(z^{*}-b^{*})\\&{}=zz^{*}-zb^{*}-bz^{*}+bb^{*}\\&{}=|z|^{2}-zb^{*}-bz^{*}+|b|^{2}.\end{aligned}}} As another example, the expression a x 2 + b y 2 + c , {\displaystyle ax^{2}+by^{2}+c,} where a, b, c, x, and y are real numbers, with a > 0 and b > 0, may be expressed in terms of the square of the absolute value of a complex number. Define z = a x + i b y . {\displaystyle z={\sqrt {a}}\,x+i{\sqrt {b}}\,y.} Then | z | 2 = z z ∗ = ( a x + i b y ) ( a x − i b y ) = a x 2 − i a b x y + i b a y x − i 2 b y 2 = a x 2 + b y 2 , {\displaystyle {\begin{aligned}|z|^{2}&{}=zz^{*}\\[1ex]&{}=\left({\sqrt {a}}\,x+i{\sqrt {b}}\,y\right)\left({\sqrt {a}}\,x-i{\sqrt {b}}\,y\right)\\[1ex]&{}=ax^{2}-i{\sqrt {ab}}\,xy+i{\sqrt {ba}}\,yx-i^{2}by^{2}\\[1ex]&{}=ax^{2}+by^{2},\end{aligned}}} so a x 2 + b y 2 + c = | z | 2 + c . {\displaystyle ax^{2}+by^{2}+c=|z|^{2}+c.} === Idempotent matrix === A matrix M is idempotent when M2 = M. Idempotent matrices generalize the idempotent properties of 0 and 1. 
The completion of the square method of addressing the equation a 2 + b 2 = a , {\displaystyle a^{2}+b^{2}=a,} shows that some idempotent 2×2 matrices are parametrized by a circle in the (a,b)-plane: The matrix ( a b b 1 − a ) {\displaystyle {\begin{pmatrix}a&b\\b&1-a\end{pmatrix}}} will be idempotent provided a 2 + b 2 = a , {\displaystyle a^{2}+b^{2}=a,} which, upon completing the square, becomes ( a − 1 2 ) 2 + b 2 = 1 4 . {\displaystyle (a-{\tfrac {1}{2}})^{2}+b^{2}={\tfrac {1}{4}}.} In the (a,b)-plane, this is the equation of a circle with center (1/2, 0) and radius 1/2. == Geometric perspective == Consider completing the square for the equation x 2 + b x = a . {\displaystyle x^{2}+bx=a.} Since x2 represents the area of a square with side of length x, and bx represents the area of a rectangle with sides b and x, the process of completing the square can be viewed as visual manipulation of rectangles. Simple attempts to combine the x2 and the bx rectangles into a larger square result in a missing corner. The term (b/2)2 added to each side of the above equation is precisely the area of the missing corner, whence derives the terminology "completing the square". == A variation on the technique == As conventionally taught, completing the square consists of adding the third term, v2 to u 2 + 2 u v {\displaystyle u^{2}+2uv} to get a square. There are also cases in which one can add the middle term, either 2uv or −2uv, to u 2 + v 2 {\displaystyle u^{2}+v^{2}} to get a square. === Example: the sum of a positive number and its reciprocal === By writing x + 1 x = ( x − 2 + 1 x ) + 2 = ( x − 1 x ) 2 + 2 {\displaystyle {\begin{aligned}x+{1 \over x}&{}=\left(x-2+{1 \over x}\right)+2\\&{}=\left({\sqrt {x}}-{1 \over {\sqrt {x}}}\right)^{2}+2\end{aligned}}} we show that the sum of a positive number x and its reciprocal is always greater than or equal to 2. 
The square of a real expression is always greater than or equal to zero, which gives the stated bound; and here we achieve 2 just when x is 1, causing the square to vanish. === Example: factoring a simple quartic polynomial === Consider the problem of factoring the polynomial x 4 + 324. {\displaystyle x^{4}+324.} This is ( x 2 ) 2 + ( 18 ) 2 , {\displaystyle (x^{2})^{2}+(18)^{2},} so the middle term is 2(x2)(18) = 36x2. Thus we get x 4 + 324 = ( x 4 + 36 x 2 + 324 ) − 36 x 2 = ( x 2 + 18 ) 2 − ( 6 x ) 2 = a difference of two squares = ( x 2 + 18 + 6 x ) ( x 2 + 18 − 6 x ) = ( x 2 + 6 x + 18 ) ( x 2 − 6 x + 18 ) {\displaystyle {\begin{aligned}x^{4}+324&{}=(x^{4}+36x^{2}+324)-36x^{2}\\&{}=(x^{2}+18)^{2}-(6x)^{2}={\text{a difference of two squares}}\\&{}=(x^{2}+18+6x)(x^{2}+18-6x)\\&{}=(x^{2}+6x+18)(x^{2}-6x+18)\end{aligned}}} (the last line being added merely to follow the convention of decreasing degrees of terms). The same argument shows that x 4 + 4 a 4 {\displaystyle x^{4}+4a^{4}} is always factorizable as x 4 + 4 a 4 = ( x 2 + 2 a x + 2 a 2 ) ( x 2 − 2 a x + 2 a 2 ) {\displaystyle x^{4}+4a^{4}=\left(x^{2}+2ax+2a^{2}\right)\left(x^{2}-2ax+2a^{2}\right)} (Also known as Sophie Germain's identity). == Completing the cube == "Completing the square" consists of observing that the first two terms of a quadratic polynomial are also the first two terms of the square of a linear polynomial, and of using this to express the quadratic polynomial as the sum of a square and a constant. Completing the cube is a similar technique that allows one to transform a cubic polynomial into a cubic polynomial with no term of degree two. More precisely, if a x 3 + b x 2 + c x + d {\displaystyle ax^{3}+bx^{2}+cx+d} is a polynomial in x such that a ≠ 0 , {\displaystyle a\neq 0,} its first two terms are the first two terms of the expanded form of a ( x + b 3 a ) 3 = a x 3 + b x 2 + x b 2 3 a + b 3 27 a 2 . 
{\displaystyle a\left(x+{\frac {b}{3a}}\right)^{3}=ax^{3}+bx^{2}+x\,{\frac {b^{2}}{3a}}+{\frac {b^{3}}{27a^{2}}}.} So, the change of variable t = x + b 3 a {\displaystyle t=x+{\frac {b}{3a}}} provides a cubic polynomial in t {\displaystyle t} with no term of degree two, which is called the depressed form of the original polynomial. This transformation is generally the first step of the methods for solving the general cubic equation. More generally, a similar transformation can be used for removing terms of degree n − 1 {\displaystyle n-1} in polynomials of degree n {\displaystyle n} , which is called a Tschirnhaus transformation. == References == Algebra 1, Glencoe, ISBN 0-07-825083-8, pages 539–544 Algebra 2, Saxon, ISBN 0-939798-62-X, pages 214–214, 241–242, 256–257, 398–401 == External links == Completing the square at PlanetMath.
|
Wikipedia:Complex Lie algebra#0
|
In mathematics, a complex Lie algebra is a Lie algebra over the complex numbers. Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , its conjugate g ¯ {\displaystyle {\overline {\mathfrak {g}}}} is a complex Lie algebra with the same underlying real vector space but with i = − 1 {\displaystyle i={\sqrt {-1}}} acting as − i {\displaystyle -i} instead. As a real Lie algebra, a complex Lie algebra g {\displaystyle {\mathfrak {g}}} is trivially isomorphic to its conjugate. A complex Lie algebra is isomorphic to its conjugate if and only if it admits a real form (and is said to be defined over the real numbers). == Real form == Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a real Lie algebra g 0 {\displaystyle {\mathfrak {g}}_{0}} is said to be a real form of g {\displaystyle {\mathfrak {g}}} if the complexification g 0 ⊗ R C {\displaystyle {\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} } is isomorphic to g {\displaystyle {\mathfrak {g}}} . A real form g 0 {\displaystyle {\mathfrak {g}}_{0}} is abelian (resp. nilpotent, solvable, semisimple) if and only if g {\displaystyle {\mathfrak {g}}} is abelian (resp. nilpotent, solvable, semisimple). On the other hand, a real form g 0 {\displaystyle {\mathfrak {g}}_{0}} is simple if and only if either g {\displaystyle {\mathfrak {g}}} is simple or g {\displaystyle {\mathfrak {g}}} is of the form s × s ¯ {\displaystyle {\mathfrak {s}}\times {\overline {\mathfrak {s}}}} where s , s ¯ {\displaystyle {\mathfrak {s}},{\overline {\mathfrak {s}}}} are simple and are the conjugates of each other. 
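A classical instance of these definitions (our illustration, not taken from the text): su(2), the traceless anti-Hermitian 2×2 complex matrices, is a real form of sl(2, ℂ). The sketch below checks that the usual su(2) basis has real structure constants under the bracket, and that the standard sl(2, ℂ) basis lies in its complex span:

```python
# su(2) as a real form of sl(2, C), checked with pure-Python 2x2 matrices.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

X1 = [[0, 1j], [1j, 0]]          # i*sigma_1
X2 = [[0, 1], [-1, 0]]           # i*sigma_2
X3 = [[1j, 0], [0, -1j]]         # i*sigma_3

def comb(c1, c2, c3):
    """Complex combination c1*X1 + c2*X2 + c3*X3."""
    return [[sum(c * X[i][j] for c, X in zip((c1, c2, c3), (X1, X2, X3)))
             for j in range(2)] for i in range(2)]

# Real structure constants: [X1, X2] = -2 X3, and cyclically.
assert bracket(X1, X2) == comb(0, 0, -2)
assert bracket(X2, X3) == comb(-2, 0, 0)
assert bracket(X3, X1) == comb(0, -2, 0)

# The standard sl(2, C) basis h, e, f is in the C-span of X1, X2, X3.
assert comb(0, 0, -1j) == [[1, 0], [0, -1]]           # h
assert comb(-0.5j, 0.5, 0) == [[0, 1], [0, 0]]        # e
assert comb(-0.5j, -0.5, 0) == [[0, 0], [1, 0]]       # f
```

Since su(2) is simple and its complexification is sl(2, ℂ), this matches the criterion above: the real form is simple exactly because sl(2, ℂ) is.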
The existence of a real form in a complex Lie algebra g {\displaystyle {\mathfrak {g}}} implies that g {\displaystyle {\mathfrak {g}}} is isomorphic to its conjugate; indeed, if g = g 0 ⊗ R C = g 0 ⊕ i g 0 {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} ={\mathfrak {g}}_{0}\oplus i{\mathfrak {g}}_{0}} , then let τ : g → g ¯ {\displaystyle \tau :{\mathfrak {g}}\to {\overline {\mathfrak {g}}}} denote the R {\displaystyle \mathbb {R} } -linear isomorphism induced by complex conjugate and then τ ( i ( x + i y ) ) = τ ( i x − y ) = − i x − y = − i τ ( x + i y ) {\displaystyle \tau (i(x+iy))=\tau (ix-y)=-ix-y=-i\tau (x+iy)} , which is to say τ {\displaystyle \tau } is in fact a C {\displaystyle \mathbb {C} } -linear isomorphism. Conversely, suppose there is a C {\displaystyle \mathbb {C} } -linear isomorphism τ : g → ∼ g ¯ {\displaystyle \tau :{\mathfrak {g}}{\overset {\sim }{\to }}{\overline {\mathfrak {g}}}} ; without loss of generality, we can assume it is the identity function on the underlying real vector space. Then define g 0 = { z ∈ g | τ ( z ) = z } {\displaystyle {\mathfrak {g}}_{0}=\{z\in {\mathfrak {g}}|\tau (z)=z\}} , which is clearly a real Lie algebra. Each element z {\displaystyle z} in g {\displaystyle {\mathfrak {g}}} can be written uniquely as z = 2 − 1 ( z + τ ( z ) ) + i 2 − 1 ( i τ ( z ) − i z ) {\displaystyle z=2^{-1}(z+\tau (z))+i2^{-1}(i\tau (z)-iz)} . Here, τ ( i τ ( z ) − i z ) = − i z + i τ ( z ) {\displaystyle \tau (i\tau (z)-iz)=-iz+i\tau (z)} and similarly τ {\displaystyle \tau } fixes z + τ ( z ) {\displaystyle z+\tau (z)} . Hence, g = g 0 ⊕ i g 0 {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\oplus i{\mathfrak {g}}_{0}} ; i.e., g 0 {\displaystyle {\mathfrak {g}}_{0}} is a real form. == Complex Lie algebra of a complex Lie group == Let g {\displaystyle {\mathfrak {g}}} be a semisimple complex Lie algebra that is the Lie algebra of a complex Lie group G {\displaystyle G} . 
Let h {\displaystyle {\mathfrak {h}}} be a Cartan subalgebra of g {\displaystyle {\mathfrak {g}}} and H {\displaystyle H} the Lie subgroup corresponding to h {\displaystyle {\mathfrak {h}}} ; the conjugates of H {\displaystyle H} are called Cartan subgroups. Suppose there is the decomposition g = n − ⊕ h ⊕ n + {\displaystyle {\mathfrak {g}}={\mathfrak {n}}^{-}\oplus {\mathfrak {h}}\oplus {\mathfrak {n}}^{+}} given by a choice of positive roots. Then the exponential map defines an isomorphism from n + {\displaystyle {\mathfrak {n}}^{+}} to a closed subgroup U ⊂ G {\displaystyle U\subset G} . The Lie subgroup B ⊂ G {\displaystyle B\subset G} corresponding to the Borel subalgebra b = h ⊕ n + {\displaystyle {\mathfrak {b}}={\mathfrak {h}}\oplus {\mathfrak {n}}^{+}} is closed and is the semidirect product of H {\displaystyle H} and U {\displaystyle U} ; the conjugates of B {\displaystyle B} are called Borel subgroups. == Notes == == References == Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Knapp, A. W. (2002). Lie groups beyond an introduction. Progress in Mathematics. Vol. 120 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 0-8176-4259-5.. Serre, Jean-Pierre (2001). Complex Semisimple Lie Algebras. Berlin: Springer. ISBN 3-5406-7827-1.
|
Wikipedia:Complex conjugate of a vector space#0
|
In mathematics, the complex conjugate of a complex vector space V {\displaystyle V\,} is a complex vector space V ¯ {\displaystyle {\overline {V}}} that has the same elements and additive group structure as V , {\displaystyle V,} but whose scalar multiplication involves conjugation of the scalars. In other words, the scalar multiplication of V ¯ {\displaystyle {\overline {V}}} satisfies α ∗ v = α ¯ ⋅ v {\displaystyle \alpha \,*\,v={\,{\overline {\alpha }}\cdot \,v\,}} where ∗ {\displaystyle *} is the scalar multiplication of V ¯ {\displaystyle {\overline {V}}} and ⋅ {\displaystyle \cdot } is the scalar multiplication of V . {\displaystyle V.} The letter v {\displaystyle v} stands for a vector in V , {\displaystyle V,} α {\displaystyle \alpha } is a complex number, and α ¯ {\displaystyle {\overline {\alpha }}} denotes the complex conjugate of α . {\displaystyle \alpha .} More concretely, the complex conjugate vector space is the same underlying real vector space (same set of points, same vector addition and real scalar multiplication) with the conjugate linear complex structure J {\displaystyle J} (different multiplication by i {\displaystyle i} ). == Motivation == If V {\displaystyle V} and W {\displaystyle W} are complex vector spaces, a function f : V → W {\displaystyle f:V\to W} is antilinear if f ( v + w ) = f ( v ) + f ( w ) and f ( α v ) = α ¯ f ( v ) {\displaystyle f(v+w)=f(v)+f(w)\quad {\text{ and }}\quad f(\alpha v)={\overline {\alpha }}\,f(v)} With the use of the conjugate vector space V ¯ {\displaystyle {\overline {V}}} , an antilinear map f : V → W {\displaystyle f:V\to W} can be regarded as an ordinary linear map of type V ¯ → W . 
{\displaystyle {\overline {V}}\to W.} The linearity is checked by noting: f ( α ∗ v ) = f ( α ¯ ⋅ v ) = α ¯ ¯ ⋅ f ( v ) = α ⋅ f ( v ) {\displaystyle f(\alpha *v)=f({\overline {\alpha }}\cdot v)={\overline {\overline {\alpha }}}\cdot f(v)=\alpha \cdot f(v)} Conversely, any linear map defined on V ¯ {\displaystyle {\overline {V}}} gives rise to an antilinear map on V . {\displaystyle V.} This is the same underlying principle as in defining the opposite ring so that a right R {\displaystyle R} -module can be regarded as a left R o p {\displaystyle R^{op}} -module, or that of an opposite category so that a contravariant functor C → D {\displaystyle C\to D} can be regarded as an ordinary functor of type C o p → D . {\displaystyle C^{op}\to D.} == Complex conjugation functor == A linear map f : V → W {\displaystyle f:V\to W\,} gives rise to a corresponding linear map f ¯ : V ¯ → W ¯ {\displaystyle {\overline {f}}:{\overline {V}}\to {\overline {W}}} that has the same action as f . {\displaystyle f.} Note that f ¯ {\displaystyle {\overline {f}}} preserves scalar multiplication because f ¯ ( α ∗ v ) = f ( α ¯ ⋅ v ) = α ¯ ⋅ f ( v ) = α ∗ f ¯ ( v ) {\displaystyle {\overline {f}}(\alpha *v)=f({\overline {\alpha }}\cdot v)={\overline {\alpha }}\cdot f(v)=\alpha *{\overline {f}}(v)} Thus, complex conjugation V ↦ V ¯ {\displaystyle V\mapsto {\overline {V}}} and f ↦ f ¯ {\displaystyle f\mapsto {\overline {f}}} define a functor from the category of complex vector spaces to itself. 
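The linearity check above can be verified numerically. The following is a minimal NumPy sketch (the particular map f and all names are illustrative, not from the article): an antilinear map becomes an ordinary linear map once the scalar multiplication of its domain is replaced by the conjugated scalar multiplication of V ¯ {\displaystyle {\overline {V}}} .

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
v = rng.normal(size=3) + 1j * rng.normal(size=3)
alpha = 2.0 + 1.5j

# An antilinear map f : V -> W (conjugate the input, then act linearly).
def f(v):
    return A @ np.conj(v)

# Antilinearity with respect to the ordinary scalar multiplication of V:
# f(alpha . v) = conj(alpha) . f(v).
assert np.allclose(f(alpha * v), np.conj(alpha) * f(v))

# Scalar multiplication of the conjugate space: alpha * v := conj(alpha) . v.
def star(alpha, v):
    return np.conj(alpha) * v

# Regarded as a map from the conjugate space, f is linear:
# f(alpha * v) = alpha . f(v).
assert np.allclose(f(star(alpha, v)), alpha * f(v))
```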
If V {\displaystyle V} and W {\displaystyle W} are finite-dimensional and the map f {\displaystyle f} is described by the complex matrix A {\displaystyle A} with respect to the bases B {\displaystyle {\mathcal {B}}} of V {\displaystyle V} and C {\displaystyle {\mathcal {C}}} of W , {\displaystyle W,} then the map f ¯ {\displaystyle {\overline {f}}} is described by the complex conjugate of A {\displaystyle A} with respect to the bases B ¯ {\displaystyle {\overline {\mathcal {B}}}} of V ¯ {\displaystyle {\overline {V}}} and C ¯ {\displaystyle {\overline {\mathcal {C}}}} of W ¯ . {\displaystyle {\overline {W}}.} == Structure of the conjugate == The vector spaces V {\displaystyle V} and V ¯ {\displaystyle {\overline {V}}} have the same dimension over the complex numbers and are therefore isomorphic as complex vector spaces. However, there is no natural isomorphism from V {\displaystyle V} to V ¯ . {\displaystyle {\overline {V}}.} The double conjugate V ¯ ¯ {\displaystyle {\overline {\overline {V}}}} is identical to V . {\displaystyle V.} == Complex conjugate of a Hilbert space == Given a Hilbert space H {\displaystyle {\mathcal {H}}} (either finite- or infinite-dimensional), its complex conjugate H ¯ {\displaystyle {\overline {\mathcal {H}}}} is the same vector space as its continuous dual space H ′ . {\displaystyle {\mathcal {H}}^{\prime }.} There is a one-to-one antilinear correspondence between continuous linear functionals and vectors. In other words, any continuous linear functional on H {\displaystyle {\mathcal {H}}} is inner multiplication by some fixed vector, and vice versa. Thus, the complex conjugate to a vector v , {\displaystyle v,} particularly in the finite-dimensional case, may be denoted as v † {\displaystyle v^{\dagger }} (v-dagger, a row vector that is the conjugate transpose to a column vector v {\displaystyle v} ).
In quantum mechanics, the conjugate to a ket vector | ψ ⟩ {\displaystyle \,|\psi \rangle } is denoted as ⟨ ψ | {\displaystyle \langle \psi |\,} – a bra vector (see bra–ket notation). == See also == Antidual space – Conjugate homogeneous additive map Linear complex structure – Mathematics concept Riesz representation theorem – Theorem about the dual of a Hilbert space conjugate bundle == References == == Further reading == Budinich, P. and Trautman, A. The Spinorial Chessboard. Springer-Verlag, 1988. ISBN 0-387-19078-3. (complex conjugate vector spaces are discussed in section 3.3, p. 26).
|
Wikipedia:Complex dynamics#0
|
Complex dynamics, or holomorphic dynamics, is the study of dynamical systems obtained by iterating a complex analytic mapping. This article focuses on the case of algebraic dynamics, where a polynomial or rational function is iterated. In geometric terms, that amounts to iterating a mapping from some algebraic variety to itself. The related theory of arithmetic dynamics studies iteration over the rational numbers or the p-adic numbers instead of the complex numbers. == Dynamics in complex dimension 1 == A simple example that shows some of the main issues in complex dynamics is the mapping f ( z ) = z 2 {\displaystyle f(z)=z^{2}} from the complex numbers C to itself. It is helpful to view this as a map from the complex projective line C P 1 {\displaystyle \mathbf {CP} ^{1}} to itself, by adding a point ∞ {\displaystyle \infty } to the complex numbers. ( C P 1 {\displaystyle \mathbf {CP} ^{1}} has the advantage of being compact.) The basic question is: given a point z {\displaystyle z} in C P 1 {\displaystyle \mathbf {CP} ^{1}} , how does its orbit (or forward orbit) z , f ( z ) = z 2 , f ( f ( z ) ) = z 4 , f ( f ( f ( z ) ) ) = z 8 , … {\displaystyle z,\;f(z)=z^{2},\;f(f(z))=z^{4},f(f(f(z)))=z^{8},\;\ldots } behave, qualitatively? The answer is: if the absolute value |z| is less than 1, then the orbit converges to 0, in fact more than exponentially fast. If |z| is greater than 1, then the orbit converges to the point ∞ {\displaystyle \infty } in C P 1 {\displaystyle \mathbf {CP} ^{1}} , again more than exponentially fast. (Here 0 and ∞ {\displaystyle \infty } are superattracting fixed points of f, meaning that the derivative of f is zero at those points. An attracting fixed point means one where the derivative of f has absolute value less than 1.) On the other hand, suppose that | z | = 1 {\displaystyle |z|=1} , meaning that z is on the unit circle in C. At these points, the dynamics of f is chaotic, in various ways. 
For example, for almost all points z on the circle in terms of measure theory, the forward orbit of z is dense in the circle, and in fact uniformly distributed on the circle. There are also infinitely many periodic points on the circle, meaning points with f r ( z ) = z {\displaystyle f^{r}(z)=z} for some positive integer r. (Here f r ( z ) {\displaystyle f^{r}(z)} means the result of applying f to z r times, f ( f ( ⋯ ( f ( z ) ) ⋯ ) ) {\displaystyle f(f(\cdots (f(z))\cdots ))} .) Even at periodic points z on the circle, the dynamics of f can be considered chaotic, since points near z diverge exponentially fast from z upon iterating f. (The periodic points of f on the unit circle are repelling: if f r ( z ) = z {\displaystyle f^{r}(z)=z} , the derivative of f r {\displaystyle f^{r}} at z has absolute value greater than 1.) Pierre Fatou and Gaston Julia showed in the late 1910s that much of this story extends to any complex algebraic map from C P 1 {\displaystyle \mathbf {CP} ^{1}} to itself of degree greater than 1. (Such a mapping may be given by a polynomial f ( z ) {\displaystyle f(z)} with complex coefficients, or more generally by a rational function.) Namely, there is always a compact subset of C P 1 {\displaystyle \mathbf {CP} ^{1}} , the Julia set, on which the dynamics of f is chaotic. For the mapping f ( z ) = z 2 {\displaystyle f(z)=z^{2}} , the Julia set is the unit circle. For other polynomial mappings, the Julia set is often highly irregular, for example a fractal in the sense that its Hausdorff dimension is not an integer. This occurs even for mappings as simple as f ( z ) = z 2 + c {\displaystyle f(z)=z^{2}+c} for a constant c ∈ C {\displaystyle c\in \mathbf {C} } . The Mandelbrot set is the set of complex numbers c such that the Julia set of f ( z ) = z 2 + c {\displaystyle f(z)=z^{2}+c} is connected. 
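The three regimes for f ( z ) = z 2 {\displaystyle f(z)=z^{2}} described above can be observed directly. A small Python sketch (the starting points are chosen for illustration): orbits inside the unit circle collapse to 0, orbits outside escape, and on the circle nearby points separate exponentially because angles double at each step.

```python
import cmath
import math

def orbit(z, steps):
    """Forward orbit z, z^2, z^4, ... under f(z) = z^2."""
    out = [z]
    for _ in range(steps):
        z = z * z
        out.append(z)
    return out

# |z| < 1: super-exponential convergence to the fixed point 0.
assert abs(orbit(0.8, 6)[-1]) < 1e-5          # 0.8**64 is about 6e-7

# |z| > 1: super-exponential escape toward the fixed point at infinity.
assert abs(orbit(1.25, 6)[-1]) > 1e3          # 1.25**64 is about 1.6e6

# |z| = 1: the orbit stays on the circle ...
t = (math.sqrt(5) - 1) / 2                    # an irrational angle
z1 = cmath.exp(2j * math.pi * t)
assert all(abs(abs(w) - 1) < 1e-9 for w in orbit(z1, 20))

# ... but nearby circle points diverge exponentially (angle doubling),
# the hallmark of chaotic behavior on the Julia set.
z2 = cmath.exp(2j * math.pi * (t + 1e-8))
d0 = abs(z1 - z2)
d20 = abs(orbit(z1, 20)[-1] - orbit(z2, 20)[-1])
assert d20 > 1000 * d0
```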
There is a rather complete classification of the possible dynamics of a rational function f : C P 1 → C P 1 {\displaystyle f\colon \mathbf {CP} ^{1}\to \mathbf {CP} ^{1}} in the Fatou set, the complement of the Julia set, where the dynamics is "tame". Namely, Dennis Sullivan showed that each connected component U of the Fatou set is pre-periodic, meaning that there are natural numbers a < b {\displaystyle a<b} such that f a ( U ) = f b ( U ) {\displaystyle f^{a}(U)=f^{b}(U)} . Therefore, to analyze the dynamics on a component U, one can assume after replacing f by an iterate that f ( U ) = U {\displaystyle f(U)=U} . Then either (1) U contains an attracting fixed point for f; (2) U is parabolic in the sense that all points in U approach a fixed point in the boundary of U; (3) U is a Siegel disk, meaning that the action of f on U is conjugate to an irrational rotation of the open unit disk; or (4) U is a Herman ring, meaning that the action of f on U is conjugate to an irrational rotation of an open annulus. (Note that the "backward orbit" of a point z in U, the set of points in C P 1 {\displaystyle \mathbf {CP} ^{1}} that map to z under some iterate of f, need not be contained in U.) == The equilibrium measure of an endomorphism == Complex dynamics has been effectively developed in any dimension. This section focuses on the mappings from complex projective space C P n {\displaystyle \mathbf {CP} ^{n}} to itself, the richest source of examples. The main results for C P n {\displaystyle \mathbf {CP} ^{n}} have been extended to a class of rational maps from any projective variety to itself. Note, however, that many varieties have no interesting self-maps. Let f be an endomorphism of C P n {\displaystyle \mathbf {CP} ^{n}} , meaning a morphism of algebraic varieties from C P n {\displaystyle \mathbf {CP} ^{n}} to itself, for a positive integer n. 
Such a mapping is given in homogeneous coordinates by f ( [ z 0 , … , z n ] ) = [ f 0 ( z 0 , … , z n ) , … , f n ( z 0 , … , z n ) ] {\displaystyle f([z_{0},\ldots ,z_{n}])=[f_{0}(z_{0},\ldots ,z_{n}),\ldots ,f_{n}(z_{0},\ldots ,z_{n})]} for some homogeneous polynomials f 0 , … , f n {\displaystyle f_{0},\ldots ,f_{n}} of the same degree d that have no common zeros in C P n {\displaystyle \mathbf {CP} ^{n}} . (By Chow's theorem, this is the same thing as a holomorphic mapping from C P n {\displaystyle \mathbf {CP} ^{n}} to itself.) Assume that d is greater than 1; then the degree of the mapping f is d n {\displaystyle d^{n}} , which is also greater than 1. Then there is a unique probability measure μ f {\displaystyle \mu _{f}} on C P n {\displaystyle \mathbf {CP} ^{n}} , the equilibrium measure of f, that describes the most chaotic part of the dynamics of f. (It has also been called the Green measure or measure of maximal entropy.) This measure was defined by Hans Brolin (1965) for polynomials in one variable, by Alexandre Freire, Artur Lopes, Ricardo Mañé, and Mikhail Lyubich for n = 1 {\displaystyle n=1} (around 1983), and by John Hubbard, Peter Papadopol, John Fornaess, and Nessim Sibony in any dimension (around 1994). The small Julia set J ∗ ( f ) {\displaystyle J^{*}(f)} is the support of the equilibrium measure in C P n {\displaystyle \mathbf {CP} ^{n}} ; this is simply the Julia set when n = 1 {\displaystyle n=1} . === Examples === For the mapping f ( z ) = z 2 {\displaystyle f(z)=z^{2}} on C P 1 {\displaystyle \mathbf {CP} ^{1}} , the equilibrium measure μ f {\displaystyle \mu _{f}} is the Haar measure (the standard measure, scaled to have total measure 1) on the unit circle | z | = 1 {\displaystyle |z|=1} . More generally, for an integer d > 1 {\displaystyle d>1} , let f : C P n → C P n {\displaystyle f\colon \mathbf {CP} ^{n}\to \mathbf {CP} ^{n}} be the mapping f ( [ z 0 , … , z n ] ) = [ z 0 d , … , z n d ] . 
{\displaystyle f([z_{0},\ldots ,z_{n}])=[z_{0}^{d},\ldots ,z_{n}^{d}].} Then the equilibrium measure μ f {\displaystyle \mu _{f}} is the Haar measure on the n-dimensional torus { [ 1 , z 1 , … , z n ] : | z 1 | = ⋯ = | z n | = 1 } . {\displaystyle \{[1,z_{1},\ldots ,z_{n}]:|z_{1}|=\cdots =|z_{n}|=1\}.} For more general holomorphic mappings from C P n {\displaystyle \mathbf {CP} ^{n}} to itself, the equilibrium measure can be much more complicated, as one sees already in complex dimension 1 from pictures of Julia sets. === Characterizations of the equilibrium measure === A basic property of the equilibrium measure is that it is invariant under f, in the sense that the pushforward measure f ∗ μ f {\displaystyle f_{*}\mu _{f}} is equal to μ f {\displaystyle \mu _{f}} . Because f is a finite morphism, the pullback measure f ∗ μ f {\displaystyle f^{*}\mu _{f}} is also defined, and μ f {\displaystyle \mu _{f}} is totally invariant in the sense that f ∗ μ f = deg ( f ) μ f {\displaystyle f^{*}\mu _{f}=\deg(f)\mu _{f}} . One striking characterization of the equilibrium measure is that it describes the asymptotics of almost every point in C P n {\displaystyle \mathbf {CP} ^{n}} when followed backward in time, by Jean-Yves Briend, Julien Duval, Tien-Cuong Dinh, and Sibony. Namely, for a point z in C P n {\displaystyle \mathbf {CP} ^{n}} and a positive integer r, consider the probability measure ( 1 / d r n ) ( f r ) ∗ ( δ z ) {\displaystyle (1/d^{rn})(f^{r})^{*}(\delta _{z})} which is evenly distributed on the d r n {\displaystyle d^{rn}} points w with f r ( w ) = z {\displaystyle f^{r}(w)=z} . Then there is a Zariski closed subset E ⊊ C P n {\displaystyle E\subsetneq \mathbf {CP} ^{n}} such that for all points z not in E, the measures just defined converge weakly to the equilibrium measure μ f {\displaystyle \mu _{f}} as r goes to infinity. 
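For f ( z ) = z 2 {\displaystyle f(z)=z^{2}} this convergence can be seen concretely, since the r-th preimages of a point are its 2 r {\displaystyle 2^{r}} -th roots. A Python sketch (the starting point z 0 = 2 {\displaystyle z_{0}=2} is arbitrary): the preimages accumulate on the unit circle and are uniformly spread in angle, matching the Haar/equilibrium measure.

```python
import cmath
import math

def preimages(z0, r):
    """All 2**r preimages of z0 under r iterations of f(z) = z^2."""
    pts = [z0]
    for _ in range(r):
        pts = [w for z in pts for w in (cmath.sqrt(z), -cmath.sqrt(z))]
    return pts

pts = preimages(2.0, 12)
assert len(pts) == 2 ** 12                    # 4096 points

# The preimages converge to the Julia set |z| = 1: their moduli are 2**(1/4096).
assert max(abs(abs(w) - 1) for w in pts) < 0.01

# They are evenly spread in angle, as the equilibrium (Haar) measure predicts:
# about half of them lie in the upper half of the circle.
frac = sum(1 for w in pts if 0 <= cmath.phase(w) < math.pi) / len(pts)
assert abs(frac - 0.5) < 0.01
```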
In more detail: only finitely many closed complex subspaces of C P n {\displaystyle \mathbf {CP} ^{n}} are totally invariant under f (meaning that f − 1 ( S ) = S {\displaystyle f^{-1}(S)=S} ), and one can take the exceptional set E to be the unique largest totally invariant closed complex subspace not equal to C P n {\displaystyle \mathbf {CP} ^{n}} . Another characterization of the equilibrium measure (due to Briend and Duval) is as follows. For each positive integer r, the number of periodic points of period r (meaning that f r ( z ) = z {\displaystyle f^{r}(z)=z} ), counted with multiplicity, is ( d r ( n + 1 ) − 1 ) / ( d r − 1 ) {\displaystyle (d^{r(n+1)}-1)/(d^{r}-1)} , which is roughly d r n {\displaystyle d^{rn}} . Consider the probability measure which is evenly distributed on the points of period r. Then these measures also converge to the equilibrium measure μ f {\displaystyle \mu _{f}} as r goes to infinity. Moreover, most periodic points are repelling and lie in J ∗ ( f ) {\displaystyle J^{*}(f)} , and so one gets the same limit measure by averaging only over the repelling periodic points in J ∗ ( f ) {\displaystyle J^{*}(f)} . There may also be repelling periodic points outside J ∗ ( f ) {\displaystyle J^{*}(f)} . The equilibrium measure gives zero mass to any closed complex subspace of C P n {\displaystyle \mathbf {CP} ^{n}} that is not the whole space. Since the periodic points in J ∗ ( f ) {\displaystyle J^{*}(f)} are dense in J ∗ ( f ) {\displaystyle J^{*}(f)} , it follows that the periodic points of f are Zariski dense in C P n {\displaystyle \mathbf {CP} ^{n}} . A more algebraic proof of this Zariski density was given by Najmuddin Fakhruddin. Another consequence of μ f {\displaystyle \mu _{f}} giving zero mass to closed complex subspaces not equal to C P n {\displaystyle \mathbf {CP} ^{n}} is that each point has zero mass. 
As a result, the support J ∗ ( f ) {\displaystyle J^{*}(f)} of μ f {\displaystyle \mu _{f}} has no isolated points, and so it is a perfect set. The support J ∗ ( f ) {\displaystyle J^{*}(f)} of the equilibrium measure is not too small, in the sense that its Hausdorff dimension is always greater than zero. In that sense, an endomorphism of complex projective space with degree greater than 1 always behaves chaotically at least on part of the space. (There are examples where J ∗ ( f ) {\displaystyle J^{*}(f)} is all of C P n {\displaystyle \mathbf {CP} ^{n}} .) Another way to make precise that f has some chaotic behavior is that the topological entropy of f is always greater than zero, in fact equal to n log d {\displaystyle n\log d} , by Mikhail Gromov, Michał Misiurewicz, and Feliks Przytycki. For any continuous endomorphism f of a compact metric space X, the topological entropy of f is equal to the maximum of the measure-theoretic entropy (or "metric entropy") of all f-invariant measures on X. For a holomorphic endomorphism f of C P n {\displaystyle \mathbf {CP} ^{n}} , the equilibrium measure μ f {\displaystyle \mu _{f}} is the unique invariant measure of maximal entropy, by Briend and Duval. This is another way to say that the most chaotic behavior of f is concentrated on the support of the equilibrium measure. Finally, one can say more about the dynamics of f on the support of the equilibrium measure: f is ergodic and, more strongly, mixing with respect to that measure, by Fornaess and Sibony. It follows, for example, that for almost every point with respect to μ f {\displaystyle \mu _{f}} , its forward orbit is uniformly distributed with respect to μ f {\displaystyle \mu _{f}} . === Lattès maps === A Lattès map is an endomorphism f of C P n {\displaystyle \mathbf {CP} ^{n}} obtained from an endomorphism of an abelian variety by dividing by a finite group. 
In this case, the equilibrium measure of f is absolutely continuous with respect to Lebesgue measure on C P n {\displaystyle \mathbf {CP} ^{n}} . Conversely, by Anna Zdunik, François Berteloot, and Christophe Dupont, the only endomorphisms of C P n {\displaystyle \mathbf {CP} ^{n}} whose equilibrium measure is absolutely continuous with respect to Lebesgue measure are the Lattès examples. That is, for all non-Lattès endomorphisms, μ f {\displaystyle \mu _{f}} assigns its full mass 1 to some Borel set of Lebesgue measure 0. In dimension 1, more is known about the "irregularity" of the equilibrium measure. Namely, define the Hausdorff dimension of a probability measure μ {\displaystyle \mu } on C P 1 {\displaystyle \mathbf {CP} ^{1}} (or more generally on a smooth manifold) by dim ( μ ) = inf { dim H ( Y ) : μ ( Y ) = 1 } , {\displaystyle \dim(\mu )=\inf\{\dim _{H}(Y):\mu (Y)=1\},} where dim H ( Y ) {\displaystyle \dim _{H}(Y)} denotes the Hausdorff dimension of a Borel set Y. For an endomorphism f of C P 1 {\displaystyle \mathbf {CP} ^{1}} of degree greater than 1, Zdunik showed that the dimension of μ f {\displaystyle \mu _{f}} is equal to the Hausdorff dimension of its support (the Julia set) if and only if f is conjugate to a Lattès map, a Chebyshev polynomial (up to sign), or a power map f ( z ) = z ± d {\displaystyle f(z)=z^{\pm d}} with d ≥ 2 {\displaystyle d\geq 2} . (In the latter cases, the Julia set is all of C P 1 {\displaystyle \mathbf {CP} ^{1}} , a closed interval, or a circle, respectively.) Thus, outside those special cases, the equilibrium measure is highly irregular, assigning positive mass to some closed subsets of the Julia set with smaller Hausdorff dimension than the whole Julia set. == Automorphisms of projective varieties == More generally, complex dynamics seeks to describe the behavior of rational maps under iteration. 
One case that has been studied with some success is that of automorphisms of a smooth complex projective variety X, meaning isomorphisms f from X to itself. The case of main interest is where f acts nontrivially on the singular cohomology H ∗ ( X , Z ) {\displaystyle H^{*}(X,\mathbf {Z} )} . Gromov and Yosef Yomdin showed that the topological entropy of an endomorphism (for example, an automorphism) of a smooth complex projective variety is determined by its action on cohomology. Explicitly, for X of complex dimension n and 0 ≤ p ≤ n {\displaystyle 0\leq p\leq n} , let d p {\displaystyle d_{p}} be the spectral radius of f acting by pullback on the Hodge cohomology group H p , p ( X ) ⊂ H 2 p ( X , C ) {\displaystyle H^{p,p}(X)\subset H^{2p}(X,\mathbf {C} )} . Then the topological entropy of f is h ( f ) = max p log d p . {\displaystyle h(f)=\max _{p}\log d_{p}.} (The topological entropy of f is also the logarithm of the spectral radius of f on the whole cohomology H ∗ ( X , C ) {\displaystyle H^{*}(X,\mathbf {C} )} .) Thus f has some chaotic behavior, in the sense that its topological entropy is greater than zero, if and only if it acts on some cohomology group with an eigenvalue of absolute value greater than 1. Many projective varieties do not have such automorphisms, but (for example) many rational surfaces and K3 surfaces do have such automorphisms. Let X be a compact Kähler manifold, which includes the case of a smooth complex projective variety. Say that an automorphism f of X has simple action on cohomology if: there is only one number p such that d p {\displaystyle d_{p}} takes its maximum value, the action of f on H p , p ( X ) {\displaystyle H^{p,p}(X)} has only one eigenvalue with absolute value d p {\displaystyle d_{p}} , and this is a simple eigenvalue. For example, Serge Cantat showed that every automorphism of a compact Kähler surface with positive topological entropy has simple action on cohomology. 
(Here an "automorphism" is complex analytic but is not assumed to preserve a Kähler metric on X. In fact, every automorphism that preserves a metric has topological entropy zero.) For an automorphism f with simple action on cohomology, some of the goals of complex dynamics have been achieved. Dinh, Sibony, and Henry de Thélin showed that there is a unique invariant probability measure μ f {\displaystyle \mu _{f}} of maximal entropy for f, called the equilibrium measure (or Green measure, or measure of maximal entropy). (In particular, μ f {\displaystyle \mu _{f}} has entropy log d p {\displaystyle \log d_{p}} with respect to f.) The support of μ f {\displaystyle \mu _{f}} is called the small Julia set J ∗ ( f ) {\displaystyle J^{*}(f)} . Informally: f has some chaotic behavior, and the most chaotic behavior is concentrated on the small Julia set. At least when X is projective, J ∗ ( f ) {\displaystyle J^{*}(f)} has positive Hausdorff dimension. (More precisely, μ f {\displaystyle \mu _{f}} assigns zero mass to all sets of sufficiently small Hausdorff dimension.) === Kummer automorphisms === Some abelian varieties have an automorphism of positive entropy. For example, let E be a complex elliptic curve and let X be the abelian surface E × E {\displaystyle E\times E} . Then the group G L ( 2 , Z ) {\displaystyle GL(2,\mathbf {Z} )} of invertible 2 × 2 {\displaystyle 2\times 2} integer matrices acts on X. Any group element f whose trace has absolute value greater than 2, for example ( 2 1 1 1 ) {\displaystyle {\begin{pmatrix}2&1\\1&1\end{pmatrix}}} , has spectral radius greater than 1, and so it gives a positive-entropy automorphism of X. The equilibrium measure of f is the Haar measure (the standard Lebesgue measure) on X. The Kummer automorphisms are defined by taking the quotient space by a finite group of an abelian surface with automorphism, and then blowing up to make the surface smooth. 
The resulting surfaces include some special K3 surfaces and rational surfaces. For the Kummer automorphisms, the equilibrium measure has support equal to X and is smooth outside finitely many curves. Conversely, Cantat and Dupont showed that for all surface automorphisms of positive entropy except the Kummer examples, the equilibrium measure is not absolutely continuous with respect to Lebesgue measure. In this sense, it is usual for the equilibrium measure of an automorphism to be somewhat irregular. === Saddle periodic points === A periodic point z of f is called a saddle periodic point if, for a positive integer r such that f r ( z ) = z {\displaystyle f^{r}(z)=z} , at least one eigenvalue of the derivative of f r {\displaystyle f^{r}} on the tangent space at z has absolute value less than 1, at least one has absolute value greater than 1, and none has absolute value equal to 1. (Thus f is expanding in some directions and contracting in others, near z.) For an automorphism f with simple action on cohomology, the saddle periodic points are dense in the support J ∗ ( f ) {\displaystyle J^{*}(f)} of the equilibrium measure μ f {\displaystyle \mu _{f}} . On the other hand, the measure μ f {\displaystyle \mu _{f}} vanishes on closed complex subspaces not equal to X. It follows that the periodic points of f (or even just the saddle periodic points contained in the support of μ f {\displaystyle \mu _{f}} ) are Zariski dense in X. For an automorphism f with simple action on cohomology, f and its inverse map are ergodic and, more strongly, mixing with respect to the equilibrium measure μ f {\displaystyle \mu _{f}} . It follows that for almost every point z with respect to μ f {\displaystyle \mu _{f}} , the forward and backward orbits of z are both uniformly distributed with respect to μ f {\displaystyle \mu _{f}} .
A notable difference with the case of endomorphisms of C P n {\displaystyle \mathbf {CP} ^{n}} is that for an automorphism f with simple action on cohomology, there can be a nonempty open subset of X on which neither forward nor backward orbits approach the support J ∗ ( f ) {\displaystyle J^{*}(f)} of the equilibrium measure. For example, Eric Bedford, Kyounghee Kim, and Curtis McMullen constructed automorphisms f of a smooth projective rational surface with positive topological entropy (hence simple action on cohomology) such that f has a Siegel disk, on which the action of f is conjugate to an irrational rotation. Points in that open set never approach J ∗ ( f ) {\displaystyle J^{*}(f)} under the action of f or its inverse. At least in complex dimension 2, the equilibrium measure of f describes the distribution of the isolated periodic points of f. (There may also be complex curves fixed by f or an iterate, which are ignored here.) Namely, let f be an automorphism of a compact Kähler surface X with positive topological entropy h ( f ) = log d 1 {\displaystyle h(f)=\log d_{1}} . Consider the probability measure which is evenly distributed on the isolated periodic points of period r (meaning that f r ( z ) = z {\displaystyle f^{r}(z)=z} ). Then this measure converges weakly to μ f {\displaystyle \mu _{f}} as r goes to infinity, by Eric Bedford, Lyubich, and John Smillie. The same holds for the subset of saddle periodic points, because both sets of periodic points grow at a rate of ( d 1 ) r {\displaystyle (d_{1})^{r}} . 
== See also == Dynamics in complex dimension 1 Complex analysis Complex quadratic polynomial Infinite compositions of analytic functions Montel's theorem Poincaré metric Schwarz lemma Riemann mapping theorem Carathéodory's theorem (conformal mapping) Böttcher's equation Orbit portraits Yoccoz puzzles Related areas of dynamics Arithmetic dynamics Chaos theory Symbolic dynamics == Notes == == References == Alexander, Daniel (1994), A history of complex dynamics: from Schröder to Fatou and Julia, Aspects of Mathematics, vol. 24, Vieweg Verlag, doi:10.1007/978-3-663-09197-4, ISBN 3-528-06520-6, MR 1260930 Beardon, Alan (1991), Iteration of rational functions: complex analytic dynamical systems, Springer-Verlag, ISBN 0-387-97589-6, MR 1128089 Berteloot, François; Dupont, Christophe (2005), "Une caractérisation des endomorphismes de Lattès par leur mesure de Green", Commentarii Mathematici Helvetici, 80 (2): 433–454, arXiv:math/0501034, doi:10.4171/CMH/21, MR 2142250 Bonifant, Araceli; Lyubich, Mikhail; Sutherland, Scott, eds. (2014), Frontiers in complex dynamics: in celebration of John Milnor's 80th birthday, Princeton University Press, doi:10.1515/9781400851317, ISBN 978-0-691-15929-4, MR 3289442 Cantat, Serge (2010), "Quelques aspects des systèmes dynamiques polynomiaux: existence, exemples, rigidité", Quelques aspects des systèmes dynamiques polynomiaux, Société Mathématique de France, pp. 13–95, ISBN 978-2-85629-338-6, MR 2932433 Cantat, Serge (2014), "Dynamics of automorphisms of compact complex surfaces", Frontiers in complex dynamics (Banff, 2011), Princeton University Press, pp. 
463–514, ISBN 978-0-691-15929-4, MR 3289919 Cantat, Serge; Dupont, Christophe (2020), "Automorphisms of surfaces: Kummer rigidity and measure of maximal entropy", Journal of the European Mathematical Society, 22 (4): 1289–1351, arXiv:1410.1202, doi:10.4171/JEMS/946, MR 4071328 Carleson, Lennart; Gamelin, Theodore (1993), Complex dynamics, Springer-Verlag, doi:10.1007/978-1-4612-4364-9, ISBN 0-387-97942-5, MR 1230383 de Thélin, Henry; Dinh, Tien-Cuong (2012), "Dynamics of automorphisms on compact Kähler manifolds", Advances in Mathematics, 229 (5): 2640–2655, arXiv:1009.5796, doi:10.1016/j.aim.2012.01.014, MR 2889139 Dinh, Tien-Cuong; Sibony, Nessim (2010), "Dynamics in several complex variables: endomorphisms of projective spaces and polynomial-like mappings", Holomorphic dynamical systems, Lecture Notes in Mathematics, vol. 1998, Springer-Verlag, pp. 165–294, arXiv:0810.0811, doi:10.1007/978-3-642-13171-4_4, ISBN 978-3-642-13170-7, MR 2648690 Dinh, Tien-Cuong; Sibony, Nessim (2010), "Super-potentials for currents on compact Kähler manifolds and dynamics of automorphisms", Journal of Algebraic Geometry, 19 (3): 473–529, arXiv:0804.0860, doi:10.1090/S1056-3911-10-00549-7, MR 2629598 Fakhruddin, Najmuddin (2003), "Questions on self maps of algebraic varieties", Journal of the Ramanujan Mathematical Society, 18 (2): 109–122, arXiv:math/0212208, MR 1995861 Fornaess, John Erik (1996), Dynamics in several complex variables, American Mathematical Society, ISBN 978-0-8218-0317-2, MR 1363948 Fornaess, John Erik; Sibony, Nessim (2001), "Dynamics of P 2 {\displaystyle \mathbf {P} ^{2}} (examples)", Laminations and foliations in dynamics, geometry and topology (Stony Brook, 1998), American Mathematical Society, pp. 47–85, doi:10.1090/conm/269/04329, ISBN 978-0-8218-1985-2, MR 1810536 Guedj, Vincent (2010), "Propriétés ergodiques des applications rationnelles", Quelques aspects des systèmes dynamiques polynomiaux, Société Mathématique de France, pp. 
97–202, arXiv:math/0611302, ISBN 978-2-85629-338-6, MR 2932434 Milnor, John (2006), Dynamics in one complex variable (3rd ed.), Princeton University Press, arXiv:math/9201272, doi:10.1515/9781400835539, ISBN 0-691-12488-4, MR 2193309 Morosawa, Shunsuke; Nishimura, Yasuichiro; Taniguchu, Masahiko; Ueda, Tetsuo (2000), Holomorphic dynamics, Cambridge University Press, ISBN 0-521-66258-3, MR 1747010 Tan, Lei, ed. (2000), The Mandelbrot set, theme and variations, London Mathematical Society Lecture Note Series, vol. 274, Cambridge University Press, ISBN 0-521-77476-4, MR 1765080 Zdunik, Anna (1990), "Parabolic orbifolds and the dimension of the maximal measure for rational maps", Inventiones Mathematicae, 99 (3): 627–649, Bibcode:1990InMat..99..627Z, doi:10.1007/BF01234434, MR 1032883 == External links == Gallery of dynamics (Curtis McMullen) Surveys in Dynamical Systems
|
Wikipedia:Complex network zeta function#0
|
Different definitions have been given for the dimension of a complex network or graph. For example, metric dimension is defined in terms of the resolving set for a graph. Dimension has also been defined based on the box covering method applied to graphs. Here we describe the definition based on the complex network zeta function. This generalises the definition based on the scaling property of the volume with distance. The best definition depends on the application. == Definition == One usually thinks of dimension for a dense set, such as the points on a line. Dimension makes sense in a discrete setting, like for graphs, only in the large-system limit, as the size tends to infinity. For example, in statistical mechanics, one considers discrete points which are located on regular lattices of different dimensions. Such studies have been extended to arbitrary networks, and it is interesting to consider how the definition of dimension can be extended to cover these cases. A very simple and obvious way to extend the definition of dimension to arbitrarily large networks is to consider how the volume (the number of nodes within a given distance from a specified node) scales as the distance (the shortest path connecting two nodes in the graph) is increased. For many systems arising in physics, this is indeed a useful approach. This definition of dimension can be put on a strong mathematical foundation, similar to the definition of Hausdorff dimension for continuous systems. The mathematically robust definition uses the concept of a zeta function for a graph. The complex network zeta function and the graph surface function were introduced to characterize large graphs. They have also been applied to study patterns in language analysis. In this section we briefly review the definition of the functions and discuss some of their properties which follow from the definition.
We denote by r i j {\displaystyle \textstyle r_{ij}} the distance from node i {\displaystyle \textstyle i} to node j {\displaystyle \textstyle j} , i.e., the length of the shortest path connecting the first node to the second node. r i j {\displaystyle \textstyle r_{ij}} is ∞ {\displaystyle \textstyle \infty } if there is no path from node i {\displaystyle \textstyle i} to node j {\displaystyle \textstyle j} . With this definition, the nodes of the complex network become points in a metric space. Simple generalisations of this definition can be studied, e.g., we could consider weighted edges. The graph surface function, S ( r ) {\displaystyle \textstyle S(r)} , is defined as the number of nodes which are exactly at a distance r {\displaystyle \textstyle r} from a given node, averaged over all nodes of the network. The complex network zeta function ζ G ( α ) {\displaystyle \textstyle \zeta _{G}(\alpha )} is defined as ζ G ( α ) := 1 N ∑ i ∑ j ≠ i r i j − α , {\displaystyle \zeta _{G}(\alpha ):={\frac {1}{N}}\sum _{i}\sum _{j\neq i}r_{ij}^{-\alpha },} where N {\displaystyle \textstyle N} is the graph size, measured by the number of nodes. When α {\displaystyle \textstyle \alpha } is zero all nodes contribute equally to the sum in the previous equation. This means that ζ G ( 0 ) {\displaystyle \textstyle \zeta _{G}(0)} is N − 1 {\displaystyle \textstyle N-1} , and it diverges when N → ∞ {\displaystyle \textstyle N\rightarrow \infty } . When the exponent α {\displaystyle \textstyle \alpha } tends to infinity, the sum gets contributions only from the nearest neighbours of a node. The other terms tend to zero. Thus, ζ G ( α ) {\displaystyle \textstyle \zeta _{G}(\alpha )} tends to the average degree < k > {\displaystyle \textstyle <k>} for the graph as α → ∞ {\displaystyle \textstyle \alpha \rightarrow \infty } . ⟨ k ⟩ = lim α → ∞ ζ G ( α ) . 
{\displaystyle \langle k\rangle =\lim _{\alpha \rightarrow \infty }\zeta _{G}(\alpha ).} The need for taking an average over all nodes can be avoided by using the concept of supremum over nodes, which makes the concept much easier to apply for formally infinite graphs. The definition can be expressed as a weighted sum over the node distances. This gives the Dirichlet series relation ζ G ( α ) = ∑ r S ( r ) / r α . {\displaystyle \zeta _{G}(\alpha )=\sum _{r}S(r)/r^{\alpha }.} This definition has been used in the shortcut model to study several processes and their dependence on dimension. == Properties == ζ G ( α ) {\displaystyle \textstyle \zeta _{G}(\alpha )} is a decreasing function of α {\displaystyle \textstyle \alpha } , ζ G ( α 1 ) > ζ G ( α 2 ) {\displaystyle \textstyle \zeta _{G}(\alpha _{1})>\zeta _{G}(\alpha _{2})} , if α 1 < α 2 {\displaystyle \textstyle \alpha _{1}<\alpha _{2}} . If the average degree of the nodes (the mean coordination number for the graph) is finite, then there is exactly one value of α {\displaystyle \textstyle \alpha } , α t r a n s i t i o n {\displaystyle \textstyle \alpha _{transition}} , at which the complex network zeta function transitions from being infinite to being finite. This has been defined as the dimension of the complex network. If we add more edges to an existing graph, the distances between nodes will decrease. This results in an increase in the value of the complex network zeta function, since S ( r ) {\displaystyle \textstyle S(r)} will get pulled inward. If the new links connect remote parts of the system, i.e., if the distances change by amounts which do not remain finite as the graph size N → ∞ {\displaystyle \textstyle N\rightarrow \infty } , then the dimension tends to increase. 
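The definitions above can be made concrete with a short sketch: the graph surface function S(r) and the zeta function computed from breadth-first-search shortest-path distances on a small example graph. The cycle graph and all helper names below are illustrative, not taken from the cited literature.

```python
from collections import deque

def all_pairs_distances(adj):
    """Shortest-path distances r_ij via breadth-first search from every node.

    adj maps each node to an iterable of its neighbours."""
    dists = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        dists[src] = dist
    return dists

def zeta_g(adj, alpha):
    """Complex network zeta function: (1/N) * sum_i sum_{j != i} r_ij**(-alpha)."""
    dists = all_pairs_distances(adj)
    return sum(d ** -alpha
               for dist in dists.values()
               for d in dist.values() if d > 0) / len(adj)

def surface(adj, r):
    """Graph surface function S(r): average number of nodes at exactly distance r."""
    dists = all_pairs_distances(adj)
    return sum(1 for dist in dists.values()
               for d in dist.values() if d == r) / len(adj)

# A 6-node cycle: every node sees S(1) = S(2) = 2 and S(3) = 1.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

assert zeta_g(cycle, 0) == 5.0                      # zeta_G(0) = N - 1
assert abs(zeta_g(cycle, 60) - 2.0) < 1e-9          # large alpha -> average degree <k> = 2
# Dirichlet series form: zeta_G(alpha) = sum_r S(r) / r**alpha
assert abs(zeta_g(cycle, 2)
           - sum(surface(cycle, r) / r ** 2 for r in (1, 2, 3))) < 1e-12
```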
For regular discrete d-dimensional lattices Z d {\displaystyle \textstyle \mathbf {Z} ^{d}} with distance defined using the L 1 {\displaystyle \textstyle L^{1}} norm ‖ n → ‖ 1 = ‖ n 1 ‖ + ⋯ + ‖ n d ‖ , {\displaystyle \|{\vec {n}}\|_{1}=\|n_{1}\|+\cdots +\|n_{d}\|,} the transition occurs at α = d {\displaystyle \textstyle \alpha =d} . The definition of dimension using the complex network zeta function satisfies properties like monotonicity (a subset has a lower or the same dimension as its containing set), stability (a union of sets has the maximum dimension of the component sets forming the union) and Lipschitz invariance, provided the operations involved change the distances between nodes only by finite amounts as the graph size N {\displaystyle \textstyle N} goes to ∞ {\displaystyle \textstyle \infty } . Algorithms to calculate the complex network zeta function have been presented. == Values for discrete regular lattices == For a one-dimensional regular lattice the graph surface function S 1 ( r ) {\displaystyle \textstyle S_{1}(r)} is exactly two for all values of r {\displaystyle \textstyle r} (there are two nearest neighbours, two next-nearest neighbours, and so on). Thus, the complex network zeta function ζ G ( α ) {\displaystyle \textstyle \zeta _{G}(\alpha )} is equal to 2 ζ ( α ) {\displaystyle \textstyle 2\zeta (\alpha )} , where ζ ( α ) {\displaystyle \textstyle \zeta (\alpha )} is the usual Riemann zeta function. By choosing a given axis of the lattice and summing over cross-sections for the allowed range of distances along the chosen axis the recursion relation below can be derived S d + 1 ( r ) = 2 + S d ( r ) + 2 ∑ i = 1 r − 1 S d ( i ) . {\displaystyle S_{d+1}(r)=2+S_{d}(r)+2\sum _{i=1}^{r-1}S_{d}(i).} From combinatorics the surface function for a regular lattice can be written as S d ( r ) = ∑ i = 0 d − 1 ( − 1 ) i 2 d − i ( d i ) ( d + r − i − 1 d − i − 1 ) . 
{\displaystyle S_{d}(r)=\sum _{i=0}^{d-1}(-1)^{i}2^{d-i}{d \choose i}{d+r-i-1 \choose d-i-1}.} The following expression for the sum of positive integers raised to a given power k {\displaystyle \textstyle k} will be useful to calculate the surface function for higher values of d {\displaystyle \textstyle d} : ∑ i = 1 r i k = r k + 1 ( k + 1 ) + r k 2 + ∑ j = 1 ⌊ ( k + 1 ) / 2 ⌋ ( − 1 ) j + 1 2 ζ ( 2 j ) k ! r k + 1 − 2 j ( 2 π ) 2 j ( k + 1 − 2 j ) ! . {\displaystyle \sum _{i=1}^{r}i^{k}={\frac {r^{k+1}}{(k+1)}}+{\frac {r^{k}}{2}}+\sum _{j=1}^{\lfloor (k+1)/2\rfloor }{\frac {(-1)^{j+1}2\zeta (2j)k!r^{k+1-2j}}{(2\pi )^{2j}(k+1-2j)!}}.} Another formula for the sum of positive integers raised to a given power k {\displaystyle \textstyle k} is ∑ k = 1 n ( n + 1 k ) ∑ i = 1 r i k = ( r + 1 ) ( ( r + 1 ) n − 1 ) . {\displaystyle \sum _{k=1}^{n}{\binom {n+1}{k}}\sum _{i=1}^{r}i^{k}=(r+1)((r+1)^{n}-1).} S d ( r ) → O ( 2 d r d − 1 / Γ ( d ) ) {\displaystyle \textstyle S_{d}(r)\rightarrow O(2^{d}r^{d-1}/\Gamma (d))} as r → ∞ {\displaystyle \textstyle r\rightarrow \infty } . The complex network zeta function for some lattices is given below.
d = 1 {\displaystyle \textstyle d=1} : ζ G ( α ) = 2 ζ ( α ) {\displaystyle \textstyle \zeta _{G}(\alpha )=2\zeta (\alpha )} d = 2 {\displaystyle \textstyle d=2} : ζ G ( α ) = 4 ζ ( α − 1 ) {\displaystyle \textstyle \zeta _{G}(\alpha )=4\zeta (\alpha -1)} d = 3 {\displaystyle \textstyle d=3} : ζ G ( α ) = 4 ζ ( α − 2 ) + 2 ζ ( α ) {\displaystyle \textstyle \zeta _{G}(\alpha )=4\zeta (\alpha -2)+2\zeta (\alpha )} d = 4 {\displaystyle \textstyle d=4} : ζ G ( α ) = 8 3 ζ ( α − 3 ) + 16 3 ζ ( α − 1 ) {\displaystyle \textstyle \zeta _{G}(\alpha )={\frac {8}{3}}\zeta (\alpha -3)+{\frac {16}{3}}\zeta (\alpha -1)} r → ∞ {\displaystyle \textstyle r\rightarrow \infty } : ζ G ( α ) = 2 d ζ ( α − d + 1 ) / Γ ( d ) {\displaystyle \textstyle \zeta _{G}(\alpha )=2^{d}\zeta (\alpha -d+1)/\Gamma (d)} (for α {\displaystyle \alpha } near the transition point.) == Random graph zeta function == Random graphs are networks having some number N {\displaystyle \textstyle N} of vertices, in which each pair is connected with probability p {\displaystyle \textstyle p} , or else the pair is disconnected. Random graphs have a diameter of two with probability approaching one, in the infinite limit ( N → ∞ {\displaystyle \textstyle N\rightarrow \infty } ). To see this, consider two nodes A {\displaystyle \textstyle A} and B {\displaystyle \textstyle B} . For any node C {\displaystyle \textstyle C} different from A {\displaystyle \textstyle A} or B {\displaystyle \textstyle B} , the probability that C {\displaystyle \textstyle C} is not simultaneously connected to both A {\displaystyle \textstyle A} and B {\displaystyle \textstyle B} is ( 1 − p 2 ) {\displaystyle \textstyle (1-p^{2})} . Thus, the probability that none of the N − 2 {\displaystyle \textstyle N-2} nodes provides a path of length 2 {\displaystyle \textstyle 2} between nodes A {\displaystyle \textstyle A} and B {\displaystyle \textstyle B} is ( 1 − p 2 ) N − 2 {\displaystyle \textstyle (1-p^{2})^{N-2}} .
This goes to zero as the system size goes to infinity, and hence most random graphs have their nodes connected by paths of length at most 2 {\displaystyle \textstyle 2} . Also, the mean vertex degree will be p ( N − 1 ) {\displaystyle \textstyle p(N-1)} . For large random graphs almost all nodes are at a distance of one or two from any given node, S ( 1 ) {\displaystyle \textstyle S(1)} is p ( N − 1 ) {\displaystyle \textstyle p(N-1)} , S ( 2 ) {\displaystyle \textstyle S(2)} is ( N − 1 ) ( 1 − p ) {\displaystyle \textstyle (N-1)(1-p)} , and the graph zeta function is ζ G ( α ) = p ( N − 1 ) + ( N − 1 ) ( 1 − p ) 2 − α . {\displaystyle \zeta _{G}(\alpha )=p(N-1)+(N-1)(1-p)2^{-\alpha }.} == References ==
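As a consistency check (an illustrative sketch, not from the cited literature; all function names are invented here), the recursion for S_{d+1}(r), the combinatorial closed form for S_d(r), and the random-graph zeta formula above can be verified numerically:

```python
from math import comb

def surface_closed(d, r):
    """Combinatorial closed form for S_d(r) on the Z^d lattice with L1 distance."""
    return sum((-1) ** i * 2 ** (d - i) * comb(d, i) * comb(d + r - i - 1, d - i - 1)
               for i in range(d))

def surface_rec(d, r):
    """S_{d+1}(r) = 2 + S_d(r) + 2 * sum_{i=1}^{r-1} S_d(i), starting from S_1(r) = 2."""
    if d == 1:
        return 2
    return (2 + surface_rec(d - 1, r)
            + 2 * sum(surface_rec(d - 1, i) for i in range(1, r)))

def random_graph_zeta(n, p, alpha):
    """zeta_G(alpha) = p(N-1) + (N-1)(1-p) * 2**(-alpha) for a large random graph."""
    return p * (n - 1) + (n - 1) * (1 - p) * 2 ** -alpha

# Recursion and closed form agree; in particular S_2(r) = 4r and S_3(r) = 4r^2 + 2.
assert all(surface_rec(d, r) == surface_closed(d, r)
           for d in range(1, 5) for r in range(1, 8))
assert all(surface_closed(2, r) == 4 * r for r in range(1, 8))
assert all(surface_closed(3, r) == 4 * r * r + 2 for r in range(1, 8))

# As alpha grows, the random-graph zeta tends to the mean degree p(N-1).
assert abs(random_graph_zeta(100, 0.3, 60) - 0.3 * 99) < 1e-12
```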
|
Wikipedia:Complex number#0
|
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i 2 = − 1 {\displaystyle i^{2}=-1} ; every complex number can be expressed in the form a + b i {\displaystyle a+bi} , where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + b i {\displaystyle a+bi} , a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols C {\displaystyle \mathbb {C} } or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world. Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation ( x + 1 ) 2 = − 9 {\displaystyle (x+1)^{2}=-9} has no real solution, because the square of a real number cannot be negative, but has the two nonreal complex solutions − 1 + 3 i {\displaystyle -1+3i} and − 1 − 3 i {\displaystyle -1-3i} . Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule i 2 = − 1 {\displaystyle i^{2}=-1} along with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field with the real numbers as a subfield. Because of these properties, a + b i = a + i b {\displaystyle a+bi=a+ib} , and which form is written depends upon convention and style considerations. 
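The quadratic example above can be checked with Python's built-in complex type, which writes the imaginary unit as j; the values are exactly those quoted in the text, and this is only an illustrative check:

```python
# i**2 == -1 for the imaginary unit (written 1j in Python).
assert 1j * 1j == -1

# The two nonreal solutions of (x + 1)**2 == -9 given above.
for x in (-1 + 3j, -1 - 3j):
    assert (x + 1) ** 2 == -9
```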
The complex numbers also form a real vector space of dimension two, with { 1 , i } {\displaystyle \{1,i\}} as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely some geometric objects and operations can be expressed in terms of complex numbers. For example, the real numbers form the real line, which is pictured as the horizontal axis of the complex plane, while real multiples of i {\displaystyle i} are the vertical axis. A complex number can also be defined by its geometric polar coordinates: the radius is called the absolute value of the complex number, while the angle from the positive real axis is called the argument of the complex number. The complex numbers of absolute value one form the unit circle. Adding a fixed complex number to all complex numbers defines a translation in the complex plane, and multiplying by a fixed complex number is a similarity centered at the origin (dilating by the absolute value, and rotating by the argument). The operation of complex conjugation is the reflection symmetry with respect to the real axis. The complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two. == Definition and basic operations == A complex number is an expression of the form a + bi, where a and b are real numbers, and i is an abstract symbol, the so-called imaginary unit, whose meaning will be explained further below. For example, 2 + 3i is a complex number. For a complex number a + bi, the real number a is called its real part, and the real number b (not the complex number bi) is its imaginary part. 
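A minimal sketch of these notions using Python's complex type (the particular numbers are illustrative):

```python
z = 2 + 3j                      # the complex number 2 + 3i
assert z.real == 2.0            # real part a
assert z.imag == 3.0            # imaginary part b (a real number, not 3i)

# A real number is a complex number whose imaginary part is 0.
a = complex(5)                  # 5 + 0i
assert a.imag == 0.0 and a == 5
```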
The real part of a complex number z is denoted Re(z), R e ( z ) {\displaystyle {\mathcal {Re}}(z)} , or R ( z ) {\displaystyle {\mathfrak {R}}(z)} ; the imaginary part is Im(z), I m ( z ) {\displaystyle {\mathcal {Im}}(z)} , or I ( z ) {\displaystyle {\mathfrak {I}}(z)} : for example, Re ( 2 + 3 i ) = 2 {\textstyle \operatorname {Re} (2+3i)=2} , Im ( 2 + 3 i ) = 3 {\displaystyle \operatorname {Im} (2+3i)=3} . A complex number z can be identified with the ordered pair of real numbers ( ℜ ( z ) , ℑ ( z ) ) {\displaystyle (\Re (z),\Im (z))} , which may be interpreted as coordinates of a point in a Euclidean plane with standard coordinates, which is then called the complex plane or Argand diagram. The horizontal axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical axis, with increasing values upwards. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi, whose real part is zero. It is common to write a + 0i = a, 0 + bi = bi, and a + (−b)i = a − bi; for example, 3 + (−4)i = 3 − 4i. The set of all complex numbers is denoted by C {\displaystyle \mathbb {C} } (blackboard bold) or C (upright bold). In some disciplines such as electromagnetism and electrical engineering, j is used instead of i, as i frequently represents electric current, and complex numbers are written as a + bj or a + jb. === Addition and subtraction === Two complex numbers a = x + y i {\displaystyle a=x+yi} and b = u + v i {\displaystyle b=u+vi} are added by separately adding their real and imaginary parts. That is to say: a + b = ( x + y i ) + ( u + v i ) = ( x + u ) + ( y + v ) i . {\displaystyle a+b=(x+yi)+(u+vi)=(x+u)+(y+v)i.} Similarly, subtraction can be performed as a − b = ( x + y i ) − ( u + v i ) = ( x − u ) + ( y − v ) i . 
{\displaystyle a-b=(x+yi)-(u+vi)=(x-u)+(y-v)i.} The addition can be geometrically visualized as follows: the sum of two complex numbers a and b, interpreted as points in the complex plane, is the point obtained by building a parallelogram from the three vertices O, and the points of the arrows labeled a and b (provided that they are not on a line). Equivalently, calling these points A, B, respectively, and the fourth point of the parallelogram X, the triangles OAB and XBA are congruent. === Multiplication === The product of two complex numbers is computed as follows: ( a + b i ) ⋅ ( c + d i ) = a c − b d + ( a d + b c ) i . {\displaystyle (a+bi)\cdot (c+di)=ac-bd+(ad+bc)i.} For example, ( 3 + 2 i ) ( 4 − i ) = 3 ⋅ 4 − ( 2 ⋅ ( − 1 ) ) + ( 3 ⋅ ( − 1 ) + 2 ⋅ 4 ) i = 14 + 5 i . {\displaystyle (3+2i)(4-i)=3\cdot 4-(2\cdot (-1))+(3\cdot (-1)+2\cdot 4)i=14+5i.} In particular, this includes as a special case the fundamental formula i 2 = i ⋅ i = − 1. {\displaystyle i^{2}=i\cdot i=-1.} This formula distinguishes the complex number i from any real number, since the square of any (negative or positive) real number is always a non-negative real number. With this definition of multiplication and addition, familiar rules for the arithmetic of rational or real numbers continue to hold for complex numbers. More precisely, the distributive property and the commutative properties (of addition and multiplication) hold. Therefore, the complex numbers form an algebraic structure known as a field, the same way as the rational or real numbers do. === Complex conjugate, absolute value, argument and division === The complex conjugate of the complex number z = x + yi is defined as z ¯ = x − y i . {\displaystyle {\overline {z}}=x-yi.} Some authors also denote it by z ∗ {\displaystyle z^{*}} . Geometrically, z ¯ {\displaystyle {\overline {z}}} is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: z ¯ ¯ = z .
{\displaystyle {\overline {\overline {z}}}=z.} A complex number is real if and only if it equals its own conjugate. The unary operation of taking the complex conjugate of a complex number cannot be expressed by applying only the basic operations of addition, subtraction, multiplication and division. For any complex number z = x + yi , the product z ⋅ z ¯ = ( x + i y ) ( x − i y ) = x 2 + y 2 {\displaystyle z\cdot {\overline {z}}=(x+iy)(x-iy)=x^{2}+y^{2}} is a non-negative real number. This allows one to define the absolute value (or modulus or magnitude) of z to be the square root | z | = x 2 + y 2 . {\displaystyle |z|={\sqrt {x^{2}+y^{2}}}.} By Pythagoras' theorem, | z | {\displaystyle |z|} is the distance from the origin to the point representing the complex number z in the complex plane. In particular, the circle of radius one around the origin consists precisely of the numbers z such that | z | = 1 {\displaystyle |z|=1} . If z = x = x + 0 i {\displaystyle z=x=x+0i} is a real number, then | z | = | x | {\displaystyle |z|=|x|} : its absolute value as a complex number and as a real number are equal. Using the conjugate, the reciprocal of a nonzero complex number z = x + y i {\displaystyle z=x+yi} can be computed to be 1 / z = z ¯ / z z ¯ = z ¯ / | z | 2 = ( x − y i ) / ( x 2 + y 2 ) = x x 2 + y 2 − y x 2 + y 2 i . {\displaystyle {\frac {1}{z}}={\frac {\bar {z}}{z{\bar {z}}}}={\frac {\bar {z}}{|z|^{2}}}={\frac {x-yi}{x^{2}+y^{2}}}={\frac {x}{x^{2}+y^{2}}}-{\frac {y}{x^{2}+y^{2}}}i.} More generally, the division of an arbitrary complex number w = u + v i {\displaystyle w=u+vi} by a non-zero complex number z = x + y i {\displaystyle z=x+yi} equals w / z = w z ¯ / | z | 2 = ( u + v i ) ( x − i y ) x 2 + y 2 = u x + v y x 2 + y 2 + v x − u y x 2 + y 2 i .
{\displaystyle {\frac {w}{z}}={\frac {w{\bar {z}}}{|z|^{2}}}={\frac {(u+vi)(x-iy)}{x^{2}+y^{2}}}={\frac {ux+vy}{x^{2}+y^{2}}}+{\frac {vx-uy}{x^{2}+y^{2}}}i.} This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression may be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator. The argument of z (sometimes called the "phase" φ) is the angle of the radius Oz with the positive real axis, and is written as arg z, expressed in radians in this article. The angle is defined only up to adding integer multiples of 2 π {\displaystyle 2\pi } , since a rotation by 2 π {\displaystyle 2\pi } (or 360°) around the origin leaves all points in the complex plane unchanged. One possible choice to uniquely specify the argument is to require it to be within the interval ( − π , π ] {\displaystyle (-\pi ,\pi ]} , which is referred to as the principal value. The argument can be computed from the rectangular form x + yi by means of the arctan (inverse tangent) function. === Polar form === For any complex number z, with absolute value r = | z | {\displaystyle r=|z|} and argument φ {\displaystyle \varphi } , the equation z = r ( cos φ + i sin φ ) {\displaystyle z=r(\cos \varphi +i\sin \varphi )} holds. This identity is referred to as the polar form of z. It is sometimes abbreviated as z = r c i s φ {\textstyle z=r\operatorname {\mathrm {cis} } \varphi } . In electronics, one represents a phasor with amplitude r and phase φ in angle notation: z = r ∠ φ . {\displaystyle z=r\angle \varphi .} If two complex numbers are given in polar form, i.e., z1 = r1(cos φ1 + i sin φ1) and z2 = r2(cos φ2 + i sin φ2), the product and division can be computed as z 1 z 2 = r 1 r 2 ( cos ( φ 1 + φ 2 ) + i sin ( φ 1 + φ 2 ) ) . 
{\displaystyle z_{1}z_{2}=r_{1}r_{2}(\cos(\varphi _{1}+\varphi _{2})+i\sin(\varphi _{1}+\varphi _{2})).} z 1 / z 2 = r 1 / r 2 ( cos ( φ 1 − φ 2 ) + i sin ( φ 1 − φ 2 ) ) , if z 2 ≠ 0. {\displaystyle {\frac {z_{1}}{z_{2}}}={\frac {r_{1}}{r_{2}}}\left(\cos(\varphi _{1}-\varphi _{2})+i\sin(\varphi _{1}-\varphi _{2})\right),{\text{if }}z_{2}\neq 0.} (These are a consequence of the trigonometric identities for the sine and cosine function.) In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. The picture at the right illustrates the multiplication of ( 2 + i ) ( 3 + i ) = 5 + 5 i . {\displaystyle (2+i)(3+i)=5+5i.} Because the real and imaginary parts of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula π 4 = arctan ( 1 2 ) + arctan ( 1 3 ) {\displaystyle {\frac {\pi }{4}}=\arctan \left({\frac {1}{2}}\right)+\arctan \left({\frac {1}{3}}\right)} holds. As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of π: π 4 = 4 arctan ( 1 5 ) − arctan ( 1 239 ) {\displaystyle {\frac {\pi }{4}}=4\arctan \left({\frac {1}{5}}\right)-\arctan \left({\frac {1}{239}}\right)} === Powers and roots === The n-th power of a complex number can be computed using de Moivre's formula, which is obtained by repeatedly applying the above formula for the product: z n = z ⋅ ⋯ ⋅ z ⏟ n factors = ( r ( cos φ + i sin φ ) ) n = r n ( cos n φ + i sin n φ ) .
{\displaystyle z^{n}=\underbrace {z\cdot \dots \cdot z} _{n{\text{ factors}}}=(r(\cos \varphi +i\sin \varphi ))^{n}=r^{n}\,(\cos n\varphi +i\sin n\varphi ).} For example, the first few powers of the imaginary unit i are i , i 2 = − 1 , i 3 = − i , i 4 = 1 , i 5 = i , … {\displaystyle i,i^{2}=-1,i^{3}=-i,i^{4}=1,i^{5}=i,\dots } . The n nth roots of a complex number z are given by z 1 / n = r n ( cos ( φ + 2 k π n ) + i sin ( φ + 2 k π n ) ) {\displaystyle z^{1/n}={\sqrt[{n}]{r}}\left(\cos \left({\frac {\varphi +2k\pi }{n}}\right)+i\sin \left({\frac {\varphi +2k\pi }{n}}\right)\right)} for 0 ≤ k ≤ n − 1. (Here r n {\displaystyle {\sqrt[{n}]{r}}} is the usual (positive) nth root of the positive real number r.) Because sine and cosine are periodic, other integer values of k do not give other values. For any z ≠ 0 {\displaystyle z\neq 0} , there are, in particular, n distinct complex n-th roots. For example, there are 4 fourth roots of 1, namely z 1 = 1 , z 2 = i , z 3 = − 1 , z 4 = − i . {\displaystyle z_{1}=1,z_{2}=i,z_{3}=-1,z_{4}=-i.} In general, there is no natural way of distinguishing one particular complex nth root of a complex number. (This is in contrast to the roots of a positive real number x, which has a unique positive real n-th root, which is therefore commonly referred to as the n-th root of x.) One refers to this situation by saying that the nth root is an n-valued function of z. === Fundamental theorem of algebra === The fundamental theorem of algebra, of Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) a0, ..., an, the equation a n z n + ⋯ + a 1 z + a 0 = 0 {\displaystyle a_{n}z^{n}+\dotsb +a_{1}z+a_{0}=0} has at least one complex solution z, provided that at least one of the higher coefficients a1, ..., an is nonzero.
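The n-th root formula above can be sketched directly from the polar form; the function name and tolerance below are illustrative choices, not standard library API:

```python
import cmath
from math import cos, sin, pi

def nth_roots(z, n):
    """All n distinct n-th roots of a nonzero complex number z, via its polar form."""
    r, phi = cmath.polar(z)                     # z = r(cos(phi) + i*sin(phi))
    return [r ** (1 / n) * complex(cos((phi + 2 * k * pi) / n),
                                   sin((phi + 2 * k * pi) / n))
            for k in range(n)]

# The 4 fourth roots of 1 are 1, i, -1, -i (up to floating-point rounding).
roots = nth_roots(1, 4)
assert len(roots) == 4
for w, expected in zip(roots, (1, 1j, -1, -1j)):
    assert abs(w - expected) < 1e-12
    assert abs(w ** 4 - 1) < 1e-12
```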
This property does not hold for the field of rational numbers Q {\displaystyle \mathbb {Q} } (the polynomial x2 − 2 does not have a rational root, because √2 is not a rational number) nor the real numbers R {\displaystyle \mathbb {R} } (the polynomial x2 + 4 does not have a real root, because the square of x is non-negative for any real number x, so x2 + 4 is always at least 4). Because of this fact, C {\displaystyle \mathbb {C} } is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below. There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root. == History == The solution in radicals (without trigonometric functions) of a general cubic equation, when all three of its roots are real numbers, contains the square roots of negative numbers, a situation that cannot be rectified by factoring aided by the rational root test, if the cubic is irreducible; this is the so-called casus irreducibilis ("irreducible case"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545 in his Ars Magna, though his understanding was rudimentary; moreover, he later described complex numbers as being "as subtle as they are useless". Cardano did use imaginary numbers, but described using them as "mental torture." This was prior to the use of the graphical complex plane. Cardano and other Italian mathematicians, notably Scipione del Ferro, in the 1500s created an algorithm for solving cubic equations which generally had one real solution and two solutions containing an imaginary number. Because they ignored the answers with the imaginary numbers, Cardano found them useless.
Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions. The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considered, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term 81 − 144 {\displaystyle {\sqrt {81-144}}} in his calculations, which today would simplify to − 63 = 3 i 7 {\displaystyle {\sqrt {-63}}=3i{\sqrt {7}}} . Negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced the negative value by its positive 144 − 81 = 3 7 . {\displaystyle {\sqrt {144-81}}=3{\sqrt {7}}.} The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (Niccolò Fontana Tartaglia and Gerolamo Cardano). It was soon realized (but proved much later) that these formulas, even if one were interested only in real solutions, sometimes required the manipulation of square roots of negative numbers. In fact, it was proved later that the use of complex numbers is unavoidable when all three roots are real and distinct. 
However, the general formula can still be used in this case, with some care to deal with the ambiguity resulting from the existence of three cubic roots for nonzero complex numbers. Rafael Bombelli was the first to address explicitly these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic, trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, who was at pains to stress their unreal nature: ... sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine. [... quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine.] A further source of confusion was that the equation − 1 2 = − 1 − 1 = − 1 {\displaystyle {\sqrt {-1}}^{2}={\sqrt {-1}}{\sqrt {-1}}=-1} seemed to be capriciously inconsistent with the algebraic identity a b = a b {\displaystyle {\sqrt {a}}{\sqrt {b}}={\sqrt {ab}}} , which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity in the case when both a and b are negative, and the related identity 1 a = 1 a {\textstyle {\frac {1}{\sqrt {a}}}={\sqrt {\frac {1}{a}}}} , even bedeviled Leonhard Euler. This difficulty eventually led to the convention of using the special symbol i in place of − 1 {\displaystyle {\sqrt {-1}}} to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout. 
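The inconsistency that confused early users (and even Euler) is easy to reproduce with the principal square root; this is an illustrative check with Python's cmath module, not a historical source:

```python
import cmath

# sqrt(-1) * sqrt(-1) = i * i = -1, but sqrt((-1) * (-1)) = sqrt(1) = 1:
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)
rhs = cmath.sqrt((-1) * (-1))
assert abs(lhs - (-1)) < 1e-15
assert abs(rhs - 1) < 1e-15
# So the identity sqrt(a)*sqrt(b) == sqrt(a*b) fails when both a and b are negative.
```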
In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be re-expressed by the following de Moivre's formula: ( cos θ + i sin θ ) n = cos n θ + i sin n θ . {\displaystyle (\cos \theta +i\sin \theta )^{n}=\cos n\theta +i\sin n\theta .} In 1748, Euler went further and obtained Euler's formula of complex analysis: e i θ = cos θ + i sin θ {\displaystyle e^{i\theta }=\cos \theta +i\sin \theta } by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities. The idea of a complex number as a point in the complex plane was first described by Danish–Norwegian mathematician Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's A Treatise of Algebra. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology: If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. 
Had one not called +1, −1, − 1 {\displaystyle {\sqrt {-1}}} positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness. At the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis. The English mathematician G.H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way" although mathematicians such as Norwegian Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise. Augustin-Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case. The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the direction factor, and r = a 2 + b 2 {\displaystyle r={\sqrt {a^{2}+b^{2}}}} the modulus; Cauchy (1821) called cos φ + i sin φ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for − 1 {\displaystyle {\sqrt {-1}}} , introduced the term complex number for a + bi, and called a2 + b2 the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others. Important work (including a systematization) in complex multivariate calculus was begun at the beginning of the 20th century. Important results were achieved by Wilhelm Wirtinger in 1927.
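De Moivre's formula and Euler's formula quoted in this history can be checked numerically; the angle and exponent below are arbitrary illustrative values:

```python
import cmath
from math import cos, sin

theta, n = 0.7, 5

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta)
assert abs(cmath.exp(1j * theta) - complex(cos(theta), sin(theta))) < 1e-12

# de Moivre's formula: (cos(theta) + i*sin(theta))**n = cos(n*theta) + i*sin(n*theta)
lhs = complex(cos(theta), sin(theta)) ** n
rhs = complex(cos(n * theta), sin(n * theta))
assert abs(lhs - rhs) < 1e-12
```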
== Abstract algebraic aspects == While the above low-level definitions, including the addition and multiplication, accurately describe the complex numbers, there are other, equivalent approaches that reveal the abstract algebraic structure of the complex numbers more immediately. === Construction as a quotient field === One approach to C {\displaystyle \mathbb {C} } is via polynomials, i.e., expressions of the form p ( X ) = a n X n + ⋯ + a 1 X + a 0 , {\displaystyle p(X)=a_{n}X^{n}+\dotsb +a_{1}X+a_{0},} where the coefficients a0, ..., an are real numbers. The set of all such polynomials is denoted by R [ X ] {\displaystyle \mathbb {R} [X]} . Since sums and products of polynomials are again polynomials, this set R [ X ] {\displaystyle \mathbb {R} [X]} forms a commutative ring, called the polynomial ring (over the reals). To every such polynomial p, one may assign the complex number p ( i ) = a n i n + ⋯ + a 1 i + a 0 {\displaystyle p(i)=a_{n}i^{n}+\dotsb +a_{1}i+a_{0}} , i.e., the value obtained by setting X = i {\displaystyle X=i} . This defines a function R [ X ] → C {\displaystyle \mathbb {R} [X]\to \mathbb {C} } This function is surjective since every complex number can be obtained in such a way: the evaluation of a linear polynomial a + b X {\displaystyle a+bX} at X = i {\displaystyle X=i} is a + b i {\displaystyle a+bi} . However, the evaluation of polynomial X 2 + 1 {\displaystyle X^{2}+1} at i is 0, since i 2 + 1 = 0. {\displaystyle i^{2}+1=0.} This polynomial is irreducible, i.e., cannot be written as a product of two linear polynomials. Basic facts of abstract algebra then imply that the kernel of the above map is an ideal generated by this polynomial, and that the quotient by this ideal is a field, and that there is an isomorphism R [ X ] / ( X 2 + 1 ) → ≅ C {\displaystyle \mathbb {R} [X]/(X^{2}+1){\stackrel {\cong }{\to }}\mathbb {C} } between the quotient ring and C {\displaystyle \mathbb {C} } . 
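The evaluation map R[X] → C described above can be sketched directly: a polynomial with real coefficients is evaluated at X = i, and X² + 1 is seen to lie in the kernel. A minimal Python illustration; the coefficient-list convention (constant term first) is an assumption of this sketch:

```python
# Evaluation map R[X] -> C: a real polynomial a0 + a1*X + ... + an*X^n
# is sent to its value at X = i.  Coefficients are listed from a0 upward.
def eval_at_i(coeffs):
    return sum(a * 1j ** k for k, a in enumerate(coeffs))

# X^2 + 1 evaluates to 0 at i, so it lies in the kernel of the map.
assert eval_at_i([1, 0, 1]) == 0

# The linear polynomial a + b*X is sent to a + b*i, so the map is surjective.
assert eval_at_i([3, 4]) == 3 + 4j
```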
Some authors take this as the definition of C {\displaystyle \mathbb {C} } . Accepting that C {\displaystyle \mathbb {C} } is algebraically closed, because it is an algebraic extension of R {\displaystyle \mathbb {R} } in this approach, C {\displaystyle \mathbb {C} } is therefore the algebraic closure of R . {\displaystyle \mathbb {R} .} === Matrix representation of complex numbers === Complex numbers a + bi can also be represented by 2 × 2 matrices that have the form ( a − b b a ) . {\displaystyle {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}.} Here the entries a and b are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of 2 × 2 matrices. A simple computation shows that the map a + i b ↦ ( a − b b a ) {\displaystyle a+ib\mapsto {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}} is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with the determinant of the corresponding matrix, and the conjugate of a complex number with the transpose of the matrix. The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector (x, y) corresponds to the multiplication of x + iy by a + ib. In particular, if the determinant is 1, there is a real number t such that the matrix has the form ( cos t − sin t sin t cos t ) . {\displaystyle {\begin{pmatrix}\cos t&-\sin t\\\sin t&\;\;\cos t\end{pmatrix}}.} In this case, the action of the matrix on vectors and the multiplication by the complex number cos t + i sin t {\displaystyle \cos t+i\sin t} are both the rotation of the angle t. 
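The correspondence between complex numbers and such matrices can be verified concretely; the following Python sketch (with arbitrary sample values) checks that the matrix product matches complex multiplication and that the determinant equals the squared absolute value:

```python
# The map a + bi -> [[a, -b], [b, a]] is a ring homomorphism: the matrix
# product corresponds to complex multiplication, the determinant to |z|^2.
def mat(a, b):
    return [[a, -b], [b, a]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 + 4j               # arbitrary sample values
p = z * w                           # (1+2i)(3+4i) = -5 + 10i
assert matmul(mat(1, 2), mat(3, 4)) == mat(p.real, p.imag)

# det [[a, -b], [b, a]] = a^2 + b^2 = z * conj(z) = |z|^2
m = mat(1, 2)
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
assert det == (z * z.conjugate()).real
```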
== Complex analysis == The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane. === Convergence === The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, C {\displaystyle \mathbb {C} } , endowed with the metric d ( z 1 , z 2 ) = | z 1 − z 2 | {\displaystyle \operatorname {d} (z_{1},z_{2})=|z_{1}-z_{2}|} is a complete metric space, which notably includes the triangle inequality | z 1 + z 2 | ≤ | z 1 | + | z 2 | {\displaystyle |z_{1}+z_{2}|\leq |z_{1}|+|z_{2}|} for any two complex numbers z1 and z2. === Complex exponential === Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp z, also written ez, is defined as the infinite series, which can be shown to converge for any z: exp z := 1 + z + z 2 2 ⋅ 1 + z 3 3 ⋅ 2 ⋅ 1 + ⋯ = ∑ n = 0 ∞ z n n ! . 
{\displaystyle \exp z:=1+z+{\frac {z^{2}}{2\cdot 1}}+{\frac {z^{3}}{3\cdot 2\cdot 1}}+\cdots =\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.} For example, exp ( 1 ) {\displaystyle \exp(1)} is Euler's number e ≈ 2.718 {\displaystyle e\approx 2.718} . Euler's formula states: exp ( i φ ) = cos φ + i sin φ {\displaystyle \exp(i\varphi )=\cos \varphi +i\sin \varphi } for any real number φ. This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, this includes Euler's identity exp ( i π ) = − 1. {\displaystyle \exp(i\pi )=-1.} === Complex logarithm === For any positive real number t, there is a unique real number x such that exp ( x ) = t {\displaystyle \exp(x)=t} . This leads to the definition of the natural logarithm as the inverse ln : R + → R ; x ↦ ln x {\displaystyle \ln \colon \mathbb {R} ^{+}\to \mathbb {R} ;x\mapsto \ln x} of the exponential function. The situation is different for complex numbers, since exp ( z + 2 π i ) = exp z exp ( 2 π i ) = exp z {\displaystyle \exp(z+2\pi i)=\exp z\exp(2\pi i)=\exp z} by the functional equation and Euler's identity. For example, eiπ = e3iπ = −1 , so both iπ and 3iπ are possible values for the complex logarithm of −1. In general, given any non-zero complex number w, any number z solving the equation exp z = w {\displaystyle \exp z=w} is called a complex logarithm of w, denoted log w {\displaystyle \log w} . It can be shown that these numbers satisfy z = log w = ln | w | + i arg w , {\displaystyle z=\log w=\ln |w|+i\arg w,} where arg {\displaystyle \arg } is the argument defined above, and ln {\displaystyle \ln } the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π]. 
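The series definition and Euler's identity can be illustrated numerically; in the following Python sketch, the truncation at 40 terms is an arbitrary choice sufficient for the sample argument:

```python
import cmath

# Partial sums of the defining series  sum z^n / n!  converge to exp(z).
def exp_series(z, terms=40):
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term          # add z^n / n!
        term *= z / (n + 1)    # next term: z^(n+1) / (n+1)!
    return total

z = 1 + 1j                     # arbitrary sample argument
assert abs(exp_series(z) - cmath.exp(z)) < 1e-12

# Euler's identity as the special case φ = π: exp(iπ) = -1.
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12
```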
This leads to the complex logarithm being a bijective function taking values in the strip R + + i ( − π , π ] {\displaystyle \mathbb {R} ^{+}+\;i\,\left(-\pi ,\pi \right]} (that is denoted S 0 {\displaystyle S_{0}} in the above illustration) ln : C × → R + + i ( − π , π ] . {\displaystyle \ln \colon \;\mathbb {C} ^{\times }\;\to \;\;\;\mathbb {R} ^{+}+\;i\,\left(-\pi ,\pi \right].} If z ∈ C ∖ ( − R ≥ 0 ) {\displaystyle z\in \mathbb {C} \setminus \left(-\mathbb {R} _{\geq 0}\right)} is not a non-positive real number (that is, z is positive or non-real), the resulting principal value of the complex logarithm is obtained with −π < φ < π. It is an analytic function outside the negative real numbers, but it cannot be extended to a function that is continuous at any negative real number z ∈ − R + {\displaystyle z\in -\mathbb {R} ^{+}} , where the principal value is ln z = ln(−z) + iπ. Complex exponentiation zω is defined as z ω = exp ( ω ln z ) , {\displaystyle z^{\omega }=\exp(\omega \ln z),} and is multi-valued, except when ω is an integer. For ω = 1 / n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. If z > 0 is real (and ω an arbitrary complex number), one has a preferred choice of ln x {\displaystyle \ln x} , the real logarithm, which can be used to define a preferred exponential function. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy a b c = ( a b ) c . {\displaystyle a^{bc}=\left(a^{b}\right)^{c}.} Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right. 
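The multivaluedness of the logarithm and the failure of the power identity can be observed with principal values, as computed by Python's cmath module and built-in exponentiation (a minimal illustration):

```python
import cmath

# Principal value: the imaginary part of log lies in (-pi, pi], so log(-1) = i*pi.
assert abs(cmath.log(-1) - 1j * cmath.pi) < 1e-12

# 3*i*pi is another logarithm of -1, since exp(3*i*pi) = -1.
assert abs(cmath.exp(3j * cmath.pi) + 1) < 1e-12

# Failure of a^(b*c) = (a^b)^c for principal values:
# (-1)^(2 * 1/2) = -1, but ((-1)^2)^(1/2) = 1.
a, b, c = -1, 2, 0.5
assert a ** (b * c) == -1.0
assert (a ** b) ** c == 1.0
```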
=== Complex sine and cosine === The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation. === Holomorphic functions === A function f : C {\displaystyle f:\mathbb {C} } → C {\displaystyle \mathbb {C} } is called holomorphic or complex differentiable at a point z 0 {\displaystyle z_{0}} if the limit lim z → z 0 f ( z ) − f ( z 0 ) z − z 0 {\displaystyle \lim _{z\to z_{0}}{f(z)-f(z_{0}) \over z-z_{0}}} exists (in which case it is denoted by f ′ ( z 0 ) {\displaystyle f'(z_{0})} ). This mimics the definition for real differentiable functions, except that all quantities are complex numbers. Loosely speaking, the freedom of approaching z 0 {\displaystyle z_{0}} in different directions imposes a much stronger condition than being (real) differentiable. For example, the function f ( z ) = z ¯ {\displaystyle f(z)={\overline {z}}} is differentiable as a function R 2 → R 2 {\displaystyle \mathbb {R} ^{2}\to \mathbb {R} ^{2}} , but is not complex differentiable. A real differentiable function is complex differentiable if and only if it satisfies the Cauchy–Riemann equations, which are sometimes abbreviated as ∂ f ∂ z ¯ = 0. {\displaystyle {\frac {\partial f}{\partial {\overline {z}}}}=0.} Complex analysis shows some features not apparent in real analysis. For example, the identity theorem asserts that two holomorphic functions f and g agree if they agree on an arbitrarily small open subset of C {\displaystyle \mathbb {C} } . 
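The failure of complex differentiability for f(z) = z̄ can be seen numerically: the difference quotient approaches different limits along the real and imaginary directions. A minimal Python sketch; the base point and step size are arbitrary sample values:

```python
# f(z) = conj(z): the difference quotient (f(z0+h) - f(z0)) / h equals
# conj(h)/h, which is +1 for real h and -1 for purely imaginary h,
# so no single limit exists and f is not complex differentiable.
def quotient(f, z0, h):
    return (f(z0 + h) - f(z0)) / h

f = lambda z: z.conjugate()
z0 = 1 + 1j                           # arbitrary base point
along_real = quotient(f, z0, 1e-8)    # approach along the real axis
along_imag = quotient(f, z0, 1e-8j)   # approach along the imaginary axis
assert abs(along_real - 1) < 1e-6
assert abs(along_imag + 1) < 1e-6
```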
Meromorphic functions, functions that can locally be written as f(z)/(z − z0)n with a holomorphic function f, still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0. == Applications == Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below. Complex conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is sought. === Geometry === ==== Shapes ==== Three non-collinear points u , v , w {\displaystyle u,v,w} in the plane determine the shape of the triangle { u , v , w } {\displaystyle \{u,v,w\}} . Locating the points in the complex plane, this shape of a triangle may be expressed by complex arithmetic as S ( u , v , w ) = u − w u − v . {\displaystyle S(u,v,w)={\frac {u-w}{u-v}}.} The shape S {\displaystyle S} of a triangle remains the same when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape and describing similarity. Thus each triangle { u , v , w } {\displaystyle \{u,v,w\}} is in a similarity class of triangles with the same shape. ==== Fractal geometry ==== The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined as the set of parameters c {\displaystyle c} for which the sequence obtained by iterating f c ( z ) = z 2 + c {\displaystyle f_{c}(z)=z^{2}+c} from z = 0 remains bounded. Julia sets are defined by the same rule, except that the parameter c {\displaystyle c} is held constant and the initial value z varies. 
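The invariance of the shape function S under similarity transformations z ↦ az + b can be checked directly; in the following Python sketch the triangle and the transformation are arbitrary sample choices:

```python
# The shape S(u, v, w) = (u - w)/(u - v) is invariant under any similarity
# z -> a*z + b with a != 0, since the factors a cancel in the quotient.
def shape(u, v, w):
    return (u - w) / (u - v)

u, v, w = 0j, 1 + 0j, 0.5 + 1j      # arbitrary non-collinear sample triangle
a, b = 2 - 1j, 3 + 4j               # arbitrary similarity transformation
before = shape(u, v, w)
after = shape(a * u + b, a * v + b, a * w + b)
assert abs(before - after) < 1e-12
```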
==== Triangles ==== Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem: Denote the triangle's vertices in the complex plane as a = xA + yAi, b = xB + yBi, and c = xC + yCi. Write the cubic equation ( x − a ) ( x − b ) ( x − c ) = 0 {\displaystyle (x-a)(x-b)(x-c)=0} , take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse. === Algebraic number theory === As mentioned above, any nonconstant polynomial equation (in complex coefficients) has a solution in C {\displaystyle \mathbb {C} } . A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to Q ¯ {\displaystyle {\overline {\mathbb {Q} }}} , the algebraic closure of Q {\displaystyle \mathbb {Q} } , which also contains all algebraic numbers, C {\displaystyle \mathbb {C} } has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem. Another example is the Gaussian integers; that is, numbers of the form x + iy, where x and y are integers, which can be used to classify sums of squares. 
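The recipe above can be sketched in code: the derivative of the cubic is a quadratic whose roots are the foci. In the following Python illustration, the equilateral case (vertices at the cube roots of unity) serves as a sanity check, since there both foci coincide at the centroid:

```python
import cmath

# Marden's theorem: with p(x) = (x - a)(x - b)(x - c), the foci of the
# Steiner inellipse are the roots of p'(x) = 3x^2 - 2(a+b+c)x + (ab+bc+ca).
def steiner_foci(a, b, c):
    s1, s2 = a + b + c, a * b + b * c + c * a
    disc = cmath.sqrt((2 * s1) ** 2 - 12 * s2)
    return (2 * s1 + disc) / 6, (2 * s1 - disc) / 6

# Sanity check: for an equilateral triangle (vertices at the cube roots of
# unity) both foci coincide at the centroid, here the origin.
a, b, c = (cmath.exp(2j * cmath.pi * k / 3) for k in range(3))
f1, f2 = steiner_foci(a, b, c)
assert abs(f1) < 1e-6 and abs(f2) < 1e-6
```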
=== Analytic number theory === Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function ζ(s) is related to the distribution of prime numbers. === Improper integrals === In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration. === Dynamic equations === In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = ert. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(t) = rt. === Linear algebra === Since C {\displaystyle \mathbb {C} } is algebraically closed, any non-empty complex square matrix has at least one (complex) eigenvalue. By comparison, real matrices do not always have real eigenvalues, for example rotation matrices (for rotations of the plane for angles other than 0° or 180°) leave no direction fixed, and therefore do not have any real eigenvalue. The existence of (complex) eigenvalues, and the ensuing existence of eigendecomposition is a useful tool for computing matrix powers and matrix exponentials. Complex numbers often generalize concepts originally conceived in the real numbers. For example, the conjugate transpose generalizes the transpose, hermitian matrices generalize symmetric matrices, and unitary matrices generalize orthogonal matrices. 
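The claim about rotation matrices can be made concrete: a plane rotation by angle t has trace 2 cos t and determinant 1, so its eigenvalues solve λ² − 2 cos(t) λ + 1 = 0 and equal e^{±it}. A minimal Python check; t is an arbitrary sample angle:

```python
import cmath
import math

# Eigenvalues of a rotation matrix via its characteristic polynomial
# lambda^2 - 2cos(t)*lambda + 1 = 0; non-real for t not a multiple of pi.
t = 0.9                                  # arbitrary sample angle
tr, det = 2 * math.cos(t), 1.0
disc = cmath.sqrt(tr ** 2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert abs(lam1 - cmath.exp(1j * t)) < 1e-12
assert abs(lam2 - cmath.exp(-1j * t)) < 1e-12
assert lam1.imag != 0                    # genuinely non-real
```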
=== In applied mathematics === ==== Control theory ==== In control theory, systems are often transformed from the time domain to the complex frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is important whether zeros and poles are in the left or right half planes, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are in the right half plane, it will be unstable; if all its poles are in the left half plane, it will be stable; if its poles lie on the imaginary axis, it will have marginal stability. If a system has zeros in the right half plane, it is a nonminimum phase system. ==== Signal analysis ==== Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the corresponding z is the amplitude and the argument arg z is the phase. If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form x ( t ) = Re { X ( t ) } {\displaystyle x(t)=\operatorname {Re} \{X(t)\}} and X ( t ) = A e i ω t = a e i ϕ e i ω t = a e i ( ω t + ϕ ) {\displaystyle X(t)=Ae^{i\omega t}=ae^{i\phi }e^{i\omega t}=ae^{i(\omega t+\phi )}} where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above. 
This use is also extended into digital signal processing and digital image processing, which use digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals. Another example, relevant to the two side bands of amplitude modulation of AM radio, is: cos ( ( ω + α ) t ) + cos ( ( ω − α ) t ) = Re ( e i ( ω + α ) t + e i ( ω − α ) t ) = Re ( ( e i α t + e − i α t ) ⋅ e i ω t ) = Re ( 2 cos ( α t ) ⋅ e i ω t ) = 2 cos ( α t ) ⋅ Re ( e i ω t ) = 2 cos ( α t ) ⋅ cos ( ω t ) . {\displaystyle {\begin{aligned}\cos((\omega +\alpha )t)+\cos \left((\omega -\alpha )t\right)&=\operatorname {Re} \left(e^{i(\omega +\alpha )t}+e^{i(\omega -\alpha )t}\right)\\&=\operatorname {Re} \left(\left(e^{i\alpha t}+e^{-i\alpha t}\right)\cdot e^{i\omega t}\right)\\&=\operatorname {Re} \left(2\cos(\alpha t)\cdot e^{i\omega t}\right)\\&=2\cos(\alpha t)\cdot \operatorname {Re} \left(e^{i\omega t}\right)\\&=2\cos(\alpha t)\cdot \cos \left(\omega t\right).\end{aligned}}} === In physics === ==== Electromagnetism and electrical engineering ==== In electrical engineering, the Fourier transform is used to analyze varying electric currents and voltages. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus. In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to denote instantaneous electric current. 
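The sideband identity can be verified numerically at sample points; in the following Python sketch the frequencies and sample times are arbitrary choices:

```python
import math

# cos((w+a)t) + cos((w-a)t) = 2 cos(a t) cos(w t): a carrier at angular
# frequency w with two sidebands at w ± a equals an amplitude-modulated cosine.
w, a = 5.0, 0.5                 # arbitrary sample angular frequencies
for k in range(10):
    t = 0.3 * k                 # arbitrary sample times
    lhs = math.cos((w + a) * t) + math.cos((w - a) * t)
    rhs = 2 * math.cos(a * t) * math.cos(w * t)
    assert abs(lhs - rhs) < 1e-12
```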
Because the voltage in an AC circuit is oscillating, it can be represented as V ( t ) = V 0 e j ω t = V 0 ( cos ω t + j sin ω t ) , {\displaystyle V(t)=V_{0}e^{j\omega t}=V_{0}\left(\cos \omega t+j\sin \omega t\right),} To obtain the measurable quantity, the real part is taken: v ( t ) = Re ( V ) = Re [ V 0 e j ω t ] = V 0 cos ω t . {\displaystyle v(t)=\operatorname {Re} (V)=\operatorname {Re} \left[V_{0}e^{j\omega t}\right]=V_{0}\cos \omega t.} The complex-valued signal V(t) is called the analytic representation of the real-valued, measurable signal v(t). ==== Fluid dynamics ==== In fluid dynamics, complex functions are used to describe potential flow in two dimensions. ==== Quantum mechanics ==== The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers. ==== Relativity ==== In special relativity and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity. == Characterizations, generalizations and related notions == === Algebraic characterization === The field C {\displaystyle \mathbb {C} } has the following three properties: First, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q {\displaystyle \mathbb {Q} } , the prime field of C , {\displaystyle \mathbb {C} ,} is the cardinality of the continuum. Third, it is algebraically closed (see above). 
It can be shown that any field having these properties is isomorphic (as a field) to C . {\displaystyle \mathbb {C} .} For example, the algebraic closure of the field Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields). Also, C {\displaystyle \mathbb {C} } is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C {\displaystyle \mathbb {C} } contains many proper subfields that are isomorphic to C {\displaystyle \mathbb {C} } . === Characterization as a topological field === The preceding characterization of C {\displaystyle \mathbb {C} } describes only the algebraic aspects of C . {\displaystyle \mathbb {C} .} That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C {\displaystyle \mathbb {C} } as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. C {\displaystyle \mathbb {C} } contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions: P is closed under addition, multiplication and taking inverses. If x and y are distinct elements of P, then either x − y or y − x is in P. If S is any nonempty subset of P, then S + P = x + P for some x in C . {\displaystyle \mathbb {C} .} Moreover, C {\displaystyle \mathbb {C} } has a nontrivial involutive automorphism x ↦ x* (namely the complex conjugation), such that x x* is in P for any nonzero x in C . 
{\displaystyle \mathbb {C} .} Any field F with these properties can be endowed with a topology by taking the sets B(x, p) = { y | p − (y − x)(y − x)* ∈ P } as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to C . {\displaystyle \mathbb {C} .} The only connected locally compact topological fields are R {\displaystyle \mathbb {R} } and C . {\displaystyle \mathbb {C} .} This gives another characterization of C {\displaystyle \mathbb {C} } as a topological field, because C {\displaystyle \mathbb {C} } can be distinguished from R {\displaystyle \mathbb {R} } because the nonzero complex numbers are connected, while the nonzero real numbers are not. === Other number systems === The process of extending the field R {\displaystyle \mathbb {R} } of reals to C {\displaystyle \mathbb {C} } is an instance of the Cayley–Dickson construction. Applying this construction iteratively to C {\displaystyle \mathbb {C} } then yields the quaternions, the octonions, the sedenions, and the trigintaduonions. This construction turns out to diminish the structural properties of the involved number systems. Unlike the reals, C {\displaystyle \mathbb {C} } is not an ordered field, that is to say, it is not possible to define a relation z1 < z2 that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so i2 = −1 precludes the existence of an ordering on C . {\displaystyle \mathbb {C} .} Passing from C {\displaystyle \mathbb {C} } to the quaternions H {\displaystyle \mathbb {H} } loses commutativity, while the octonions (additionally to not being commutative) fail to be associative. The reals, complex numbers, quaternions and octonions are all normed division algebras over R {\displaystyle \mathbb {R} } . By Hurwitz's theorem they are the only ones; the sedenions, the next step in the Cayley–Dickson construction, fail to have this structure. 
The Cayley–Dickson construction is closely related to the regular representation of C , {\displaystyle \mathbb {C} ,} thought of as an R {\displaystyle \mathbb {R} } -algebra (an R {\displaystyle \mathbb {R} } -vector space with a multiplication), with respect to the basis (1, i). This means the following: the R {\displaystyle \mathbb {R} } -linear map C → C z ↦ w z {\displaystyle {\begin{aligned}\mathbb {C} &\rightarrow \mathbb {C} \\z&\mapsto wz\end{aligned}}} for some fixed complex number w can be represented by a 2 × 2 matrix (once a basis has been chosen). With respect to the basis (1, i), this matrix is ( Re ( w ) − Im ( w ) Im ( w ) Re ( w ) ) , {\displaystyle {\begin{pmatrix}\operatorname {Re} (w)&-\operatorname {Im} (w)\\\operatorname {Im} (w)&\operatorname {Re} (w)\end{pmatrix}},} that is, the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C {\displaystyle \mathbb {C} } in the 2 × 2 real matrices, it is not the only one. Any matrix J = ( p q r − p ) , p 2 + q r + 1 = 0 {\displaystyle J={\begin{pmatrix}p&q\\r&-p\end{pmatrix}},\quad p^{2}+qr+1=0} has the property that its square is the negative of the identity matrix: J2 = −I. Then { z = a I + b J : a , b ∈ R } {\displaystyle \{z=aI+bJ:a,b\in \mathbb {R} \}} is also isomorphic to the field C , {\displaystyle \mathbb {C} ,} and gives an alternative complex structure on R 2 . {\displaystyle \mathbb {R} ^{2}.} This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize R , {\displaystyle \mathbb {R} ,} C , {\displaystyle \mathbb {C} ,} H , {\displaystyle \mathbb {H} ,} and O . {\displaystyle \mathbb {O} .} For example, this notion contains the split-complex numbers, which are elements of the ring R [ x ] / ( x 2 − 1 ) {\displaystyle \mathbb {R} [x]/(x^{2}-1)} (as opposed to R [ x ] / ( x 2 + 1 ) {\displaystyle \mathbb {R} [x]/(x^{2}+1)} for complex numbers). 
In this ring, the equation a2 = 1 has four solutions. The field R {\displaystyle \mathbb {R} } is the completion of Q , {\displaystyle \mathbb {Q} ,} the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q {\displaystyle \mathbb {Q} } lead to the fields Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers (for any prime number p), which are thereby analogous to R {\displaystyle \mathbb {R} } . There are no other nontrivial ways of completing Q {\displaystyle \mathbb {Q} } than R {\displaystyle \mathbb {R} } and Q p , {\displaystyle \mathbb {Q} _{p},} by Ostrowski's theorem. The algebraic closures Q p ¯ {\displaystyle {\overline {\mathbb {Q} _{p}}}} of Q p {\displaystyle \mathbb {Q} _{p}} still carry a norm, but (unlike C {\displaystyle \mathbb {C} } ) are not complete with respect to it. The completion C p {\displaystyle \mathbb {C} _{p}} of Q p ¯ {\displaystyle {\overline {\mathbb {Q} _{p}}}} turns out to be algebraically closed. By analogy, the field is called p-adic complex numbers. The fields R , {\displaystyle \mathbb {R} ,} Q p , {\displaystyle \mathbb {Q} _{p},} and their finite field extensions, including C , {\displaystyle \mathbb {C} ,} are called local fields. == See also == Analytic continuation Circular motion using complex numbers Complex-base system Complex coordinate space Complex geometry Geometry of numbers Dual-complex number Eisenstein integer Geometric algebra (which includes the complex plane as the 2-dimensional spinor subspace G 2 + {\displaystyle {\mathcal {G}}_{2}^{+}} ) Unit complex number == Notes == == References == === Historical references ===
Wikipedia:Complex quadratic polynomial#0
A complex quadratic polynomial is a quadratic polynomial whose coefficients and variable are complex numbers. == Properties == Quadratic polynomials have the following properties, regardless of the form: It is a unicritical polynomial, i.e. it has one finite critical point in the complex plane. The dynamical plane consists of at most two basins: the basin of infinity and the basin of the finite critical point (if the finite critical point does not escape). It can be postcritically finite, i.e. the orbit of the critical point can be finite, because the critical point is periodic or preperiodic. It is a unimodal function. It is a rational function. It is an entire function. == Forms == When the quadratic polynomial has only one variable (univariate), one can distinguish its four main forms: The general form: f ( x ) = a 2 x 2 + a 1 x + a 0 {\displaystyle f(x)=a_{2}x^{2}+a_{1}x+a_{0}} where a 2 ≠ 0 {\displaystyle a_{2}\neq 0} The factored form used for the logistic map: f r ( x ) = r x ( 1 − x ) {\displaystyle f_{r}(x)=rx(1-x)} The lambda form: f θ ( x ) = x 2 + λ x {\displaystyle f_{\theta }(x)=x^{2}+\lambda x} which has an indifferent fixed point with multiplier λ = e 2 π θ i {\displaystyle \lambda =e^{2\pi \theta i}} at the origin The monic and centered form: f c ( x ) = x 2 + c {\displaystyle f_{c}(x)=x^{2}+c} The monic and centered form has been studied extensively, and has the following properties: It is the simplest form of a nonlinear function with one coefficient (parameter). It is a centered polynomial (the sum of its critical points is zero). 
It is a binomial. The lambda form f λ ( z ) = z 2 + λ z {\displaystyle f_{\lambda }(z)=z^{2}+\lambda z} is: the simplest non-trivial perturbation of the unperturbed system z ↦ λ z {\displaystyle z\mapsto \lambda z} "the first family of dynamical systems in which explicit necessary and sufficient conditions are known for when a small divisor problem is stable" == Conjugation == === Between forms === Since f c ( x ) {\displaystyle f_{c}(x)} is affine conjugate to the general form of the quadratic polynomial, it is often used to study complex dynamics and to create images of the Mandelbrot, Julia and Fatou sets. To change from θ {\displaystyle \theta } to c {\displaystyle c} : c = c ( θ ) = e 2 π θ i 2 ( 1 − e 2 π θ i 2 ) . {\displaystyle c=c(\theta )={\frac {e^{2\pi \theta i}}{2}}\left(1-{\frac {e^{2\pi \theta i}}{2}}\right).} To change from r {\displaystyle r} to c {\displaystyle c} , the parameter transformation is c = c ( r ) = 1 − ( r − 1 ) 2 4 = − r 2 ( r − 2 2 ) {\displaystyle c=c(r)={\frac {1-(r-1)^{2}}{4}}=-{\frac {r}{2}}\left({\frac {r-2}{2}}\right)} and the transformation between the variables in z t + 1 = z t 2 + c {\displaystyle z_{t+1}=z_{t}^{2}+c} and x t + 1 = r x t ( 1 − x t ) {\displaystyle x_{t+1}=rx_{t}(1-x_{t})} is z = r ( 1 2 − x ) . {\displaystyle z=r\left({\frac {1}{2}}-x\right).} === With doubling map === There is a semi-conjugacy between the dyadic transformation (the doubling map) and the quadratic polynomial in the case c = –2. == Notation == === Iteration === Here f n {\displaystyle f^{n}} denotes the n-th iterate of the function f {\displaystyle f} : f c n ( z ) = f c 1 ( f c n − 1 ( z ) ) {\displaystyle f_{c}^{n}(z)=f_{c}^{1}(f_{c}^{n-1}(z))} so z n = f c n ( z 0 ) . {\displaystyle z_{n}=f_{c}^{n}(z_{0}).} Because of the possible confusion with exponentiation, some authors write f ∘ n {\displaystyle f^{\circ n}} for the n-th iterate of f {\displaystyle f} . 
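The conjugacy between the logistic and the monic and centered forms can be verified by iterating both maps in parallel and checking that the relation z = r(1/2 − x) is preserved; in the following Python sketch, the parameter r and the initial point are arbitrary sample values:

```python
# Conjugacy check: with z = r*(1/2 - x) and c = (1 - (r-1)**2)/4, iterating
# x -> r*x*(1-x) and z -> z*z + c in parallel preserves the relation.
r = 3.7                      # arbitrary sample parameter
c = (1 - (r - 1) ** 2) / 4
x = 0.2                      # arbitrary starting point in (0, 1)
z = r * (0.5 - x)
for _ in range(20):
    x = r * x * (1 - x)
    z = z * z + c
    assert abs(z - r * (0.5 - x)) < 1e-6
```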
=== Parameter === The monic and centered form f c ( x ) = x 2 + c {\displaystyle f_{c}(x)=x^{2}+c} can be marked by: the parameter c {\displaystyle c} the external angle θ {\displaystyle \theta } of the ray that lands at c in the Mandelbrot set on the parameter plane, or on the critical value z = c in the Julia set on the dynamical plane so : f c = f θ {\displaystyle f_{c}=f_{\theta }} c = c ( θ ) {\displaystyle c=c({\theta })} Examples: c is the landing point of the 1/6 external ray of the Mandelbrot set, and is z → z 2 + i {\displaystyle z\to z^{2}+i} (where i^2=-1) c is the landing point of the 5/14 external ray and is z → z 2 + c {\displaystyle z\to z^{2}+c} with c = − 1.23922555538957 + 0.412602181602004 ∗ i {\displaystyle c=-1.23922555538957+0.412602181602004*i} === Map === The monic and centered form, sometimes called the Douady-Hubbard family of quadratic polynomials, is typically used with variable z {\displaystyle z} and parameter c {\displaystyle c} : f c ( z ) = z 2 + c . {\displaystyle f_{c}(z)=z^{2}+c.} When it is used as an evolution function of the discrete nonlinear dynamical system z n + 1 = f c ( z n ) {\displaystyle z_{n+1}=f_{c}(z_{n})} it is named the quadratic map: f c : z → z 2 + c . {\displaystyle f_{c}:z\to z^{2}+c.} The Mandelbrot set is the set of values of the parameter c for which the initial condition z0 = 0 does not cause the iterates to diverge to infinity. == Critical items == === Critical points === ==== complex plane ==== A critical point of f c {\displaystyle f_{c}} is a point z c r {\displaystyle z_{cr}} on the dynamical plane such that the derivative vanishes: f c ′ ( z c r ) = 0. {\displaystyle f_{c}'(z_{cr})=0.} Since f c ′ ( z ) = d d z f c ( z ) = 2 z {\displaystyle f_{c}'(z)={\frac {d}{dz}}f_{c}(z)=2z} implies z c r = 0 , {\displaystyle z_{cr}=0,} we see that the only (finite) critical point of f c {\displaystyle f_{c}} is the point z c r = 0 {\displaystyle z_{cr}=0} .
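The definition of the Mandelbrot set above translates directly into the standard escape-time test: once |z| exceeds 2, the orbit of the critical point is guaranteed to diverge. A minimal sketch (the iteration limit is an arbitrary choice):

```python
def in_mandelbrot(c, max_iter=200):
    """Escape-time test: iterate z -> z*z + c from the critical point 0.
    If |z| ever exceeds 2, the orbit is guaranteed to diverge."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

assert in_mandelbrot(0)              # center of the main cardioid
assert in_mandelbrot(-1)             # center of the period-2 component
assert not in_mandelbrot(1)          # orbit 0, 1, 2, 5, 26, ... escapes
assert not in_mandelbrot(0.5 + 0.5j)
```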
z 0 {\displaystyle z_{0}} is an initial point for Mandelbrot set iteration. For the quadratic family f c ( z ) = z 2 + c {\displaystyle f_{c}(z)=z^{2}+c} the critical point z = 0 is the center of symmetry of the Julia set Jc, so it is a convex combination of two points in Jc. ==== Extended complex plane ==== On the Riemann sphere, a polynomial of degree d has 2d − 2 critical points. Here zero and infinity are critical points. === Critical value === A critical value z c v {\displaystyle z_{cv}} of f c {\displaystyle f_{c}} is the image of a critical point: z c v = f c ( z c r ) {\displaystyle z_{cv}=f_{c}(z_{cr})} Since z c r = 0 {\displaystyle z_{cr}=0} we have z c v = c {\displaystyle z_{cv}=c} So the parameter c {\displaystyle c} is the critical value of f c ( z ) {\displaystyle f_{c}(z)} . === Critical level curves === A critical level curve is a level curve which contains a critical point. It acts as a sort of skeleton of the dynamical plane. Example: level curves cross at a saddle point, which is a special type of critical point. === Critical limit set === The critical limit set is the set of forward orbits of all critical points. === Critical orbit === The forward orbit of a critical point is called a critical orbit. Critical orbits are very important because every attracting periodic orbit attracts a critical point, so studying the critical orbits helps us understand the dynamics in the Fatou set. z 0 = z c r = 0 {\displaystyle z_{0}=z_{cr}=0} z 1 = f c ( z 0 ) = c {\displaystyle z_{1}=f_{c}(z_{0})=c} z 2 = f c ( z 1 ) = c 2 + c {\displaystyle z_{2}=f_{c}(z_{1})=c^{2}+c} z 3 = f c ( z 2 ) = ( c 2 + c ) 2 + c {\displaystyle z_{3}=f_{c}(z_{2})=(c^{2}+c)^{2}+c} ⋮ {\displaystyle \ \vdots } This orbit falls into an attracting periodic cycle if one exists. === Critical sector === The critical sector is a sector of the dynamical plane containing the critical point.
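The critical orbit listed above is easy to tabulate in code. A small sketch: for c = −1 the critical point lies on a superattracting 2-cycle, and for c = i it is preperiodic (a Misiurewicz point):

```python
def critical_orbit(c, n):
    """First n points of the forward orbit of the critical point z = 0."""
    orbit, z = [], 0
    for _ in range(n):
        z = z * z + c
        orbit.append(z)
    return orbit

# c = -1: the critical point is periodic (superattracting 2-cycle {0, -1}).
assert critical_orbit(-1, 6) == [-1, 0, -1, 0, -1, 0]

# c = i: the critical point is preperiodic; after two steps the orbit
# cycles between -1+i and -i.
assert critical_orbit(1j, 6) == [1j, -1 + 1j, -1j, -1 + 1j, -1j, -1 + 1j]
```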
=== Critical set === The critical set is the set of critical points. === Critical polynomial === P n ( c ) = f c n ( z c r ) = f c n ( 0 ) {\displaystyle P_{n}(c)=f_{c}^{n}(z_{cr})=f_{c}^{n}(0)} so P 0 ( c ) = 0 {\displaystyle P_{0}(c)=0} P 1 ( c ) = c {\displaystyle P_{1}(c)=c} P 2 ( c ) = c 2 + c {\displaystyle P_{2}(c)=c^{2}+c} P 3 ( c ) = ( c 2 + c ) 2 + c {\displaystyle P_{3}(c)=(c^{2}+c)^{2}+c} These polynomials are used for: finding centers of Mandelbrot set components of period n; centers are roots of the n-th critical polynomial centers = { c : P n ( c ) = 0 } {\displaystyle {\text{centers}}=\{c:P_{n}(c)=0\}} finding roots of Mandelbrot set components of period n (local minimum of P n ( c ) {\displaystyle P_{n}(c)} ) Misiurewicz points M n , k = { c : P k ( c ) = P k + n ( c ) } {\displaystyle M_{n,k}=\{c:P_{k}(c)=P_{k+n}(c)\}} === Critical curves === Diagrams of critical polynomials are called critical curves. These curves create the skeleton (the dark lines) of a bifurcation diagram. == Spaces, planes == === 4D space === One can use the Julia-Mandelbrot 4-dimensional (4D) space for a global analysis of this dynamical system. In this space there are two basic types of 2D planes: the dynamical (dynamic) plane, f c {\displaystyle f_{c}} -plane or z-plane the parameter plane or c-plane There is also another plane used to analyze such dynamical systems, the w-plane: the conjugation plane, or model plane ==== 2D Parameter plane ==== The phase space of a quadratic map is called its parameter plane. Here: z 0 = z c r {\displaystyle z_{0}=z_{cr}} is constant and c {\displaystyle c} is variable. There is no dynamics here. It is only a set of parameter values. There are no orbits on the parameter plane.
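The critical polynomials P_n above can be evaluated by the same recursion, and centers of hyperbolic components located as their roots; a sketch (the period-3 center is the real root of c³ + 2c² + c + 1, quoted here numerically):

```python
def critical_poly(n, c):
    """P_n(c) = f_c^n(0), evaluated by iterating z -> z^2 + c."""
    z = 0
    for _ in range(n):
        z = z * z + c
    return z

# Roots of P_n are centers of hyperbolic components whose period divides n.
assert critical_poly(1, 0) == 0       # main cardioid center, c = 0
assert critical_poly(2, -1) == 0      # period-2 disk center, c = -1
# c ~ -1.7549 is the center of the real period-3 ("airplane") component:
assert abs(critical_poly(3, -1.7548776662466927)) < 1e-9
```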
The parameter plane consists of: The Mandelbrot set The bifurcation locus = boundary of Mandelbrot set with root points Bounded hyperbolic components of the Mandelbrot set = interior of Mandelbrot set with internal rays exterior of Mandelbrot set with external rays equipotential lines There are many different subtypes of the parameter plane. See also: the Boettcher map, which maps the exterior of the Mandelbrot set to the exterior of the unit disc, and the multiplier map, which maps the interior of a hyperbolic component of the Mandelbrot set to the interior of the unit disc. ==== 2D Dynamical plane ==== "The polynomial Pc maps each dynamical ray to another ray doubling the angle (which we measure in full turns, i.e. 0 = 1 = 2π rad = 360°), and the dynamical rays of any polynomial "look like straight rays" near infinity. This allows us to study the Mandelbrot and Julia sets combinatorially, replacing the dynamical plane by the unit circle, rays by angles, and the quadratic polynomial by the doubling modulo one map." (Virpi Kauko) On the dynamical plane one can find: The Julia set The Filled Julia set The Fatou set Orbits The dynamical plane consists of: Fatou set Julia set Here, c {\displaystyle c} is a constant and z {\displaystyle z} is a variable. The two-dimensional dynamical plane can be treated as a Poincaré cross-section of the three-dimensional space of a continuous dynamical system. Dynamical z-planes can be divided into two groups: f 0 {\displaystyle f_{0}} plane for c = 0 {\displaystyle c=0} (see complex squaring map) f c {\displaystyle f_{c}} planes (all other planes for c ≠ 0 {\displaystyle c\neq 0} ) === Riemann sphere === The Riemann sphere is the extended complex plane: the complex plane plus a point at infinity. == Derivatives == === First derivative with respect to c === On the parameter plane: c {\displaystyle c} is a variable z 0 = 0 {\displaystyle z_{0}=0} is constant The first derivative of f c n ( z 0 ) {\displaystyle f_{c}^{n}(z_{0})} with respect to c is z n ′ = d d c f c n ( z 0 ) .
{\displaystyle z_{n}'={\frac {d}{dc}}f_{c}^{n}(z_{0}).} This derivative can be found by iteration starting with z 0 ′ = d d c f c 0 ( z 0 ) = 1 {\displaystyle z_{0}'={\frac {d}{dc}}f_{c}^{0}(z_{0})=1} and then replacing at every consecutive step z n + 1 ′ = d d c f c n + 1 ( z 0 ) = 2 ⋅ f c n ( z ) ⋅ d d c f c n ( z 0 ) + 1 = 2 ⋅ z n ⋅ z n ′ + 1. {\displaystyle z_{n+1}'={\frac {d}{dc}}f_{c}^{n+1}(z_{0})=2\cdot {}f_{c}^{n}(z)\cdot {\frac {d}{dc}}f_{c}^{n}(z_{0})+1=2\cdot z_{n}\cdot z_{n}'+1.} This can easily be verified by using the chain rule for the derivative. This derivative is used in the distance estimation method for drawing a Mandelbrot set. === First derivative with respect to z === On the dynamical plane: z {\displaystyle z} is a variable; c {\displaystyle c} is a constant. At a fixed point z 0 {\displaystyle z_{0}} , f c ′ ( z 0 ) = d d z f c ( z 0 ) = 2 z 0 . {\displaystyle f_{c}'(z_{0})={\frac {d}{dz}}f_{c}(z_{0})=2z_{0}.} At a periodic point z0 of period p the first derivative of a function ( f c p ) ′ ( z 0 ) = d d z f c p ( z 0 ) = ∏ i = 0 p − 1 f c ′ ( z i ) = 2 p ∏ i = 0 p − 1 z i = λ {\displaystyle (f_{c}^{p})'(z_{0})={\frac {d}{dz}}f_{c}^{p}(z_{0})=\prod _{i=0}^{p-1}f_{c}'(z_{i})=2^{p}\prod _{i=0}^{p-1}z_{i}=\lambda } is often represented by λ {\displaystyle \lambda } and referred to as the multiplier or the Lyapunov characteristic number. Its logarithm is known as the Lyapunov exponent. Absolute value of multiplier is used to check the stability of periodic (also fixed) points. At a nonperiodic point, the derivative, denoted by z n ′ {\displaystyle z'_{n}} , can be found by iteration starting with z 0 ′ = 1 , {\displaystyle z'_{0}=1,} and then using z n ′ = 2 ∗ z n − 1 ∗ z n − 1 ′ . {\displaystyle z'_{n}=2*z_{n-1}*z'_{n-1}.} This derivative is used for computing the external distance to the Julia set. 
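Both recurrences above are used together in the well-known exterior distance estimate d ≈ |z_n| ln|z_n| / |z_n′| for the Mandelbrot set; a sketch (the escape radius and iteration limit are arbitrary choices, and since z_0 = 0 the initial value of the derivative does not matter):

```python
import math

def mandelbrot_distance(c, max_iter=500, escape=1e10):
    """Exterior distance estimate for the Mandelbrot set, combining the
    recurrences z_{n+1} = z_n^2 + c and z'_{n+1} = 2 z_n z'_n + 1
    (derivative with respect to c).  Returns 0.0 if the orbit of the
    critical point does not escape within max_iter steps."""
    z, dz = 0j, 0j
    for _ in range(max_iter):
        dz = 2 * z * dz + 1   # update derivative before z (uses old z_n)
        z = z * z + c
        if abs(z) > escape:
            r = abs(z)
            return r * math.log(r) / abs(dz)
    return 0.0

assert mandelbrot_distance(-1) == 0.0   # c = -1 lies inside the set
assert mandelbrot_distance(1.0) > 0     # c = 1 lies outside the set
```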
=== Schwarzian derivative === The Schwarzian derivative (SD for short) of f is: ( S f ) ( z ) = f ‴ ( z ) f ′ ( z ) − 3 2 ( f ″ ( z ) f ′ ( z ) ) 2 . {\displaystyle (Sf)(z)={\frac {f'''(z)}{f'(z)}}-{\frac {3}{2}}\left({\frac {f''(z)}{f'(z)}}\right)^{2}.} == See also == Misiurewicz point Periodic points of complex quadratic mappings Mandelbrot set Julia set Milnor–Thurston kneading theory Tent map Logistic map == References == == External links == Monica Nevins and Thomas D. Rogers, "Quadratic maps as dynamical systems on the p-adic numbers" Wolf Jung : Homeomorphisms on Edges of the Mandelbrot Set. Ph.D. thesis of 2002 More about Quadratic Maps : Quadratic Map
|
Wikipedia:Complex-base system#0
|
In arithmetic, a complex-base system is a positional numeral system whose radix is an imaginary (proposed by Donald Knuth in 1955) or complex number (proposed by S. Khmelnik in 1964 and Walter F. Penney in 1965). == In general == Let D {\displaystyle D} be an integral domain ⊂ C {\displaystyle \subset \mathbb {C} } , and | ⋅ | {\displaystyle |\cdot |} the (Archimedean) absolute value on it. A number X ∈ D {\displaystyle X\in D} in a positional number system is represented as an expansion X = ± ∑ ν x ν ρ ν , {\displaystyle X=\pm \sum _{\nu }^{}x_{\nu }\rho ^{\nu },} where the radix ρ ∈ D {\displaystyle \rho \in D} satisfies | ρ | > 1 {\displaystyle |\rho |>1} , the exponents ν {\displaystyle \nu } are integers, and the digits x ν {\displaystyle x_{\nu }} are taken from a finite digit set Z ⊂ D {\displaystyle Z\subset D} . The cardinality R := | Z | {\displaystyle R:=|Z|} is called the level of decomposition. A positional number system or coding system is a pair ⟨ ρ , Z ⟩ {\displaystyle \left\langle \rho ,Z\right\rangle } with radix ρ {\displaystyle \rho } and set of digits Z {\displaystyle Z} , and we write the standard set of digits with R {\displaystyle R} digits as Z R := { 0 , 1 , 2 , … , R − 1 } . {\displaystyle Z_{R}:=\{0,1,2,\dotsc ,{R-1}\}.} Desirable are coding systems with the features: Every number in D {\displaystyle D} , e. g. the integers Z {\displaystyle \mathbb {Z} } , the Gaussian integers Z [ i ] {\displaystyle \mathbb {Z} [\mathrm {i} ]} or the integers Z [ − 1 + i 7 2 ] {\displaystyle \mathbb {Z} [{\tfrac {-1+\mathrm {i} {\sqrt {7}}}{2}}]} , is uniquely representable as a finite code, possibly with a sign ±. Every number in the field of fractions K := Quot ( D ) {\displaystyle K:=\operatorname {Quot} (D)} , which possibly is completed for the metric given by | ⋅ | {\displaystyle |\cdot |} yielding K := R {\displaystyle K:=\mathbb {R} } or K := C {\displaystyle K:=\mathbb {C} } , is representable as an infinite series X {\displaystyle X} which converges under | ⋅ | {\displaystyle |\cdot |} for ν → − ∞ {\displaystyle \nu \to -\infty } , and the measure of the set of numbers with more than one representation is 0. The latter requires that the set Z {\displaystyle Z} be minimal, i.e.
R = | ρ | {\displaystyle R=|\rho |} for real numbers and R = | ρ | 2 {\displaystyle R=|\rho |^{2}} for complex numbers. == In the real numbers == In this notation our standard decimal coding scheme is denoted by ⟨ 10 , Z 10 ⟩ , {\displaystyle \left\langle 10,Z_{10}\right\rangle ,} the standard binary system is ⟨ 2 , Z 2 ⟩ , {\displaystyle \left\langle 2,Z_{2}\right\rangle ,} the negabinary system is ⟨ − 2 , Z 2 ⟩ , {\displaystyle \left\langle -2,Z_{2}\right\rangle ,} and the balanced ternary system is ⟨ 3 , { − 1 , 0 , 1 } ⟩ . {\displaystyle \left\langle 3,\{-1,0,1\}\right\rangle .} All these coding systems have the mentioned features for Z {\displaystyle \mathbb {Z} } and R {\displaystyle \mathbb {R} } , and the last two do not require a sign. == In the complex numbers == Well-known positional number systems for the complex numbers include the following ( i {\displaystyle \mathrm {i} } being the imaginary unit): ⟨ R , Z R ⟩ {\displaystyle \left\langle {\sqrt {R}},Z_{R}\right\rangle } , e.g. ⟨ ± i 2 , Z 2 ⟩ {\displaystyle \left\langle \pm \mathrm {i} {\sqrt {2}},Z_{2}\right\rangle } and ⟨ ± 2 i , Z 4 ⟩ {\displaystyle \left\langle \pm 2\mathrm {i} ,Z_{4}\right\rangle } , the quater-imaginary base, proposed by Donald Knuth in 1955. ⟨ 2 e ± π 2 i = ± i 2 , Z 2 ⟩ {\displaystyle \left\langle {\sqrt {2}}e^{\pm {\tfrac {\pi }{2}}\mathrm {i} }=\pm \mathrm {i} {\sqrt {2}},Z_{2}\right\rangle } and ⟨ 2 e ± 3 π 4 i = − 1 ± i , Z 2 ⟩ {\displaystyle \left\langle {\sqrt {2}}e^{\pm {\tfrac {3\pi }{4}}\mathrm {i} }=-1\pm \mathrm {i} ,Z_{2}\right\rangle } (see also the section Base −1 ± i below). ⟨ R e i φ , Z R ⟩ {\displaystyle \left\langle {\sqrt {R}}e^{\mathrm {i} \varphi },Z_{R}\right\rangle } , where φ = ± arccos ( − β / ( 2 R ) ) {\displaystyle \varphi =\pm \arccos {(-\beta /(2{\sqrt {R}}))}} , β < min ( R , 2 R ) {\displaystyle \beta <\min(R,2{\sqrt {R}})} and β {\displaystyle \beta _{}^{}} is a positive integer that can take multiple values at a given R {\displaystyle R} . 
For β = 1 {\displaystyle \beta =1} and R = 2 {\displaystyle R=2} this is the system ⟨ − 1 + i 7 2 , Z 2 ⟩ . {\displaystyle \left\langle {\tfrac {-1+\mathrm {i} {\sqrt {7}}}{2}},Z_{2}\right\rangle .} ⟨ 2 e π 3 i , A 4 := { 0 , 1 , e 2 π 3 i , e − 2 π 3 i } ⟩ {\displaystyle \left\langle 2e^{{\tfrac {\pi }{3}}\mathrm {i} },A_{4}:=\left\{0,1,e^{{\tfrac {2\pi }{3}}\mathrm {i} },e^{-{\tfrac {2\pi }{3}}\mathrm {i} }\right\}\right\rangle } . ⟨ − R , A R 2 ⟩ {\displaystyle \left\langle -R,A_{R}^{2}\right\rangle } , where the set A R 2 {\displaystyle A_{R}^{2}} consists of complex numbers r ν = α ν 1 + α ν 2 i {\displaystyle r_{\nu }=\alpha _{\nu }^{1}+\alpha _{\nu }^{2}\mathrm {i} } , and numbers α ν ∈ Z R {\displaystyle \alpha _{\nu }^{}\in Z_{R}} , e.g. ⟨ − 2 , { 0 , 1 , i , 1 + i } ⟩ . {\displaystyle \left\langle -2,\{0,1,\mathrm {i} ,1+\mathrm {i} \}\right\rangle .} ⟨ ρ = ρ 2 , Z 2 ⟩ {\displaystyle \left\langle \rho =\rho _{2},Z_{2}\right\rangle } , where ρ 2 = { ( − 2 ) ν 2 if ν even, ( − 2 ) ν − 1 2 i if ν odd. {\displaystyle \rho _{2}={\begin{cases}(-2)^{\tfrac {\nu }{2}}&{\text{if }}\nu {\text{ even,}}\\(-2)^{\tfrac {\nu -1}{2}}\mathrm {i} &{\text{if }}\nu {\text{ odd.}}\end{cases}}} == Binary systems == Binary coding systems of complex numbers, i.e. systems with the digits Z 2 = { 0 , 1 } {\displaystyle Z_{2}=\{0,1\}} , are of practical interest. Listed below are some coding systems ⟨ ρ , Z 2 ⟩ {\displaystyle \langle \rho ,Z_{2}\rangle } (all are special cases of the systems above) and resp. codes for the (decimal) numbers −1, 2, −2, i. The standard binary (which requires a sign, first line) and the "negabinary" systems (second line) are also listed for comparison. They do not have a genuine expansion for i. As in all positional number systems with an Archimedean absolute value, there are some numbers with multiple representations. Examples of such numbers are shown in the right column of the table. 
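Conversion into such systems chooses each digit by a divisibility condition and then divides the remainder by the radix. A sketch for negabinary (radix −2) and for base −1 + i on Gaussian integers (function names are ours; in base −1 + i, a + bi − d must be divisible by −1 + i, which forces d = (a + b) mod 2):

```python
def to_negabinary(n):
    """Digits of the integer n in radix -2, most significant first."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:             # Python's divmod may give a negative remainder
            n, r = n + 1, r + 2
        digits.append(str(r))
    return "".join(reversed(digits))

def to_base_minus_one_plus_i(a, b):
    """Digits of the Gaussian integer a + bi in base -1 + i, digits {0, 1}."""
    if (a, b) == (0, 0):
        return "0"
    digits = []
    while (a, b) != (0, 0):
        d = (a + b) & 1                       # parity forces the digit
        a -= d
        a, b = (b - a) // 2, -(a + b) // 2    # exact division by -1 + i
        digits.append(str(d))
    return "".join(reversed(digits))

assert to_negabinary(2) == "110"      # 4 - 2 + 0 = 2
assert to_negabinary(-2) == "10"
assert to_base_minus_one_plus_i(2, 0) == "1100"   # 2 = (-1+i)^3 + (-1+i)^2
```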
All of them are repeating fractions with the repetend marked by a horizontal line above it. If the set of digits is minimal, the set of such numbers has a measure of 0. This is the case with all the mentioned coding systems. The almost binary quater-imaginary system is listed in the bottom line for comparison purposes. There, real and imaginary part interleave each other. == Base −1 ± i == Of particular interest are the quater-imaginary base (base 2i) and the base −1 ± i systems discussed below, both of which can be used to finitely represent the Gaussian integers without sign. Base −1 ± i, using digits 0 and 1, was proposed by S. Khmelnik in 1964 and Walter F. Penney in 1965. === Connection to the twindragon === The rounding region of an integer – i.e., a set S {\displaystyle S} of complex (non-integer) numbers that share the integer part of their representation in this system – has in the complex plane a fractal shape: the twindragon (see figure). This set S {\displaystyle S} is, by definition, all points that can be written as ∑ k ≥ 1 x k ( i − 1 ) − k {\displaystyle \textstyle \sum _{k\geq 1}x_{k}(\mathrm {i} -1)^{-k}} with x k ∈ Z 2 {\displaystyle x_{k}\in Z_{2}} . S {\displaystyle S} can be decomposed into 16 pieces congruent to 1 4 S {\displaystyle {\tfrac {1}{4}}S} . Notice that if S {\displaystyle S} is rotated counterclockwise by 135°, we obtain two adjacent sets congruent to 1 2 S {\displaystyle {\tfrac {1}{\sqrt {2}}}S} , because ( i − 1 ) S = S ∪ ( S + 1 ) {\displaystyle (\mathrm {i} -1)S=S\cup (S+1)} . The rectangle R ⊂ S {\displaystyle R\subset S} in the center intersects the coordinate axes counterclockwise at the following points: 2 15 ← 0. 00001100 ¯ {\displaystyle {\tfrac {2}{15}}\gets 0.{\overline {00001100}}} , 1 15 i ← 0. 00000011 ¯ {\displaystyle {\tfrac {1}{15}}\mathrm {i} \gets 0.{\overline {00000011}}} , and − 8 15 ← 0. 11000000 ¯ {\displaystyle -{\tfrac {8}{15}}\gets 0.{\overline {11000000}}} , and − 4 15 i ← 0. 
00110000 ¯ {\displaystyle -{\tfrac {4}{15}}\mathrm {i} \gets 0.{\overline {00110000}}} . Thus, S {\displaystyle S} contains all complex numbers with absolute value ≤ 1/15. As a consequence, there is an injection of the complex rectangle [ − 8 15 , 2 15 ] × [ − 4 15 , 1 15 ] i {\displaystyle [-{\tfrac {8}{15}},{\tfrac {2}{15}}]\times [-{\tfrac {4}{15}},{\tfrac {1}{15}}]\mathrm {i} } into the interval [ 0 , 1 ) {\displaystyle [0,1)} of real numbers by mapping ∑ k ≥ 1 x k ( i − 1 ) − k ↦ ∑ k ≥ 1 x k b − k {\displaystyle \textstyle \sum _{k\geq 1}x_{k}(\mathrm {i} -1)^{-k}\mapsto \sum _{k\geq 1}x_{k}b^{-k}} with b > 2 {\displaystyle b>2} . Furthermore, there are the two mappings Z 2 N → S ( x k ) k ∈ N ↦ ∑ k ≥ 1 x k ( i − 1 ) − k {\displaystyle {\begin{array}{lll}Z_{2}^{\mathbb {N} }&\to &S\\\left(x_{k}\right)_{k\in \mathbb {N} }&\mapsto &\sum _{k\geq 1}x_{k}(\mathrm {i} -1)^{-k}\end{array}}} and Z 2 N → [ 0 , 1 ) ( x k ) k ∈ N ↦ ∑ k ≥ 1 x k 2 − k {\displaystyle {\begin{array}{lll}Z_{2}^{\mathbb {N} }&\to &[0,1)\\\left(x_{k}\right)_{k\in \mathbb {N} }&\mapsto &\sum _{k\geq 1}x_{k}2^{-k}\end{array}}} both surjective, which give rise to a surjective (thus space-filling) mapping [ 0 , 1 ) → S {\displaystyle [0,1)\qquad \to \qquad S} which, however, is not continuous and thus not a space-filling curve. But a very close relative, the Davis-Knuth dragon, is continuous and a space-filling curve. == See also == Dragon curve == References == == External links == "Number Systems Using a Complex Base" by Jarek Duda, the Wolfram Demonstrations Project "The Boundary of Periodic Iterated Function Systems" by Jarek Duda, the Wolfram Demonstrations Project "Number Systems in 3D" by Jarek Duda, the Wolfram Demonstrations Project "Large introduction to complex base numeral systems" with Mathematica sources by Jarek Duda
|
Wikipedia:Compositio Mathematica#0
|
Compositio Mathematica is a monthly peer-reviewed mathematics journal established by L.E.J. Brouwer in 1935. It is owned by the Foundation Compositio Mathematica, and since 2004 it has been published on behalf of the Foundation by the London Mathematical Society in partnership with Cambridge University Press. According to the Journal Citation Reports, the journal has a 2020 2-year impact factor of 1.456 and a 2020 5-year impact factor of 1.696. The editors-in-chief are Fabrizio Andreatta, David Holmes, Bruno Klingler, and Éric Vasserot. == Early history == The journal was established by L. E. J. Brouwer in response to his dismissal from Mathematische Annalen in 1928. An announcement of the new journal was made in a 1934 issue of the American Mathematical Monthly. In 1940, the publication of the journal was suspended due to the German occupation of the Netherlands. == References == == External links == Official website Online archive (1935-1996)
|
Wikipedia:Composition of relations#0
|
In the mathematics of binary relations, the composition of relations is the forming of a new binary relation R ; S from two given binary relations R and S. In the calculus of relations, the composition of relations is called relative multiplication, and its result is called a relative product.: 40 Function composition is the special case of composition of relations where all relations involved are functions. The word uncle indicates a compound relation: for a person to be an uncle, he must be the brother of a parent. In algebraic logic it is said that the relation of Uncle ( x U z {\displaystyle xUz} ) is the composition of relations "is a brother of" ( x B y {\displaystyle xBy} ) and "is a parent of" ( y P z {\displaystyle yPz} ). U = B P is equivalent to: x U z if and only if ∃ y x B y P z . {\displaystyle U=BP\quad {\text{ is equivalent to: }}\quad xUz{\text{ if and only if }}\exists y\ xByPz.} Beginning with Augustus De Morgan, the traditional form of reasoning by syllogism has been subsumed by relational logical expressions and their composition. == Definition == If R ⊆ X × Y {\displaystyle R\subseteq X\times Y} and S ⊆ Y × Z {\displaystyle S\subseteq Y\times Z} are two binary relations, then their composition R ; S {\displaystyle R\mathbin {;} S} is the relation R ; S = { ( x , z ) ∈ X × Z : there exists y ∈ Y such that ( x , y ) ∈ R and ( y , z ) ∈ S } . 
{\displaystyle R\mathbin {;} S=\{(x,z)\in X\times Z:{\text{ there exists }}y\in Y{\text{ such that }}(x,y)\in R{\text{ and }}(y,z)\in S\}.} In other words, R ; S ⊆ X × Z {\displaystyle R\mathbin {;} S\subseteq X\times Z} is defined by the rule that says ( x , z ) ∈ R ; S {\displaystyle (x,z)\in R\mathbin {;} S} if and only if there is an element y ∈ Y {\displaystyle y\in Y} such that x R y S z {\displaystyle x\,R\,y\,S\,z} (that is, ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( y , z ) ∈ S {\displaystyle (y,z)\in S} ).: 13 === Notational variations === The semicolon as an infix notation for composition of relations dates back to Ernst Schröder's textbook of 1895. Gunther Schmidt has renewed the use of the semicolon, particularly in Relational Mathematics (2011).: 40 The use of the semicolon coincides with the notation for function composition used (mostly by computer scientists) in category theory, as well as the notation for dynamic conjunction within linguistic dynamic semantics. A small circle ( R ∘ S ) {\displaystyle (R\circ S)} has been used for the infix notation of composition of relations by John M. Howie in his books considering semigroups of relations. However, the small circle is widely used to represent composition of functions g ( f ( x ) ) = ( g ∘ f ) ( x ) {\displaystyle g(f(x))=(g\circ f)(x)} , which reverses the text sequence from the operation sequence. The small circle was used in the introductory pages of Graphs and Relations: 18 until it was dropped in favor of juxtaposition (no infix notation). Juxtaposition ( R S ) {\displaystyle (RS)} is commonly used in algebra to signify multiplication, so too, it can signify relative multiplication. Further with the circle notation, subscripts may be used. Some authors prefer to write ∘ l {\displaystyle \circ _{l}} and ∘ r {\displaystyle \circ _{r}} explicitly when necessary, depending whether the left or the right relation is the first one applied. 
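With relations represented as sets of ordered pairs, the defining rule above becomes a one-line comprehension; a sketch, using the uncle = brother ; parent example from the introduction (the names are illustrative):

```python
def compose(R, S):
    """Relative product R ; S of relations given as sets of pairs."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

# "Uncle = Brother ; Parent": x is an uncle of z iff x is a brother
# of some y who is a parent of z.
brother = {("bob", "alice")}
parent = {("alice", "carol"), ("alice", "dan")}
assert compose(brother, parent) == {("bob", "carol"), ("bob", "dan")}

# Associativity: R ; (S ; T) == (R ; S) ; T
R = {(1, 2), (1, 3)}
S = {(2, 4), (3, 4)}
T = {(4, 5)}
assert compose(R, compose(S, T)) == compose(compose(R, S), T)
```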
A further variation encountered in computer science is the Z notation: ∘ {\displaystyle \circ } is used to denote the traditional (right) composition, while left composition is denoted by a fat semicolon. The unicode symbols are ⨾ and ⨟. === Mathematical generalizations === Binary relations R ⊆ X × Y {\displaystyle R\subseteq X\times Y} are morphisms R : X → Y {\displaystyle R:X\to Y} in the category R e l {\displaystyle {\mathsf {Rel}}} . In Rel the objects are sets, the morphisms are binary relations and the composition of morphisms is exactly composition of relations as defined above. The category Set of sets and functions is a subcategory of R e l {\displaystyle {\mathsf {Rel}}} where the maps X → Y {\displaystyle X\to Y} are functions f : X → Y {\displaystyle f:X\to Y} . Given a regular category X {\displaystyle \mathbb {X} } , its category of internal relations R e l ( X ) {\displaystyle {\mathsf {Rel}}(\mathbb {X} )} has the same objects as X {\displaystyle \mathbb {X} } , but now the morphisms X → Y {\displaystyle X\to Y} are given by subobjects R ⊆ X × Y {\displaystyle R\subseteq X\times Y} in X {\displaystyle \mathbb {X} } . Formally, these are jointly monic spans between X {\displaystyle X} and Y {\displaystyle Y} . Categories of internal relations are allegories. In particular R e l ( S e t ) ≅ R e l {\displaystyle {\mathsf {Rel}}({\mathsf {Set}})\cong {\mathsf {Rel}}} . Given a field k {\displaystyle k} (or more generally a principal ideal domain), the category of relations internal to matrices over k {\displaystyle k} , R e l ( M a t ( k ) ) , {\displaystyle {\mathsf {Rel}}({\mathsf {Mat}}(k)),} has morphisms n → m {\displaystyle n\to m} the linear subspaces R ⊆ k n ⊕ k m {\displaystyle R\subseteq k^{n}\oplus k^{m}} . The category of linear relations over the finite field F 2 {\displaystyle \mathbb {F} _{2}} is isomorphic to the phase-free qubit ZX-calculus modulo scalars. 
== Properties == Composition of relations is associative: R ; ( S ; T ) = ( R ; S ) ; T . {\displaystyle R\mathbin {;} (S\mathbin {;} T)=(R\mathbin {;} S)\mathbin {;} T.} The converse relation of R ; S {\displaystyle R\mathbin {;} S} is ( R ; S ) T = S T ; R T . {\displaystyle (R\mathbin {;} S)^{\textsf {T}}=S^{\textsf {T}}\mathbin {;} R^{\textsf {T}}.} This property makes the set of all binary relations on a set a semigroup with involution. The composition of (partial) functions (that is, functional relations) is again a (partial) function. If R {\displaystyle R} and S {\displaystyle S} are injective, then R ; S {\displaystyle R\mathbin {;} S} is injective, which conversely implies only the injectivity of R . {\displaystyle R.} If R {\displaystyle R} and S {\displaystyle S} are surjective, then R ; S {\displaystyle R\mathbin {;} S} is surjective, which conversely implies only the surjectivity of S . {\displaystyle S.} The set of binary relations on a set X {\displaystyle X} (that is, relations from X {\displaystyle X} to X {\displaystyle X} ) together with (left or right) relation composition forms a monoid with zero, where the identity map on X {\displaystyle X} is the neutral element, and the empty set is the zero element. == Composition in terms of matrices == Finite binary relations are represented by logical matrices. The entries of these matrices are either zero or one, depending on whether the relation represented is false or true for the row and column corresponding to compared objects. Working with such matrices involves the Boolean arithmetic with 1 + 1 = 1 {\displaystyle 1+1=1} and 1 × 1 = 1. {\displaystyle 1\times 1=1.} An entry in the matrix product of two logical matrices will be 1, then, only if the row and column multiplied have a corresponding 1. Thus the logical matrix of a composition of relations can be found by computing the matrix product of the matrices representing the factors of the composition. 
"Matrices constitute a method for computing the conclusions traditionally drawn by means of hypothetical syllogisms and sorites." == Heterogeneous relations == Consider a heterogeneous relation R ⊆ A × B ; {\displaystyle R\subseteq A\times B;} that is, where A {\displaystyle A} and B {\displaystyle B} are distinct sets. Then using composition of relation R {\displaystyle R} with its converse R T , {\displaystyle R^{\textsf {T}},} there are homogeneous relations R R T {\displaystyle RR^{\textsf {T}}} (on A {\displaystyle A} ) and R T R {\displaystyle R^{\textsf {T}}R} (on B {\displaystyle B} ). If for all x ∈ A {\displaystyle x\in A} there exists some y ∈ B , {\displaystyle y\in B,} such that x R y {\displaystyle xRy} (that is, R {\displaystyle R} is a (left-)total relation), then for all x , x R R T x {\displaystyle x,xRR^{\textsf {T}}x} so that R R T {\displaystyle RR^{\textsf {T}}} is a reflexive relation or I ⊆ R R T {\displaystyle \mathrm {I} \subseteq RR^{\textsf {T}}} where I is the identity relation { ( x , x ) : x ∈ A } . {\displaystyle \{(x,x):x\in A\}.} Similarly, if R {\displaystyle R} is a surjective relation then R T R ⊇ I = { ( x , x ) : x ∈ B } . {\displaystyle R^{\textsf {T}}R\supseteq \mathrm {I} =\{(x,x):x\in B\}.} In this case R ⊆ R R T R . {\displaystyle R\subseteq RR^{\textsf {T}}R.} The opposite inclusion occurs for a difunctional relation. The composition R ¯ T R {\displaystyle {\bar {R}}^{\textsf {T}}R} is used to distinguish relations of Ferrers type, which satisfy R R ¯ T R = R . {\displaystyle R{\bar {R}}^{\textsf {T}}R=R.} === Example === Let A = {\displaystyle A=} { France, Germany, Italy, Switzerland } and B = {\displaystyle B=} { French, German, Italian } with the relation R {\displaystyle R} given by a R b {\displaystyle aRb} when b {\displaystyle b} is a national language of a .
{\displaystyle a.} Since both A {\displaystyle A} and B {\displaystyle B} are finite, R {\displaystyle R} can be represented by a logical matrix, assuming rows (top to bottom) and columns (left to right) are ordered alphabetically: ( 1 0 0 0 1 0 0 0 1 1 1 1 ) . {\displaystyle {\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\1&1&1\end{pmatrix}}.} The converse relation R T {\displaystyle R^{\textsf {T}}} corresponds to the transposed matrix, and the relation composition R T ; R {\displaystyle R^{\textsf {T}};R} corresponds to the matrix product R T R {\displaystyle R^{\textsf {T}}R} when summation is implemented by logical disjunction. It turns out that the 3 × 3 {\displaystyle 3\times 3} matrix R T R {\displaystyle R^{\textsf {T}}R} contains a 1 at every position, while the reversed matrix product computes as: R R T = ( 1 0 0 1 0 1 0 1 0 0 1 1 1 1 1 1 ) . {\displaystyle RR^{\textsf {T}}={\begin{pmatrix}1&0&0&1\\0&1&0&1\\0&0&1&1\\1&1&1&1\end{pmatrix}}.} This matrix is symmetric, and represents a homogeneous relation on A . {\displaystyle A.} Correspondingly, R T ; R {\displaystyle R^{\textsf {T}}\,;R} is the universal relation on B , {\displaystyle B,} hence any two languages share a nation where they both are spoken (in fact: Switzerland). Vice versa, the question whether two given nations share a language can be answered using R ; R T . {\displaystyle R\,;R^{\textsf {T}}.}
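The Boolean matrix products in this example are easy to reproduce; a sketch (rows of R ordered France, Germany, Italy, Switzerland; columns French, German, Italian, as in the text):

```python
def bool_mat_mult(A, B):
    """Matrix product over Boolean arithmetic (1 + 1 = 1, 1 * 1 = 1)."""
    return [[int(any(A[i][k] and B[k][j] for k in range(len(B))))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

R = [[1, 0, 0],   # France:      French
     [0, 1, 0],   # Germany:     German
     [0, 0, 1],   # Italy:       Italian
     [1, 1, 1]]   # Switzerland: all three

# R^T R is the universal relation on languages (every pair shares a nation).
assert bool_mat_mult(transpose(R), R) == [[1, 1, 1], [1, 1, 1], [1, 1, 1]]

# R R^T tells which nations share a language (matches the matrix above).
assert bool_mat_mult(R, transpose(R)) == [[1, 0, 0, 1],
                                          [0, 1, 0, 1],
                                          [0, 0, 1, 1],
                                          [1, 1, 1, 1]]
```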
{\displaystyle {\bar {A}}=A^{\complement }.} If S {\displaystyle S} is a binary relation, let S T {\displaystyle S^{\textsf {T}}} represent the converse relation, also called the transpose. Then the Schröder rules are Q R ⊆ S is equivalent to Q T S ¯ ⊆ R ¯ is equivalent to S ¯ R T ⊆ Q ¯ . {\displaystyle QR\subseteq S\quad {\text{ is equivalent to }}\quad Q^{\textsf {T}}{\bar {S}}\subseteq {\bar {R}}\quad {\text{ is equivalent to }}\quad {\bar {S}}R^{\textsf {T}}\subseteq {\bar {Q}}.} Verbally, one equivalence can be obtained from another: select the first or second factor and transpose it; then complement the other two relations and permute them.: 15–19 Though this transformation of an inclusion of a composition of relations was detailed by Ernst Schröder, in fact Augustus De Morgan first articulated the transformation as Theorem K in 1860. He wrote L M ⊆ N implies N ¯ M T ⊆ L ¯ . {\displaystyle LM\subseteq N{\text{ implies }}{\bar {N}}M^{\textsf {T}}\subseteq {\bar {L}}.} With Schröder rules and complementation one can solve for an unknown relation X {\displaystyle X} in relation inclusions such as R X ⊆ S and X R ⊆ S . {\displaystyle RX\subseteq S\quad {\text{and}}\quad XR\subseteq S.} For instance, by Schröder rule R X ⊆ S implies R T S ¯ ⊆ X ¯ , {\displaystyle RX\subseteq S{\text{ implies }}R^{\textsf {T}}{\bar {S}}\subseteq {\bar {X}},} and complementation gives X ⊆ R T S ¯ ¯ , {\displaystyle X\subseteq {\overline {R^{\textsf {T}}{\bar {S}}}},} which is called the left residual of S {\displaystyle S} by R {\displaystyle R} . == Quotients == Just as composition of relations is a type of multiplication resulting in a product, so some operations compare to division and produce quotients. Three quotients are exhibited here: left residual, right residual, and symmetric quotient. The left residual of two relations is defined presuming that they have the same domain (source), and the right residual presumes the same codomain (range, target). 
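Since a relation on a two-element set is just one of the 16 subsets of ordered pairs, the Schröder equivalences above can be brute-force checked over all 16³ triples; a sketch:

```python
from itertools import product

PAIRS = [(i, j) for i in range(2) for j in range(2)]

def compose(R, S):
    return frozenset((x, z) for (x, y) in R for (y2, z) in S if y == y2)

def conv(R):    # converse (transpose) R^T
    return frozenset((y, x) for (x, y) in R)

def comp(R):    # complement within {0,1} x {0,1}
    return frozenset(p for p in PAIRS if p not in R)

relations = [frozenset(p for p, keep in zip(PAIRS, bits) if keep)
             for bits in product([0, 1], repeat=4)]

# Q R subset S  <=>  Q^T comp(S) subset comp(R)  <=>  comp(S) R^T subset comp(Q)
for Q in relations:
    for R in relations:
        for S in relations:
            a = compose(Q, R) <= S
            b = compose(conv(Q), comp(S)) <= comp(R)
            c = compose(comp(S), conv(R)) <= comp(Q)
            assert a == b == c
```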
The symmetric quotient presumes two relations share a domain and a codomain. Definitions: Left residual: A ∖ B := A T B ¯ ¯ {\displaystyle A\backslash B\mathrel {:=} {\overline {A^{\textsf {T}}{\bar {B}}}}} Right residual: D / C := D ¯ C T ¯ {\displaystyle D/C\mathrel {:=} {\overline {{\bar {D}}C^{\textsf {T}}}}} Symmetric quotient: syq ( E , F ) := E T F ¯ ¯ ∩ E ¯ T F ¯ {\displaystyle \operatorname {syq} (E,F)\mathrel {:=} {\overline {E^{\textsf {T}}{\bar {F}}}}\cap {\overline {{\bar {E}}^{\textsf {T}}F}}} Using Schröder's rules, A X ⊆ B {\displaystyle AX\subseteq B} is equivalent to X ⊆ A ∖ B . {\displaystyle X\subseteq A\backslash B.} Thus the left residual is the greatest relation satisfying A X ⊆ B . {\displaystyle AX\subseteq B.} Similarly, the inclusion Y C ⊆ D {\displaystyle YC\subseteq D} is equivalent to Y ⊆ D / C , {\displaystyle Y\subseteq D/C,} and the right residual is the greatest relation satisfying Y C ⊆ D . {\displaystyle YC\subseteq D.} : 43–6 One can practice the logic of residuals with Sudoku. == Join: another form of composition == A fork operator ( < ) {\displaystyle (<)} has been introduced to fuse two relations c : H → A {\displaystyle c:H\to A} and d : H → B {\displaystyle d:H\to B} into c ( < ) d : H → A × B . {\displaystyle c\,(<)\,d:H\to A\times B.} The construction depends on projections a : A × B → A {\displaystyle a:A\times B\to A} and b : A × B → B , {\displaystyle b:A\times B\to B,} understood as relations, meaning that there are converse relations a T {\displaystyle a^{\textsf {T}}} and b T . {\displaystyle b^{\textsf {T}}.} Then the fork of c {\displaystyle c} and d {\displaystyle d} is given by c ( < ) d := c ; a T ∩ d ; b T . {\displaystyle c\,(<)\,d~\mathrel {:=} ~c\mathbin {;} a^{\textsf {T}}\cap \ d\mathbin {;} b^{\textsf {T}}.} Another form of composition of relations, which applies to general n {\displaystyle n} -place relations for n ≥ 2 , {\displaystyle n\geq 2,} is the join operation of relational algebra. 
The usual composition of two binary relations as defined here can be obtained by taking their join, leading to a ternary relation, followed by a projection that removes the middle component. For example, in the query language SQL there is the join operation. == See also == Demonic composition – Mathematical operation Friend of a friend – Human contact that exists because of a mutual friend == Notes == == References == M. Kilp, U. Knauer, A.V. Mikhalev (2000) Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, ISBN 3-11-015248-7.
|
Wikipedia:Compressed sensing#0
|
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, which is sufficient for sparse signals. Compressed sensing has applications in, for example, magnetic resonance imaging (MRI) where the incoherence condition is typically satisfied. == Overview == A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal.
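The sinc-interpolation reconstruction guaranteed by the sampling theorem can be demonstrated numerically. In the sketch below, all rates and frequencies are illustrative choices (not from the article); a bandlimited tone is rebuilt from its samples via a truncated Whittaker–Shannon sum:

```python
# Sketch: Whittaker–Shannon (sinc) interpolation of a bandlimited signal,
# x(t) = sum_n x[n] * sinc(fs*t - n).  All parameters are illustrative.
import numpy as np

fs = 10.0                                  # sampling rate (Hz)
n = np.arange(-50, 51)                     # sample indices
x = np.sin(2 * np.pi * 1.5 * n / fs)       # a 1.5 Hz tone, below fs/2 = 5 Hz
t = np.linspace(-2.0, 2.0, 41)             # reconstruction instants (seconds)
recon = np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])
exact = np.sin(2 * np.pi * 1.5 * t)
print(np.max(np.abs(recon - exact)))       # small residual from truncating the sum
```

The residual comes only from truncating the (in principle infinite) interpolation sum; with more samples on each side it shrinks further.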
Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires. This idea is the basis of compressed sensing. == History == Compressed sensing relies on L 1 {\displaystyle L^{1}} techniques, which several other scientific fields have used historically. In statistics, the least squares method was complemented by the L 1 {\displaystyle L^{1}} -norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the L 1 {\displaystyle L^{1}} -norm was used in computational statistics. In statistical theory, the L 1 {\displaystyle L^{1}} -norm was used by George W. Brown and later writers on median-unbiased estimators. It was used by Peter J. Huber and others working on robust statistics. The L 1 {\displaystyle L^{1}} -norm was also used in signal processing, for example, in the 1970s, when seismologists constructed images of reflective layers within the earth based on data that did not seem to satisfy the Nyquist–Shannon criterion. It was used in matching pursuit in 1993, the LASSO estimator by Robert Tibshirani in 1996 and basis pursuit in 1998. At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling. 
== Method == === Underdetermined linear system === An underdetermined system of linear equations has more unknowns than equations and generally has an infinite number of solutions. The figure below shows such an equation system y = D x {\displaystyle \mathbf {y} =D\mathbf {x} } where we want to find a solution for x {\displaystyle \mathbf {x} } . In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution. === Solution / reconstruction method === Compressed sensing takes advantage of the redundancy in many interesting signals—they are not pure noise. In particular, many signals are sparse, that is, they contain many coefficients close to or equal to zero, when represented in some domain. This is the same insight used in many forms of lossy compression. Compressed sensing typically starts with taking a weighted linear combination of samples also called compressive measurements in a basis different from the basis in which the signal is known to be sparse. The results found by Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho showed that the number of these compressive measurements can be small and still contain nearly all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations. 
The least-squares solution to such problems is to minimize the L 2 {\displaystyle L^{2}} norm—that is, minimize the amount of energy in the system. This is usually simple mathematically (involving only a matrix multiplication by the pseudo-inverse of the basis sampled in). However, this leads to poor results for many practical applications, for which the unknown coefficients have nonzero energy. To enforce the sparsity constraint when solving for the underdetermined system of linear equations, one can minimize the number of nonzero components of the solution. The function counting the number of non-zero components of a vector was called the L 0 {\displaystyle L^{0}} "norm" by David Donoho. Candès et al. proved that for many problems it is probable that the L 1 {\displaystyle L^{1}} norm is equivalent to the L 0 {\displaystyle L^{0}} norm, in a technical sense: This equivalence result allows one to solve the L 1 {\displaystyle L^{1}} problem, which is easier than the L 0 {\displaystyle L^{0}} problem. Finding the candidate with the smallest L 1 {\displaystyle L^{1}} norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist. When measurements may contain a finite amount of noise, basis pursuit denoising is preferred over linear programming, since it preserves sparsity in the face of noise and can be solved faster than an exact linear program. === Total variation-based CS reconstruction === ==== Motivation and applications ==== ===== Role of TV regularization ===== Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). For signals, especially, total variation refers to the integral of the absolute gradient of the signal. 
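For a discrete 1D signal, this integral becomes a sum of absolute differences between neighbouring samples. A minimal sketch (illustrative signal):

```python
# Sketch: discrete total variation TV(f) = sum_i |f[i+1] - f[i]| of a 1D signal.
# A piecewise-constant signal has low TV; adding noise raises it sharply,
# which is why penalizing TV removes noise while keeping edges.
import numpy as np

def tv(f):
    return float(np.abs(np.diff(f)).sum())

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # one edge: TV = 1
noisy = clean + 0.1 * rng.standard_normal(clean.size)
print(tv(clean), tv(noisy))   # the noisy signal has much higher TV
```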
In signal and image reconstruction, it is applied as total variation regularization where the underlying principle is that signals with excessive details have high total variation and that removing these details, while retaining important information such as edges, would reduce the total variation of the signal and bring the reconstructed signal closer to the original signal. For the purpose of signal and image reconstruction, ℓ 1 {\displaystyle \ell _{1}} minimization models are used. Other approaches also include least squares, as discussed earlier in this article. These methods are extremely slow and return an imperfect reconstruction of the signal. The current CS regularization models attempt to address this problem by incorporating sparsity priors of the original image, one of which is the total variation (TV). Conventional TV approaches are designed to give piece-wise constant solutions. Some of these (discussed below) include constrained ℓ 1 {\textstyle \ell _{1}} -minimization, which uses an iterative scheme. This method, though fast, subsequently leads to over-smoothing of edges resulting in blurred image edges. TV methods with iterative re-weighting have been implemented to reduce the influence of large gradient value magnitudes in the images. This has been used in computed tomography (CT) reconstruction as a method known as edge-preserving total variation. However, as gradient magnitudes are used for estimation of relative penalty weights between the data fidelity and regularization terms, this method is not robust to noise and artifacts, is not accurate enough for CS image/signal reconstruction and, therefore, fails to preserve smaller structures. Recent progress on this problem involves using an iteratively directional TV refinement for CS reconstruction.
This method has two stages: the first stage estimates and refines the initial orientation field, which is defined as a noisy point-wise initial estimate, through edge detection, of the given image. In the second stage, the CS reconstruction model is presented by utilizing a directional TV regularizer. More details about these TV-based approaches – iteratively reweighted ℓ1 minimization, edge-preserving TV, and the iterative model using a directional orientation field and TV – are provided below. ==== Existing approaches ==== ===== Iteratively reweighted ℓ1 minimization ===== In the CS reconstruction models using constrained ℓ 1 {\displaystyle \ell _{1}} minimization, larger coefficients are penalized heavily in the ℓ 1 {\displaystyle \ell _{1}} norm. It was proposed to have a weighted formulation of ℓ 1 {\displaystyle \ell _{1}} minimization designed to more democratically penalize nonzero coefficients. An iterative algorithm is used for constructing the appropriate weights. Each iteration requires solving one ℓ 1 {\displaystyle \ell _{1}} minimization problem by finding the local minimum of a concave penalty function that more closely resembles the ℓ 0 {\displaystyle \ell _{0}} norm. An additional parameter, usually to avoid any sharp transitions in the penalty function curve, is introduced into the iterative equation to ensure stability and so that a zero estimate in one iteration does not necessarily lead to a zero estimate in the next iteration. The method essentially involves using the current solution for computing the weights to be used in the next iteration. ====== Advantages and disadvantages ====== Early iterations may find inaccurate sample estimates; however, this method will down-sample these at a later stage to give more weight to the smaller non-zero signal estimates. One of the disadvantages is the need for defining a valid starting point, as a global minimum might not be obtained every time due to the concavity of the function.
Another disadvantage is that this method tends to uniformly penalize the image gradient irrespective of the underlying image structures. This causes over-smoothing of edges, especially those of low contrast regions, subsequently leading to loss of low contrast information. The advantages of this method include: reduction of the sampling rate for sparse signals; reconstruction of the image while being robust to the removal of noise and other artifacts; and use of very few iterations. This can also help in recovering images with sparse gradients. In the figure shown below, P1 refers to the first step of the iterative reconstruction process, involving the projection matrix P of the fan-beam geometry, which is constrained by the data fidelity term. This may contain noise and artifacts as no regularization is performed. The minimization of P1 is solved through the conjugate gradient least squares method. P2 refers to the second step of the iterative reconstruction process wherein it utilizes the edge-preserving total variation regularization term to remove noise and artifacts, and thus improve the quality of the reconstructed image/signal. The minimization of P2 is done through a simple gradient descent method. Convergence is determined by testing, after each iteration, for image positivity, by checking if f k − 1 = 0 {\displaystyle f^{k-1}=0} for the case when f k − 1 < 0 {\displaystyle f^{k-1}<0} (note that f {\displaystyle f} refers to the different x-ray linear attenuation coefficients at different voxels of the patient image).
However, this insufficient projection data which is used to reconstruct the CT image can cause streaking artifacts. Furthermore, using these insufficient projections in standard TV algorithms ends up making the problem under-determined, thus leading to infinitely many possible solutions. In this method, an additional penalty weighted function is assigned to the original TV norm. This allows for easier detection of sharp discontinuities in intensity in the images, thereby adapting the weight to store the recovered edge information during the process of signal/image reconstruction. The parameter σ {\displaystyle \sigma } controls the amount of smoothing applied to the pixels at the edges to differentiate them from the non-edge pixels. The value of σ {\displaystyle \sigma } is changed adaptively based on the values of the histogram of the gradient magnitude so that a certain percentage of pixels have gradient values larger than σ {\displaystyle \sigma } . The edge-preserving total variation term, thus, becomes sparser and this speeds up the implementation. A two-step iteration process known as the forward–backward splitting algorithm is used. The optimization problem is split into two sub-problems, which are then solved with the conjugate gradient least squares method and the simple gradient descent method, respectively. The method is stopped when the desired convergence has been achieved or if the maximum number of iterations is reached. ===== Advantages and disadvantages ===== Some of the disadvantages of this method are the absence of smaller structures in the reconstructed image and degradation of image resolution. This edge-preserving TV algorithm, however, requires fewer iterations than the conventional TV algorithm. Analyzing the horizontal and vertical intensity profiles of the reconstructed images, it can be seen that there are sharp jumps at edge points and negligible, minor fluctuation at non-edge points.
Thus, this method leads to low relative error and higher correlation as compared to the TV method. It also effectively suppresses and removes any form of image noise and image artifacts such as streaking. ===== Iterative model using a directional orientation field and directional total variation ===== To prevent over-smoothing of edges and texture details and to obtain a reconstructed CS image which is accurate and robust to noise and artifacts, this method is used. First, an initial estimate of the noisy point-wise orientation field of the image I {\displaystyle I} , d ^ {\displaystyle {\hat {d}}} , is obtained. This noisy orientation field is defined so that it can be refined at a later stage to reduce the noise influences in orientation field estimation. A coarse orientation field estimation is then introduced based on structure tensor, which is formulated as: J ρ ( ∇ I σ ) = G ρ ∗ ( ∇ I σ ⊗ ∇ I σ ) = ( J 11 J 12 J 12 J 22 ) {\displaystyle J_{\rho }(\nabla I_{\sigma })=G_{\rho }*(\nabla I_{\sigma }\otimes \nabla I_{\sigma })={\begin{pmatrix}J_{11}&J_{12}\\J_{12}&J_{22}\end{pmatrix}}} . Here, J ρ {\displaystyle J_{\rho }} refers to the structure tensor related with the image pixel point (i,j) having standard deviation ρ {\displaystyle \rho } . G {\displaystyle G} refers to the Gaussian kernel ( 0 , ρ 2 ) {\displaystyle (0,\rho ^{2})} with standard deviation ρ {\displaystyle \rho } . σ {\displaystyle \sigma } refers to the manually defined parameter for the image I {\displaystyle I} below which the edge detection is insensitive to noise. ∇ I σ {\displaystyle \nabla I_{\sigma }} refers to the gradient of the image I {\displaystyle I} and ( ∇ I σ ⊗ ∇ I σ ) {\displaystyle (\nabla I_{\sigma }\otimes \nabla I_{\sigma })} refers to the tensor product obtained by using this gradient. 
The structure tensor obtained is convolved with a Gaussian kernel G {\displaystyle G} to improve the accuracy of the orientation estimate with σ {\displaystyle \sigma } being set to high values to account for the unknown noise levels. For every pixel (i,j) in the image, the structure tensor J is a symmetric and positive semi-definite matrix. Convolving all the pixels in the image with G {\displaystyle G} gives orthonormal eigenvectors ω and υ of the J {\displaystyle J} matrix. ω points in the direction of the dominant orientation having the largest contrast and υ points in the direction of the structure orientation having the smallest contrast. The coarse initial estimation of the orientation field d ^ {\displaystyle {\hat {d}}} is defined as d ^ {\displaystyle {\hat {d}}} = υ. This estimate is accurate at strong edges. However, at weak edges or on regions with noise, its reliability decreases. To overcome this drawback, a refined orientation model is defined in which the data term reduces the effect of noise and improves accuracy while the second penalty term with the L2-norm is a fidelity term which ensures accuracy of the initial coarse estimation. This orientation field is introduced into the directional total variation optimization model for CS reconstruction through the equation: min X ‖ ∇ X ∙ d ‖ 1 + λ 2 ‖ Y − Φ X ‖ 2 2 {\displaystyle \min _{\mathrm {X} }\lVert \nabla \mathrm {X} \bullet d\rVert _{1}+{\frac {\lambda }{2}}\ \lVert Y-\Phi \mathrm {X} \rVert _{2}^{2}} . X {\displaystyle \mathrm {X} } is the objective signal which needs to be recovered. Y is the corresponding measurement vector, d is the iterative refined orientation field and Φ {\displaystyle \Phi } is the CS measurement matrix. This method undergoes a few iterations, ultimately leading to convergence.
d ^ {\displaystyle {\hat {d}}} is the orientation field approximate estimation of the reconstructed image X k − 1 {\displaystyle X^{k-1}} from the previous iteration (in order to check for convergence and the subsequent optical performance, the previous iteration is used). For the two vector fields represented by X {\displaystyle \mathrm {X} } and d {\displaystyle d} , X ∙ d {\displaystyle \mathrm {X} \bullet d} refers to the multiplication of respective horizontal and vertical vector elements of X {\displaystyle \mathrm {X} } and d {\displaystyle d} followed by their subsequent addition. These equations are reduced to a series of convex minimization problems, which are then solved with a combination of variable splitting and augmented Lagrangian (FFT-based fast solver with a closed form solution) methods. The augmented Lagrangian method is considered equivalent to the split Bregman iteration, which ensures convergence of this method. The orientation field d is defined as being equal to ( d h , d v ) {\displaystyle (d_{h},d_{v})} , where d h , d v {\displaystyle d_{h},d_{v}} define the horizontal and vertical estimates of d {\displaystyle d} . The augmented Lagrangian method for the orientation field, min X ‖ ∇ X ∙ d ‖ 1 + λ 2 ‖ Y − Φ X ‖ 2 2 {\displaystyle \min _{\mathrm {X} }\lVert \nabla \mathrm {X} \bullet d\rVert _{1}+{\frac {\lambda }{2}}\ \lVert Y-\Phi \mathrm {X} \rVert _{2}^{2}} , involves initializing d h , d v , H , V {\displaystyle d_{h},d_{v},H,V} and then finding the approximate minimizer of L 1 {\displaystyle L_{1}} with respect to these variables. The Lagrangian multipliers are then updated and the iterative process is stopped when convergence is achieved. For the iterative directional total variation refinement model, the augmented Lagrangian method involves initializing X , P , Q , λ P , λ Q {\displaystyle \mathrm {X} ,P,Q,\lambda _{P},\lambda _{Q}} .
Here, H , V , P , Q {\displaystyle H,V,P,Q} are newly introduced variables where H {\displaystyle H} = ∇ d h {\displaystyle \nabla d_{h}} , V {\displaystyle V} = ∇ d v {\displaystyle \nabla d_{v}} , P {\displaystyle P} = ∇ X {\displaystyle \nabla \mathrm {X} } , and Q {\displaystyle Q} = P ∙ d {\displaystyle P\bullet d} . λ H , λ V , λ P , λ Q {\displaystyle \lambda _{H},\lambda _{V},\lambda _{P},\lambda _{Q}} are the Lagrangian multipliers for H , V , P , Q {\displaystyle H,V,P,Q} . For each iteration, the approximate minimizer of L 2 {\displaystyle L_{2}} with respect to variables ( X , P , Q {\displaystyle \mathrm {X} ,P,Q} ) is calculated. As in the field refinement model, the Lagrangian multipliers are updated and the iterative process is stopped when convergence is achieved. For the orientation field refinement model, the Lagrangian multipliers are updated in the iterative process as follows: ( λ H ) k = ( λ H ) k − 1 + γ H ( H k − ∇ ( d h ) k ) {\displaystyle (\lambda _{H})^{k}=(\lambda _{H})^{k-1}+\gamma _{H}(H^{k}-\nabla (d_{h})^{k})} ( λ V ) k = ( λ V ) k − 1 + γ V ( V k − ∇ ( d v ) k ) {\displaystyle (\lambda _{V})^{k}=(\lambda _{V})^{k-1}+\gamma _{V}(V^{k}-\nabla (d_{v})^{k})} For the iterative directional total variation refinement model, the Lagrangian multipliers are updated as follows: ( λ P ) k = ( λ P ) k − 1 + γ P ( P k − ∇ ( X ) k ) {\displaystyle (\lambda _{P})^{k}=(\lambda _{P})^{k-1}+\gamma _{P}(P^{k}-\nabla (\mathrm {X} )^{k})} ( λ Q ) k = ( λ Q ) k − 1 + γ Q ( Q k − P k ∙ d ) {\displaystyle (\lambda _{Q})^{k}=(\lambda _{Q})^{k-1}+\gamma _{Q}(Q^{k}-P^{k}\bullet d)} Here, γ H , γ V , γ P , γ Q {\displaystyle \gamma _{H},\gamma _{V},\gamma _{P},\gamma _{Q}} are positive constants.
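The coarse structure-tensor orientation estimate at the heart of this model can be sketched numerically. The example below uses a synthetic edge image and illustrative parameters (it is not the published implementation): it forms J = G_ρ ∗ (∇I ⊗ ∇I) with a separable Gaussian and checks that, on a vertical edge, the gradient energy concentrates in the horizontal component J11:

```python
# Sketch: 2x2 structure tensor J = G_rho * (grad(I) ⊗ grad(I)) on a synthetic
# vertical-edge image; on the edge, J11 (horizontal gradient energy) dominates.
import numpy as np

def gauss1d(rho, radius=4):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * rho**2))
    return k / k.sum()

def smooth(M, k):                      # separable Gaussian smoothing
    M = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, M)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, M)

I = np.zeros((32, 32))
I[:, 16:] = 1.0                        # vertical edge at column 16
Iy, Ix = np.gradient(I)                # gradients along rows (y) and columns (x)
k = gauss1d(rho=2.0)
J11 = smooth(Ix * Ix, k)
J12 = smooth(Ix * Iy, k)
J22 = smooth(Iy * Iy, k)
print(J11[16, 16] > J22[16, 16])       # True: energy is in the x direction
```

The eigenvector for the smaller eigenvalue of this 2×2 tensor then gives the coarse orientation estimate υ described above.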
===== Advantages and disadvantages ===== Based on peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics and known ground-truth images for testing performance, it is concluded that iterative directional total variation has better reconstruction performance than the non-iterative methods in preserving edge and texture areas. The orientation field refinement model plays a major role in this improvement in performance as it increases the number of directionless pixels in the flat area while enhancing the orientation field consistency in the regions with edges. == Applications == The field of compressive sensing is related to several topics in signal processing and computational mathematics, such as underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Its broad scope and generality have enabled several innovative CS-enhanced approaches in signal processing and compression, solution of inverse problems, design of radiating systems, radar and through-the-wall imaging, and antenna characterization. Imaging techniques having a strong affinity with compressive sensing include coded aperture and computational photography. Conventional CS reconstruction uses sparse signals (usually sampled at a rate less than the Nyquist sampling rate) for reconstruction through constrained l 1 {\displaystyle l_{1}} minimization. One of the earliest applications of such an approach was in reflection seismology, which used sparse reflected signals from band-limited data for tracking changes between sub-surface layers. When the LASSO model came into prominence in the 1990s as a statistical method for selection of sparse models, this method was further used in computational harmonic analysis for sparse signal representation from over-complete dictionaries. Some of the other applications include incoherent sampling of radar pulses. The work by Boyd et al.
has applied the LASSO model – for selection of sparse models – to analog-to-digital converters (the current ones use a sampling rate higher than the Nyquist rate along with the quantized Shannon representation). This would involve a parallel architecture in which the polarity of the analog signal changes at a high rate followed by digitizing the integral at the end of each time-interval to obtain the converted digital signal. === Photography === Compressed sensing has been used in an experimental mobile phone camera sensor. The approach allows a reduction in image acquisition energy per image by as much as a factor of 15 at the cost of complex decompression algorithms; the computation may require an off-device implementation. Compressed sensing is used in single-pixel cameras from Rice University. Bell Labs employed the technique in a lensless single-pixel camera that takes stills using repeated snapshots of randomly chosen apertures from a grid. Image quality improves with the number of snapshots, and generally requires a small fraction of the data of conventional imaging, while eliminating lens/focus-related aberrations. === Holography === Compressed sensing can be used to improve image reconstruction in holography by increasing the number of voxels one can infer from a single hologram. It is also used for image retrieval from undersampled measurements in optical and millimeter-wave holography. === Facial recognition === Compressed sensing has been used in facial recognition applications. === Magnetic resonance imaging === Compressed sensing has been used to shorten magnetic resonance imaging scanning sessions on conventional hardware. Reconstruction methods include ISTA, FISTA, SISTA, ePRESS, EWISTA, EWISTARS, etc. Compressed sensing addresses the issue of high scan time by enabling faster acquisition by measuring fewer Fourier coefficients. This produces a high-quality image with relatively lower scan time.
Another application (also discussed ahead) is for CT reconstruction with fewer X-ray projections. Compressed sensing, in this case, removes the high spatial gradient parts – mainly, image noise and artifacts. This holds potential as one can obtain high-resolution CT images at low radiation doses (through lower current-mA settings). === Network tomography === Compressed sensing has shown outstanding results in the application of network tomography to network management. Network delay estimation and network congestion detection can both be modeled as underdetermined systems of linear equations where the coefficient matrix is the network routing matrix. Moreover, in the Internet, network routing matrices usually satisfy the criterion for using compressed sensing. === Shortwave-infrared cameras === In 2013 one company announced shortwave-infrared cameras which utilize compressed sensing. These cameras have light sensitivity from 0.9 μm to 1.7 μm, wavelengths invisible to the human eye. === Aperture synthesis astronomy === In radio astronomy and optical astronomical interferometry, full coverage of the Fourier plane is usually absent and phase information is not obtained in most hardware configurations. In order to obtain aperture synthesis images, various compressed sensing algorithms are employed. The Högbom CLEAN algorithm has been in use since 1974 for the reconstruction of images obtained from radio interferometers, which is similar to the matching pursuit algorithm mentioned above. === Transmission electron microscopy === Compressed sensing combined with a moving aperture has been used to increase the acquisition rate of images in a transmission electron microscope. In scanning mode, compressive sensing combined with random scanning of the electron beam has enabled both faster acquisition and less electron dose, which allows for imaging of electron beam sensitive materials. 
== See also == Compressed sensing in speech signals Low-density parity-check code Noiselet Sparse approximation Sparse coding Verification-based message-passing algorithms in compressed sensing == Notes == == References == == Further reading == "The Fundamentals of Compressive Sensing" Part 1, Part 2 and Part 3: video tutorial by Mark Davenport, Georgia Tech. at SigView, the IEEE Signal Processing Society Tutorial Library. Using Math to Turn Lo-Res Datasets Into Hi-Res Samples Wired Magazine article Compressive Sensing Resources at Rice University. Compressed Sensing Makes Every Pixel Count – article in the AMS What's Happening in the Mathematical Sciences series Wiki on sparse reconstruction
|
Wikipedia:Computer algebra#0
|
In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols. Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, indefinite integration, etc. Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as in public key cryptography, or for some non-linear problems. == Terminology == Some authors distinguish computer algebra from symbolic computation, using the latter name to refer to kinds of symbolic computation other than the computation with mathematical formulas. Some authors use symbolic computation for the computer-science aspect of the subject and computer algebra for the mathematical aspect. 
In some languages, the name of the field is not a direct translation of its English name. Typically, it is called calcul formel in French, which means "formal computation". This name reflects the ties this field has with formal methods. Symbolic computation has also been referred to, in the past, as symbolic manipulation, algebraic manipulation, symbolic processing, symbolic mathematics, or symbolic algebra, but these terms, which also refer to non-computational manipulation, are no longer used in reference to computer algebra. == Scientific community == There is no learned society that is specific to computer algebra, but this function is assumed by the special interest group of the Association for Computing Machinery named SIGSAM (Special Interest Group on Symbolic and Algebraic Manipulation). There are several annual conferences on computer algebra, the premier being ISSAC (International Symposium on Symbolic and Algebraic Computation), which is regularly sponsored by SIGSAM. There are several journals specializing in computer algebra, the top one being Journal of Symbolic Computation founded in 1985 by Bruno Buchberger. There are also several other journals that regularly publish articles in computer algebra. == Computer science aspects == === Data representation === As numerical software is highly efficient for approximate numerical computation, it is common, in computer algebra, to emphasize exact computation with exactly represented data. Such an exact representation implies that, even when the size of the output is small, the intermediate data generated during a computation may grow in an unpredictable way. This behavior is called expression swell. To alleviate this problem, various methods are used in the representation of the data, as well as in the algorithms that manipulate them. ==== Numbers ==== The usual number systems used in numerical computation are floating point numbers and integers of a fixed, bounded size. 
Neither of these is convenient for computer algebra, due to expression swell. Therefore, the basic numbers used in computer algebra are the integers of the mathematicians, commonly represented by an unbounded signed sequence of digits in some base of numeration, usually the largest base allowed by the machine word. These integers allow one to define the rational numbers, which are irreducible fractions of two integers. Programming an efficient implementation of the arithmetic operations is a hard task. Therefore, most free computer algebra systems, and some commercial ones such as Mathematica and Maple, use the GMP library, which is thus a de facto standard. ==== Expressions ==== Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer-algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance, may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, and a matrix may be represented as an expression with "matrix" as an operator and its rows as operands. Even programs may be considered and represented as expressions with operator "procedure" and, at least, two operands, the list of parameters and the body, which is itself an expression with "body" as an operator and a sequence of instructions as operands. Conversely, any mathematical expression may be viewed as a program. For example, the expression a + b may be viewed as a program for the addition, with a and b as parameters. Executing this program consists of evaluating the expression for given values of a and b; if they are not given any values, then the result of the evaluation is simply its input. This process of delayed evaluation is fundamental in computer algebra. 
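The operator/operand representation and delayed evaluation described above can be illustrated with a deliberately tiny sketch: an expression is a number, a variable name, or a tuple `(operator, operand, ...)`, and evaluation leaves unbound variables symbolic. The tuple encoding and the `evaluate` helper are hypothetical illustrations, not the internal format of any actual system:

```python
from numbers import Number

# Hypothetical sketch: an expression is a number, a variable name (str),
# or a tuple (operator, operand1, operand2, ...).
def evaluate(expr, env=None):
    """Evaluate an expression tree; unbound variables stay symbolic."""
    env = env or {}
    if isinstance(expr, Number):
        return expr
    if isinstance(expr, str):          # a variable: delayed evaluation
        return env.get(expr, expr)
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    if all(isinstance(v, Number) for v in vals):
        if op == "+":
            return sum(vals)
        if op == "*":
            result = 1
            for v in vals:
                result *= v
            return result
    return (op, *vals)                 # partially evaluated expression

print(evaluate(("+", "a", "b")))                    # no bindings: stays symbolic
print(evaluate(("+", "a", "b"), {"a": 2, "b": 3}))  # -> 5
```

With no bindings, `a + b` evaluates to itself, mirroring the point that the expression is simultaneously a program and its own result.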
For example, the operator "=" of the equations is also, in most computer algebra systems, the name of the program of the equality test: normally, the evaluation of an equation results in an equation, but, when an equality test is needed, either explicitly asked by the user through an "evaluation to a Boolean" command, or automatically started by the system in the case of a test inside a program, then the evaluation to a Boolean result is executed. As the size of the operands of an expression is unpredictable and may change during a working session, the sequence of the operands is usually represented as a sequence of either pointers (like in Macsyma) or entries in a hash table (like in Maple). === Simplification === The raw application of the basic rules of differentiation with respect to x on the expression ax gives the result x ⋅ a x − 1 ⋅ 0 + a x ⋅ ( 1 ⋅ log a + x ⋅ 0 a ) . {\displaystyle x\cdot a^{x-1}\cdot 0+a^{x}\cdot \left(1\cdot \log a+x\cdot {\frac {0}{a}}\right).} A simpler expression than this is generally desired, and simplification is needed when working with general expressions. This simplification is normally done through rewriting rules. There are several classes of rewriting rules to be considered. The simplest are rules that always reduce the size of the expression, like E − E → 0 or sin(0) → 0. They are systematically applied in computer algebra systems. A difficulty occurs with associative operations like addition and multiplication. The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands; that is, that a + b + c is represented as "+"(a, b, c). Thus a + (b + c) and (a + b) + c are both simplified to "+"(a, b, c), which is displayed a + b + c. In the case of expressions such as a − b + c, the simplest way is to systematically rewrite −E, E − F, E/F as, respectively, (−1)⋅E, E + (−1)⋅F, E⋅F−1. 
In other words, in the internal representation of the expressions, there is no subtraction nor division nor unary minus, outside the representation of the numbers. Another difficulty occurs with the commutativity of addition and multiplication. The problem is to quickly recognize the like terms in order to combine or cancel them. Testing every pair of terms is costly with very long sums and products. To address this, Macsyma sorts the operands of sums and products into an order that places like terms in consecutive places, allowing easy detection. In Maple, a hash function is designed for generating collisions when like terms are entered, allowing them to be combined as soon as they are introduced. This allows subexpressions that appear several times in a computation to be immediately recognized and stored only once. This saves memory and speeds up computation by avoiding repetition of the same operations on identical expressions. Some rewriting rules sometimes increase and sometimes decrease the size of the expressions to which they are applied. This is the case for the distributive law or trigonometric identities. For example, the distributive law allows rewriting ( x + 1 ) 4 → x 4 + 4 x 3 + 6 x 2 + 4 x + 1 {\displaystyle (x+1)^{4}\rightarrow x^{4}+4x^{3}+6x^{2}+4x+1} and ( x − 1 ) ( x 4 + x 3 + x 2 + x + 1 ) → x 5 − 1. {\displaystyle (x-1)(x^{4}+x^{3}+x^{2}+x+1)\rightarrow x^{5}-1.} As there is no way to make a good general choice of applying or not such a rewriting rule, such rewriting is done only when explicitly invoked by the user. For the distributive law, the computer function that applies this rewriting rule is typically called "expand". The reverse rewriting rule, called "factor", requires a non-trivial algorithm, which is thus a key function in computer algebra systems (see Polynomial factorization). == Mathematical aspects == Some fundamental mathematical questions arise when one wants to manipulate mathematical expressions in a computer. 
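The effect of the "expand" operation on univariate polynomials can be sketched with dense coefficient lists (index = degree); the helper names here are illustrative, not taken from any particular system:

```python
def poly_mul(p, q):
    """Multiply dense coefficient lists (index = degree) by convolution."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def expand_power(p, n):
    """Expand p**n by repeated multiplication (the distributive law)."""
    r = [1]
    for _ in range(n):
        r = poly_mul(r, p)
    return r

# (x + 1)**4 -> x^4 + 4x^3 + 6x^2 + 4x + 1
print(expand_power([1, 1], 4))              # -> [1, 4, 6, 4, 1]
# (x - 1)(x^4 + x^3 + x^2 + x + 1) -> x^5 - 1
print(poly_mul([-1, 1], [1, 1, 1, 1, 1]))   # -> [-1, 0, 0, 0, 0, 1]
```

The reverse direction ("factor") admits no comparably simple sketch, which is why polynomial factorization is a substantial algorithm in real systems.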
We consider mainly the case of the multivariate rational fractions. This is not a real restriction, because, as soon as the irrational functions appearing in an expression are simplified, they are usually considered as new indeterminates. For example, ( sin ( x + y ) 2 + log ( z 2 − 5 ) ) 3 {\displaystyle (\sin(x+y)^{2}+\log(z^{2}-5))^{3}} is viewed as a polynomial in sin ( x + y ) {\displaystyle \sin(x+y)} and log ( z 2 − 5 ) {\displaystyle \log(z^{2}-5)} . === Equality === There are two notions of equality for mathematical expressions. Syntactic equality is the equality of their representation in a computer. This is easy to test in a program. Semantic equality is when two expressions represent the same mathematical object, as in ( x + y ) 2 = x 2 + 2 x y + y 2 . {\displaystyle (x+y)^{2}=x^{2}+2xy+y^{2}.} It is known from Richardson's theorem that there may not exist an algorithm that decides whether two expressions representing numbers are semantically equal if exponentials and logarithms are allowed in the expressions. Accordingly, (semantic) equality may be tested only on some classes of expressions such as the polynomials and rational fractions. To test the equality of two expressions, instead of designing specific algorithms, it is usual to put expressions in some canonical form or to put their difference in a normal form, and to test the syntactic equality of the result. In computer algebra, "canonical form" and "normal form" are not synonymous. A canonical form is such that two expressions in canonical form are semantically equal if and only if they are syntactically equal, while a normal form is such that an expression in normal form is semantically zero only if it is syntactically zero. In other words, zero has a unique representation as an expression in normal form. Normal forms are usually preferred in computer algebra for several reasons. Firstly, canonical forms may be more costly to compute than normal forms. 
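A minimal sketch of the canonical-form idea for polynomials in two variables: each polynomial is expanded into a dictionary mapping exponent tuples to coefficients (with zero terms dropped), and two expressions are semantically equal exactly when the dictionaries coincide. The helpers `pmul` and `padd` are hypothetical names for this illustration:

```python
from collections import defaultdict

def pmul(p, q):
    """Multiply polynomials stored as {(exp_x, exp_y): coeff} dictionaries."""
    r = defaultdict(int)
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            r[(a + d, b + e)] += c * f
    return {k: v for k, v in r.items() if v}   # drop zero terms -> canonical

def padd(p, q):
    r = defaultdict(int, p)
    for k, v in q.items():
        r[k] += v
    return {k: v for k, v in r.items() if v}

x = {(1, 0): 1}
y = {(0, 1): 1}
lhs = pmul(padd(x, y), padd(x, y))                     # (x + y)^2
rhs = padd(padd(pmul(x, x), {(1, 1): 2}), pmul(y, y))  # x^2 + 2xy + y^2
print(lhs == rhs)   # semantically equal, so canonical forms coincide: True
```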
For example, to put a polynomial in canonical form, one has to expand every product through the distributive law, while it is not necessary with a normal form (see below). Secondly, it may be the case, like for expressions involving radicals, that a canonical form, if it exists, depends on some arbitrary choices and that these choices may be different for two expressions that have been computed independently. This may make the use of a canonical form impractical. == History == === Human-driven computer algebra === Precursors of computer algebra systems, such as the ENIAC at the University of Pennsylvania, relied on human computers or programmers to reprogram the machine between calculations, manipulate its many physical modules (or panels), and feed its IBM card reader. Female mathematicians handled the majority of ENIAC programming and human-guided computation: Jean Jennings, Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty led these efforts. === Foundations and early applications === In 1960, John McCarthy explored an extension of primitive recursive functions for computing symbolic expressions through the Lisp programming language while at the Massachusetts Institute of Technology. Though his series on "Recursive functions of symbolic expressions and their computation by machine" remained incomplete, McCarthy's contributions to artificial intelligence programming and computer algebra via Lisp helped establish Project MAC at the Massachusetts Institute of Technology and the organization that later became the Stanford AI Laboratory (SAIL) at Stanford University; the competition between the two facilitated significant development in computer algebra throughout the late 20th century. Early efforts at symbolic computation, in the 1960s and 1970s, faced challenges surrounding the inefficiency of long-known algorithms when ported to computer algebra systems.
Predecessors to Project MAC, such as ALTRAN, sought to overcome algorithmic limitations through advancements in hardware and interpreters, while later efforts turned towards software optimization. === Historic problems === A large part of the work of researchers in the field consisted of revisiting classical algebra to increase its effectiveness while developing efficient algorithms for use in computer algebra. An example of this type of work is the computation of polynomial greatest common divisors, a task required to simplify fractions and an essential component of computer algebra. Classical algorithms for this computation, such as Euclid's algorithm, proved inefficient over infinite fields; algorithms from linear algebra faced similar struggles. Thus, researchers turned to discovering methods of reducing polynomials (such as those over a ring of integers or a unique factorization domain) to a variant efficiently computable via a Euclidean algorithm. == Algorithms used in computer algebra == == See also == Automated theorem prover Computer-assisted proof Computational algebraic geometry Computer algebra system Differential analyser Proof checker Model checker Symbolic-numeric computation Symbolic simulation Symbolic artificial intelligence == References == == Further reading == For a detailed definition of the subject: Buchberger, Bruno (1985). "Symbolic Computation (An Editorial)" (PDF). Journal of Symbolic Computation. 1 (1): 1–6. doi:10.1016/S0747-7171(85)80025-0. For textbooks devoted to the subject: Davenport, James H.; Siret, Yvon; Tournier, Èvelyne (1988). Computer Algebra: Systems and Algorithms for Algebraic Computation. Translated from the French by A. Davenport and J. H. Davenport. Academic Press. ISBN 978-0-12-204230-0. von zur Gathen, Joachim; Gerhard, Jürgen (2003). Modern computer algebra (2nd ed.). Cambridge University Press. ISBN 0-521-82646-2. Geddes, K. O.; Czapor, S. R.; Labahn, G. (1992). Algorithms for Computer Algebra. 
Bibcode:1992afca.book.....G. doi:10.1007/b102438. ISBN 978-0-7923-9259-0. Buchberger, Bruno; Collins, George Edwin; Loos, Rüdiger; Albrecht, Rudolf, eds. (1983). Computer Algebra: Symbolic and Algebraic Computation. Computing Supplementa. Vol. 4. doi:10.1007/978-3-7091-7551-4. ISBN 978-3-211-81776-6. S2CID 5221892.
|
Wikipedia:Computer algebra system#0
|
A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of the computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials. Computer algebra systems may be divided into two classes: specialized and general-purpose. The specialized ones are devoted to a specific part of mathematics, such as number theory, group theory, or teaching of elementary mathematics. General-purpose computer algebra systems aim to be useful to a user working in any scientific field that requires manipulation of mathematical expressions. To be useful, a general-purpose computer algebra system must include various features such as: a user interface allowing a user to enter and display mathematical formulas, typically from a keyboard, menu selections, mouse or stylus. a programming language and an interpreter (the result of a computation commonly has an unpredictable form and an unpredictable size; therefore user intervention is frequently needed), a simplifier, which is a rewrite system for simplifying mathematics formulas, a memory manager, including a garbage collector, needed by the huge size of the intermediate data, which may appear during a computation, an arbitrary-precision arithmetic, needed by the huge size of the integers that may occur, a large library of mathematical algorithms and special functions. The library must not only provide for the needs of the users, but also the needs of the simplifier. For example, the computation of polynomial greatest common divisors is systematically used for the simplification of expressions involving fractions. 
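As a sketch of how such a library routine works, here is a toy Euclidean algorithm for polynomial GCD over the rationals, using dense coefficient lists and `fractions.Fraction` for exact arithmetic. Real systems use far more sophisticated GCD algorithms; the helper names are illustrative only:

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (dense list, index = degree)."""
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def poly_mod(a, b):
    """Remainder of polynomial a modulo b, over the rationals."""
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while len(a) >= len(b) and a != [0]:
        factor = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):      # cancel the leading term of a
            a[i + shift] -= factor * c
        trim(a)
    return a

def poly_gcd(a, b):
    """Euclidean algorithm; the result is made monic to get a canonical gcd."""
    a, b = trim(list(a)), trim(list(b))
    while b != [0]:
        a, b = b, poly_mod(a, b)
    return [Fraction(c) / a[-1] for c in a]

# gcd(x^2 - 1, x^2 + 2x + 1) = x + 1, so (x^2 - 1)/(x^2 + 2x + 1)
# simplifies to (x - 1)/(x + 1).
print(poly_gcd([-1, 0, 1], [1, 2, 1]))
```

Dividing numerator and denominator by the computed GCD is exactly the simplification of fractions the text describes.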
This large amount of required computer capabilities explains the small number of general-purpose computer algebra systems. Significant systems include Axiom, GAP, Maxima, Magma, Maple, Mathematica, and SageMath. == History == In the 1950s, while computers were mainly used for numerical computations, there were some research projects into using them for symbolic manipulation. Computer algebra systems began to appear in the 1960s and evolved out of two quite different sources—the requirements of theoretical physicists and research into artificial intelligence. A prime example for the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martinus Veltman, who designed a program for symbolic mathematics, especially high-energy physics, called Schoonschip (Dutch for "clean ship") in 1963. Other early systems include FORMAC. Using Lisp as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial-intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 systems running TOPS-10 or TENEX in universities. Today it can still be used on SIMH emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory"), which is a system for numerical computation built 15 years later at the University of New Mexico. In 1987, Hewlett-Packard introduced the first hand-held calculator CAS with the HP-28 series. Other early handheld calculators with symbolic algebra capabilities included the Texas Instruments TI-89 series and TI-92 calculator, and the Casio CFX-9970G. The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a copyleft version of Macsyma is called Maxima. Reduce became free software in 2008. Commercial systems include Mathematica and Maple, which are commonly used by research mathematicians, scientists, and engineers. 
Freely available alternatives include SageMath (which can act as a front-end to several other free and nonfree CAS). Other significant systems include Axiom, GAP, Maxima and Magma. The movement to web-based applications in the early 2000s saw the release of WolframAlpha, an online search engine and CAS which includes the capabilities of Mathematica. More recently, computer algebra systems have been implemented using artificial neural networks, though as of 2020 they are not commercially available. == Symbolic manipulations == The symbolic manipulations supported typically include: simplification to a smaller expression or some standard form, including automatic simplification with assumptions and simplification with constraints substitution of symbols or numeric values for certain expressions change of form of expressions: expanding products and powers, partial and full factorization, rewriting as partial fractions, constraint satisfaction, rewriting trigonometric functions as exponentials, transforming logic expressions, etc. partial and total differentiation some indefinite and definite integration (see symbolic integration), including multidimensional integrals symbolic constrained and unconstrained global optimization solution of linear and some non-linear equations over various domains solution of some differential and difference equations taking some limits integral transforms series operations such as expansion, summation and products matrix operations including products, inverses, etc. statistical computation theorem proving and verification which is very useful in the area of experimental mathematics optimized code generation In the above, the word some indicates that the operation cannot always be performed. 
== Additional capabilities == Many also include: a programming language, allowing users to implement their own algorithms arbitrary-precision numeric operations exact integer arithmetic and number theory functionality Editing of mathematical expressions in two-dimensional form plotting graphs and parametric plots of functions in two and three dimensions, and animating them drawing charts and diagrams APIs for linking it on an external program such as a database, or using in a programming language to use the computer algebra system string manipulation such as matching and searching add-ons for use in applied mathematics such as physics, bioinformatics, computational chemistry and packages for physical computation solvers for differential equations Some include: graphic production and editing such as computer-generated imagery and signal processing as image processing sound synthesis Some computer algebra systems focus on specialized disciplines; these are typically developed in academia and are free. They can be inefficient for numeric operations as compared to numeric systems. == Types of expressions == The expressions manipulated by the CAS typically include polynomials in multiple variables; standard functions of expressions (sine, exponential, etc.); various special functions (Γ, ζ, erf, Bessel functions, etc.); arbitrary functions of expressions; optimization; derivatives, integrals, simplifications, sums, and products of expressions; truncated series with expressions as coefficients, matrices of expressions, and so on. Numeric domains supported typically include floating-point representation of real numbers, integers (of unbounded size), complex (floating-point representation), interval representation of reals, rational number (exact representation) and algebraic numbers. == Use in education == There have been many advocates for increasing the use of computer algebra systems in primary and secondary-school classrooms. 
The primary reason for such advocacy is that computer algebra systems represent real-world mathematics more closely than paper-and-pencil or hand-calculator-based mathematics does. This push for increasing computer usage in mathematics classrooms has been supported by some boards of education. It has even been mandated in the curriculum of some regions. Computer algebra systems have been extensively used in higher education. Many universities offer either specific courses on developing their use, or they implicitly expect students to use them for their course work. The companies that develop computer algebra systems have pushed to increase their prevalence among university and college programs. CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms, though they may be permitted on all of College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests and the AP Calculus, Chemistry, Physics, and Statistics exams. == Mathematics used in computer algebra systems == Knuth–Bendix completion algorithm Root-finding algorithms Symbolic integration via e.g. Risch algorithm or Risch–Norman algorithm Hypergeometric summation via e.g. Gosper's algorithm Limit computation via e.g. Gruntz's algorithm Polynomial factorization via e.g., over finite fields, Berlekamp's algorithm or Cantor–Zassenhaus algorithm. Greatest common divisor via e.g. Euclidean algorithm Gaussian elimination Gröbner basis via e.g. Buchberger's algorithm; generalization of Euclidean algorithm and Gaussian elimination Padé approximant Schwartz–Zippel lemma and testing polynomial identities Chinese remainder theorem Diophantine equations Landau's algorithm (nested radicals) Derivatives of elementary functions and special functions. (e.g. See derivatives of the incomplete gamma function.)
Cylindrical algebraic decomposition Quantifier elimination over real numbers via cylindrical algebraic decomposition == See also == List of computer algebra systems Scientific computation Statistical package Automated theorem proving Algebraic modeling language Constraint-logic programming Satisfiability modulo theories == References == == External links == Curriculum and Assessment in an Age of Computer Algebra Systems Archived 2009-12-01 at the Wayback Machine - From the Education Resources Information Center Clearinghouse for Science, Mathematics, and Environmental Education, Columbus, Ohio. Richard J. Fateman. "Essays in algebraic simplification." Technical report MIT-LCS-TR-095, 1972. (Of historical interest in showing the direction of research in computer algebra. At the MIT LCS website: [1])
|
Wikipedia:Computing the permanent#0
|
In linear algebra, the computation of the permanent of a matrix is a problem that is thought to be more difficult than the computation of the determinant of a matrix despite the apparent similarity of the definitions. The permanent is defined similarly to the determinant, as a sum of products of sets of matrix entries that lie in distinct rows and columns. However, where the determinant weights each of these products with a ±1 sign based on the parity of the set, the permanent weights them all with a +1 sign. While the determinant can be computed in polynomial time by Gaussian elimination, it is generally believed that the permanent cannot be computed in polynomial time. In computational complexity theory, a theorem of Valiant states that computing permanents is #P-hard, and even #P-complete for matrices in which all entries are 0 or 1 (Valiant 1979). This puts the computation of the permanent in a class of problems believed to be even more difficult to compute than NP. It is known that computing the permanent is impossible for logspace-uniform ACC0 circuits (Allender & Gore 1994). The development of both exact and approximate algorithms for computing the permanent of a matrix is an active area of research. == Definition and naive algorithm == The permanent of an n-by-n matrix A = (ai,j) is defined as perm ( A ) = ∑ σ ∈ S n ∏ i = 1 n a i , σ ( i ) . {\displaystyle \operatorname {perm} (A)=\sum _{\sigma \in S_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}.} The sum here extends over all elements σ of the symmetric group Sn, i.e. over all permutations of the numbers 1, 2, ..., n. This formula differs from the corresponding formula for the determinant only in that, in the determinant, each product is multiplied by the sign of the permutation σ while in this formula each product is unsigned. The formula may be directly translated into an algorithm that naively expands the formula, summing over all permutations and within the sum multiplying out each matrix entry. This requires n! ·
n arithmetic operations. == Ryser formula == The best known general exact algorithm is due to H. J. Ryser (1963). Ryser's method is based on an inclusion–exclusion formula that can be given as follows: Let A k {\displaystyle A_{k}} be obtained from A by deleting k columns, let P ( A k ) {\displaystyle P(A_{k})} be the product of the row-sums of A k {\displaystyle A_{k}} , and let Σ k {\displaystyle \Sigma _{k}} be the sum of the values of P ( A k ) {\displaystyle P(A_{k})} over all possible A k {\displaystyle A_{k}} . Then perm ( A ) = ∑ k = 0 n − 1 ( − 1 ) k Σ k . {\displaystyle \operatorname {perm} (A)=\sum _{k=0}^{n-1}(-1)^{k}\Sigma _{k}.} It may be rewritten in terms of the matrix entries as follows perm ( A ) = ( − 1 ) n ∑ S ⊆ { 1 , … , n } ( − 1 ) | S | ∏ i = 1 n ∑ j ∈ S a i j . {\displaystyle \operatorname {perm} (A)=(-1)^{n}\sum _{S\subseteq \{1,\dots ,n\}}(-1)^{|S|}\prod _{i=1}^{n}\sum _{j\in S}a_{ij}.} Ryser's formula can be evaluated using O ( 2 n − 1 n 2 ) {\displaystyle O(2^{n-1}n^{2})} arithmetic operations, or O ( 2 n − 1 n ) {\displaystyle O(2^{n-1}n)} by processing the sets S {\displaystyle S} in Gray code order. == Balasubramanian–Bax–Franklin–Glynn formula == Another formula that appears to be as fast as Ryser's (or perhaps even twice as fast) is to be found in the two Ph.D. theses; see (Balasubramanian 1980), (Bax 1998); also (Bax & Franklin 1996). The methods to find the formula are quite different, being related to the combinatorics of the Muir algebra, and to finite difference theory respectively. Another way, connected with invariant theory is via the polarization identity for a symmetric tensor (Glynn 2010). The formula generalizes to infinitely many others, as found by all these authors, although it is not clear if they are any faster than the basic one. See (Glynn 2013). 
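The defining formula and Ryser's formula above can be cross-checked in a few lines (a sketch; the subsets S are encoded as bitmasks, and the empty subset is skipped because its product of empty row sums is zero):

```python
from itertools import permutations
from math import prod

def perm_naive(a):
    """Defining formula: sum over all n! permutations."""
    n = len(a)
    return sum(prod(a[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

def perm_ryser(a):
    """Ryser's inclusion-exclusion formula; subsets S encoded as bitmasks."""
    n = len(a)
    total = 0
    for mask in range(1, 1 << n):      # empty S contributes 0, so skip it
        rowprod = prod(sum(a[i][j] for j in range(n) if mask >> j & 1)
                       for i in range(n))
        total += (-1) ** bin(mask).count("1") * rowprod
    return (-1) ** n * total

A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print(perm_naive(A), perm_ryser(A))   # -> 3 3
```

Here A is the biadjacency matrix of a small bipartite graph, so both routines are counting its 3 perfect matchings.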
The simplest known formula of this type (when the characteristic of the field is not two) is perm ( A ) = 1 2 n − 1 [ ∑ δ ( ∏ k = 1 n δ k ) ∏ j = 1 n ∑ i = 1 n δ i a i j ] , {\displaystyle \operatorname {perm} (A)={\frac {1}{2^{n-1}}}\left[\sum _{\delta }\left(\prod _{k=1}^{n}\delta _{k}\right)\prod _{j=1}^{n}\sum _{i=1}^{n}\delta _{i}a_{ij}\right],} where the outer sum is over all 2 n − 1 {\displaystyle 2^{n-1}} vectors δ = ( δ 1 = 1 , δ 2 , … , δ n ) ∈ { ± 1 } n {\displaystyle \delta =(\delta _{1}=1,\delta _{2},\dots ,\delta _{n})\in \{\pm 1\}^{n}} . == Special cases == === Planar and K3,3-free === The number of perfect matchings in a bipartite graph is counted by the permanent of the graph's biadjacency matrix, and the permanent of any 0-1 matrix can be interpreted in this way as the number of perfect matchings in a graph. For planar graphs (regardless of bipartiteness), the FKT algorithm computes the number of perfect matchings in polynomial time by changing the signs of a carefully chosen subset of the entries in the Tutte matrix of the graph, so that the Pfaffian of the resulting skew-symmetric matrix (the square root of its determinant) is the number of perfect matchings. This technique can be generalized to graphs that contain no subgraph homeomorphic to the complete bipartite graph K3,3. George Pólya had asked the question of when it is possible to change the signs of some of the entries of a 0-1 matrix A so that the determinant of the new matrix is the permanent of A. Not all 0-1 matrices are "convertible" in this manner; in fact it is known (Marcus & Minc 1961) that there is no linear map T {\displaystyle T} such that per T ( A ) = det A {\displaystyle \operatorname {per} T(A)=\det A} for all n × n {\displaystyle n\times n} matrices A {\displaystyle A} .
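The Balasubramanian–Bax–Franklin–Glynn formula above can likewise be sketched and checked against the defining expansion; over the integers the division by 2^(n−1) is made exact with `fractions.Fraction`:

```python
from itertools import product
from math import prod
from fractions import Fraction

def perm_glynn(a):
    """Glynn's formula: signed average over 2^(n-1) vectors with delta_1 = +1."""
    n = len(a)
    total = 0
    for rest in product((1, -1), repeat=n - 1):
        d = (1,) + rest
        total += prod(d) * prod(sum(d[i] * a[i][j] for i in range(n))
                                for j in range(n))
    return Fraction(total, 2 ** (n - 1))

A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print(perm_glynn(A))   # -> 3, agreeing with the defining formula
```

Like Ryser's method, this uses exponentially many terms, but only 2^(n−1) of them rather than n!.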
The characterization of "convertible" matrices was given by Little (1975) who showed that such matrices are precisely those that are the biadjacency matrix of bipartite graphs that have a Pfaffian orientation: an orientation of the edges such that for every even cycle C {\displaystyle C} for which G ∖ C {\displaystyle G\setminus C} has a perfect matching, there are an odd number of edges directed along C (and thus an odd number with the opposite orientation). It was also shown that these graphs are exactly those that do not contain a subgraph homeomorphic to K 3 , 3 {\displaystyle K_{3,3}} , as above. === Computation modulo a number === Modulo 2, the permanent is the same as the determinant, as ( − 1 ) ≡ 1 ( mod 2 ) . {\displaystyle (-1)\equiv 1{\pmod {2}}.} It can also be computed modulo 2 k {\displaystyle 2^{k}} in time O ( n 4 k − 3 ) {\displaystyle O(n^{4k-3})} for k ≥ 2 {\displaystyle k\geq 2} . However, it is UP-hard to compute the permanent modulo any number that is not a power of 2. Valiant (1979) There are various formulae given by Glynn (2010) for the computation modulo a prime p. First, there is one using symbolic calculations with partial derivatives. Second, for p = 3 there is the following formula for an n×n-matrix A {\displaystyle A} , involving the matrix's principal minors (Kogan (1996)): per ( A ) = ( − 1 ) n ∑ J ⊆ { 1 , … , n } det ( A J ) det ( A J ¯ ) , {\displaystyle \operatorname {per} (A)=(-1)^{n}\sum _{J\subseteq \{1,\dots ,n\}}\det(A_{J})\det(A_{\bar {J}}),} where A J {\displaystyle A_{J}} is the submatrix of A {\displaystyle A} induced by the rows and columns of A {\displaystyle A} indexed by J {\displaystyle J} , and J ¯ {\displaystyle {\bar {J}}} is the complement of J {\displaystyle J} in { 1 , … , n } {\displaystyle \{1,\dots ,n\}} , while the determinant of the empty submatrix is defined to be 1. 
The expansion above can be generalized in an arbitrary characteristic p as the following pair of dual identities: per ( A ) = ( − 1 ) n ∑ J 1 , … , J p − 1 det ( A J 1 ) ⋯ det ( A J p − 1 ) det ( A ) = ( − 1 ) n ∑ J 1 , … , J p − 1 per ( A J 1 ) ⋯ per ( A J p − 1 ) {\displaystyle {\begin{aligned}\operatorname {per} (A)&=(-1)^{n}\sum _{{J_{1}},\ldots ,{J_{p-1}}}\det(A_{J_{1}})\dotsm \det(A_{J_{p-1}})\\\det(A)&=(-1)^{n}\sum _{{J_{1}},\ldots ,{J_{p-1}}}\operatorname {per} (A_{J_{1}})\dotsm \operatorname {per} (A_{J_{p-1}})\end{aligned}}} where in both formulas the sum is taken over all the (p − 1)-tuples J 1 , … , J p − 1 {\displaystyle {J_{1}},\ldots ,{J_{p-1}}} that are partitions of the set { 1 , … , n } {\displaystyle \{1,\dots ,n\}} into p − 1 subsets, some of them possibly empty. The former formula possesses an analog for the hafnian of a symmetric A {\displaystyle A} and an odd p: haf 2 ( A ) = ( − 1 ) n ∑ J 1 , … , J p − 1 det ( A J 1 ) ⋯ det ( A J p − 1 ) ( − 1 ) | J 1 | + ⋯ + | J ( p − 1 ) / 2 | {\displaystyle \operatorname {haf} ^{2}(A)=(-1)^{n}\sum _{{J_{1}},\ldots ,{J_{p-1}}}\det(A_{J_{1}})\dotsm \det(A_{J_{p-1}})(-1)^{|J_{1}|+\dots +|J_{(p-1)/2}|}} with the sum taken over the same set of indexes. Moreover, in characteristic zero a similar convolution sum expression involving both the permanent and the determinant yields the Hamiltonian cycle polynomial (defined as ham ( A ) = ∑ σ ∈ H n ∏ i = 1 n a i , σ ( i ) {\textstyle \operatorname {ham} (A)=\sum _{\sigma \in H_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}} where H n {\displaystyle H_{n}} is the set of n-permutations having only one cycle): ham ( A ) = ∑ J ⊆ { 2 , … , n } det ( A J ) per ( A J ¯ ) ( − 1 ) | J | . 
{\displaystyle \operatorname {ham} (A)=\sum _{J\subseteq \{2,\dots ,n\}}\det(A_{J})\operatorname {per} (A_{\bar {J}})(-1)^{|J|}.} In characteristic 2 the latter equality turns into ham ( A ) = ∑ J ⊆ { 2 , … , n } det ( A J ) det ( A J ¯ ) {\displaystyle \operatorname {ham} (A)=\sum _{J\subseteq \{2,\dots ,n\}}\det(A_{J})\operatorname {det} (A_{\bar {J}})} which therefore makes it possible to calculate in polynomial time the Hamiltonian cycle polynomial of any unitary U {\displaystyle U} (i.e. such that U T U = I {\displaystyle U^{\textsf {T}}U=I} where I {\displaystyle I} is the identity n×n-matrix), because each minor of such a matrix coincides with its algebraic complement: ham ( U ) = det 2 ( U + I / 1 ) {\displaystyle \operatorname {ham} (U)=\operatorname {det} ^{2}(U+I_{/1})} where I / 1 {\displaystyle I_{/1}} is the identity n×n-matrix with the entry of indexes 1,1 replaced by 0. Moreover, it may, in turn, be further generalized for a unitary n×n-matrix U {\displaystyle U} as h a m K ( U ) = det 2 ( U + I / K ) {\displaystyle \operatorname {ham_{K}} (U)=\operatorname {det} ^{2}(U+I_{/K})} where K {\displaystyle K} is a subset of {1, ..., n}, I / K {\displaystyle I_{/K}} is the identity n×n-matrix with the entries of indexes k,k replaced by 0 for all k belonging to K {\displaystyle K} , and we define h a m K ( A ) = ∑ σ ∈ H n ( K ) ∏ i = 1 n a i , σ ( i ) {\textstyle \operatorname {ham_{K}} (A)=\sum _{\sigma \in H_{n}(K)}\prod _{i=1}^{n}a_{i,\sigma (i)}} where H n ( K ) {\displaystyle H_{n}(K)} is the set of n-permutations each of whose cycles contains at least one element of K {\displaystyle K} .
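The characteristic-zero convolution identity expressing ham(A) through determinants and permanents of complementary principal submatrices can be checked by brute force for small n; in this sketch (function names are mine) the single-cycle permutations are enumerated directly.

```python
import itertools
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def det(M):
    n = len(M)
    return sum((-1) ** sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def sub(M, idx):
    return [[M[i][j] for j in idx] for i in idx]

def ham(M):
    """Sum over the n-permutations consisting of a single n-cycle."""
    n = len(M)
    total = 0
    for s in itertools.permutations(range(n)):
        seen, j = {0}, s[0]
        while j not in seen:          # walk the cycle containing index 0
            seen.add(j)
            j = s[j]
        if len(seen) == n:            # that cycle covers everything
            total += prod(M[i][s[i]] for i in range(n))
    return total

def ham_convolution(M):
    """ham(A) = Σ_{J ⊆ {2,…,n}} det(A_J) per(A_Jbar) (−1)^|J|, 0-indexed as J ⊆ {1,…,n−1}."""
    n = len(M)
    total = 0
    for k in range(n):
        for J in itertools.combinations(range(1, n), k):
            Jbar = tuple(i for i in range(n) if i not in J)
            total += (-1) ** k * det(sub(M, J)) * per(sub(M, Jbar))
    return total

A = [[1, 2, 0], [1, 1, 3], [2, 1, 1]]
assert ham(A) == ham_convolution(A) == 12   # the two 3-cycles give 2·3·2 + 0·1·1
```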
This formula also implies the following identities over fields of characteristic 3: for any invertible A {\displaystyle A} per ( A − 1 ) det 2 ( A ) = per ( A ) ; {\displaystyle \operatorname {per} (A^{-1})\operatorname {det} ^{2}(A)=\operatorname {per} (A);} for any unitary U {\displaystyle U} , that is, a square matrix U {\displaystyle U} such that U T U = I {\displaystyle U^{\textsf {T}}U=I} where I {\displaystyle I} is the identity matrix of the corresponding size, per 2 ( U ) = det ( U + V ) det ( − U ) {\displaystyle \operatorname {per} ^{2}(U)=\det(U+V)\det(-U)} where V {\displaystyle V} is the matrix whose entries are the cubes of the corresponding entries of U {\displaystyle U} . It was also shown (Kogan (1996)) that, if we define a square matrix A {\displaystyle A} as k-semi-unitary when rank ( A T A − I ) = k {\displaystyle \operatorname {rank} (A^{\textsf {T}}A-I)=k} , the permanent of a 1-semi-unitary matrix is computable in polynomial time over fields of characteristic 3, while for k > 1 the problem becomes #3-P-complete. (A parallel theory concerns the Hamiltonian cycle polynomial in characteristic 2: while computing it on the unitary matrices is polynomial-time feasible, the problem is #2-P-complete for the k-semi-unitary ones for any k > 0). 
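The first of these characteristic-3 identities can be exercised numerically by working over GF(3) with an adjugate-based inverse; a minimal sketch, with example matrices chosen to be invertible mod 3 (names are mine).

```python
import itertools
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def det(M):
    n = len(M)
    return sum((-1) ** sum(s[i] > s[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def minor(M, i, j):
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def inv_mod3(M):
    """Inverse over GF(3) via the adjugate; requires det(M) ≢ 0 (mod 3)."""
    n = len(M)
    dinv = pow(det(M) % 3, -1, 3)
    return [[dinv * (-1) ** (i + j) * det(minor(M, j, i)) % 3
             for j in range(n)] for i in range(n)]

A = [[1, 1, 1], [1, 2, 1], [1, 1, 2]]          # det(A) = 1, so invertible mod 3
assert per(inv_mod3(A)) * det(A) ** 2 % 3 == per(A) % 3   # both sides are 2
```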
The latter result was essentially extended in 2017 (Knezevic & Cohen (2017)), where it was proven that in characteristic 3 there is a simple formula relating the permanents of a square matrix and its partial inverse (for A 11 {\displaystyle A_{11}} and A 22 {\displaystyle A_{22}} being square, A 11 {\displaystyle A_{11}} being invertible): per ( A 11 A 12 A 21 A 22 ) = det 2 ( A 11 ) per ( A 11 − 1 A 11 − 1 A 12 A 21 A 11 − 1 A 22 − A 21 A 11 − 1 A 12 ) {\displaystyle \operatorname {per} {\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}=\operatorname {det} ^{2}(A_{11})\operatorname {per} {\begin{pmatrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}-A_{21}A_{11}^{-1}A_{12}\end{pmatrix}}} and it allows a polynomial-time reduction of the computation of the permanent of an n×n-matrix with a subset of k or k − 1 rows expressible as linear combinations of another (disjoint) subset of k rows to the computation of the permanent of an (n − k)×(n − k)- or (n − k + 1)×(n − k + 1)-matrix respectively, thus introducing a compression operator (analogous to the Gaussian elimination used for calculating the determinant) that "preserves" the permanent in characteristic 3. (Analogously, it is worth noting that the Hamiltonian cycle polynomial in characteristic 2 possesses invariant matrix compressions as well, taking into account the fact that ham(A) = 0 for any n×n-matrix A having three equal rows or, if n > 2, a pair of indexes i,j such that its i-th and j-th rows are identical and its i-th and j-th columns are identical too.) The closure of that operator, defined as the limit of its sequential application together with the transpose transformation (utilized each time the operator leaves the matrix intact), is also an operator mapping, when applied to classes of matrices, one class to another.
While the compression operator maps the class of 1-semi-unitary matrices to itself and the classes of unitary and 2-semi-unitary ones, the compression-closure of the 1-semi-unitary class (as well as the class of matrices obtained from unitary ones by replacing one row with an arbitrary row vector — the permanent of such a matrix is, via the Laplace expansion, the sum of the permanents of 1-semi-unitary matrices and, accordingly, polynomial-time computable) is still unknown and closely related to the general problem of the permanent's computational complexity in characteristic 3 and the chief question of P versus NP: as was shown in (Knezevic & Cohen (2017)), if such a compression-closure is the set of all square matrices over a field of characteristic 3 or, at least, contains a matrix class on which the permanent's computation is #3-P-complete (like the class of 2-semi-unitary matrices) then the permanent is computable in polynomial time in this characteristic. In addition, the problem of finding and classifying any possible analogs of the permanent-preserving compressions existing in characteristic 3 for other prime characteristics was formulated (Knezevic & Cohen (2017)), while giving the following identity for an n×n matrix A {\displaystyle A} and two n-vectors (having all their entries from the set {0, ..., p − 1}) α {\displaystyle \alpha } and β {\displaystyle \beta } such that ∑ i = 1 n α i = ∑ j = 1 n β j {\textstyle {\sum _{i=1}^{n}\alpha _{i}=\sum _{j=1}^{n}\beta _{j}}} , valid in an arbitrary prime characteristic p: per ( A ( α , β ) ) = det p − 1 ( A ) per ( A − 1 ) ( ( p − 1 ) 1 → n − β , ( p − 1 ) 1 → n − α ) ( ∏ i = 1 n α i ! ) ( ∏ j = 1 n β j !
) ( − 1 ) n + ∑ i = 1 n α i {\displaystyle \operatorname {per} (A^{(\alpha ,\beta )})=\det ^{p-1}(A)\operatorname {per} (A^{-1})^{((p-1){\vec {1}}_{n}-\beta ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{i}!\right)\left(\prod _{j=1}^{n}\beta _{j}!\right)(-1)^{n+\sum _{i=1}^{n}\alpha _{i}}} where for an n×m-matrix M {\displaystyle M} , an n-vector x {\displaystyle x} and an m-vector y {\displaystyle y} , both vectors having all their entries from the set {0, ..., p − 1}, M ( x , y ) {\displaystyle M^{(x,y)}} denotes the matrix received from M {\displaystyle M} via repeating x i {\displaystyle x_{i}} times its i-th row for i = 1, ..., n and y j {\displaystyle y_{j}} times its j-th column for j = 1, ..., m (if some row's or column's multiplicity equals zero it would mean that the row or column was removed, and thus this notion is a generalization of the notion of submatrix), and 1 → n {\displaystyle {\vec {1}}_{n}} denotes the n-vector all whose entries equal unity. This identity is an exact analog of the classical formula expressing a matrix's minor through a minor of its inverse and hence demonstrates (once more) a kind of duality between the determinant and the permanent as relative immanants. (Actually its own analogue for the hafnian of a symmetric A {\displaystyle A} and an odd prime p is haf 2 ( A ( α , α ) ) = det p − 1 ( A ) haf 2 ( A − 1 ) ( ( p − 1 ) 1 → n − α , ( p − 1 ) 1 → n − α ) ( ∏ i = 1 n α i ! ) 2 ( − 1 ) n ( p − 1 ) / 2 + n + ∑ i = 1 n α i {\textstyle \operatorname {haf} ^{2}(A^{(\alpha ,\alpha )})=\det ^{p-1}(A)\operatorname {haf} ^{2}(A^{-1})^{((p-1){\vec {1}}_{n}-\alpha ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{i}!\right)^{2}(-1)^{n(p-1)/2+n+\sum _{i=1}^{n}\alpha _{i}}} ). 
And, as an even wider generalization for the partial inverse case in a prime characteristic p, for A 11 {\displaystyle A_{11}} , A 22 {\displaystyle A_{22}} being square, A 11 {\displaystyle A_{11}} being invertible and of size n 1 {\displaystyle {n_{1}}} x n 1 {\displaystyle {n_{1}}} , and ∑ i = 1 n α i = ∑ j = 1 n β j {\textstyle {\sum _{i=1}^{n}\alpha _{i}=\sum _{j=1}^{n}\beta _{j}}} , there holds also the identity per ( A 11 A 12 A 21 A 22 ) ( α , β ) = det p − 1 ( A 11 ) per ( A 11 − 1 A 11 − 1 A 12 A 21 A 11 − 1 A 22 − A 21 A 11 − 1 A 12 ) ( ( p − 1 ) 1 → n − β , ( p − 1 ) 1 → n − α ) ( ∏ i = 1 n α 1 , i ! ) ( ∏ j = 1 n β 1 , j ! ) ( − 1 ) n 1 + ∑ i = 1 n α 1 , i {\displaystyle \operatorname {per} {\begin{pmatrix}A_{11}&A_{12}\\A_{21}&A_{22}\end{pmatrix}}^{(\alpha ,\beta )}={\det }^{p-1}(A_{11})\operatorname {per} {\begin{pmatrix}A_{11}^{-1}&A_{11}^{-1}A_{12}\\A_{21}A_{11}^{-1}&A_{22}-A_{21}A_{11}^{-1}A_{12}\end{pmatrix}}^{((p-1){\vec {1}}_{n}-\beta ,(p-1){\vec {1}}_{n}-\alpha )}\left(\prod _{i=1}^{n}\alpha _{1,i}!\right)\left(\prod _{j=1}^{n}\beta _{1,j}!\right)(-1)^{n_{1}+\sum _{i=1}^{n}\alpha _{1,i}}} where the common row/column multiplicity vectors α {\displaystyle \alpha } and β {\displaystyle \beta } for the matrix A {\displaystyle A} generate the corresponding row/column multiplicity vectors α s {\displaystyle \alpha _{s}} and β t {\displaystyle \beta _{t}} , s,t = 1,2, for its blocks (the same concerns A {\displaystyle A} 's partial inverse in the equality's right side). == Approximate computation == When the entries of A are nonnegative, the permanent can be computed approximately in probabilistic polynomial time, up to an error of εM, where M is the value of the permanent and ε > 0 is arbitrary. In other words, there exists a fully polynomial-time randomized approximation scheme (FPRAS) (Jerrum, Sinclair & Vigoda (2001)). 
The most difficult step in the computation is the construction of an algorithm to sample almost uniformly from the set of all perfect matchings in a given bipartite graph: in other words, a fully polynomial almost uniform sampler (FPAUS). This can be done using a Markov chain Monte Carlo algorithm that uses a Metropolis rule to define and run a Markov chain whose distribution is close to uniform, and whose mixing time is polynomial. It is possible to approximately count the number of perfect matchings in a graph via the self-reducibility of the permanent, by using the FPAUS in combination with a well-known reduction from sampling to counting due to Jerrum, Valiant & Vazirani (1986). Let M ( G ) {\displaystyle M(G)} denote the number of perfect matchings in G {\displaystyle G} . Roughly, for any particular edge e {\displaystyle e} in G {\displaystyle G} , by sampling many matchings in G {\displaystyle G} and counting how many of them are matchings in G ∖ e {\displaystyle G\setminus e} , one can obtain an estimate of the ratio ρ = M ( G ) M ( G ∖ e ) {\textstyle \rho ={\frac {M(G)}{M(G\setminus e)}}} . The number M ( G ) {\displaystyle M(G)} is then ρ M ( G ∖ e ) {\displaystyle \rho M(G\setminus e)} , where M ( G ∖ e ) {\displaystyle M(G\setminus e)} can be approximated by applying the same method recursively. Another class of matrices for which the permanent is of particular interest is the positive-semidefinite matrices. Using a technique of Stockmeyer counting, their permanents can be computed within the class BPP NP {\displaystyle {\textsf {BPP}}^{\textsf {NP}}} , but this is considered an infeasible class in general. It is NP-hard to approximate permanents of PSD matrices within a subexponential factor, and it is conjectured to be BPP NP {\displaystyle {\textsf {BPP}}^{\textsf {NP}}} -hard. If further constraints on the spectrum are imposed, more efficient algorithms are known.
One randomized algorithm is based on the model of boson sampling and uses tools from quantum optics to represent the permanent of a positive-semidefinite matrix as the expected value of a specific random variable. The latter is then approximated by its sample mean. This algorithm, for a certain set of positive-semidefinite matrices, approximates their permanent in polynomial time up to an additive error, which is more reliable than that of the standard classical polynomial-time algorithm by Gurvits. == Notes == == References == == Further reading == Barvinok, A. (2017), "Approximating permanents and hafnians", Discrete Analysis, arXiv:1601.07518, doi:10.19086/da.1244, S2CID 397350.
|
Wikipedia:Conchoid (mathematics)#0
|
In geometry, a conchoid is a curve derived from a fixed point O, another curve, and a length d. It was invented by the ancient Greek mathematician Nicomedes. == Description == For every line through O that intersects the given curve at A the two points on the line which are d from A are on the conchoid. The conchoid is, therefore, the cissoid of the given curve and a circle of radius d and center O. They are called conchoids because the shape of their outer branches resembles conch shells. The simplest expression uses polar coordinates with O at the origin. If r = α ( θ ) {\displaystyle r=\alpha (\theta )} expresses the given curve, then r = α ( θ ) ± d {\displaystyle r=\alpha (\theta )\pm d} expresses the conchoid. If the curve is a line, then the conchoid is the conchoid of Nicomedes. For instance, if the curve is the line x = a, then the line's polar form is r = a sec θ and therefore the conchoid can be expressed parametrically as x = a ± d cos θ , y = a tan θ ± d sin θ . {\displaystyle x=a\pm d\cos \theta ,\,y=a\tan \theta \pm d\sin \theta .} A limaçon is a conchoid with a circle as the given curve. The so-called conchoid of de Sluze and conchoid of Dürer are not actually conchoids. The former is a strict cissoid and the latter a construction more general yet. == See also == Cissoid Strophoid == References == J. Dennis Lawrence (1972). A catalog of special plane curves. Dover Publications. pp. 36, 49–51, 113, 137. ISBN 0-486-60288-5. == External links == conchoid with conic sections - interactive illustration Weisstein, Eric W. "Conchoid of Nicomedes". MathWorld. conchoid at mathcurves.com
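The polar derivation above translates directly into code; the sketch below (the function name is mine) generates a point of the conchoid of Nicomedes from r = a sec θ ± d and confirms that it agrees with the parametric form.

```python
import math

def conchoid_of_nicomedes(a, d, theta, sign=+1):
    """Point on the conchoid of the line x = a, from the polar form r = a·sec θ ± d."""
    r = a / math.cos(theta) + sign * d
    return r * math.cos(theta), r * math.sin(theta)

x, y = conchoid_of_nicomedes(2.0, 0.5, 0.7)
# Matches the parametric form x = a + d·cos θ, y = a·tan θ + d·sin θ:
assert math.isclose(x, 2.0 + 0.5 * math.cos(0.7))
assert math.isclose(y, 2.0 * math.tan(0.7) + 0.5 * math.sin(0.7))
```

The two choices of sign trace the two branches of the curve, on either side of the fixed line.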
|
Wikipedia:Conditional event algebra#0
|
In probability theory, a conditional event algebra (CEA) is an alternative to a standard, Boolean algebra of possible events (a set of possible events related to one another by the familiar operations and, or, and not) that contains not just ordinary events but also conditional events that have the form "if A, then B". The usual motivation for a CEA is to ground the definition of a probability function for events, P, that satisfies the equation P(if A then B) = P(A and B) / P(A). == Motivation == In standard probability theory the occurrence of an event corresponds to a set of possible outcomes, each of which is an outcome that corresponds to the occurrence of the event. P(A), the probability of event A, is the sum of the probabilities of all outcomes that correspond to event A; P(B) is the sum of the probabilities of all outcomes that correspond to event B; and P(A and B) is the sum of the probabilities of all outcomes that correspond to both A and B. In other words, and, customarily represented by the logical symbol ∧, is interpreted as set intersection: P(A ∧ B) = P(A ∩ B). In the same vein, or, ∨, becomes set union, ∪, and not, ¬, becomes set complementation, ′. Any combination of events using the operations and, or, and not is also an event, and assigning probabilities to all outcomes generates a probability for every event. In technical terms, this means that the set of events and the three operations together constitute a Boolean algebra of sets, with an associated probability function. In standard practice, P(if A, then B) is not interpreted as P(A′ ∪ B), following the rule of material implication, but rather as the conditional probability of B given A, P(B | A) = P(A ∩ B) / P(A). This raises a question: what about a probability like P(if A, then B, and if C, then D)? For this, there is no standard answer. 
What would be needed, for consistency, is a treatment of if-then as a binary operation, →, such that for conditional events A → B and C → D, P(A → B) = P(B | A), P(C → D) = P(D | C), and P((A → B) ∧ (C → D)) are well-defined and reasonable. Philosophers including Robert Stalnaker argued that ideally, a conditional event algebra, or CEA, would support a probability function that meets three conditions: 1. The probability function satisfies the usual axioms. 2. For any two ordinary events A and B, if P(A) > 0, then P(A → B) = P(B | A) = P(A ∧ B) / P(A). 3. For ordinary event A and acceptable probability function P, if P(A) > 0, then PA = P ( ⋅ | A), the function produced by conditioning on A, is also an acceptable probability function. However, David Lewis proved in 1976 a fact now known as Lewis's triviality result: these conditions can only be met with near-standard approaches in trivial examples. In particular, those conditions can only be met when there are just two possible outcomes—as with, say, a single coin flip. With three or more possible outcomes, constructing a probability function requires choosing which of the above three conditions to violate. Interpreting A → B as A′ ∪ B produces an ordinary Boolean algebra that violates 2. With CEAs, the choice is between 1 and 3. == Types of conditional event algebra == === Tri-event CEAs === Tri-event CEAs take their inspiration from three-valued logic, where the identification of logical conjunction, disjunction, and negation with simple set operations no longer applies. For ordinary events A and B, the tri-event A → B occurs when A and B both occur, fails to occur when A occurs but B does not, and is undecided when A fails to occur. (The term “tri-event” comes from de Finetti (1935): triévénement.) Ordinary events, which are never undecided, are incorporated into the algebra as tri-events conditional on Ω, the vacuous event represented by the entire sample space of outcomes; thus, A becomes Ω → A. 
Since there are many three-valued logics, there are many possible tri-event algebras. Two types, however, have attracted more interest than the others. In one type, A ∧ B and A ∨ B are each undecided only when both A and B are undecided; when just one of them is, the conjunction or disjunction follows the other conjunct or disjunct. When negation is handled in the obvious way, with ¬A undecided just in case A is, this type of tri-event algebra corresponds to a three-valued logic proposed by Sobociński (1952) and favored by Belnap (1973), and also implied by Adams’s (1975) “quasi-conjunction” for conditionals. Schay (1968) was the first to propose an algebraic treatment, which Calabrese (1987) developed more fully. The other type of tri-event CEA treats negation the same way as the first, but it treats conjunction and disjunction as min and max functions, respectively, with occurrence as the high value, failure as the low value, and undecidedness in between. This type of tri-event algebra corresponds to a three-valued logic proposed by Łukasiewicz (1920) and also favored by de Finetti (1935). Goodman, Nguyen and Walker (1991) eventually provided the algebraic formulation. The probability of any tri-event is defined as the probability that it occurs divided by the probability that it either occurs or fails to occur. With this convention, conditions 2 and 3 above are satisfied by the two leading tri-event CEA types. Condition 1, however, fails. In a Sobociński-type algebra, ∧ does not distribute over ∨, so P(A ∧ (B ∨ C)) and P((A ∧ B) ∨ (A ∧ C)) need not be equal. In a Łukasiewicz-type algebra, ∧ distributes over ∨ but not over exclusive or, ⊕ {\displaystyle \oplus } (A ⊕ {\displaystyle \oplus } B = (A ∧ ¬B) ∨ (¬A ∧ B)). Also, tri-event CEAs are not complemented lattices, only pseudocomplemented, because in general, (A → B) ∧ ¬(A → B) cannot occur but can be undecided and therefore is not identical to Ω → ∅, the bottom element of the lattice.
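The difference between the two conjunctions is easy to exhibit with a small truth-table sketch (the encoding of true/false/undecided below is my choice, not standard notation): Sobociński's conjunction defers to the decided operand, Łukasiewicz's takes the minimum, and in both logics the "contradiction" x ∧ ¬x comes out undecided rather than false when x is undecided.

```python
T, F, U = 1, 0, None        # true, false, undecided

def neg(x):
    """¬ is undecided exactly when its operand is."""
    return U if x is U else 1 - x

def sob_and(x, y):
    """Sobociński-style conjunction: a lone undecided operand defers to the other."""
    if x is U:
        return y
    if y is U:
        return x
    return x & y            # ordinary Boolean 'and' on {0, 1}

def luk_and(x, y):
    """Łukasiewicz-style conjunction: minimum under the order F < U < T."""
    rank = {F: 0, U: 1, T: 2}
    return x if rank[x] <= rank[y] else y

# The two logics agree when one conjunct is decidedly false...
assert sob_and(U, F) == F and luk_and(U, F) == F
# ...but disagree when one conjunct is true and the other undecided:
assert sob_and(U, T) == T          # defers to the decided operand
assert luk_and(U, T) is U          # min(U, T) = U
# In both, x ∧ ¬x is undecided (not false) when x is:
assert sob_and(U, neg(U)) is U and luk_and(U, neg(U)) is U
```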
This means that P(C) and P(C ⊕ {\displaystyle \oplus } ((A → B) ∧ ¬(A → B))) can differ, when classically they would not. === Product-space CEAs === If P(if A, then B) is thought of as the probability of A-and-B occurring before A-and-not-B in a series of trials, this can be calculated as an infinite sum of simple probabilities: the probability of A-and-B on the first trial, plus the probability of not-A (and either B or not-B) on the first trial and A-and-B on the second, plus the probability of not-A on the first two trials and A-and-B on the third, and so on—that is, P(A ∧ B) + P(¬A)P(A ∧ B) + P(¬A)2P(A ∧ B) + …, or, in factored form, P(A ∧ B)[1 + P(¬A) + P(¬A)2 + …]. Since the second factor is the Maclaurin series expansion of 1 / [1 – P(¬A)] = 1 / P(A), the infinite sum equals P(A ∧ B) / P(A) = P(B |A). The infinite sum is itself a simple probability, but with the sample space now containing not ordinary outcomes of single trials but infinite sequences of ordinary outcomes. Thus the conditional probability P(B |A) is turned into simple probability P(A → B) by replacing Ω, the sample space of all ordinary outcomes, with Ω*, the sample space of all sequences of ordinary outcomes, and by identifying conditional event A → B with the set of sequences where the first (A ∧ B)-outcome comes before the first (A ∧ ¬B)-outcome. In Cartesian-product notation, Ω* = Ω × Ω × Ω × …, and A → B is the infinite union [(A ∩ B) × Ω × Ω × …] ∪ [A′ × (A ∩ B) × Ω × Ω × …] ∪ [A′ × A′ × (A ∩ B) × Ω × Ω × …] ∪ …. Unconditional event A is, again, represented by conditional event Ω → A. Unlike tri-event CEAs, this type of CEA supports the identification of ∧, ∨, and ¬ with the familiar operations ∩, ∪, and ′ not just for ordinary, unconditional events but for conditional ones, as well. Because Ω* is a space defined by an infinitely long Cartesian product, the Boolean algebra of conditional-event subsets of Ω* is called a product-space CEA.
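The geometric-series argument above can be checked numerically with arbitrary example probabilities (the values below are illustrative):

```python
# Truncated geometric series for P(if A then B), with illustrative values
# P(A) = 0.4 and P(A ∧ B) = 0.1; the tail beyond 200 terms is negligible.
p_A, p_AB = 0.4, 0.1
series = sum(p_AB * (1 - p_A) ** k for k in range(200))
assert abs(series - p_AB / p_A) < 1e-12    # equals P(B | A) = 0.25
```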
This type of CEA was introduced by van Fraassen (1976), in response to Lewis’s result, and was later discovered independently by Goodman and Nguyen (1994). The probability functions associated with product-space CEAs satisfy conditions 1 and 2 above. However, given probability function P that satisfies conditions 1 and 2, if P(A) > 0, it can be shown that PA(C | B) = P(C | A ∧ B) and PA(B → C) = P(B ∧ C | A) + P(B′ | A)P(C | B). If A, B and C are pairwise compatible but P(A ∧ B ∧ C) = 0, then P(C | A ∧ B) = P(B ∧ C | A) = 0 but P(B′ | A)P(C | B) > 0. Therefore, PA(B → C) does not reliably equal PA(C | B). Since PA fails condition 2, P fails condition 3. === Nested if–thens === What about nested conditional constructions? In a tri-event CEA, right-nested constructions are handled more or less automatically, since it is natural to say that A → (B → C) takes the value of B → C (possibly undecided) when A is true and is undecided when A is false. Left-nesting, however, requires a more deliberate choice: when A → B is undecided, should (A → B) → C be undecided, or should it take the value of C? Opinions vary. Calabrese adopts the latter view, identifying (A → B) → (C → D) with ((¬A ∨ B) ∧ C) → D. With a product-space CEA, nested conditionals call for nested sequence-constructions: evaluating P((A → B) → (C → D)) requires a sample space of metasequences of sequences of ordinary outcomes. The probabilities of the ordinary sequences are calculated as before. Given a series of trials where the outcomes are sequences of ordinary outcomes, P((A → B) → (C → D)) is P(C → D | A → B) = P((A → B) ∧ (C → D)) / P(A → B), the probability that an ((A → B) ∧ (C → D))-sequence will be encountered before an ((A → B) ∧ ¬(C → D))-sequence. Higher-order iterations of conditionals require higher-order metasequential constructions. In either of the two leading types of tri-event CEA, A → (B → C) = (A ∧ B) → C. Product space CEAs, on the other hand, do not support this identity.
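The failure of condition 3 noted above can be reproduced with a toy distribution over the eight truth assignments to A, B, C (the particular probabilities are my choice, constructed so that A, B, C are pairwise compatible but never jointly true):

```python
# Distribution over truth assignments (a, b, c); P(A ∧ B ∧ C) = 0 by construction.
P = {(1, 1, 0): 0.1, (1, 0, 1): 0.1, (0, 1, 1): 0.1,
     (1, 0, 0): 0.1, (0, 1, 0): 0.1, (0, 0, 1): 0.1,
     (0, 0, 0): 0.4, (1, 1, 1): 0.0}

def prob(pred):
    return sum(p for w, p in P.items() if pred(w))

p_A       = prob(lambda w: w[0])
p_BC_gA   = prob(lambda w: w[0] and w[1] and w[2]) / p_A          # P(B ∧ C | A)
p_notB_gA = prob(lambda w: w[0] and not w[1]) / p_A               # P(B′ | A)
p_C_gB    = prob(lambda w: w[1] and w[2]) / prob(lambda w: w[1])  # P(C | B)

PA_arrow = p_BC_gA + p_notB_gA * p_C_gB    # P_A(B → C), product-space formula
PA_cond  = prob(lambda w: w[0] and w[1] and w[2]) / prob(lambda w: w[0] and w[1])

assert PA_cond == 0.0      # P_A(C | B) vanishes...
assert PA_arrow > 0.0      # ...but P_A(B → C) = 2/9, so condition 3 fails
```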
The latter fact can be inferred from the failure, already noted, of PA(B → C) to equal PA(C | B), since PA(C | B) = P((A ∧ B) → C) and PA(B → C) = P(A → (B → C)). For a direct analysis, however, consider a metasequence whose first member-sequence starts with an (A ∧ ¬B ∧ C)-outcome, followed by a (¬A ∧ B ∧ C)-outcome, followed by an (A ∧ B ∧ ¬C)-outcome. That metasequence will belong to the event A → (B → C), because the first member-sequence is an (A ∧ (B → C))-sequence, but the metasequence will not belong to the event (A ∧ B) → C, because the first member-sequence is an ((A ∧ B) → ¬C)-sequence. == Applications == The initial impetus for CEAs is theoretical—namely, the challenge of responding to Lewis's triviality result—but practical applications have been proposed. If, for instance, events A and C involve signals emitted by military radar stations and events B and D involve missile launches, an opposing military force with an automated missile defense system may want the system to be able to calculate P((A → B) ∧ (C → D)) and/or P((A → B) → (C → D)). Other applications range from image interpretation to the detection of denial-of-service attacks on computer networks. == Notes == == References == Adams, E. W. 1975. The Logic of Conditionals. D. Reidel, Dordrecht. Bamber, D., Goodman, I. R. and Nguyen, H. T. 2004. "Deduction from Conditional Knowledge". Soft Computing 8: 247–255. Belnap, N. D. 1973. "Restricted quantification and conditional assertion", in H. Leblanc (ed.), Truth, Syntax and Modality North-Holland, Amsterdam. 48–75. Calabrese, P. 1987. "An algebraic synthesis of the foundations of logic and probability". Information Sciences 42:187-237. de Finetti, Bruno. 1935. "La logique de la probabilité". Actes du Congrès International Philosophie Scientifique. Paris. van Fraassen, Bas C. 1976. "Probabilities of conditionals” in W. L. Harper and C. A. 
Hooker (eds.), Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, Vol. I. D. Reidel, Dordrecht, pp. 261–308. Goodman, I. R., Mahler, R. P. S. and Nguyen, H. T. 1999. "What is conditional event algebra and why should you care?" SPIE Proceedings, Vol. 3720. Goodman, I. R., Nguyen, H. T. and Walker, E .A. 1991. Conditional Inference and Logic for Intelligent Systems: A Theory of Measure-Free Conditioning. Office of Chief of Naval Research, Arlington, Virginia. Goodman, I. R. and Nguyen, H. T. 1994. "A theory of conditional information for probabilistic inference in intelligent systems: II, Product space approach; III Mathematical appendix". Information Sciences 76:13-42; 75: 253-277. Goodman, I. R. and Nguyen, H. T. 1995. "Mathematical foundations of conditionals and their probabilistic assignments". International Journal of Uncertainty, Fuzziness and Knowledge-based Systems 3(3): 247-339 Kelly, P. A., Derin, H., and Gong, W.-B. 1999. "Some applications of conditional events and random sets for image estimation and system modeling". SPIE Proceedings 3720: 14-24. Łukasiewicz, J. 1920. "O logice trójwartościowej" (in Polish). Ruch Filozoficzny 5:170–171. English translation: "On three-valued logic", in L. Borkowski (ed.), Selected works by Jan Łukasiewicz, North–Holland, Amsterdam, 1970, pp. 87–88. ISBN 0-7204-2252-3 Schay, Geza. 1968. "An algebra of conditional events". Journal of Mathematical Analysis and Applications 24: 334-344. Sobociński, B. 1952. "Axiomatization of a partial system of three-valued calculus of propositions". Journal of Computing Systems 1(1):23-55. Sun, D., Yang, K., Jing, X., Lv, B., and Wang, Y. 2014. "Abnormal network traffic detection based on conditional event algebra". Applied Mechanics and Materials 644-650: 1093-1099.
|
Wikipedia:Conditional trigonometric identity#0
|
In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle. These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. == Pythagorean identities == The basic relationship between the sine and cosine is given by the Pythagorean identity: sin 2 θ + cos 2 θ = 1 , {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1,} where sin 2 θ {\displaystyle \sin ^{2}\theta } means ( sin θ ) 2 {\displaystyle {(\sin \theta )}^{2}} and cos 2 θ {\displaystyle \cos ^{2}\theta } means ( cos θ ) 2 . {\displaystyle {(\cos \theta )}^{2}.} This can be viewed as a version of the Pythagorean theorem, and follows from the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} for the unit circle. This equation can be solved for either the sine or the cosine: sin θ = ± 1 − cos 2 θ , cos θ = ± 1 − sin 2 θ . {\displaystyle {\begin{aligned}\sin \theta &=\pm {\sqrt {1-\cos ^{2}\theta }},\\\cos \theta &=\pm {\sqrt {1-\sin ^{2}\theta }}.\end{aligned}}} where the sign depends on the quadrant of θ . 
{\displaystyle \theta .} Dividing this identity by sin 2 θ {\displaystyle \sin ^{2}\theta } , cos 2 θ {\displaystyle \cos ^{2}\theta } , or both yields the following identities: 1 + cot 2 θ = csc 2 θ 1 + tan 2 θ = sec 2 θ sec 2 θ + csc 2 θ = sec 2 θ csc 2 θ {\displaystyle {\begin{aligned}&1+\cot ^{2}\theta =\csc ^{2}\theta \\&1+\tan ^{2}\theta =\sec ^{2}\theta \\&\sec ^{2}\theta +\csc ^{2}\theta =\sec ^{2}\theta \csc ^{2}\theta \end{aligned}}} Using these identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign): == Reflections, shifts, and periodicity == By examining the unit circle, one can establish the following properties of the trigonometric functions. === Reflections === When the direction of a Euclidean vector is represented by an angle θ , {\displaystyle \theta ,} this is the angle determined by the free vector (starting at the origin) and the positive x {\displaystyle x} -unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive x {\displaystyle x} -axis. If a line (vector) with direction θ {\displaystyle \theta } is reflected about a line with direction α , {\displaystyle \alpha ,} then the direction angle θ ′ {\displaystyle \theta ^{\prime }} of this reflected line (vector) has the value θ ′ = 2 α − θ . {\displaystyle \theta ^{\prime }=2\alpha -\theta .} The values of the trigonometric functions of these angles θ , θ ′ {\displaystyle \theta ,\;\theta ^{\prime }} for specific angles α {\displaystyle \alpha } satisfy simple identities: either they are equal, or have opposite signs, or employ the complementary trigonometric function. These are also known as reduction formulae. === Shifts and periodicity === === Signs === The sign of the trigonometric functions depends on the quadrant of the angle.
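The Pythagorean identity and the identities obtained from it by division can be spot-checked numerically at an arbitrary angle; a quick sketch:

```python
import math

theta = 0.93                       # any angle where all six functions are defined
s, c = math.sin(theta), math.cos(theta)
tan, cot = s / c, c / s
sec, csc = 1 / c, 1 / s

assert math.isclose(s**2 + c**2, 1.0)                  # Pythagorean identity
assert math.isclose(1 + cot**2, csc**2)                # divided by sin²θ
assert math.isclose(1 + tan**2, sec**2)                # divided by cos²θ
assert math.isclose(sec**2 + csc**2, sec**2 * csc**2)  # divided by sin²θ·cos²θ
```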
If − π < θ ≤ π {\displaystyle {-\pi }<\theta \leq \pi } and sgn is the sign function, sgn ( sin θ ) = sgn ( csc θ ) = { + 1 if 0 < θ < π − 1 if − π < θ < 0 0 if θ ∈ { 0 , π } sgn ( cos θ ) = sgn ( sec θ ) = { + 1 if − 1 2 π < θ < 1 2 π − 1 if − π < θ < − 1 2 π or 1 2 π < θ < π 0 if θ ∈ { − 1 2 π , 1 2 π } sgn ( tan θ ) = sgn ( cot θ ) = { + 1 if − π < θ < − 1 2 π or 0 < θ < 1 2 π − 1 if − 1 2 π < θ < 0 or 1 2 π < θ < π 0 if θ ∈ { − 1 2 π , 0 , 1 2 π , π } {\displaystyle {\begin{aligned}\operatorname {sgn}(\sin \theta )=\operatorname {sgn}(\csc \theta )&={\begin{cases}+1&{\text{if}}\ \ 0<\theta <\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <0\\0&{\text{if}}\ \ \theta \in \{0,\pi \}\end{cases}}\\[5mu]\operatorname {sgn}(\cos \theta )=\operatorname {sgn}(\sec \theta )&={\begin{cases}+1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },{\tfrac {1}{2}}\pi {\bigr \}}\end{cases}}\\[5mu]\operatorname {sgn}(\tan \theta )=\operatorname {sgn}(\cot \theta )&={\begin{cases}+1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ 0<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <0\ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },0,{\tfrac {1}{2}}\pi ,\pi {\bigr \}}\end{cases}}\end{aligned}}} The trigonometric functions are periodic with common period 2 π , {\displaystyle 2\pi ,} so for values of θ outside the interval ( − π , π ] , {\displaystyle ({-\pi },\pi ],} they take repeating values (see § Shifts and periodicity above). == Angle sum and difference identities == These are also known as the angle addition and subtraction theorems (or formulae). 
sin ( α + β ) = sin α cos β + cos α sin β sin ( α − β ) = sin α cos β − cos α sin β cos ( α + β ) = cos α cos β − sin α sin β cos ( α − β ) = cos α cos β + sin α sin β {\displaystyle {\begin{aligned}\sin(\alpha +\beta )&=\sin \alpha \cos \beta +\cos \alpha \sin \beta \\\sin(\alpha -\beta )&=\sin \alpha \cos \beta -\cos \alpha \sin \beta \\\cos(\alpha +\beta )&=\cos \alpha \cos \beta -\sin \alpha \sin \beta \\\cos(\alpha -\beta )&=\cos \alpha \cos \beta +\sin \alpha \sin \beta \end{aligned}}} The angle difference identities for sin ( α − β ) {\displaystyle \sin(\alpha -\beta )} and cos ( α − β ) {\displaystyle \cos(\alpha -\beta )} can be derived from the angle sum versions by substituting − β {\displaystyle -\beta } for β {\displaystyle \beta } and using the facts that sin ( − β ) = − sin ( β ) {\displaystyle \sin(-\beta )=-\sin(\beta )} and cos ( − β ) = cos ( β ) {\displaystyle \cos(-\beta )=\cos(\beta )} . They can also be derived by using a slightly modified version of the figure for the angle sum identities, both of which are shown here. These identities are summarized in the first two rows of the following table, which also includes sum and difference identities for the other trigonometric functions. === Sines and cosines of sums of infinitely many angles === When the series ∑ i = 1 ∞ θ i {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely then sin ( ∑ i = 1 ∞ θ i ) = ∑ odd k ≥ 1 ( − 1 ) k − 1 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ( ∏ i ∈ A sin θ i ∏ i ∉ A cos θ i ) cos ( ∑ i = 1 ∞ θ i ) = ∑ even k ≥ 0 ( − 1 ) k 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ( ∏ i ∈ A sin θ i ∏ i ∉ A cos θ i ) . 
{\displaystyle {\begin{aligned}{\sin }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggl )}&=\sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\!\!\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}\\{\cos }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggr )}&=\sum _{{\text{even}}\ k\geq 0}(-1)^{\frac {k}{2}}\,\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}.\end{aligned}}} Because the series ∑ i = 1 ∞ θ i {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely, it is necessarily the case that lim i → ∞ θ i = 0 , {\textstyle \lim _{i\to \infty }\theta _{i}=0,} lim i → ∞ sin θ i = 0 , {\textstyle \lim _{i\to \infty }\sin \theta _{i}=0,} and lim i → ∞ cos θ i = 1. {\textstyle \lim _{i\to \infty }\cos \theta _{i}=1.} In particular, in these two identities an asymmetry appears that is not seen in the case of sums of finitely many angles: in each product, there are only finitely many sine factors but there are cofinitely many cosine factors. Terms with infinitely many sine factors would necessarily be equal to zero. When only finitely many of the angles θ i {\displaystyle \theta _{i}} are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity. 
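For sums of finitely many angles the same subset expansion holds with an ordinary finite sum over the odd-size subsets. The following Python snippet (not part of the original article; the angles are arbitrary test values) sketches a numerical check of the sine expansion:

```python
import math
from itertools import combinations

def sin_of_sum(thetas):
    """Expand sin(θ1 + ... + θn) as the sum over odd-size subsets A of
    ±(product of sin θi for i in A)·(product of cos θi for i not in A)."""
    n = len(thetas)
    total = 0.0
    for k in range(1, n + 1, 2):           # odd subset sizes k
        sign = (-1) ** ((k - 1) // 2)      # + for k = 1, 5, ...; - for k = 3, 7, ...
        for A in combinations(range(n), k):
            term = sign
            for i in range(n):
                term *= math.sin(thetas[i]) if i in A else math.cos(thetas[i])
            total += term
    return total

thetas = [0.3, 1.1, -0.7]                  # arbitrary test angles
assert abs(sin_of_sum(thetas) - math.sin(sum(thetas))) < 1e-12
```

For three angles this reproduces the familiar expansion sin(a+b+c) = sin a cos b cos c + cos a sin b cos c + cos a cos b sin c − sin a sin b sin c.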
=== Tangents and cotangents of sums === Let e k {\displaystyle e_{k}} (for k = 0 , 1 , 2 , 3 , … {\displaystyle k=0,1,2,3,\ldots } ) be the kth-degree elementary symmetric polynomial in the variables x i = tan θ i {\displaystyle x_{i}=\tan \theta _{i}} for i = 0 , 1 , 2 , 3 , … , {\displaystyle i=0,1,2,3,\ldots ,} that is, e 0 = 1 e 1 = ∑ i x i = ∑ i tan θ i e 2 = ∑ i < j x i x j = ∑ i < j tan θ i tan θ j e 3 = ∑ i < j < k x i x j x k = ∑ i < j < k tan θ i tan θ j tan θ k ⋮ ⋮ {\displaystyle {\begin{aligned}e_{0}&=1\\[6pt]e_{1}&=\sum _{i}x_{i}&&=\sum _{i}\tan \theta _{i}\\[6pt]e_{2}&=\sum _{i<j}x_{i}x_{j}&&=\sum _{i<j}\tan \theta _{i}\tan \theta _{j}\\[6pt]e_{3}&=\sum _{i<j<k}x_{i}x_{j}x_{k}&&=\sum _{i<j<k}\tan \theta _{i}\tan \theta _{j}\tan \theta _{k}\\&\ \ \vdots &&\ \ \vdots \end{aligned}}} Then tan ( ∑ i θ i ) = sin ( ∑ i θ i ) / ∏ i cos θ i cos ( ∑ i θ i ) / ∏ i cos θ i = ∑ odd k ≥ 1 ( − 1 ) k − 1 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ∏ i ∈ A tan θ i ∑ even k ≥ 0 ( − 1 ) k 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ∏ i ∈ A tan θ i = e 1 − e 3 + e 5 − ⋯ e 0 − e 2 + e 4 − ⋯ cot ( ∑ i θ i ) = e 0 − e 2 + e 4 − ⋯ e 1 − e 3 + e 5 − ⋯ {\displaystyle {\begin{aligned}{\tan }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {{\sin }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}{{\cos }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}}\\[10pt]&={\frac {\displaystyle \sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}{\displaystyle \sum _{{\text{even}}\ k\geq 0}~(-1)^{\frac {k}{2}}~~\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}}={\frac {e_{1}-e_{3}+e_{5}-\cdots }{e_{0}-e_{2}+e_{4}-\cdots }}\\[10pt]{\cot }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {e_{0}-e_{2}+e_{4}-\cdots }{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} using the sine and cosine sum 
formulae above. The number of terms on the right side depends on the number of terms on the left side. For example: tan ( θ 1 + θ 2 ) = e 1 e 0 − e 2 = x 1 + x 2 1 − x 1 x 2 = tan θ 1 + tan θ 2 1 − tan θ 1 tan θ 2 , tan ( θ 1 + θ 2 + θ 3 ) = e 1 − e 3 e 0 − e 2 = ( x 1 + x 2 + x 3 ) − ( x 1 x 2 x 3 ) 1 − ( x 1 x 2 + x 1 x 3 + x 2 x 3 ) , tan ( θ 1 + θ 2 + θ 3 + θ 4 ) = e 1 − e 3 e 0 − e 2 + e 4 = ( x 1 + x 2 + x 3 + x 4 ) − ( x 1 x 2 x 3 + x 1 x 2 x 4 + x 1 x 3 x 4 + x 2 x 3 x 4 ) 1 − ( x 1 x 2 + x 1 x 3 + x 1 x 4 + x 2 x 3 + x 2 x 4 + x 3 x 4 ) + ( x 1 x 2 x 3 x 4 ) , {\displaystyle {\begin{aligned}\tan(\theta _{1}+\theta _{2})&={\frac {e_{1}}{e_{0}-e_{2}}}={\frac {x_{1}+x_{2}}{1\ -\ x_{1}x_{2}}}={\frac {\tan \theta _{1}+\tan \theta _{2}}{1\ -\ \tan \theta _{1}\tan \theta _{2}}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}}}={\frac {(x_{1}+x_{2}+x_{3})\ -\ (x_{1}x_{2}x_{3})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3})}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3}+\theta _{4})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}+e_{4}}}\\[8pt]&={\frac {(x_{1}+x_{2}+x_{3}+x_{4})\ -\ (x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{3}x_{4}+x_{2}x_{3}x_{4})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{1}x_{4}+x_{2}x_{3}+x_{2}x_{4}+x_{3}x_{4})\ +\ (x_{1}x_{2}x_{3}x_{4})}},\end{aligned}}} and so on. The case of only finitely many terms can be proved by mathematical induction. The case of infinitely many terms can be proved by using some elementary inequalities. 
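The ratio of alternating sums of elementary symmetric polynomials above lends itself to a direct computation. A minimal Python sketch (arbitrary test angles, not from the original article) that builds the e_k from x_i = tan θ_i and checks the tangent-of-sum formula:

```python
import math
from itertools import combinations

def tan_of_sum(thetas):
    """tan(θ1 + ... + θn) via elementary symmetric polynomials e_k of tan θi."""
    x = [math.tan(t) for t in thetas]
    n = len(x)
    # e[k] = k-th elementary symmetric polynomial of x (e[0] = 1)
    e = [sum(math.prod(c) for c in combinations(x, k)) for k in range(n + 1)]
    num = sum((-1) ** (k // 2) * e[k] for k in range(1, n + 1, 2))  # e1 - e3 + e5 - ...
    den = sum((-1) ** (k // 2) * e[k] for k in range(0, n + 1, 2))  # e0 - e2 + e4 - ...
    return num / den

thetas = [0.2, 0.5, -0.3, 0.9]
assert abs(tan_of_sum(thetas) - math.tan(sum(thetas))) < 1e-12
```

For n = 2 this reduces to the usual (tan θ1 + tan θ2)/(1 − tan θ1 tan θ2).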
=== Secants and cosecants of sums === sec ( ∑ i θ i ) = ∏ i sec θ i e 0 − e 2 + e 4 − ⋯ csc ( ∑ i θ i ) = ∏ i sec θ i e 1 − e 3 + e 5 − ⋯ {\displaystyle {\begin{aligned}{\sec }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{0}-e_{2}+e_{4}-\cdots }}\\[8pt]{\csc }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} where e k {\displaystyle e_{k}} is the kth-degree elementary symmetric polynomial in the n variables x i = tan θ i , {\displaystyle x_{i}=\tan \theta _{i},} i = 1 , … , n , {\displaystyle i=1,\ldots ,n,} and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left. The case of only finitely many terms can be proved by mathematical induction on the number of such terms. For example, sec ( α + β + γ ) = sec α sec β sec γ 1 − tan α tan β − tan α tan γ − tan β tan γ csc ( α + β + γ ) = sec α sec β sec γ tan α + tan β + tan γ − tan α tan β tan γ . {\displaystyle {\begin{aligned}\sec(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{1-\tan \alpha \tan \beta -\tan \alpha \tan \gamma -\tan \beta \tan \gamma }}\\[8pt]\csc(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{\tan \alpha +\tan \beta +\tan \gamma -\tan \alpha \tan \beta \tan \gamma }}.\end{aligned}}} === Ptolemy's theorem === Ptolemy's theorem is important in the history of trigonometric identities, as it is how results equivalent to the sum and difference formulas for sine and cosine were first proved. It states that in a cyclic quadrilateral A B C D {\displaystyle ABCD} , as shown in the accompanying figure, the sum of the products of the lengths of opposite sides is equal to the product of the lengths of the diagonals. 
In the special cases of one of the diagonals or sides being a diameter of the circle, this theorem gives rise directly to the angle sum and difference trigonometric identities. The relationship follows most easily when the circle is constructed to have a diameter of length one, as shown here. By Thales's theorem, ∠ D A B {\displaystyle \angle DAB} and ∠ D C B {\displaystyle \angle DCB} are both right angles. The right-angled triangles D A B {\displaystyle DAB} and D C B {\displaystyle DCB} both share the hypotenuse B D ¯ {\displaystyle {\overline {BD}}} of length 1. Thus, the sides A B ¯ = sin α {\displaystyle {\overline {AB}}=\sin \alpha } , A D ¯ = cos α {\displaystyle {\overline {AD}}=\cos \alpha } , B C ¯ = sin β {\displaystyle {\overline {BC}}=\sin \beta } and C D ¯ = cos β {\displaystyle {\overline {CD}}=\cos \beta } . By the inscribed angle theorem, the central angle subtended by the chord A C ¯ {\displaystyle {\overline {AC}}} at the circle's center is twice the angle ∠ A D C {\displaystyle \angle ADC} , i.e. 2 ( α + β ) {\displaystyle 2(\alpha +\beta )} . Therefore, the symmetrical pair of red triangles each has the angle α + β {\displaystyle \alpha +\beta } at the center. Each of these triangles has a hypotenuse of length 1 2 {\textstyle {\frac {1}{2}}} , so the length of A C ¯ {\displaystyle {\overline {AC}}} is 2 × 1 2 sin ( α + β ) {\textstyle 2\times {\frac {1}{2}}\sin(\alpha +\beta )} , i.e. simply sin ( α + β ) {\displaystyle \sin(\alpha +\beta )} . The quadrilateral's other diagonal is the diameter of length 1, so the product of the diagonals' lengths is also sin ( α + β ) {\displaystyle \sin(\alpha +\beta )} . 
When these values are substituted into the statement of Ptolemy's theorem that | A C ¯ | ⋅ | B D ¯ | = | A B ¯ | ⋅ | C D ¯ | + | A D ¯ | ⋅ | B C ¯ | {\displaystyle |{\overline {AC}}|\cdot |{\overline {BD}}|=|{\overline {AB}}|\cdot |{\overline {CD}}|+|{\overline {AD}}|\cdot |{\overline {BC}}|} , this yields the angle sum trigonometric identity for sine: sin ( α + β ) = sin α cos β + cos α sin β {\displaystyle \sin(\alpha +\beta )=\sin \alpha \cos \beta +\cos \alpha \sin \beta } . The angle difference formula for sin ( α − β ) {\displaystyle \sin(\alpha -\beta )} can be similarly derived by letting the side C D ¯ {\displaystyle {\overline {CD}}} serve as a diameter instead of B D ¯ {\displaystyle {\overline {BD}}} . == Multiple-angle and half-angle formulae == === Multiple-angle formulae === ==== Double-angle formulae ==== Formulae for twice an angle. ==== Triple-angle formulae ==== Formulae for triple angles. ==== Multiple-angle formulae ==== Formulae for multiple angles. ==== Chebyshev method ==== The Chebyshev method is a recursive algorithm for finding the nth multiple angle formula knowing the ( n − 1 ) {\displaystyle (n-1)} th and ( n − 2 ) {\displaystyle (n-2)} th values. cos ( n x ) {\displaystyle \cos(nx)} can be computed from cos ( ( n − 1 ) x ) {\displaystyle \cos((n-1)x)} , cos ( ( n − 2 ) x ) {\displaystyle \cos((n-2)x)} , and cos ( x ) {\displaystyle \cos(x)} with cos ( n x ) = 2 cos x cos ( ( n − 1 ) x ) − cos ( ( n − 2 ) x ) . 
{\displaystyle \cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x).} This can be proved by adding together the formulae cos ( ( n − 1 ) x + x ) = cos ( ( n − 1 ) x ) cos x − sin ( ( n − 1 ) x ) sin x cos ( ( n − 1 ) x − x ) = cos ( ( n − 1 ) x ) cos x + sin ( ( n − 1 ) x ) sin x {\displaystyle {\begin{aligned}\cos((n-1)x+x)&=\cos((n-1)x)\cos x-\sin((n-1)x)\sin x\\\cos((n-1)x-x)&=\cos((n-1)x)\cos x+\sin((n-1)x)\sin x\end{aligned}}} It follows by induction that cos ( n x ) {\displaystyle \cos(nx)} is a polynomial of cos x , {\displaystyle \cos x,} the so-called Chebyshev polynomial of the first kind, see Chebyshev polynomials#Trigonometric definition. Similarly, sin ( n x ) {\displaystyle \sin(nx)} can be computed from sin ( ( n − 1 ) x ) , {\displaystyle \sin((n-1)x),} sin ( ( n − 2 ) x ) , {\displaystyle \sin((n-2)x),} and cos x {\displaystyle \cos x} with sin ( n x ) = 2 cos x sin ( ( n − 1 ) x ) − sin ( ( n − 2 ) x ) {\displaystyle \sin(nx)=2\cos x\sin((n-1)x)-\sin((n-2)x)} This can be proved by adding formulae for sin ( ( n − 1 ) x + x ) {\displaystyle \sin((n-1)x+x)} and sin ( ( n − 1 ) x − x ) . {\displaystyle \sin((n-1)x-x).} Serving a purpose similar to that of the Chebyshev method, for the tangent we can write: tan ( n x ) = tan ( ( n − 1 ) x ) + tan x 1 − tan ( ( n − 1 ) x ) tan x . 
{\displaystyle \tan(nx)={\frac {\tan((n-1)x)+\tan x}{1-\tan((n-1)x)\tan x}}\,.} === Half-angle formulae === sin θ 2 = sgn ( sin θ 2 ) 1 − cos θ 2 cos θ 2 = sgn ( cos θ 2 ) 1 + cos θ 2 tan θ 2 = 1 − cos θ sin θ = sin θ 1 + cos θ = csc θ − cot θ = tan θ 1 + sec θ = sgn ( sin θ ) 1 − cos θ 1 + cos θ = − 1 + sgn ( cos θ ) 1 + tan 2 θ tan θ cot θ 2 = 1 + cos θ sin θ = sin θ 1 − cos θ = csc θ + cot θ = sgn ( sin θ ) 1 + cos θ 1 − cos θ sec θ 2 = sgn ( cos θ 2 ) 2 1 + cos θ csc θ 2 = sgn ( sin θ 2 ) 2 1 − cos θ {\displaystyle {\begin{aligned}\sin {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {1-\cos \theta }{2}}}\\[3pt]\cos {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {1+\cos \theta }{2}}}\\[3pt]\tan {\frac {\theta }{2}}&={\frac {1-\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1+\cos \theta }}=\csc \theta -\cot \theta ={\frac {\tan \theta }{1+\sec {\theta }}}\\[6mu]&=\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1-\cos \theta }{1+\cos \theta }}}={\frac {-1+\operatorname {sgn}(\cos \theta ){\sqrt {1+\tan ^{2}\theta }}}{\tan \theta }}\\[3pt]\cot {\frac {\theta }{2}}&={\frac {1+\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1-\cos \theta }}=\csc \theta +\cot \theta =\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1+\cos \theta }{1-\cos \theta }}}\\\sec {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1+\cos \theta }}}\\\csc {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1-\cos \theta }}}\\\end{aligned}}} Also tan η ± θ 2 = sin η ± sin θ cos η + cos θ tan ( θ 2 + π 4 ) = sec θ + tan θ 1 − sin θ 1 + sin θ = | 1 − tan θ 2 | | 1 + tan θ 2 | {\displaystyle {\begin{aligned}\tan {\frac {\eta \pm \theta }{2}}&={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}\\[3pt]\tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right)&=\sec \theta +\tan \theta \\[3pt]{\sqrt {\frac 
{1-\sin \theta }{1+\sin \theta }}}&={\frac {\left|1-\tan {\frac {\theta }{2}}\right|}{\left|1+\tan {\frac {\theta }{2}}\right|}}\end{aligned}}} === Table === These can be shown by using either the sum and difference identities or the multiple-angle formulae. The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that trisection is in general impossible using the given tools. A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation 4x3 − 3x + d = 0, where x {\displaystyle x} is the value of the cosine function at the one-third angle and d is the known value of the cosine function at the full angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution for the cosine of the one-third angle). None of these solutions are reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots. == Power-reduction formulae == Obtained by solving the second and third versions of the cosine double-angle formula. In general terms of powers of sin θ {\displaystyle \sin \theta } or cos θ {\displaystyle \cos \theta } the following is true, and can be deduced using De Moivre's formula, Euler's formula and the binomial theorem. == Product-to-sum and sum-to-product identities == The product-to-sum identities or prosthaphaeresis formulae can be proven by expanding their right-hand sides using the angle addition theorems. Historically, the first four of these were known as Werner's formulas, after Johannes Werner who used them for astronomical calculations. 
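The product-to-sum formulas can be checked numerically; the following Python sketch (arbitrary test angles, identities stated explicitly here since they appear in tables elsewhere in the article) verifies three of the standard Werner-style formulas:

```python
import math

a, b = 0.8, 0.3  # arbitrary test angles

# 2 cos a cos b = cos(a-b) + cos(a+b)
assert math.isclose(2 * math.cos(a) * math.cos(b),
                    math.cos(a - b) + math.cos(a + b))
# 2 sin a sin b = cos(a-b) - cos(a+b)
assert math.isclose(2 * math.sin(a) * math.sin(b),
                    math.cos(a - b) - math.cos(a + b))
# 2 sin a cos b = sin(a+b) + sin(a-b)
assert math.isclose(2 * math.sin(a) * math.cos(b),
                    math.sin(a + b) + math.sin(a - b))
```

Each line follows by adding or subtracting a pair of angle sum and difference identities from the section above.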
See amplitude modulation for an application of the product-to-sum formulae, and beat (acoustics) and phase detector for applications of the sum-to-product formulae. === Product-to-sum identities === === Sum-to-product identities === The sum-to-product identities are as follows: === Hermite's cotangent identity === Charles Hermite demonstrated the following identity. Suppose a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} are complex numbers, no two of which differ by an integer multiple of π. Let A n , k = ∏ 1 ≤ j ≤ n j ≠ k cot ( a k − a j ) {\displaystyle A_{n,k}=\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq k\end{smallmatrix}}\cot(a_{k}-a_{j})} (in particular, A 1 , 1 , {\displaystyle A_{1,1},} being an empty product, is 1). Then cot ( z − a 1 ) ⋯ cot ( z − a n ) = cos n π 2 + ∑ k = 1 n A n , k cot ( z − a k ) . {\displaystyle \cot(z-a_{1})\cdots \cot(z-a_{n})=\cos {\frac {n\pi }{2}}+\sum _{k=1}^{n}A_{n,k}\cot(z-a_{k}).} The simplest non-trivial example is the case n = 2: cot ( z − a 1 ) cot ( z − a 2 ) = − 1 + cot ( a 1 − a 2 ) cot ( z − a 1 ) + cot ( a 2 − a 1 ) cot ( z − a 2 ) . {\displaystyle \cot(z-a_{1})\cot(z-a_{2})=-1+\cot(a_{1}-a_{2})\cot(z-a_{1})+\cot(a_{2}-a_{1})\cot(z-a_{2}).} === Finite products of trigonometric functions === For coprime integers n, m ∏ k = 1 n ( 2 a + 2 cos ( 2 π k m n + x ) ) = 2 ( T n ( a ) + ( − 1 ) n + m cos ( n x ) ) {\displaystyle \prod _{k=1}^{n}\left(2a+2\cos \left({\frac {2\pi km}{n}}+x\right)\right)=2\left(T_{n}(a)+{(-1)}^{n+m}\cos(nx)\right)} where Tn is the Chebyshev polynomial. The following relationship holds for the sine function ∏ k = 1 n − 1 sin ( k π n ) = n 2 n − 1 . {\displaystyle \prod _{k=1}^{n-1}\sin \left({\frac {k\pi }{n}}\right)={\frac {n}{2^{n-1}}}.} More generally for an integer n > 0 sin ( n x ) = 2 n − 1 ∏ k = 0 n − 1 sin ( k n π + x ) = 2 n − 1 ∏ k = 1 n sin ( k n π − x ) . 
{\displaystyle \sin(nx)=2^{n-1}\prod _{k=0}^{n-1}\sin \left({\frac {k}{n}}\pi +x\right)=2^{n-1}\prod _{k=1}^{n}\sin \left({\frac {k}{n}}\pi -x\right).} or written in terms of the chord function crd x ≡ 2 sin 1 2 x {\textstyle \operatorname {crd} x\equiv 2\sin {\tfrac {1}{2}}x} , crd ( n x ) = ∏ k = 1 n crd ( k n 2 π − x ) . {\displaystyle \operatorname {crd} (nx)=\prod _{k=1}^{n}\operatorname {crd} \left({\frac {k}{n}}2\pi -x\right).} This comes from the factorization of the polynomial z n − 1 {\textstyle z^{n}-1} into linear factors (cf. root of unity): For any complex z and an integer n > 0, z n − 1 = ∏ k = 1 n ( z − exp ( k n 2 π i ) ) . {\displaystyle z^{n}-1=\prod _{k=1}^{n}\left(z-\exp {\Bigl (}{\frac {k}{n}}2\pi i{\Bigr )}\right).} == Linear combinations == For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift. This is useful in sinusoid data fitting, because the measured or observed data are linearly related to the a and b unknowns of the in-phase and quadrature components basis below, resulting in a simpler Jacobian, compared to that of c {\displaystyle c} and φ {\displaystyle \varphi } . === Sine and cosine === The linear combination, or harmonic addition, of sine and cosine waves is equivalent to a single sine wave with a phase shift and scaled amplitude, a cos x + b sin x = c cos ( x + φ ) {\displaystyle a\cos x+b\sin x=c\cos(x+\varphi )} where c {\displaystyle c} and φ {\displaystyle \varphi } are defined as follows: c = sgn ( a ) a 2 + b 2 , φ = arctan ( − b / a ) , {\displaystyle {\begin{aligned}c&=\operatorname {sgn}(a){\sqrt {a^{2}+b^{2}}},\\\varphi &={\arctan }{\bigl (}{-b/a}{\bigr )},\end{aligned}}} given that a ≠ 0. 
{\displaystyle a\neq 0.} === Arbitrary phase shift === More generally, for arbitrary phase shifts, we have a sin ( x + θ a ) + b sin ( x + θ b ) = c sin ( x + φ ) {\displaystyle a\sin(x+\theta _{a})+b\sin(x+\theta _{b})=c\sin(x+\varphi )} where c {\displaystyle c} and φ {\displaystyle \varphi } satisfy: c 2 = a 2 + b 2 + 2 a b cos ( θ a − θ b ) , tan φ = a sin θ a + b sin θ b a cos θ a + b cos θ b . {\displaystyle {\begin{aligned}c^{2}&=a^{2}+b^{2}+2ab\cos \left(\theta _{a}-\theta _{b}\right),\\\tan \varphi &={\frac {a\sin \theta _{a}+b\sin \theta _{b}}{a\cos \theta _{a}+b\cos \theta _{b}}}.\end{aligned}}} === More than two sinusoids === The general case reads ∑ i a i sin ( x + θ i ) = a sin ( x + θ ) , {\displaystyle \sum _{i}a_{i}\sin(x+\theta _{i})=a\sin(x+\theta ),} where a 2 = ∑ i , j a i a j cos ( θ i − θ j ) {\displaystyle a^{2}=\sum _{i,j}a_{i}a_{j}\cos(\theta _{i}-\theta _{j})} and tan θ = ∑ i a i sin θ i ∑ i a i cos θ i . {\displaystyle \tan \theta ={\frac {\sum _{i}a_{i}\sin \theta _{i}}{\sum _{i}a_{i}\cos \theta _{i}}}.} == Lagrange's trigonometric identities == These identities, named after Joseph Louis Lagrange, are: ∑ k = 0 n sin k θ = cos 1 2 θ − cos ( ( n + 1 2 ) θ ) 2 sin 1 2 θ ∑ k = 1 n cos k θ = − sin 1 2 θ + sin ( ( n + 1 2 ) θ ) 2 sin 1 2 θ {\displaystyle {\begin{aligned}\sum _{k=0}^{n}\sin k\theta &={\frac {\cos {\tfrac {1}{2}}\theta -\cos \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\\[5pt]\sum _{k=1}^{n}\cos k\theta &={\frac {-\sin {\tfrac {1}{2}}\theta +\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\end{aligned}}} for θ ≢ 0 ( mod 2 π ) . {\displaystyle \theta \not \equiv 0{\pmod {2\pi }}.} A related function is the Dirichlet kernel: D n ( θ ) = 1 + 2 ∑ k = 1 n cos k θ = sin ( ( n + 1 2 ) θ ) sin 1 2 θ . 
{\displaystyle D_{n}(\theta )=1+2\sum _{k=1}^{n}\cos k\theta ={\frac {\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{\sin {\tfrac {1}{2}}\theta }}.} A similar identity is ∑ k = 1 n cos ( 2 k − 1 ) α = sin ( 2 n α ) 2 sin α . {\displaystyle \sum _{k=1}^{n}\cos(2k-1)\alpha ={\frac {\sin(2n\alpha )}{2\sin \alpha }}.} The proof is as follows. By the angle sum and difference identities, sin ( A + B ) − sin ( A − B ) = 2 cos A sin B . {\displaystyle \sin(A+B)-\sin(A-B)=2\cos A\sin B.} Now consider the following formula, 2 sin α ∑ k = 1 n cos ( 2 k − 1 ) α = 2 sin α cos α + 2 sin α cos 3 α + 2 sin α cos 5 α + … + 2 sin α cos ( 2 n − 1 ) α {\displaystyle 2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha =2\sin \alpha \cos \alpha +2\sin \alpha \cos 3\alpha +2\sin \alpha \cos 5\alpha +\ldots +2\sin \alpha \cos(2n-1)\alpha } and, using the above identity, this sum telescopes: 2 sin α ∑ k = 1 n cos ( 2 k − 1 ) α = ∑ k = 1 n ( sin ( 2 k α ) − sin ( 2 ( k − 1 ) α ) ) = ( sin 2 α − sin 0 ) + ( sin 4 α − sin 2 α ) + ( sin 6 α − sin 4 α ) + … + ( sin ( 2 n α ) − sin ( 2 ( n − 1 ) α ) ) = sin ( 2 n α ) . {\displaystyle {\begin{aligned}&2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha \\&\quad =\sum _{k=1}^{n}(\sin(2k\alpha )-\sin(2(k-1)\alpha ))\\&\quad =(\sin 2\alpha -\sin 0)+(\sin 4\alpha -\sin 2\alpha )+(\sin 6\alpha -\sin 4\alpha )+\ldots +(\sin(2n\alpha )-\sin(2(n-1)\alpha ))\\&\quad =\sin(2n\alpha ).\end{aligned}}} So, dividing this formula by 2 sin α {\displaystyle 2\sin \alpha } completes the proof. 
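These finite-sum identities are easy to sanity-check numerically. A minimal Python sketch (arbitrary n, θ, and α, chosen here purely as test values) covering Lagrange's cosine-sum identity, the Dirichlet kernel, and the odd-angle sum just proved:

```python
import math

n, theta = 7, 0.9  # arbitrary test values (θ not a multiple of 2π)

# Lagrange's identity: sum_{k=1..n} cos kθ = (-sin(θ/2) + sin((n+1/2)θ)) / (2 sin(θ/2))
cos_sum = sum(math.cos(k * theta) for k in range(1, n + 1))
lagrange = (-math.sin(0.5 * theta) + math.sin((n + 0.5) * theta)) / (2 * math.sin(0.5 * theta))
assert math.isclose(cos_sum, lagrange)

# Dirichlet kernel: D_n(θ) = 1 + 2 sum cos kθ = sin((n+1/2)θ) / sin(θ/2)
assert math.isclose(1 + 2 * cos_sum, math.sin((n + 0.5) * theta) / math.sin(0.5 * theta))

# The identity proved above: sum_{k=1..n} cos((2k-1)α) = sin(2nα) / (2 sin α)
alpha = 0.37
odd_sum = sum(math.cos((2 * k - 1) * alpha) for k in range(1, n + 1))
assert math.isclose(odd_sum, math.sin(2 * n * alpha) / (2 * math.sin(alpha)))
```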
== Certain linear fractional transformations == If f ( x ) {\displaystyle f(x)} is given by the linear fractional transformation f ( x ) = ( cos α ) x − sin α ( sin α ) x + cos α , {\displaystyle f(x)={\frac {(\cos \alpha )x-\sin \alpha }{(\sin \alpha )x+\cos \alpha }},} and similarly g ( x ) = ( cos β ) x − sin β ( sin β ) x + cos β , {\displaystyle g(x)={\frac {(\cos \beta )x-\sin \beta }{(\sin \beta )x+\cos \beta }},} then f ( g ( x ) ) = g ( f ( x ) ) = ( cos ( α + β ) ) x − sin ( α + β ) ( sin ( α + β ) ) x + cos ( α + β ) . {\displaystyle f{\big (}g(x){\big )}=g{\big (}f(x){\big )}={\frac {{\big (}\cos(\alpha +\beta ){\big )}x-\sin(\alpha +\beta )}{{\big (}\sin(\alpha +\beta ){\big )}x+\cos(\alpha +\beta )}}.} More tersely stated, if for all α {\displaystyle \alpha } we let f α {\displaystyle f_{\alpha }} be what we called f {\displaystyle f} above, then f α ∘ f β = f α + β . {\displaystyle f_{\alpha }\circ f_{\beta }=f_{\alpha +\beta }.} If x {\displaystyle x} is the slope of a line, then f ( x ) {\displaystyle f(x)} is the slope of its rotation through an angle of − α . {\displaystyle -\alpha .} == Relation to the complex exponential function == Euler's formula states that, for any real number x: e i x = cos x + i sin x , {\displaystyle e^{ix}=\cos x+i\sin x,} where i is the imaginary unit. Substituting −x for x gives us: e − i x = cos ( − x ) + i sin ( − x ) = cos x − i sin x . {\displaystyle e^{-ix}=\cos(-x)+i\sin(-x)=\cos x-i\sin x.} These two equations can be used to solve for cosine and sine in terms of the exponential function. Specifically, cos x = e i x + e − i x 2 {\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}}} sin x = e i x − e − i x 2 i {\displaystyle \sin x={\frac {e^{ix}-e^{-ix}}{2i}}} These formulae are useful for proving many other trigonometric identities. For example, that e i ( θ + φ ) = e i θ e i φ {\displaystyle e^{i(\theta +\varphi )}=e^{i\theta }e^{i\varphi }} means that cos ( θ + φ ) + i sin ( θ + φ ) = ( cos θ + i sin θ ) ( cos φ + i sin φ ) . {\displaystyle \cos(\theta +\varphi )+i\sin(\theta +\varphi )=(\cos \theta +i\sin \theta )(\cos \varphi +i\sin \varphi ).} That the real part of the left hand side equals the real part of the right hand side is an angle addition formula for cosine. 
The equality of the imaginary parts gives an angle addition formula for sine. The following table expresses the trigonometric functions and their inverses in terms of the exponential function and the complex logarithm. == Relation to complex hyperbolic functions == Trigonometric functions may be deduced from hyperbolic functions with complex arguments. The formulae for the relations are shown below. sin x = − i sinh ( i x ) cos x = cosh ( i x ) tan x = − i tanh ( i x ) cot x = i coth ( i x ) sec x = sech ( i x ) csc x = i csch ( i x ) {\displaystyle {\begin{aligned}\sin x&=-i\sinh(ix)\\\cos x&=\cosh(ix)\\\tan x&=-i\tanh(ix)\\\cot x&=i\coth(ix)\\\sec x&=\operatorname {sech} (ix)\\\csc x&=i\operatorname {csch} (ix)\\\end{aligned}}} == Series expansion == When using a power series expansion to define trigonometric functions, the following identities are obtained: sin x = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n + 1 ( 2 n + 1 ) ! , {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)!}},} cos x = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n ( 2 n ) ! . {\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n)!}}.} == Infinite product formulae == For applications to special functions, the following infinite product formulae for trigonometric functions are useful: sin x = x ∏ n = 1 ∞ ( 1 − x 2 π 2 n 2 ) , cos x = ∏ n = 1 ∞ ( 1 − x 2 π 2 ( n − 1 2 ) ) 2 ) , sinh x = x ∏ n = 1 ∞ ( 1 + x 2 π 2 n 2 ) , cosh x = ∏ n = 1 ∞ ( 1 + x 2 π 2 ( n − 1 2 ) ) 2 ) . 
{\displaystyle {\begin{aligned}\sin x&=x\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cos x&=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right),\\[10mu]\sinh x&=x\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cosh x&=\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right).\end{aligned}}} == Inverse trigonometric functions == The following identities give the result of composing a trigonometric function with an inverse trigonometric function. sin ( arcsin x ) = x cos ( arcsin x ) = 1 − x 2 tan ( arcsin x ) = x 1 − x 2 sin ( arccos x ) = 1 − x 2 cos ( arccos x ) = x tan ( arccos x ) = 1 − x 2 x sin ( arctan x ) = x 1 + x 2 cos ( arctan x ) = 1 1 + x 2 tan ( arctan x ) = x sin ( arccsc x ) = 1 x cos ( arccsc x ) = x 2 − 1 x tan ( arccsc x ) = 1 x 2 − 1 sin ( arcsec x ) = x 2 − 1 x cos ( arcsec x ) = 1 x tan ( arcsec x ) = x 2 − 1 sin ( arccot x ) = 1 1 + x 2 cos ( arccot x ) = x 1 + x 2 tan ( arccot x ) = 1 x {\displaystyle {\begin{aligned}\sin(\arcsin x)&=x&\cos(\arcsin x)&={\sqrt {1-x^{2}}}&\tan(\arcsin x)&={\frac {x}{\sqrt {1-x^{2}}}}\\\sin(\arccos x)&={\sqrt {1-x^{2}}}&\cos(\arccos x)&=x&\tan(\arccos x)&={\frac {\sqrt {1-x^{2}}}{x}}\\\sin(\arctan x)&={\frac {x}{\sqrt {1+x^{2}}}}&\cos(\arctan x)&={\frac {1}{\sqrt {1+x^{2}}}}&\tan(\arctan x)&=x\\\sin(\operatorname {arccsc} x)&={\frac {1}{x}}&\cos(\operatorname {arccsc} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\tan(\operatorname {arccsc} x)&={\frac {1}{\sqrt {x^{2}-1}}}\\\sin(\operatorname {arcsec} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\cos(\operatorname {arcsec} x)&={\frac {1}{x}}&\tan(\operatorname {arcsec} x)&={\sqrt {x^{2}-1}}\\\sin(\operatorname {arccot} x)&={\frac {1}{\sqrt {1+x^{2}}}}&\cos(\operatorname {arccot} x)&={\frac {x}{\sqrt {1+x^{2}}}}&\tan(\operatorname {arccot} x)&={\frac {1}{x}}\\\end{aligned}}} Taking the multiplicative inverse of both sides of 
each equation above results in the equations for csc = 1 sin , sec = 1 cos , and cot = 1 tan . {\displaystyle \csc ={\frac {1}{\sin }},\;\sec ={\frac {1}{\cos }},{\text{ and }}\cot ={\frac {1}{\tan }}.} The right-hand side of each resulting formula is the reciprocal of the corresponding right-hand side above. For example, the equation for cot ( arcsin x ) {\displaystyle \cot(\arcsin x)} is: cot ( arcsin x ) = 1 tan ( arcsin x ) = 1 x 1 − x 2 = 1 − x 2 x {\displaystyle \cot(\arcsin x)={\frac {1}{\tan(\arcsin x)}}={\frac {1}{\frac {x}{\sqrt {1-x^{2}}}}}={\frac {\sqrt {1-x^{2}}}{x}}} while the equations for csc ( arccos x ) {\displaystyle \csc(\arccos x)} and sec ( arccos x ) {\displaystyle \sec(\arccos x)} are: csc ( arccos x ) = 1 sin ( arccos x ) = 1 1 − x 2 and sec ( arccos x ) = 1 cos ( arccos x ) = 1 x . {\displaystyle \csc(\arccos x)={\frac {1}{\sin(\arccos x)}}={\frac {1}{\sqrt {1-x^{2}}}}\qquad {\text{ and }}\quad \sec(\arccos x)={\frac {1}{\cos(\arccos x)}}={\frac {1}{x}}.} The following identities are implied by the reflection identities. They hold whenever x , r , s , − x , − r , and − s {\displaystyle x,r,s,-x,-r,{\text{ and }}-s} are in the domains of the relevant functions. 
π 2 = arcsin ( x ) + arccos ( x ) = arctan ( r ) + arccot ( r ) = arcsec ( s ) + arccsc ( s ) π = arccos ( x ) + arccos ( − x ) = arccot ( r ) + arccot ( − r ) = arcsec ( s ) + arcsec ( − s ) 0 = arcsin ( x ) + arcsin ( − x ) = arctan ( r ) + arctan ( − r ) = arccsc ( s ) + arccsc ( − s ) {\displaystyle {\begin{alignedat}{9}{\frac {\pi }{2}}~&=~\arcsin(x)&&+\arccos(x)~&&=~\arctan(r)&&+\operatorname {arccot}(r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arccsc}(s)\\[0.4ex]\pi ~&=~\arccos(x)&&+\arccos(-x)~&&=~\operatorname {arccot}(r)&&+\operatorname {arccot}(-r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arcsec}(-s)\\[0.4ex]0~&=~\arcsin(x)&&+\arcsin(-x)~&&=~\arctan(r)&&+\arctan(-r)~&&=~\operatorname {arccsc}(s)&&+\operatorname {arccsc}(-s)\\[1.0ex]\end{alignedat}}} Also, arctan x + arctan 1 x = { π 2 , if x > 0 − π 2 , if x < 0 arccot x + arccot 1 x = { π 2 , if x > 0 3 π 2 , if x < 0 {\displaystyle {\begin{aligned}\arctan x+\arctan {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\-{\frac {\pi }{2}},&{\text{if }}x<0\end{cases}}\\\operatorname {arccot} x+\operatorname {arccot} {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\{\frac {3\pi }{2}},&{\text{if }}x<0\end{cases}}\\\end{aligned}}} arccos 1 x = arcsec x and arcsec 1 x = arccos x {\displaystyle \arccos {\frac {1}{x}}=\operatorname {arcsec} x\qquad {\text{ and }}\qquad \operatorname {arcsec} {\frac {1}{x}}=\arccos x} arcsin 1 x = arccsc x and arccsc 1 x = arcsin x {\displaystyle \arcsin {\frac {1}{x}}=\operatorname {arccsc} x\qquad {\text{ and }}\qquad \operatorname {arccsc} {\frac {1}{x}}=\arcsin x} The arctangent function can be expanded as a series: arctan ( n x ) = ∑ m = 1 n arctan x 1 + ( m − 1 ) m x 2 {\displaystyle \arctan(nx)=\sum _{m=1}^{n}\arctan {\frac {x}{1+(m-1)mx^{2}}}} == Identities without variables == In terms of the arctangent function we have arctan 1 2 = arctan 1 3 + arctan 1 7 . 
{\displaystyle \arctan {\frac {1}{2}}=\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} The curious identity known as Morrie's law, cos 20 ∘ ⋅ cos 40 ∘ ⋅ cos 80 ∘ = 1 8 , {\displaystyle \cos 20^{\circ }\cdot \cos 40^{\circ }\cdot \cos 80^{\circ }={\frac {1}{8}},} is a special case of an identity that contains one variable: ∏ j = 0 k − 1 cos ( 2 j x ) = sin ( 2 k x ) 2 k sin x . {\displaystyle \prod _{j=0}^{k-1}\cos \left(2^{j}x\right)={\frac {\sin \left(2^{k}x\right)}{2^{k}\sin x}}.} Similarly, sin 20 ∘ ⋅ sin 40 ∘ ⋅ sin 80 ∘ = 3 8 {\displaystyle \sin 20^{\circ }\cdot \sin 40^{\circ }\cdot \sin 80^{\circ }={\frac {\sqrt {3}}{8}}} is a special case of an identity with x = 20 ∘ {\displaystyle x=20^{\circ }} : sin x ⋅ sin ( 60 ∘ − x ) ⋅ sin ( 60 ∘ + x ) = sin 3 x 4 . {\displaystyle \sin x\cdot \sin \left(60^{\circ }-x\right)\cdot \sin \left(60^{\circ }+x\right)={\frac {\sin 3x}{4}}.} For the case x = 15 ∘ {\displaystyle x=15^{\circ }} , sin 15 ∘ ⋅ sin 45 ∘ ⋅ sin 75 ∘ = 2 8 , sin 15 ∘ ⋅ sin 75 ∘ = 1 4 . {\displaystyle {\begin{aligned}\sin 15^{\circ }\cdot \sin 45^{\circ }\cdot \sin 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\sin 15^{\circ }\cdot \sin 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} For the case x = 10 ∘ {\displaystyle x=10^{\circ }} , sin 10 ∘ ⋅ sin 50 ∘ ⋅ sin 70 ∘ = 1 8 . {\displaystyle \sin 10^{\circ }\cdot \sin 50^{\circ }\cdot \sin 70^{\circ }={\frac {1}{8}}.} The same cosine identity is cos x ⋅ cos ( 60 ∘ − x ) ⋅ cos ( 60 ∘ + x ) = cos 3 x 4 . {\displaystyle \cos x\cdot \cos \left(60^{\circ }-x\right)\cdot \cos \left(60^{\circ }+x\right)={\frac {\cos 3x}{4}}.} Similarly, cos 10 ∘ ⋅ cos 50 ∘ ⋅ cos 70 ∘ = 3 8 , cos 15 ∘ ⋅ cos 45 ∘ ⋅ cos 75 ∘ = 2 8 , cos 15 ∘ ⋅ cos 75 ∘ = 1 4 . 
{\displaystyle {\begin{aligned}\cos 10^{\circ }\cdot \cos 50^{\circ }\cdot \cos 70^{\circ }&={\frac {\sqrt {3}}{8}},\\\cos 15^{\circ }\cdot \cos 45^{\circ }\cdot \cos 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\cos 15^{\circ }\cdot \cos 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} Similarly, tan 50 ∘ ⋅ tan 60 ∘ ⋅ tan 70 ∘ = tan 80 ∘ , tan 40 ∘ ⋅ tan 30 ∘ ⋅ tan 20 ∘ = tan 10 ∘ . {\displaystyle {\begin{aligned}\tan 50^{\circ }\cdot \tan 60^{\circ }\cdot \tan 70^{\circ }&=\tan 80^{\circ },\\\tan 40^{\circ }\cdot \tan 30^{\circ }\cdot \tan 20^{\circ }&=\tan 10^{\circ }.\end{aligned}}} The following is perhaps not as readily generalized to an identity containing variables (but see explanation below): cos 24 ∘ + cos 48 ∘ + cos 96 ∘ + cos 168 ∘ = 1 2 . {\displaystyle \cos 24^{\circ }+\cos 48^{\circ }+\cos 96^{\circ }+\cos 168^{\circ }={\frac {1}{2}}.} Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators: cos 2 π 21 + cos ( 2 ⋅ 2 π 21 ) + cos ( 4 ⋅ 2 π 21 ) + cos ( 5 ⋅ 2 π 21 ) + cos ( 8 ⋅ 2 π 21 ) + cos ( 10 ⋅ 2 π 21 ) = 1 2 . {\displaystyle \cos {\frac {2\pi }{21}}+\cos \left(2\cdot {\frac {2\pi }{21}}\right)+\cos \left(4\cdot {\frac {2\pi }{21}}\right)+\cos \left(5\cdot {\frac {2\pi }{21}}\right)+\cos \left(8\cdot {\frac {2\pi }{21}}\right)+\cos \left(10\cdot {\frac {2\pi }{21}}\right)={\frac {1}{2}}.} The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than 21/2 that are relatively prime to (or have no prime factors in common with) 21. The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively. 
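This pattern can be checked numerically. The following short Python sketch (an illustration added here, not part of the original article) sums cos(2πk/n) over the integers k below n/2 that are relatively prime to n and compares the result against μ(n)/2, where μ is the Möbius function:

```python
from math import cos, gcd, pi

def mobius(n):
    # Möbius function by trial factorization: 0 if n has a squared prime
    # factor, otherwise (-1)^(number of prime factors)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def half_primitive_cosine_sum(n):
    # sum of cos(2*pi*k/n) over k < n/2 with gcd(k, n) = 1
    return sum(cos(2 * pi * k / n) for k in range(1, (n + 1) // 2) if gcd(k, n) == 1)

for n in (10, 15, 21):
    assert abs(half_primitive_cosine_sum(n) - mobius(n) / 2) < 1e-12
```

For n = 21 this reproduces the identity with 21 in the denominators (μ(21) = 1, so the sum is 1/2); n = 15 gives the degree-measure identity with cos 24°, cos 48°, cos 96°, cos 168°, and n = 10 gives the other preceding identity.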
Other cosine identities include: 2 cos π 3 = 1 , 2 cos π 5 × 2 cos 2 π 5 = 1 , 2 cos π 7 × 2 cos 2 π 7 × 2 cos 3 π 7 = 1 , {\displaystyle {\begin{aligned}2\cos {\frac {\pi }{3}}&=1,\\2\cos {\frac {\pi }{5}}\times 2\cos {\frac {2\pi }{5}}&=1,\\2\cos {\frac {\pi }{7}}\times 2\cos {\frac {2\pi }{7}}\times 2\cos {\frac {3\pi }{7}}&=1,\end{aligned}}} and so forth for all odd numbers, and hence cos π 3 + cos π 5 × cos 2 π 5 + cos π 7 × cos 2 π 7 × cos 3 π 7 + ⋯ = 1. {\displaystyle \cos {\frac {\pi }{3}}+\cos {\frac {\pi }{5}}\times \cos {\frac {2\pi }{5}}+\cos {\frac {\pi }{7}}\times \cos {\frac {2\pi }{7}}\times \cos {\frac {3\pi }{7}}+\dots =1.} Many of those curious identities stem from more general facts like the following: ∏ k = 1 n − 1 sin k π n = n 2 n − 1 {\displaystyle \prod _{k=1}^{n-1}\sin {\frac {k\pi }{n}}={\frac {n}{2^{n-1}}}} and ∏ k = 1 n − 1 cos k π n = sin π n 2 2 n − 1 . {\displaystyle \prod _{k=1}^{n-1}\cos {\frac {k\pi }{n}}={\frac {\sin {\frac {\pi n}{2}}}{2^{n-1}}}.} Combining these gives us ∏ k = 1 n − 1 tan k π n = n sin π n 2 {\displaystyle \prod _{k=1}^{n-1}\tan {\frac {k\pi }{n}}={\frac {n}{\sin {\frac {\pi n}{2}}}}} If n is an odd number ( n = 2 m + 1 {\displaystyle n=2m+1} ), we can make use of the symmetries to get ∏ k = 1 m tan k π 2 m + 1 = 2 m + 1 {\displaystyle \prod _{k=1}^{m}\tan {\frac {k\pi }{2m+1}}={\sqrt {2m+1}}} The transfer function of the Butterworth low-pass filter can be expressed in terms of polynomials and poles. By setting the frequency equal to the cutoff frequency, the following identity can be proved: ∏ k = 1 n sin ( 2 k − 1 ) π 4 n = ∏ k = 1 n cos ( 2 k − 1 ) π 4 n = 2 2 n {\displaystyle \prod _{k=1}^{n}\sin {\frac {\left(2k-1\right)\pi }{4n}}=\prod _{k=1}^{n}\cos {\frac {\left(2k-1\right)\pi }{4n}}={\frac {\sqrt {2}}{2^{n}}}} === Computing π === An efficient way to compute π to a large number of digits is based on the following identity without variables, due to Machin. 
This is known as a Machin-like formula: π 4 = 4 arctan 1 5 − arctan 1 239 {\displaystyle {\frac {\pi }{4}}=4\arctan {\frac {1}{5}}-\arctan {\frac {1}{239}}} or, alternatively, by using an identity of Leonhard Euler: π 4 = 5 arctan 1 7 + 2 arctan 3 79 {\displaystyle {\frac {\pi }{4}}=5\arctan {\frac {1}{7}}+2\arctan {\frac {3}{79}}} or by using Pythagorean triples: π = arccos 4 5 + arccos 5 13 + arccos 16 65 = arcsin 3 5 + arcsin 12 13 + arcsin 63 65 . {\displaystyle \pi =\arccos {\frac {4}{5}}+\arccos {\frac {5}{13}}+\arccos {\frac {16}{65}}=\arcsin {\frac {3}{5}}+\arcsin {\frac {12}{13}}+\arcsin {\frac {63}{65}}.} Others include: π 4 = arctan 1 2 + arctan 1 3 , {\displaystyle {\frac {\pi }{4}}=\arctan {\frac {1}{2}}+\arctan {\frac {1}{3}},} π = arctan 1 + arctan 2 + arctan 3 , {\displaystyle \pi =\arctan 1+\arctan 2+\arctan 3,} π 4 = 2 arctan 1 3 + arctan 1 7 . {\displaystyle {\frac {\pi }{4}}=2\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} Generally, for numbers t1, ..., tn−1 ∈ (−1, 1) for which θn = arctan t1 + ⋯ + arctan tn−1 ∈ (π/4, 3π/4), let tn = tan(π/2 − θn) = cot θn. This last expression can be computed directly using the formula for the cotangent of a sum of angles whose tangents are t1, ..., tn−1 and its value will be in (−1, 1). In particular, the computed tn will be rational whenever all the t1, ..., tn−1 values are rational. With these values, π 2 = ∑ k = 1 n arctan ( t k ) π = ∑ k = 1 n sgn ( t k ) arccos ( 1 − t k 2 1 + t k 2 ) π = ∑ k = 1 n arcsin ( 2 t k 1 + t k 2 ) π = ∑ k = 1 n arctan ( 2 t k 1 − t k 2 ) , {\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\sum _{k=1}^{n}\arctan(t_{k})\\\pi &=\sum _{k=1}^{n}\operatorname {sgn}(t_{k})\arccos \left({\frac {1-t_{k}^{2}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arcsin \left({\frac {2t_{k}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arctan \left({\frac {2t_{k}}{1-t_{k}^{2}}}\right)\,,\end{aligned}}} where in all but the first expression, we have used tangent half-angle formulae. 
The first two formulae work even if one or more of the tk values is not within (−1, 1). Note that if t = p/q is rational, then the (2t, 1 − t², 1 + t²) values in the above formulae are proportional to the Pythagorean triple (2pq, q² − p², q² + p²). For example, for n = 3 terms, π 2 = arctan ( a b ) + arctan ( c d ) + arctan ( b d − a c a d + b c ) {\displaystyle {\frac {\pi }{2}}=\arctan \left({\frac {a}{b}}\right)+\arctan \left({\frac {c}{d}}\right)+\arctan \left({\frac {bd-ac}{ad+bc}}\right)} for any a, b, c, d > 0. === An identity of Euclid === Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says: sin 2 18 ∘ + sin 2 30 ∘ = sin 2 36 ∘ . {\displaystyle \sin ^{2}18^{\circ }+\sin ^{2}30^{\circ }=\sin ^{2}36^{\circ }.} Ptolemy used this proposition to compute some angles in his table of chords in Book I, chapter 11 of Almagest. == Composition of trigonometric functions == These identities involve a trigonometric function of a trigonometric function: cos ( t sin x ) = J 0 ( t ) + 2 ∑ k = 1 ∞ J 2 k ( t ) cos ( 2 k x ) {\displaystyle \cos(t\sin x)=J_{0}(t)+2\sum _{k=1}^{\infty }J_{2k}(t)\cos(2kx)} sin ( t sin x ) = 2 ∑ k = 0 ∞ J 2 k + 1 ( t ) sin ( ( 2 k + 1 ) x ) {\displaystyle \sin(t\sin x)=2\sum _{k=0}^{\infty }J_{2k+1}(t)\sin {\big (}(2k+1)x{\big )}} cos ( t cos x ) = J 0 ( t ) + 2 ∑ k = 1 ∞ ( − 1 ) k J 2 k ( t ) cos ( 2 k x ) {\displaystyle \cos(t\cos x)=J_{0}(t)+2\sum _{k=1}^{\infty }(-1)^{k}J_{2k}(t)\cos(2kx)} sin ( t cos x ) = 2 ∑ k = 0 ∞ ( − 1 ) k J 2 k + 1 ( t ) cos ( ( 2 k + 1 ) x ) {\displaystyle \sin(t\cos x)=2\sum _{k=0}^{\infty }(-1)^{k}J_{2k+1}(t)\cos {\big (}(2k+1)x{\big )}} where Ji are Bessel functions. 
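These expansions can be checked numerically with nothing more than the standard library, using the integral representation J_n(t) = (1/π) ∫₀^π cos(nτ − t sin τ) dτ of the Bessel functions. The following Python sketch (an illustration added here, not part of the original article; tolerances reflect the crude quadrature) truncates the first two series and compares them with the left-hand sides:

```python
from math import cos, sin, pi

def bessel_j(n, t, steps=20000):
    # Bessel function of the first kind via its integral representation,
    # J_n(t) = (1/pi) * integral_0^pi cos(n*tau - t*sin(tau)) d tau,
    # approximated with the midpoint rule
    h = pi / steps
    return sum(cos(n * (j + 0.5) * h - t * sin((j + 0.5) * h)) for j in range(steps)) * h / pi

def cos_t_sin_x(t, x, terms=8):
    # truncated right-hand side of cos(t sin x) = J_0(t) + 2 sum J_{2k}(t) cos(2kx)
    return bessel_j(0, t) + 2 * sum(bessel_j(2 * k, t) * cos(2 * k * x) for k in range(1, terms + 1))

def sin_t_sin_x(t, x, terms=8):
    # truncated right-hand side of sin(t sin x) = 2 sum J_{2k+1}(t) sin((2k+1)x)
    return 2 * sum(bessel_j(2 * k + 1, t) * sin((2 * k + 1) * x) for k in range(terms + 1))

t, x = 1.7, 0.6
assert abs(cos_t_sin_x(t, x) - cos(t * sin(x))) < 1e-4
assert abs(sin_t_sin_x(t, x) - sin(t * sin(x))) < 1e-4
```

The series converge very fast because J_n(t) decays super-exponentially in n for fixed t, so eight terms are already far more than needed here.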
== Further "conditional" identities for the case α + β + γ = 180° == A conditional trigonometric identity is a trigonometric identity that holds if specified conditions on the arguments to the trigonometric functions are satisfied. The following formulae apply to arbitrary plane triangles and follow from α + β + γ = 180 ∘ , {\displaystyle \alpha +\beta +\gamma =180^{\circ },} as long as the functions occurring in the formulae are well-defined (the latter applies only to the formulae in which tangents and cotangents occur). tan α + tan β + tan γ = tan α tan β tan γ 1 = cot β cot γ + cot γ cot α + cot α cot β cot ( α 2 ) + cot ( β 2 ) + cot ( γ 2 ) = cot ( α 2 ) cot ( β 2 ) cot ( γ 2 ) 1 = tan ( β 2 ) tan ( γ 2 ) + tan ( γ 2 ) tan ( α 2 ) + tan ( α 2 ) tan ( β 2 ) sin α + sin β + sin γ = 4 cos ( α 2 ) cos ( β 2 ) cos ( γ 2 ) − sin α + sin β + sin γ = 4 cos ( α 2 ) sin ( β 2 ) sin ( γ 2 ) cos α + cos β + cos γ = 4 sin ( α 2 ) sin ( β 2 ) sin ( γ 2 ) + 1 − cos α + cos β + cos γ = 4 sin ( α 2 ) cos ( β 2 ) cos ( γ 2 ) − 1 sin ( 2 α ) + sin ( 2 β ) + sin ( 2 γ ) = 4 sin α sin β sin γ − sin ( 2 α ) + sin ( 2 β ) + sin ( 2 γ ) = 4 sin α cos β cos γ cos ( 2 α ) + cos ( 2 β ) + cos ( 2 γ ) = − 4 cos α cos β cos γ − 1 − cos ( 2 α ) + cos ( 2 β ) + cos ( 2 γ ) = − 4 cos α sin β sin γ + 1 sin 2 α + sin 2 β + sin 2 γ = 2 cos α cos β cos γ + 2 − sin 2 α + sin 2 β + sin 2 γ = 2 cos α sin β sin γ cos 2 α + cos 2 β + cos 2 γ = − 2 cos α cos β cos γ + 1 − cos 2 α + cos 2 β + cos 2 γ = − 2 cos α sin β sin γ + 1 sin 2 ( 2 α ) + sin 2 ( 2 β ) + sin 2 ( 2 γ ) = − 2 cos ( 2 α ) cos ( 2 β ) cos ( 2 γ ) + 2 cos 2 ( 2 α ) + cos 2 ( 2 β ) + cos 2 ( 2 γ ) = 2 cos ( 2 α ) cos ( 2 β ) cos ( 2 γ ) + 1 1 = sin 2 ( α 2 ) + sin 2 ( β 2 ) + sin 2 ( γ 2 ) + 2 sin ( α 2 ) sin ( β 2 ) sin ( γ 2 ) {\displaystyle {\begin{aligned}\tan \alpha +\tan \beta +\tan \gamma &=\tan \alpha \tan \beta \tan \gamma \\1&=\cot \beta \cot \gamma +\cot \gamma \cot \alpha +\cot \alpha \cot \beta \\\cot \left({\frac {\alpha 
}{2}}\right)+\cot \left({\frac {\beta }{2}}\right)+\cot \left({\frac {\gamma }{2}}\right)&=\cot \left({\frac {\alpha }{2}}\right)\cot \left({\frac {\beta }{2}}\right)\cot \left({\frac {\gamma }{2}}\right)\\1&=\tan \left({\frac {\beta }{2}}\right)\tan \left({\frac {\gamma }{2}}\right)+\tan \left({\frac {\gamma }{2}}\right)\tan \left({\frac {\alpha }{2}}\right)+\tan \left({\frac {\alpha }{2}}\right)\tan \left({\frac {\beta }{2}}\right)\\\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)\\-\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)\\\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)+1\\-\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)-1\\\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \sin \beta \sin \gamma \\-\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \cos \beta \cos \gamma \\\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \cos \beta \cos \gamma -1\\-\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \cos \beta \cos \gamma +2\\-\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \sin \beta \sin \gamma \\\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \cos \beta \cos \gamma +1\\-\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}(2\alpha )+\sin ^{2}(2\beta )+\sin ^{2}(2\gamma )&=-2\cos(2\alpha )\cos(2\beta )\cos(2\gamma )+2\\\cos ^{2}(2\alpha )+\cos ^{2}(2\beta )+\cos ^{2}(2\gamma )&=2\cos(2\alpha )\,\cos(2\beta )\,\cos(2\gamma )+1\\1&=\sin 
^{2}\left({\frac {\alpha }{2}}\right)+\sin ^{2}\left({\frac {\beta }{2}}\right)+\sin ^{2}\left({\frac {\gamma }{2}}\right)+2\sin \left({\frac {\alpha }{2}}\right)\,\sin \left({\frac {\beta }{2}}\right)\,\sin \left({\frac {\gamma }{2}}\right)\end{aligned}}} == Historical shorthands == The versine, coversine, haversine, and exsecant were used in navigation. For example, the haversine formula was used to calculate the distance between two points on a sphere. They are rarely used today. == Miscellaneous == === Dirichlet kernel === The Dirichlet kernel Dn(x) is the function occurring on both sides of the next identity: 1 + 2 cos x + 2 cos ( 2 x ) + 2 cos ( 3 x ) + ⋯ + 2 cos ( n x ) = sin ( ( n + 1 2 ) x ) sin ( 1 2 x ) . {\displaystyle 1+2\cos x+2\cos(2x)+2\cos(3x)+\cdots +2\cos(nx)={\frac {\sin \left(\left(n+{\frac {1}{2}}\right)x\right)}{\sin \left({\frac {1}{2}}x\right)}}.} The convolution of any integrable function of period 2 π {\displaystyle 2\pi } with the Dirichlet kernel coincides with the function's n {\displaystyle n} th-degree Fourier approximation. The same holds for any measure or generalized function. === Tangent half-angle substitution === If we set t = tan x 2 , {\displaystyle t=\tan {\frac {x}{2}},} then sin x = 2 t 1 + t 2 ; cos x = 1 − t 2 1 + t 2 ; e i x = 1 + i t 1 − i t ; d x = 2 d t 1 + t 2 , {\displaystyle \sin x={\frac {2t}{1+t^{2}}};\qquad \cos x={\frac {1-t^{2}}{1+t^{2}}};\qquad e^{ix}={\frac {1+it}{1-it}};\qquad dx={\frac {2\,dt}{1+t^{2}}},} where e i x = cos x + i sin x , {\displaystyle e^{ix}=\cos x+i\sin x,} sometimes abbreviated to cis x. When this substitution of t {\displaystyle t} for tan x/2 is used in calculus, it follows that sin x {\displaystyle \sin x} is replaced by 2t/(1 + t²), cos x {\displaystyle \cos x} is replaced by (1 − t²)/(1 + t²), and the differential dx is replaced by 2 dt/(1 + t²). 
Thereby one converts rational functions of sin x {\displaystyle \sin x} and cos x {\displaystyle \cos x} to rational functions of t {\displaystyle t} in order to find their antiderivatives. === Viète's infinite product === cos θ 2 ⋅ cos θ 4 ⋅ cos θ 8 ⋯ = ∏ n = 1 ∞ cos θ 2 n = sin θ θ = sinc θ . {\displaystyle \cos {\frac {\theta }{2}}\cdot \cos {\frac {\theta }{4}}\cdot \cos {\frac {\theta }{8}}\cdots =\prod _{n=1}^{\infty }\cos {\frac {\theta }{2^{n}}}={\frac {\sin \theta }{\theta }}=\operatorname {sinc} \theta .} == See also == == References == == Bibliography == == External links == Values of sin and cos, expressed in surds, for integer multiples of 3° and of 5 5/8°, and for the same angles csc, sec, and tan
|
Wikipedia:Conductance (graph theory)#0
|
In theoretical computer science, graph theory, and mathematics, the conductance is a parameter of a Markov chain that is closely tied to its mixing time, that is, how rapidly the chain converges to its stationary distribution, should it exist. Equivalently, the conductance can be viewed as a parameter of a directed graph, in which case it can be used to analyze how quickly random walks in the graph converge. The conductance of a graph is closely related to the Cheeger constant of the graph, which is also known as the edge expansion or the isoperimetric number. However, due to subtly different definitions, the conductance and the edge expansion do not generally coincide if the graphs are not regular. On the other hand, the notion of electrical conductance that appears in electrical networks is unrelated to the conductance of a graph. == History == The conductance was first defined by Mark Jerrum and Alistair Sinclair in 1988 to prove that the permanent of a matrix with entries from {0,1} has a polynomial-time approximation scheme. In the proof, Jerrum and Sinclair studied the Markov chain that switches between perfect and near-perfect matchings in bipartite graphs by adding or removing individual edges. They defined and used the conductance to prove that this Markov chain is rapidly mixing. This means that, after running the Markov chain for a polynomial number of steps, the resulting distribution is guaranteed to be close to the stationary distribution, which in this case is the uniform distribution on the set of all perfect and near-perfect matchings. This rapidly mixing Markov chain makes it possible in polynomial time to draw approximately uniform random samples from the set of all perfect matchings in the bipartite graph, which in turn gives rise to the polynomial-time approximation scheme for computing the permanent. 
== Definition == For undirected d-regular graphs G {\displaystyle G} without edge weights, the conductance φ ( G ) {\displaystyle \varphi (G)} is equal to the Cheeger constant h ( G ) {\displaystyle h(G)} divided by d, that is, we have φ ( G ) = h ( G ) / d {\displaystyle \varphi (G)=h(G)/d} . More generally, let G {\displaystyle G} be a directed graph with n {\displaystyle n} vertices, vertex set V {\displaystyle V} , edge set E {\displaystyle E} , and real weights a i j ≥ 0 {\displaystyle a_{ij}\geq 0} on each edge i j ∈ E {\displaystyle ij\in E} . Let S ⊆ V {\displaystyle S\subseteq V} be any vertex subset. The conductance φ ( S ) {\displaystyle \varphi (S)} of the cut ( S , S ¯ ) {\displaystyle (S,{\bar {S}})} is defined via φ ( S ) = a ( S , S ¯ ) min ( v o l ( S ) , v o l ( S ¯ ) ) , {\displaystyle \varphi (S)={\frac {\displaystyle a(S,{\bar {S}})}{\min(\mathrm {vol} (S),\mathrm {vol} ({\bar {S}}))}}\,,} where a ( S , T ) = ∑ i ∈ S ∑ j ∈ T a i j , {\displaystyle a(S,T)=\sum _{i\in S}\sum _{j\in T}a_{ij}\,,} and so a ( S , S ¯ ) {\displaystyle a(S,{\bar {S}})} is the total weight of all edges that are crossing the cut from S {\displaystyle S} to S ¯ {\displaystyle {\bar {S}}} and v o l ( S ) = a ( S , V ) = ∑ i ∈ S ∑ j ∈ V a i j {\displaystyle \mathrm {vol} (S)=a(S,V)=\sum _{i\in S}\sum _{j\in V}a_{ij}} is the volume of S {\displaystyle S} , that is, the total weight of all edges that start at S {\displaystyle S} . If v o l ( S ) {\displaystyle \mathrm {vol} (S)} equals 0 {\displaystyle 0} , then a ( S , S ¯ ) {\displaystyle a(S,{\bar {S}})} also equals 0 {\displaystyle 0} and φ ( S ) {\displaystyle \varphi (S)} is defined as 1 {\displaystyle 1} . The conductance φ ( G ) {\displaystyle \varphi (G)} of the graph G {\displaystyle G} is now defined as the minimum conductance over all possible cuts: φ ( G ) = min S ⊆ V φ ( S ) . 
{\displaystyle \varphi (G)=\min _{S\subseteq V}\varphi (S).} Equivalently, the conductance satisfies φ ( G ) = min { a ( S , S ¯ ) v o l ( S ) : v o l ( S ) ≤ v o l ( V ) 2 } . {\displaystyle \varphi (G)=\min \left\{{\frac {a(S,{\bar {S}})}{\mathrm {vol} (S)}}\;\colon \;{\mathrm {vol} (S)\leq {\frac {\mathrm {vol} (V)}{2}}}\right\}\,.} == Generalizations and applications == In practical applications, one often considers the conductance only over a cut. A common generalization of conductance is to handle the case of weights assigned to the edges: then the weights are added; if the weight is in the form of a resistance, then the reciprocal weights are added. The notion of conductance underpins the study of percolation in physics and other applied areas; thus, for example, the permeability of petroleum through porous rock can be modeled in terms of the conductance of a graph, with weights given by pore sizes. Conductance also helps measure the quality of a spectral clustering. The maximum among the conductance of clusters provides a bound which can be used, along with inter-cluster edge weight, to define a measure of the quality of the clustering. Intuitively, the conductance of a cluster (which can be seen as a set of vertices in a graph) should be low. Apart from this, the conductance of the subgraph induced by a cluster (called "internal conductance") can be used as well. == Markov chains == For an ergodic reversible Markov chain with an underlying graph G, the conductance is a way to measure how hard it is to leave a small set of nodes. Formally, the conductance of a graph is defined as the minimum over all sets S {\displaystyle S} of the ergodic flow out of S {\displaystyle S} divided by the capacity of S {\displaystyle S} . Alistair Sinclair showed that conductance is closely tied to mixing time in ergodic reversible Markov chains. 
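For small graphs, the minimum in the definition above can be found by enumerating all cuts directly. The following Python sketch (an illustration added here, not part of the original article) does this for a nonnegative edge-weight matrix; for the unweighted 4-cycle, a 2-regular graph, it returns 1/2, consistent with φ(G) = h(G)/d:

```python
def conductance(a):
    # a[i][j]: nonnegative weight of the directed edge i -> j
    n = len(a)
    def vol(nodes):
        # vol(S) = a(S, V): total weight of all edges starting in `nodes`
        return sum(a[i][j] for i in nodes for j in range(n))
    total = vol(range(n))
    # phi(S) is defined as 1 when vol(S) = 0, so cuts skipped below contribute 1
    best = 1.0
    for mask in range(1, 2 ** n - 1):          # all nontrivial vertex subsets
        s = [i for i in range(n) if mask >> i & 1]
        sbar = [i for i in range(n) if not mask >> i & 1]
        cross = sum(a[i][j] for i in s for j in sbar)   # a(S, S-bar)
        denom = min(vol(s), total - vol(s))             # min(vol(S), vol(S-bar))
        if denom > 0:
            best = min(best, cross / denom)
    return best

# unweighted 4-cycle, edges counted in both directions
c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
assert conductance(c4) == 0.5
```

The minimizing cut splits the cycle into two pairs of adjacent vertices: two edges cross, and each side has volume 4.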
We can also view conductance in a more probabilistic way, as the probability of leaving a set of nodes given that we started in that set to begin with. This may also be written as Φ = min S ⊆ V , 0 < π ( S ) ≤ 1 2 Φ S = min S ⊆ V , 0 < π ( S ) ≤ 1 2 ∑ x ∈ S , y ∈ S ¯ π ( x ) P ( x , y ) π ( S ) , {\displaystyle \Phi =\min _{S\subseteq V,0<\pi (S)\leq {\frac {1}{2}}}\Phi _{S}=\min _{S\subseteq V,0<\pi (S)\leq {\frac {1}{2}}}{\frac {\sum _{x\in S,y\in {\bar {S}}}\pi (x)P(x,y)}{\pi (S)}},} where π {\displaystyle \pi } is the stationary distribution of the chain. In some literature, this quantity is also called the bottleneck ratio of G. Conductance is related to Markov chain mixing time in the reversible setting. Precisely, for any irreducible, reversible Markov chain with self-loop probabilities P ( y , y ) ≥ 1 / 2 {\displaystyle P(y,y)\geq 1/2} for all states y {\displaystyle y} and an initial state x ∈ Ω {\displaystyle x\in \Omega } , 1 4 Φ ≤ τ x ( δ ) ≤ 2 Φ 2 ( ln π ( x ) − 1 + ln δ − 1 ) {\displaystyle {\frac {1}{4\Phi }}\leq \tau _{x}(\delta )\leq {\frac {2}{\Phi ^{2}}}{\big (}\ln \pi (x)^{-1}+\ln \delta ^{-1}{\big )}} . == See also == Resistance distance Percolation theory Krackhardt E/I Ratio == Notes == == References ==
|
Wikipedia:Conference graph#0
|
In the mathematical area of graph theory, a conference graph is a strongly regular graph with parameters v, k = (v − 1)/2, λ = (v − 5)/4, and μ = (v − 1)/4. It is the graph associated with a symmetric conference matrix, and consequently its order v must be 1 (modulo 4) and a sum of two squares. Conference graphs are known to exist for all small values of v allowed by the restrictions, e.g., v = 5, 9, 13, 17, 25, 29, and (the Paley graphs) for all prime powers congruent to 1 (modulo 4). However, there are many values of v that are allowed, for which the existence of a conference graph is unknown. The eigenvalues of a conference graph need not be integers, unlike those of other strongly regular graphs. If the graph is connected, the eigenvalues are k with multiplicity 1, and two other eigenvalues, − 1 ± v 2 , {\displaystyle {\frac {-1\pm {\sqrt {v}}}{2}},} each with multiplicity (v − 1)/2. == References == Brouwer, A.E., Cohen, A.M., and Neumaier, A. (1989), Distance Regular Graphs. Berlin, New York: Springer-Verlag. ISBN 3-540-50619-5, ISBN 0-387-50619-5
|
Wikipedia:Conference matrix#0
|
In mathematics, a conference matrix (also called a C-matrix) is a square matrix C with 0 on the diagonal and +1 and −1 off the diagonal, such that CTC is a multiple of the identity matrix I. Thus, if the matrix has order n, CTC = (n−1)I. Some authors use a more general definition, which requires there to be a single 0 in each row and column but not necessarily on the diagonal. Conference matrices first arose in connection with a problem in telephony. They were first described by Vitold Belevitch, who also gave them their name. Belevitch was interested in constructing ideal telephone conference networks from ideal transformers and discovered that such networks were represented by conference matrices, hence the name. Other applications are in statistics, and another is in elliptic geometry. For n > 1, there are two kinds of conference matrix. Let us normalize C by, first (if the more general definition is used), rearranging the rows so that all the zeros are on the diagonal, and then negating any row or column whose first entry is negative. (These operations do not change whether a matrix is a conference matrix.) Thus, a normalized conference matrix has all 1's in its first row and column, except for a 0 in the top left corner, and is 0 on the diagonal. Let S be the matrix that remains when the first row and column of C are removed. Then either n is evenly even (a multiple of 4) and S is skew-symmetric (as is the normalized C if its first row is negated), or n is oddly even (congruent to 2 modulo 4) and S is symmetric (as is the normalized C). == Symmetric conference matrices == If C is a symmetric conference matrix of order n > 1, then not only must n be congruent to 2 mod 4 but also n − 1 must be a sum of two squares; there is a clever proof by elementary matrix theory in van Lint and Seidel. n − 1 will always be the sum of two squares if n − 1 is a prime power. Given a symmetric conference matrix, the matrix S can be viewed as the Seidel adjacency matrix of a graph. 
The graph has n − 1 vertices, corresponding to the rows and columns of S, and two vertices are adjacent if the corresponding entry in S is negative. This graph is strongly regular of the type called (after the matrix) a conference graph. The existence of conference matrices of orders n allowed by the above restrictions is known only for some values of n. For instance, if n = q + 1 where q is a prime power congruent to 1 mod 4, then the Paley graphs provide examples of symmetric conference matrices of order n, by taking S to be the Seidel matrix of the Paley graph. The first few possible orders of a symmetric conference matrix are n = 2, 6, 10, 14, 18, (not 22, since 21 is not a sum of two squares), 26, 30, (not 34 since 33 is not a sum of two squares), 38, 42, 46, 50, 54, (not 58), 62 (sequence A000952 in the OEIS); for every one of these, it is known that a symmetric conference matrix of that order exists. Order 66 seems to be an open problem. === Examples === The essentially unique conference matrix of order 6 is given by ( 0 + 1 + 1 + 1 + 1 + 1 + 1 0 + 1 − 1 − 1 + 1 + 1 + 1 0 + 1 − 1 − 1 + 1 − 1 + 1 0 + 1 − 1 + 1 − 1 − 1 + 1 0 + 1 + 1 + 1 − 1 − 1 + 1 0 ) {\displaystyle {\begin{pmatrix}0&+1&+1&+1&+1&+1\\+1&0&+1&-1&-1&+1\\+1&+1&0&+1&-1&-1\\+1&-1&+1&0&+1&-1\\+1&-1&-1&+1&0&+1\\+1&+1&-1&-1&+1&0\end{pmatrix}}} . All other conference matrices of order 6 are obtained from this one by flipping the signs of some row and/or column (and by taking permutations of rows and/or columns, according to the definition in use). 
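That this matrix really is a conference matrix can be confirmed directly by computing CTC. A short Python sketch (an illustration added here, not part of the original article):

```python
def gram_matrix(c):
    # returns C^T C for a square matrix given as a list of rows
    n = len(c)
    return [[sum(c[k][i] * c[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# the essentially unique conference matrix of order 6, as given above
C6 = [
    [0,  1,  1,  1,  1,  1],
    [1,  0,  1, -1, -1,  1],
    [1,  1,  0,  1, -1, -1],
    [1, -1,  1,  0,  1, -1],
    [1, -1, -1,  1,  0,  1],
    [1,  1, -1, -1,  1,  0],
]
gram = gram_matrix(C6)
# conference property: C^T C = (n - 1) I = 5 I
assert all(gram[i][j] == (5 if i == j else 0) for i in range(6) for j in range(6))
```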
One conference matrix of order 10 is ( 0 − 1 − 1 + 1 − 1 − 1 − 1 + 1 − 1 − 1 − 1 0 − 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 − 1 0 − 1 + 1 − 1 + 1 + 1 + 1 − 1 + 1 − 1 − 1 0 + 1 + 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 0 − 1 + 1 − 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 0 + 1 + 1 + 1 + 1 − 1 + 1 + 1 + 1 + 1 + 1 0 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 + 1 0 + 1 − 1 − 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 0 + 1 − 1 − 1 − 1 + 1 + 1 + 1 − 1 − 1 + 1 0 ) {\displaystyle {\begin{pmatrix}0&-1&-1&+1&-1&-1&-1&+1&-1&-1\\-1&0&-1&-1&-1&+1&+1&-1&-1&-1\\-1&-1&0&-1&+1&-1&+1&+1&+1&-1\\+1&-1&-1&0&+1&+1&+1&+1&-1&+1\\-1&-1&+1&+1&0&-1&+1&-1&-1&+1\\-1&+1&-1&+1&-1&0&+1&+1&+1&+1\\-1&+1&+1&+1&+1&+1&0&+1&-1&-1\\+1&-1&+1&+1&-1&+1&+1&0&+1&-1\\-1&-1&+1&-1&-1&+1&-1&+1&0&+1\\-1&-1&-1&+1&+1&+1&-1&-1&+1&0\end{pmatrix}}} . == Skew-symmetric conference matrices == Skew-symmetric matrices can also be produced by the Paley construction. Let q be a prime power with residue 3 mod 4. Then there is a Paley digraph of order q which leads to a skew-symmetric conference matrix of order n = q + 1. The matrix is obtained by taking for S the q × q matrix that has a +1 in position (i, j ) and −1 in position (j, i) if there is an arc of the digraph from i to j, and zero diagonal. Then C constructed as above from S, but with the first row all negative, is a skew-symmetric conference matrix. This construction solves only a small part of the problem of deciding for which evenly even numbers n there exist skew-symmetric conference matrices of order n. == Generalizations == Sometimes a conference matrix of order n is just defined as a weighing matrix of the form W(n, n−1), where W(n,w) is said to be of weight w > 0 and order n if it is a square matrix of size n with entries from {−1, 0, +1} satisfying W W T = w I. Using this definition, the zero element is no more required to be on the diagonal, but it is easy to see that still there must be exactly one zero element in each row and column. 
For example, the matrix ( 1 0 1 1 0 − 1 − 1 1 1 − 1 0 − 1 1 1 − 1 0 ) {\displaystyle {\begin{pmatrix}1&0&1&1\\0&-1&-1&1\\1&-1&0&-1\\1&1&-1&0\end{pmatrix}}} would satisfy this relaxed definition, but not the more strict one requiring the zero elements to be on the diagonal. A conference design is a generalization of conference matrices to rectangular matrices. A conference design C is an N × k {\displaystyle N\times k} matrix, with entries from {−1, 0, +1} satisfying C T C = ( N − 1 ) I k {\displaystyle C^{\mathrm {T} }C=(N-1)I_{k}} , where I k {\displaystyle I_{k}} is the k × k {\displaystyle k\times k} identity matrix, and with at most one zero in each row. The foldover designs of conference designs can be used as definitive screening designs. == Telephone conference circuits == Belevitch obtained complete solutions for conference matrices for all values of n up to 38 and provided circuits for some of the smaller matrices. An ideal conference network is one where the loss of signal is entirely due to the signal being split between multiple conference subscriber ports. That is, there are no dissipation losses within the network. The network must contain ideal transformers only and no resistances. An n-port ideal conference network exists if and only if there exists a conference matrix of order n. For instance, a 3-port conference network can be constructed with the well-known hybrid transformer circuit used for 2-wire to 4-wire conversion in telephone handsets and line repeaters. However, there is no order-3 conference matrix, and this circuit does not produce an ideal conference network. A resistance is needed for matching, which dissipates signal, or else signal is lost through mismatch. As mentioned above, a necessary condition for a conference matrix to exist is that n−1 must be the sum of two squares. Where there is more than one possible sum of two squares for n−1 there will exist multiple essentially different solutions for the corresponding conference network. 
This situation occurs for n = 26 and n = 66. The networks are particularly simple when n−1 is a perfect square (n = 2, 10, 26, ...). == Notes == == References == == Further reading ==
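The Paley construction of skew-symmetric conference matrices described above is easy to carry out for small orders. The following Python sketch (an illustration added here, not part of the original article) handles primes q ≡ 3 (mod 4); prime powers would additionally need finite-field arithmetic. It builds the bordered matrix and checks both the skew-symmetry CT = −C and the conference property CTC = (n − 1)I:

```python
def paley_skew_conference(q):
    # Skew-symmetric conference matrix of order n = q + 1 from the Paley
    # digraph, for a prime q with q = 3 (mod 4).
    squares = {(x * x) % q for x in range(1, q)}           # nonzero quadratic residues
    chi = lambda x: 0 if x % q == 0 else (1 if x % q in squares else -1)
    n = q + 1
    c = [[0] * n for _ in range(n)]
    for j in range(1, n):
        c[0][j] = -1        # first row all negative
        c[j][0] = 1         # first column all positive
    for i in range(1, n):
        for j in range(1, n):
            c[i][j] = chi(j - i)   # +1 iff there is an arc i -> j in the Paley digraph
    return c

for q in (3, 7, 11):
    c = paley_skew_conference(q)
    n = q + 1
    # skew-symmetric: C^T = -C (possible because chi(-1) = -1 when q = 3 mod 4)
    assert all(c[i][j] == -c[j][i] for i in range(n) for j in range(n))
    # conference property: C^T C = (n - 1) I
    gram = [[sum(c[k][i] * c[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    assert all(gram[i][j] == ((n - 1) if i == j else 0) for i in range(n) for j in range(n))
```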
|
Wikipedia:Conformable matrix#0
|
In mathematics, a matrix is conformable if its dimensions are suitable for defining some operation (e.g. addition, multiplication, etc.). == Examples == If two matrices have the same dimensions (number of rows and number of columns), they are conformable for addition. Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. That is, if A is an m × n matrix and B is an s × p matrix, then n needs to be equal to s for the matrix product AB to be defined. In this case, we say that A and B are conformable for multiplication (in that sequence). Since squaring a matrix involves multiplying it by itself (A2 = AA) a matrix must be m × m (that is, it must be a square matrix) to be conformable for squaring. Thus for example only a square matrix can be idempotent. Only a square matrix is conformable for matrix inversion. However, the Moore–Penrose pseudoinverse and other generalized inverses do not have this requirement. Only a square matrix is conformable for matrix exponentiation. == See also == Linear algebra == References ==
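The multiplication rule can be made concrete in code. The following Python sketch (an illustration added here, not part of the original article; the function names are invented for this example) checks conformability before multiplying:

```python
def conformable_for_multiplication(shape_a, shape_b):
    # (m x n) times (s x p) is defined exactly when n == s
    return shape_a[1] == shape_b[0]

def matmul(a, b):
    # plain-Python matrix product; raises if the operands are not conformable
    if not conformable_for_multiplication((len(a), len(a[0])), (len(b), len(b[0]))):
        raise ValueError("matrices are not conformable for multiplication")
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

a = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
b = [[1, 0], [0, 1], [1, 1]]      # 3 x 2
assert matmul(a, b) == [[4, 5], [10, 11]]   # 3 == 3, so the product is defined
```

Attempting `matmul(a, a)` raises the error, since a 2 × 3 matrix is not conformable with itself for multiplication (3 ≠ 2); this is also why only square matrices are conformable for squaring.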
|
Wikipedia:Conformal dimension#0
|
A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time. (Note: People and time sometimes are not modeled as dimensions.) In a data warehouse, dimensions provide structured labeling information to otherwise unordered numeric measures. The dimension is a data set composed of individual, non-overlapping data elements. The primary functions of dimensions are threefold: to provide filtering, grouping and labelling. These functions are often described as "slice and dice". A common data warehouse example involves sales as the measure, with customer and product as dimensions. In each sale a customer buys a product. The data can be sliced by removing all customers except for a group under study, and then diced by grouping by product. A dimensional data element is similar to a categorical variable in statistics. Typically dimensions in a data warehouse are organized internally into one or more hierarchies. "Date" is a common dimension, with several possible hierarchies: "Days (are grouped into) Months (which are grouped into) Years", "Days (are grouped into) Weeks (which are grouped into) Years", "Days (are grouped into) Months (which are grouped into) Quarters (which are grouped into) Years" etc. == Types == === Slowly changing dimensions === A slowly changing dimension is a set of data attributes that change slowly over time rather than on a regular schedule, e.g. a customer's address or name. Attributes that change in this way are grouped together into a slowly changing dimension. These dimensions can be classified into the following types: Type 0 (Retain original): Attributes never change. No history. Type 1 (Overwrite): Old values are overwritten with new values for the attribute. No history. Type 2 (Add new row): A new row is created with either a start date / end date or a version for a new value. This creates history.
Type 3 (Add new attribute): A new column is created for a new value. History is limited to the number of columns designated for storing historical data. Type 4 (Add history table): One table keeps the current value, while the history is saved in a second table. Type 5 (Combined Approach 1 + 4): Combination of type 1 and type 4. History is created through a second history table. Type 6 (Combined Approach 1 + 2 + 3): Combination of type 1, type 2 and type 3. History is created through separate row and attributes. Type 7 (Hybrid Approach): Both surrogate and natural key are used. === Conformed dimension === A conformed dimension is a set of data attributes that have been physically referenced in multiple database tables using the same key value to refer to the same structure, attributes, domain values, definitions and concepts. A conformed dimension cuts across many facts. Dimensions are conformed when they are either exactly the same (including keys) or one is a proper subset of the other. Most important, the row headers produced in two different answer sets from the same conformed dimension(s) must be able to match perfectly. Conformed dimensions are either identical or strict mathematical subsets of the most granular, detailed dimension. Dimension tables are not conformed if the attributes are labeled differently or contain different values. Conformed dimensions come in several different flavors. At the most basic level, conformed dimensions mean exactly the same thing with every possible fact table to which they are joined. The date dimension table connected to the sales facts is identical to the date dimension connected to the inventory facts. === Junk dimension === A junk dimension is a convenient grouping of typically low-cardinality flags and indicators. By creating an abstract dimension, these flags and indicators are removed from the fact table while placing them into a useful dimensional framework.
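The most common of the slowly-changing-dimension types listed above, Type 1 (overwrite) and Type 2 (add new row), can be sketched as follows. The table and column names are hypothetical, chosen only for illustration:

```python
from datetime import date

# Hypothetical customer dimension rows: surrogate key "sk", natural key
# "customer_id", an attribute, and a validity interval (end=None = current).
dim_customer = [
    {"sk": 1, "customer_id": "C42", "address": "1 Old St",
     "start": date(2020, 1, 1), "end": None},
]

def scd_type1(rows, customer_id, new_address):
    """Type 1: overwrite the old value in place; no history is kept."""
    for row in rows:
        if row["customer_id"] == customer_id and row["end"] is None:
            row["address"] = new_address

def scd_type2(rows, customer_id, new_address, change_date):
    """Type 2: close the current row and add a new versioned row."""
    current = next(r for r in rows
                   if r["customer_id"] == customer_id and r["end"] is None)
    current["end"] = change_date
    rows.append({"sk": max(r["sk"] for r in rows) + 1,
                 "customer_id": customer_id, "address": new_address,
                 "start": change_date, "end": None})

scd_type2(dim_customer, "C42", "2 New Ave", date(2023, 6, 1))
assert len(dim_customer) == 2              # history preserved as a second row
assert dim_customer[0]["end"] is not None  # the old row is now closed
```

A Type 2 change deliberately produces a new surrogate key, so fact rows recorded before the change keep pointing at the historical attribute values.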
A junk dimension is a dimension table consisting of attributes that do not belong in the fact table or in any of the existing dimension tables. The nature of these attributes is usually text or various flags, e.g. non-generic comments or just simple yes/no or true/false indicators. These kinds of attributes are typically left over when all the obvious dimensions in the business process have been identified and thus the designer is faced with the challenge of where to put these attributes that do not belong in the other dimensions. One solution is to create a new dimension for each of the remaining attributes, but due to their nature, it could be necessary to create a vast number of new dimensions resulting in a fact table with a very large number of foreign keys. The designer could also decide to leave the remaining attributes in the fact table but this could make the row length of the table unnecessarily large if, for example, the attribute is a long text string. The solution to this challenge is to identify all the attributes and then put them into one or several junk dimensions. One junk dimension can hold several true/false or yes/no indicators that have no correlation with each other, so it would be convenient to convert the indicators into a more descriptive attribute. An example would be an indicator about whether a package had arrived: instead of indicating this as “yes” or “no”, it would be converted into "arrived" or "pending" in the junk dimension. The designer can choose to build the dimension table so it ends up holding all the indicators occurring with every other indicator so that all combinations are covered. This sets up a fixed size for the table itself, which would be 2^x rows, where x is the number of indicators. This solution is appropriate in situations where the designer would expect to encounter a lot of different combinations and where the possible combinations are limited to an acceptable level.
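Pre-populating a junk dimension with every combination of indicators, as described above, is a Cartesian product: 2^x rows for x two-valued indicators. A sketch with hypothetical indicator names:

```python
from itertools import product

# Hypothetical indicator columns for a junk dimension.
indicators = ["arrival_status", "rush_order", "gift_wrapped"]
domains = {
    "arrival_status": ["arrived", "pending"],  # descriptive, not yes/no
    "rush_order": ["yes", "no"],
    "gift_wrapped": ["yes", "no"],
}

# Pre-populate every combination: 2^x rows for x two-valued indicators,
# each with its own surrogate key.
junk_dim = [
    dict(zip(indicators, combo), junk_key=i)
    for i, combo in enumerate(product(*(domains[c] for c in indicators)), 1)
]

assert len(junk_dim) == 2 ** len(indicators)  # 8 rows
```

Each fact row then carries a single `junk_key` foreign key instead of x separate flag columns.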
In a situation where the number of indicators is large, thus creating a very big table, or where the designer only expects to encounter a few of the possible combinations, it would be more appropriate to build each row in the junk dimension as new combinations are encountered. To limit the size of the tables, multiple junk dimensions might be appropriate in other situations depending on the correlation between various indicators. Junk dimensions are also appropriate for placing attributes like non-generic comments from the fact table. Such attributes might consist of data from an optional comment field when a customer places an order and as a result will probably be blank in many cases. Therefore, the junk dimension should contain a single row representing the blanks as a surrogate key that will be used in the fact table for every row returned with a blank comment field. === Degenerate dimension === A degenerate dimension is a key, such as a transaction number, invoice number, ticket number, or bill-of-lading number, that has no attributes and hence does not join to an actual dimension table. Degenerate dimensions are very common when the grain of a fact table represents a single transaction item or line item because the degenerate dimension represents the unique identifier of the parent. Degenerate dimensions often play an integral role in the fact table's primary key. === Role-playing dimension === Dimensions are often recycled for multiple applications within the same database. For instance, a "Date" dimension can be used for "Date of Sale", as well as "Date of Delivery", or "Date of Hire". This is often referred to as a "role-playing dimension". This can be implemented using a view over the same dimension table. === Outrigger dimension === Usually dimension tables do not reference other dimensions via foreign keys. When a dimension table does reference another dimension, the referenced dimension is called an outrigger dimension.
Outrigger dimensions are generally considered a data warehouse anti-pattern: a better practice is to use fact tables that relate the two dimensions. === Shrunken dimension === A conformed dimension is said to be a shrunken dimension when it includes a subset of the rows and/or columns of the original dimension. === Calendar date dimension === A special type of dimension can be used to represent dates with a granularity of a day. Dates would be referenced in a fact table as foreign keys to a date dimension. The date dimension primary key could be a surrogate key or a number using the format YYYYMMDD. The date dimension can include other attributes like the week of the year, or flags representing work days, holidays, etc. It could also include special rows representing unknown or yet-to-be-defined dates. The date dimension should be initialized with all the required dates, say the next 10 years of dates, or more if required, or past dates if events in the past are handled. Time instead is usually best represented as a timestamp in the fact table. == Use of ISO representation terms == When referencing data from a metadata registry such as ISO/IEC 11179, representation terms such as "Indicator" (a boolean true/false value), "Code" (a set of non-overlapping enumerated values) are typically used as dimensions. For example, using the National Information Exchange Model (NIEM) the data element name would be "PersonGenderCode" and the enumerated values might be "male", "female" and "unknown". == Dimension table == In data warehousing, a dimension table is one of the set of companion tables to a fact table. The fact table contains business facts (or measures), and foreign keys which refer to candidate keys (normally primary keys) in the dimension tables. Contrary to fact tables, dimension tables contain descriptive attributes (or fields) that are typically textual fields (or discrete numbers that behave like text).
These attributes are designed to serve two critical purposes: query constraining and/or filtering, and query result set labeling. Dimension attributes should be: Verbose (labels consisting of full words) Descriptive Complete (having no missing values) Discretely valued (having only one value per dimension table row) Quality assured (having no misspellings or impossible values) Dimension table rows are uniquely identified by a single key field. It is recommended that the key field be a simple integer because a key value is meaningless, used only for joining fields between the fact and dimension tables. Dimension tables often use primary keys that are also surrogate keys. Surrogate keys are often auto-generated (e.g. a Sybase or SQL Server "identity column", a PostgreSQL or Informix serial, an Oracle SEQUENCE or a column defined with AUTO_INCREMENT in MySQL). The use of surrogate dimension keys brings several advantages, including: Performance. Join processing is made much more efficient by using a single field (the surrogate key) Buffering from operational key management practices. This prevents situations where removed data rows might reappear when their natural keys get reused or reassigned after a long period of dormancy Mapping to integrate disparate sources Handling unknown or not-applicable connections Tracking changes in dimension attribute values Although surrogate key use places a burden on the ETL system, pipeline processing can be improved, and ETL tools have built-in improved surrogate key processing. The goal of a dimension table is to create standardized, conformed dimensions that can be shared across the enterprise's data warehouse environment, and enable joining to multiple fact tables representing various business processes. Conformed dimensions are important to the enterprise nature of DW/BI systems because they promote: Consistency. Every fact table is filtered consistently, so that query answers are labeled consistently. Integration. 
Queries can drill into different process fact tables separately, then join the results on common dimension attributes. Reduced development time to market. The common dimensions are available without recreating them. Over time, the attributes of a given row in a dimension table may change. For example, the shipping address for a company may change. Kimball refers to this phenomenon as slowly changing dimension. Strategies for dealing with this kind of change are divided into three categories: Type one: Simply overwrite the old value(s). Type two: Add a new row containing the new value(s), and distinguish between the rows using Tuple-versioning techniques. Type three: Add a new attribute to the existing row. == Common patterns == === Date and time === Since many fact tables in a data warehouse are time series of observations, one or more date dimensions are often needed. One of the reasons to have date dimensions is to place calendar knowledge in the data warehouse instead of hard-coding it in an application. While a simple SQL date-timestamp is useful for providing accurate information about the time a fact was recorded, it cannot give information about holidays, fiscal periods, etc. An SQL date-timestamp can still be useful to store in the fact table, as it allows for precise calculations. Having both the date and time of day in the same dimension may easily result in a huge dimension with millions of rows. If a high amount of detail is needed it is usually a good idea to split date and time into two or more separate dimensions. A time dimension with a grain of seconds in a day will only have 86400 rows. A more or less detailed grain for date/time dimensions can be chosen depending on needs. As examples, date dimensions can be accurate to year, quarter, month or day and time dimensions can be accurate to hours, minutes or seconds.
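The calendar date dimension pattern described earlier (a YYYYMMDD-style key plus calendar attributes) can be sketched as a simple generator. The column names are illustrative assumptions, and the work-day flag deliberately ignores holidays:

```python
from datetime import date, timedelta

def build_date_dimension(start, days):
    """Generate date-dimension rows with a YYYYMMDD integer key."""
    rows = []
    for i in range(days):
        d = start + timedelta(days=i)
        rows.append({
            "date_key": d.year * 10000 + d.month * 100 + d.day,  # e.g. 20240131
            "date": d,
            "year": d.year,
            "quarter": (d.month - 1) // 3 + 1,
            "week_of_year": d.isocalendar()[1],
            "is_workday": d.weekday() < 5,  # naive: Mon-Fri, ignores holidays
        })
    return rows

dim_date = build_date_dimension(date(2024, 1, 1), 366)  # 2024 is a leap year
assert dim_date[0]["date_key"] == 20240101
assert dim_date[-1]["date_key"] == 20241231
```

In practice the table would be loaded once for the whole required range (past and future), and special rows for unknown dates would be added by hand.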
As a rule of thumb, time of day dimension should only be created if hierarchical groupings are needed or if there are meaningful textual descriptions for periods of time within the day (ex. “evening rush” or “first shift”). If the rows in a fact table are coming from several time zones, it might be useful to store date and time in both local time and a standard time. This can be done by having two dimensions for each date/time dimension needed – one for local time, and one for standard time. Storing date/time in both local and standard time, will allow for analysis on when facts are created in a local setting and in a global setting as well. The standard time chosen can be a global standard time (ex. UTC), it can be the local time of the business’ headquarters (ex. CET), or any other time zone that would make sense to use. == See also == Categorical variable Data warehouse Degenerate dimension Slowly changing dimension Fact table ISO/IEC 11179 Measure (data warehouse) Metadata == References ==
|
Wikipedia:Conformal linear transformation#0
|
A conformal linear transformation, also called a homogeneous similarity transformation or homogeneous similitude, is a similarity transformation of a Euclidean or pseudo-Euclidean vector space which fixes the origin. It can be written as the composition of an orthogonal transformation (an origin-preserving rigid transformation) with a uniform scaling (dilation). All similarity transformations (which globally preserve the shape but not necessarily the size of geometric figures) are also conformal (locally preserve shape). Similarity transformations which fix the origin also preserve scalar–vector multiplication and vector addition, making them linear transformations. Every origin-fixing reflection or dilation is a conformal linear transformation, as is any composition of these basic transformations, including rotations and improper rotations and most generally similarity transformations. However, shear transformations and non-uniform scaling are not. Conformal linear transformations come in two types: proper transformations preserve the orientation of the space whereas improper transformations reverse it. As linear transformations, conformal linear transformations are representable by matrices once the vector space has been given a basis, composing with each other and transforming vectors by matrix multiplication. The Lie group of these transformations has been called the conformal orthogonal group, the conformal linear transformation group or the homogeneous similitude group. Alternatively any conformal linear transformation can be represented as a versor (geometric product of vectors); every versor and its negative represent the same transformation, so the versor group (also called the Lipschitz group) is a double cover of the conformal orthogonal group. Conformal linear transformations are a special type of Möbius transformations (conformal transformations mapping circles to circles); the conformal orthogonal group is a subgroup of the conformal group.
== General properties == Across all dimensions, a conformal linear transformation has the following properties: Distance ratios are preserved by the transformation. Given an orthonormal basis, a matrix representing the transformation must have each column the same magnitude and each pair of columns must be orthogonal. The transformation is conformal (angle preserving); in particular orthogonal vectors remain orthogonal after applying the transformation. The transformation maps concentric k-spheres to concentric k-spheres for every k (circles to circles, spheres to spheres, etc.). In particular, k-spheres centered at the origin are mapped to k-spheres centered at the origin. By the Cartan–Dieudonné theorem, every orthogonal transformation in an n-dimensional space can be expressed as some composition of up to n reflections. Therefore, every conformal linear transformation can be expressed as the composition of up to n reflections and a dilation. Because every reflection across a hyperplane reverses the orientation of a pseudo-Euclidean space, the composition of any even number of reflections and a dilation by a positive real number is a proper conformal linear transformation, and the composition of any odd number of reflections and a dilation is an improper conformal linear transformation. == Two dimensions == In the Euclidean vector plane, an improper conformal linear transformation is a reflection across a line through the origin composed with a positive dilation. Given an orthonormal basis, it can be represented by a matrix of the form [ a b b − a ] . {\displaystyle {\begin{bmatrix}a&b\\b&-a\end{bmatrix}}.} A proper conformal linear transformation is a rotation about the origin composed with a positive dilation. It can be represented by a matrix of the form [ a − b b a ] . {\displaystyle {\begin{bmatrix}a&-b\\b&a\end{bmatrix}}.} Alternately a proper conformal linear transformation can be represented by a complex number of the form a + b i . 
{\displaystyle a+bi.} == Practical applications == When composing multiple linear transformations, it is possible to create a shear/skew by composing a parent transform with a non-uniform scale, and a child transform with a rotation. Therefore, in situations where shear/skew is not allowed, transformation matrices must also have uniform scale in order to prevent a shear/skew from appearing as the result of composition. This implies conformal linear transformations are required to prevent shear/skew when composing multiple transformations. In physics simulations, a sphere (or circle, hypersphere, etc.) is often defined by a point and a radius. Checking if a point overlaps the sphere can therefore be performed by using a distance check to the center. With a rotation or flip/reflection, the sphere is symmetric and invariant, therefore the same check works. With a uniform scale, only the radius needs to be changed. However, with a non-uniform scale or shear/skew, the sphere becomes "distorted" into an ellipsoid, therefore the distance check algorithm does not work correctly anymore. == References ==
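The two-dimensional matrix forms given above, and the shear/skew caveat from the practical-applications section, can both be checked numerically. The following is an illustrative sketch, not part of the source:

```python
import numpy as np

def proper_conformal(a, b):
    # Rotation composed with dilation; acts like multiplication by a + bi.
    return np.array([[a, -b], [b, a]], dtype=float)

def improper_conformal(a, b):
    # Reflection across a line through the origin composed with dilation.
    return np.array([[a, b], [b, -a]], dtype=float)

def is_conformal(M):
    # Columns must have equal magnitude and be mutually orthogonal.
    return (np.isclose(np.linalg.norm(M[:, 0]), np.linalg.norm(M[:, 1]))
            and np.isclose(M[:, 0] @ M[:, 1], 0.0))

assert is_conformal(proper_conformal(3.0, 4.0))
assert is_conformal(improper_conformal(3.0, 4.0))

# The proper form agrees with complex multiplication by a + bi.
z = complex(1.0, 2.0) * complex(3.0, 4.0)
v = proper_conformal(3.0, 4.0) @ np.array([1.0, 2.0])
assert np.allclose(v, [z.real, z.imag])

# Composing a non-uniform parent scale with a child rotation introduces
# shear/skew, while a uniform scale keeps the composition conformal.
theta = np.pi / 4
rotation = proper_conformal(np.cos(theta), np.sin(theta))
assert not is_conformal(np.diag([2.0, 1.0]) @ rotation)
assert is_conformal(np.diag([2.0, 2.0]) @ rotation)
```

The equal-magnitude, orthogonal-columns test is exactly the matrix criterion stated in the general properties section.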
|
Wikipedia:Conformal welding#0
|
In mathematics, conformal welding (sewing or gluing) is a process in geometric function theory for producing a Riemann surface by joining together two Riemann surfaces, each with a disk removed, along their boundary circles. This problem can be reduced to that of finding univalent holomorphic maps f, g of the unit disk and its complement into the extended complex plane, both admitting continuous extensions to the closure of their domains, such that the images are complementary Jordan domains and such that on the unit circle they differ by a given quasisymmetric homeomorphism. Several proofs are known using a variety of techniques, including the Beltrami equation, the Hilbert transform on the circle and elementary approximation techniques. Sharon & Mumford (2006) describe the first two methods of conformal welding as well as providing numerical computations and applications to the analysis of shapes in the plane. == Welding using the Beltrami equation == This method was first proposed by Pfluger (1960). If f is a diffeomorphism of the circle, the Alexander extension gives a way of extending f to a diffeomorphism of the unit disk D: F ( r , θ ) = r exp ⁡ [ i ψ ( r ) g ( θ ) + i ( 1 − ψ ( r ) ) θ ] , {\displaystyle \displaystyle {F(r,\theta )=r\exp[i\psi (r)g(\theta )+i(1-\psi (r))\theta ],}} where ψ is a smooth function with values in [0,1], equal to 0 near 0 and 1 near 1, and f ( e i θ ) = e i g ( θ ) , {\displaystyle \displaystyle {f(e^{i\theta })=e^{ig(\theta )},}} with g(θ + 2π) = g(θ) + 2π. The extension F can be continued to any larger disk |z| < R with R > 1. Accordingly, the Beltrami coefficient of F on the unit disc satisfies μ ( z ) = F z ¯ / F z , ‖ μ ‖ ∞ < 1. {\displaystyle \displaystyle {\|\mu \|_{\infty }<1,\,\,\,\mu (z)=F_{\overline {z}}/F_{z}.}} Now extend μ to a Beltrami coefficient on the whole of C by setting it equal to 0 for |z| ≥ 1. Let G be the corresponding solution of the Beltrami equation: G z ¯ = μ G z .
{\displaystyle \displaystyle {G_{\overline {z}}=\mu G_{z}.}} Let F1(z) = G ∘ F−1(z) for |z| ≤ 1 and F2(z) = G(z) for |z| ≥ 1. Thus F1 and F2 are univalent holomorphic maps of |z| < 1 and |z| > 1 onto the inside and outside of a Jordan curve. They extend continuously to homeomorphisms fi of the unit circle onto the Jordan curve on the boundary. By construction they satisfy the conformal welding condition: f = f 1 − 1 ∘ f 2 . {\displaystyle \displaystyle {f=f_{1}^{-1}\circ f_{2}.}} == Welding using the Hilbert transform on the circle == The use of the Hilbert transform to establish conformal welding was first suggested by the Georgian mathematicians D.G. Mandzhavidze and B.V. Khvedelidze in 1958. A detailed account was given at the same time by F.D. Gakhov and presented in his classic monograph (Gakhov (1990)). Let en(θ) = e^{inθ} be the standard orthonormal basis of L2(T). Let H2(T) be Hardy space, the closed subspace spanned by the en with n ≥ 0. Let P be the orthogonal projection onto Hardy space and set T = 2P - I. The operator H = iT is the Hilbert transform on the circle and can be written as a singular integral operator. Given a diffeomorphism f of the unit circle, the task is to determine two univalent holomorphic functions f − ( z ) = a 0 + a 1 z + a 2 z 2 + ⋯ , f + ( z ) = z + b 1 z − 1 + b 2 z − 2 + ⋯ , {\displaystyle \displaystyle {f_{-}(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots ,\,\,\,\,\,f_{+}(z)=z+b_{1}z^{-1}+b_{2}z^{-2}+\cdots ,}} defined in |z| < 1 and |z| > 1 and both extending smoothly to the unit circle, mapping onto a Jordan domain and its complement, such that f − ( e i θ ) = f + ( f ( e i θ ) ) . {\displaystyle \displaystyle {f_{-}(e^{i\theta })=f_{+}(f(e^{i\theta })).}} Let F be the restriction of f+ to the unit circle. Then T F = − F + 2 e i θ {\displaystyle \displaystyle {TF=-F+2e^{i\theta }}} and T F ∘ f = F ∘ f . {\displaystyle \displaystyle {TF\circ f=F\circ f.}} Hence T ( F ∘ f ) ∘ f − 1 − T ( F ) = 2 F − 2 e i θ .
{\displaystyle \displaystyle {T(F\circ f)\circ f^{-1}-T(F)=2F-2e^{i\theta }.}} If V(f) denotes the bounded invertible operator on L2 induced by the diffeomorphism f, then the operator K f = V ( f ) P V ( f ) − 1 − P {\displaystyle \displaystyle {K_{f}=V(f)PV(f)^{-1}-P}} is compact, indeed it is given by an operator with smooth kernel because P and T are given by singular integral operators. The equation above then reduces to ( I − K f ) F = e i θ . {\displaystyle \displaystyle {(I-K_{f})F=e^{i\theta }.}} The operator I − Kf is a Fredholm operator of index zero. It has zero kernel and is therefore invertible. In fact an element in the kernel would consist of a pair of holomorphic functions on D and Dc which have smooth boundary values on the circle related by f. Since the holomorphic function on Dc vanishes at ∞, the positive powers of this pair also provide solutions, which are linearly independent, contradicting the fact that I − Kf is a Fredholm operator. The above equation therefore has a unique solution F which is smooth and from which f± can be reconstructed by reversing the steps above. Indeed, by looking at the equation satisfied by the logarithm of the derivative of F, it follows that F has nowhere vanishing derivative on the unit circle. Moreover F is one-to-one on the circle since if it assumes the value a at different points z1 and z2 then the logarithm of R(z) = (F(z) − a)/(z - z1)(z − z2) would satisfy an integral equation known to have no non-zero solutions. Given these properties on the unit circle, the required properties of f± then follow from the argument principle. == Notes == == References == Pfluger, A. (1960), "Ueber die Konstruktion Riemannscher Flächen durch Verheftung", J. Indian Math. Soc., 24: 401–412 Lehto, O.; Virtanen, K.I. (1973), Quasiconformal mappings in the plane, Springer-Verlag, p. 92 Lehto, O. (1987), Univalent functions and Teichmüller spaces, Springer-Verlag, pp. 100–101, ISBN 0-387-96310-3 Sharon, E.; Mumford, D. 
(2006), "2-D analysis using conformal mapping" (PDF), International Journal of Computer Vision, 70: 55–75, doi:10.1007/s11263-006-6121-z, archived from the original (PDF) on 2012-08-03, retrieved 2012-07-01 Gakhov, F. D. (1990), Boundary value problems. Reprint of the 1966 translation, Dover Publications, ISBN 0-486-66275-6 Titchmarsh, E. C. (1939), The Theory of Functions (2nd ed.), Oxford University Press, ISBN 0198533497 {{citation}}: ISBN / Date incompatibility (help)
|
Wikipedia:Congruence relation#0
|
In abstract algebra, a congruence relation (or simply congruence) is an equivalence relation on an algebraic structure (such as a group, ring, or vector space) that is compatible with the structure in the sense that algebraic operations done with equivalent elements will yield equivalent elements. Every congruence relation has a corresponding quotient structure, whose elements are the equivalence classes (or congruence classes) for the relation. == Definition == The definition of a congruence depends on the type of algebraic structure under consideration. Particular definitions of congruence can be made for groups, rings, vector spaces, modules, semigroups, lattices, and so forth. The common theme is that a congruence is an equivalence relation on an algebraic object that is compatible with the algebraic structure, in the sense that the operations are well-defined on the equivalence classes. === General === The general notion of a congruence relation can be formally defined in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a relation R {\displaystyle R} on a given algebraic structure is called compatible if for each n {\displaystyle n} and each n {\displaystyle n} -ary operation μ {\displaystyle \mu } defined on the structure: whenever a 1 R a 1 ′ {\displaystyle a_{1}\mathrel {R} a'_{1}} and ... and a n R a n ′ {\displaystyle a_{n}\mathrel {R} a'_{n}} , then μ ( a 1 , … , a n ) R μ ( a 1 ′ , … , a n ′ ) {\displaystyle \mu (a_{1},\ldots ,a_{n})\mathrel {R} \mu (a'_{1},\ldots ,a'_{n})} . A congruence relation on the structure is then defined as an equivalence relation that is also compatible. == Examples == === Basic example === The prototypical example of a congruence relation is congruence modulo n {\displaystyle n} on the set of integers. 
For a given positive integer n {\displaystyle n} , two integers a {\displaystyle a} and b {\displaystyle b} are called congruent modulo n {\displaystyle n} , written a ≡ b ( mod n ) {\displaystyle a\equiv b{\pmod {n}}} if a − b {\displaystyle a-b} is divisible by n {\displaystyle n} (or equivalently if a {\displaystyle a} and b {\displaystyle b} have the same remainder when divided by n {\displaystyle n} ). For example, 37 {\displaystyle 37} and 57 {\displaystyle 57} are congruent modulo 10 {\displaystyle 10} , 37 ≡ 57 ( mod 10 ) {\displaystyle 37\equiv 57{\pmod {10}}} since 37 − 57 = − 20 {\displaystyle 37-57=-20} is a multiple of 10, or equivalently since both 37 {\displaystyle 37} and 57 {\displaystyle 57} have a remainder of 7 {\displaystyle 7} when divided by 10 {\displaystyle 10} . Congruence modulo n {\displaystyle n} (for a fixed n {\displaystyle n} ) is compatible with both addition and multiplication on the integers. That is, if a 1 ≡ a 2 ( mod n ) {\displaystyle a_{1}\equiv a_{2}{\pmod {n}}} and b 1 ≡ b 2 ( mod n ) {\displaystyle b_{1}\equiv b_{2}{\pmod {n}}} then a 1 + b 1 ≡ a 2 + b 2 ( mod n ) {\displaystyle a_{1}+b_{1}\equiv a_{2}+b_{2}{\pmod {n}}} and a 1 b 1 ≡ a 2 b 2 ( mod n ) {\displaystyle a_{1}b_{1}\equiv a_{2}b_{2}{\pmod {n}}} The corresponding addition and multiplication of equivalence classes is known as modular arithmetic. From the point of view of abstract algebra, congruence modulo n {\displaystyle n} is a congruence relation on the ring of integers, and arithmetic modulo n {\displaystyle n} occurs on the corresponding quotient ring. === Example: Groups === For example, a group is an algebraic object consisting of a set together with a single binary operation, satisfying certain axioms. 
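The compatibility of congruence modulo n with addition and multiplication, stated above, can be verified directly. A brief illustrative sketch using the example n = 10:

```python
# Check that congruence modulo n is compatible with addition and
# multiplication, using the article's example n = 10, a = 37, b = 57.
n = 10
assert (37 - 57) % n == 0         # 37 - 57 = -20 is a multiple of 10
assert 37 % n == 57 % n == 7      # same remainder when divided by 10

# Compatibility: replacing a1, b1 by congruent values a2, b2 leaves the
# class of the sum and of the product unchanged.
for a1 in range(30):
    for b1 in range(30):
        a2, b2 = a1 + 5 * n, b1 + 7 * n   # congruent replacements
        assert (a1 + b1) % n == (a2 + b2) % n
        assert (a1 * b1) % n == (a2 * b2) % n
```

This is exactly what makes addition and multiplication well-defined on the residue classes, i.e. what makes modular arithmetic work.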
If G {\displaystyle G} is a group with operation ∗ {\displaystyle \ast } , a congruence relation on G {\displaystyle G} is an equivalence relation ≡ {\displaystyle \equiv } on the elements of G {\displaystyle G} satisfying g 1 ≡ g 2 {\displaystyle g_{1}\equiv g_{2}\ \ \,} and h 1 ≡ h 2 ⟹ g 1 ∗ h 1 ≡ g 2 ∗ h 2 {\displaystyle \ \ \,h_{1}\equiv h_{2}\implies g_{1}\ast h_{1}\equiv g_{2}\ast h_{2}} for all g 1 , g 2 , h 1 , h 2 ∈ G {\displaystyle g_{1},g_{2},h_{1},h_{2}\in G} . For a congruence on a group, the equivalence class containing the identity element is always a normal subgroup, and the other equivalence classes are the other cosets of this subgroup. Together, these equivalence classes are the elements of a quotient group. === Example: Rings === When an algebraic structure includes more than one operation, congruence relations are required to be compatible with each operation. For example, a ring possesses both addition and multiplication, and a congruence relation on a ring must satisfy r 1 + s 1 ≡ r 2 + s 2 {\displaystyle r_{1}+s_{1}\equiv r_{2}+s_{2}} and r 1 s 1 ≡ r 2 s 2 {\displaystyle r_{1}s_{1}\equiv r_{2}s_{2}} whenever r 1 ≡ r 2 {\displaystyle r_{1}\equiv r_{2}} and s 1 ≡ s 2 {\displaystyle s_{1}\equiv s_{2}} . For a congruence on a ring, the equivalence class containing 0 is always a two-sided ideal, and the two operations on the set of equivalence classes define the corresponding quotient ring. == Relation with homomorphisms == If f : A → B {\displaystyle f:A\,\rightarrow B} is a homomorphism between two algebraic structures (such as homomorphism of groups, or a linear map between vector spaces), then the relation R {\displaystyle R} defined by a 1 R a 2 {\displaystyle a_{1}\,R\,a_{2}} if and only if f ( a 1 ) = f ( a 2 ) {\displaystyle f(a_{1})=f(a_{2})} is a congruence relation on A {\displaystyle A} . 
By the first isomorphism theorem, the image of A under f {\displaystyle f} is a substructure of B isomorphic to the quotient of A by this congruence. On the other hand, the congruence relation R {\displaystyle R} induces a unique homomorphism f : A → A / R {\displaystyle f:A\rightarrow A/R} given by f ( x ) = { y ∣ x R y } {\displaystyle f(x)=\{y\mid x\,R\,y\}} . Thus, there is a natural correspondence between the congruences and the homomorphisms of any given algebraic structure. == Congruences of groups, and normal subgroups and ideals == In the particular case of groups, congruence relations can be described in elementary terms as follows: If G is a group (with identity element e and operation *) and ~ is a binary relation on G, then ~ is a congruence whenever: Given any element a of G, a ~ a (reflexivity); Given any elements a and b of G, if a ~ b, then b ~ a (symmetry); Given any elements a, b, and c of G, if a ~ b and b ~ c, then a ~ c (transitivity); Given any elements a, a′, b, and b′ of G, if a ~ a′ and b ~ b′, then a * b ~ a′ * b′; Given any elements a and a′ of G, if a ~ a′, then a−1 ~ a′−1 (this is implied by the other four, so is strictly redundant). Conditions 1, 2, and 3 say that ~ is an equivalence relation. A congruence ~ is determined entirely by the set {a ∈ G | a ~ e} of those elements of G that are congruent to the identity element, and this set is a normal subgroup. Specifically, a ~ b if and only if b−1 * a ~ e. So instead of talking about congruences on groups, people usually speak in terms of normal subgroups of them; in fact, every congruence corresponds uniquely to some normal subgroup of G. === Ideals of rings and the general case === A similar trick allows one to speak of kernels in ring theory as ideals instead of congruence relations, and in module theory as submodules instead of congruence relations. 
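The correspondence between congruences on a group and its normal subgroups, described above, can be made concrete on a small example (the cyclic group Z6 under addition mod 6; this example is illustrative, not from the source):

```python
# Congruence on the group Z_6 (addition mod 6) determined by the
# normal subgroup N = {0, 3}: a ~ b iff (a - b) mod 6 lies in N.
G = range(6)
N = {0, 3}

def cong(a, b):
    return (a - b) % 6 in N

# a ~ b iff b^{-1} * a ~ e; additively, iff (a - b) ~ 0.
assert all(cong(a, b) == cong((a - b) % 6, 0) for a in G for b in G)

# Compatibility with the group operation (the fourth condition).
for a1 in G:
    for a2 in G:
        for b1 in G:
            for b2 in G:
                if cong(a1, a2) and cong(b1, b2):
                    assert cong((a1 + b1) % 6, (a2 + b2) % 6)

# The equivalence classes are exactly the cosets of N,
# and they form the quotient group Z_6 / N (isomorphic to Z_3).
classes = {frozenset(b for b in G if cong(a, b)) for a in G}
assert classes == {frozenset({0, 3}), frozenset({1, 4}), frozenset({2, 5})}
```

The class containing the identity 0 is the subgroup N itself, matching the statement that a congruence on a group is determined entirely by the class of the identity element.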
A more general situation where this trick is possible is with Omega-groups (in the general sense allowing operators with multiple arity). But this cannot be done with, for example, monoids, so the study of congruence relations plays a more central role in monoid theory. == Universal algebra == The general notion of a congruence is particularly useful in universal algebra. An equivalent formulation in this context is the following: A congruence relation on an algebra A is a subset of the direct product A × A that is both an equivalence relation on A and a subalgebra of A × A. The kernel of a homomorphism is always a congruence. Indeed, every congruence arises as a kernel. For a given congruence ~ on A, the set A / ~ of equivalence classes can be given the structure of an algebra in a natural fashion, the quotient algebra. The function that maps every element of A to its equivalence class is a homomorphism, and the kernel of this homomorphism is ~. The lattice Con(A) of all congruence relations on an algebra A is algebraic. John M. Howie described how semigroup theory illustrates congruence relations in universal algebra: In a group a congruence is determined if we know a single congruence class, in particular if we know the normal subgroup which is the class containing the identity. Similarly, in a ring a congruence is determined if we know the ideal which is the congruence class containing the zero. In semigroups there is no such fortunate occurrence, and we are therefore faced with the necessity of studying congruences as such. More than anything else, it is this necessity that gives semigroup theory its characteristic flavour. Semigroups are in fact the first and simplest type of algebra to which the methods of universal algebra must be applied ... 
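The compatibility conditions above can be checked mechanically in small cases. The following sketch (illustrative only; the modulus and the window of integers are arbitrary choices) brute-force verifies that "congruent mod n" is an equivalence relation on a finite window of integers that is compatible with both ring operations:

```python
# Brute-force check, on a finite window of integers, that "a ≡ b (mod n)"
# behaves as a congruence on the ring Z: an equivalence relation that is
# compatible with both + and *.
n = 6
window = range(-12, 13)

def cong(a, b):
    return (a - b) % n == 0

# Equivalence relation: reflexive, symmetric, transitive.
for a in window:
    assert cong(a, a)
    for b in window:
        if cong(a, b):
            assert cong(b, a)
            for c in window:
                if cong(b, c):
                    assert cong(a, c)

# Compatibility: r1 ≡ r2 and s1 ≡ s2 imply
# r1 + s1 ≡ r2 + s2 and r1 * s1 ≡ r2 * s2.
for r1 in window:
    for r2 in window:
        if not cong(r1, r2):
            continue
        for s1 in window:
            for s2 in window:
                if cong(s1, s2):
                    assert cong(r1 + s1, r2 + s2)
                    assert cong(r1 * s1, r2 * s2)
```

Consistent with the discussion above, the equivalence class of 0 here is the ideal nZ, and the classes form the quotient ring Z/nZ.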
== Category theory == In category theory, a congruence relation R on a category C is given by: for each pair of objects X, Y in C, an equivalence relation R_{X,Y} on Hom(X,Y), such that the equivalence relations respect composition of morphisms. See Quotient category § Definition for details. == See also == Chinese remainder theorem Congruence lattice problem Table of congruences == Explanatory notes == == Notes == == References == Barendregt, Henk (1990). "Functional Programming and Lambda Calculus". In Jan van Leeuwen (ed.). Formal Models and Semantics. Handbook of Theoretical Computer Science. Vol. B. Elsevier. pp. 321–364. ISBN 0-444-88074-7. Bergman, Clifford (2011), Universal Algebra: Fundamentals and Selected Topics, Taylor & Francis Horn; Johnson (1985), Matrix Analysis, Cambridge University Press, ISBN 0-521-38632-2 (Section 4.5 discusses congruency of matrices.) Howie, J. M. (1975), An Introduction to Semigroup Theory, Academic Press Hungerford, Thomas W. (1974), Algebra, Springer-Verlag Rosen, Kenneth H (2012). Discrete Mathematics and Its Applications. McGraw-Hill Education. ISBN 978-0077418939.
|
Wikipedia:Conical combination#0
|
Given a finite number of vectors x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} in a real vector space, a conical combination, conical sum, or weighted sum of these vectors is a vector of the form α 1 x 1 + α 2 x 2 + ⋯ + α n x n {\displaystyle \alpha _{1}x_{1}+\alpha _{2}x_{2}+\cdots +\alpha _{n}x_{n}} where α i {\displaystyle \alpha _{i}} are non-negative real numbers. The name derives from the fact that the set of all conical sums of these vectors defines a cone (possibly in a lower-dimensional subspace). == Conical hull == The set of all conical combinations for a given set S is called the conical hull of S and denoted cone(S) or coni(S). That is, coni ( S ) = { ∑ i = 1 k α i x i : x i ∈ S , α i ∈ R ≥ 0 , k ∈ N } . {\displaystyle \operatorname {coni} (S)=\left\{\sum _{i=1}^{k}\alpha _{i}x_{i}:x_{i}\in S,\,\alpha _{i}\in \mathbb {R} _{\geq 0},\,k\in \mathbb {N} \right\}.} By taking k = 0, it follows that the zero vector (origin) belongs to all conical hulls (since the summation becomes an empty sum). The conical hull of a set S is a convex set. In fact, it is the intersection of all convex cones containing S plus the origin. If S is a compact set (in particular, when it is a finite non-empty set of points), then the condition "plus the origin" is unnecessary. If we discard the origin, we can divide all coefficients by their sum to see that a conical combination is a convex combination scaled by a positive factor. Therefore, "conical combinations" and "conical hulls" are in fact "convex conical combinations" and "convex conical hulls" respectively. Moreover, the above remark about dividing the coefficients while discarding the origin implies that the conical combinations and hulls may be considered as convex combinations and convex hulls in the projective space. While the convex hull of a compact set is also a compact set, this is not so for the conical hull; first of all, the latter one is unbounded. 
Moreover, it is not even necessarily a closed set: a counterexample is a sphere passing through the origin, with the conical hull being an open half-space plus the origin. However, if S is a non-empty convex compact set which does not contain the origin, then the convex conical hull of S is a closed set. == See also == === Related combinations === Affine combination Convex combination Linear combination == References ==
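The remark about dividing coefficients by their sum can be illustrated numerically; in this sketch the vectors and weights are arbitrary illustrative choices:

```python
# A conical combination with positive total weight equals a convex
# combination scaled by a positive factor: divide each weight by their sum.
vectors = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
coeffs = [2.0, 3.0, 1.0]            # non-negative weights, not summing to 1

conical = tuple(sum(a * v[i] for a, v in zip(coeffs, vectors))
                for i in range(2))

s = sum(coeffs)                      # positive scale factor (here 6.0)
convex_w = [a / s for a in coeffs]   # rescaled weights now sum to 1
convex = tuple(sum(w * v[i] for w, v in zip(convex_w, vectors))
               for i in range(2))

assert abs(sum(convex_w) - 1.0) < 1e-12             # a genuine convex combination
assert all(abs(c - s * x) < 1e-12                   # conical = s * convex
           for c, x in zip(conical, convex))
```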
|
Wikipedia:Conjugate (square roots)#0
|
In mathematics, the conjugate of an expression of the form a + b d {\displaystyle a+b{\sqrt {d}}} is a − b d , {\displaystyle a-b{\sqrt {d}},} provided that d {\displaystyle {\sqrt {d}}} does not appear in a and b. One says also that the two expressions are conjugate. In particular, the two solutions of a quadratic equation are conjugate, as per the ± {\displaystyle \pm } in the quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} . Complex conjugation is the special case where the square root is i = − 1 , {\displaystyle i={\sqrt {-1}},} the imaginary unit. == Properties == As ( a + b d ) ( a − b d ) = a 2 − b 2 d {\displaystyle (a+b{\sqrt {d}})(a-b{\sqrt {d}})=a^{2}-b^{2}d} and ( a + b d ) + ( a − b d ) = 2 a , {\displaystyle (a+b{\sqrt {d}})+(a-b{\sqrt {d}})=2a,} the sum and the product of conjugate expressions do not involve the square root anymore. This property is used for removing a square root from a denominator, by multiplying the numerator and the denominator of a fraction by the conjugate of the denominator (see Rationalisation). An example of this usage is: a + b d x + y d = ( a + b d ) ( x − y d ) ( x + y d ) ( x − y d ) = a x − d b y + ( x b − a y ) d x 2 − y 2 d . {\displaystyle {\frac {a+b{\sqrt {d}}}{x+y{\sqrt {d}}}}={\frac {(a+b{\sqrt {d}})(x-y{\sqrt {d}})}{(x+y{\sqrt {d}})(x-y{\sqrt {d}})}}={\frac {ax-dby+(xb-ay){\sqrt {d}}}{x^{2}-y^{2}d}}.} Hence: 1 a + b d = a − b d a 2 − d b 2 . {\displaystyle {\frac {1}{a+b{\sqrt {d}}}}={\frac {a-b{\sqrt {d}}}{a^{2}-db^{2}}}.} A corollary property is that the subtraction: ( a + b d ) − ( a − b d ) = 2 b d , {\displaystyle (a+b{\sqrt {d}})-(a-b{\sqrt {d}})=2b{\sqrt {d}},} leaves only a term containing the root. == See also == Conjugate element (field theory), the generalization to the roots of a polynomial of any degree
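The identities above lend themselves to exact computation: representing a + b√d as a coefficient pair, the conjugate yields the inverse exactly as in the rationalisation formula. This is an illustrative sketch (D and the sample value are arbitrary choices, not from the text):

```python
from fractions import Fraction as F

# Exact arithmetic in Q(sqrt(D)); a number a + b*sqrt(D) is stored
# as the coefficient pair (a, b).
D = 2

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b√D)(c + d√D) = (ac + bdD) + (ad + bc)√D
    return (a * c + b * d * D, a * d + b * c)

def conjugate(x):
    a, b = x
    return (a, -b)

def inverse(x):
    # Rationalise: 1/(a + b√D) = (a − b√D)/(a² − D·b²), as in the text.
    a, b = x
    norm = a * a - D * b * b          # x times its conjugate
    return (F(a, norm), F(-b, norm))

x = (3, 1)                            # the number 3 + √2
assert mul(x, conjugate(x)) == (7, 0) # (3+√2)(3−√2) = 9 − 2 = 7
assert mul(x, inverse(x)) == (1, 0)   # x · x⁻¹ = 1
```

Note how multiplying a number by its conjugate always lands in (q, 0), i.e. a rational with no √D term, which is exactly why the trick clears square roots from denominators.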
|
Wikipedia:Conjugate transpose#0
|
In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an m × n {\displaystyle m\times n} complex matrix A {\displaystyle \mathbf {A} } is an n × m {\displaystyle n\times m} matrix obtained by transposing A {\displaystyle \mathbf {A} } and applying complex conjugation to each entry (the complex conjugate of a + i b {\displaystyle a+ib} being a − i b {\displaystyle a-ib} , for real numbers a {\displaystyle a} and b {\displaystyle b} ). There are several notations, such as A H {\displaystyle \mathbf {A} ^{\mathrm {H} }} or A ∗ {\displaystyle \mathbf {A} ^{*}} , A ′ {\displaystyle \mathbf {A} '} , or (often in physics) A † {\displaystyle \mathbf {A} ^{\dagger }} . For real matrices, the conjugate transpose is just the transpose, A H = A T {\displaystyle \mathbf {A} ^{\mathrm {H} }=\mathbf {A} ^{\operatorname {T} }} . == Definition == The conjugate transpose of an m × n {\displaystyle m\times n} matrix A {\displaystyle \mathbf {A} } is formally defined by ( A H ) i j = A j i ¯ {\displaystyle \left(\mathbf {A} ^{\mathrm {H} }\right)_{ij}={\overline {\mathbf {A} _{ji}}}} , where the subscript i j {\displaystyle ij} denotes the ( i , j ) {\displaystyle (i,j)} -th entry (matrix element), for 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} and 1 ≤ j ≤ m {\displaystyle 1\leq j\leq m} , and the overbar denotes a scalar complex conjugate. This definition can also be written as A H = ( A ¯ ) T = A T ¯ {\displaystyle \mathbf {A} ^{\mathrm {H} }=\left({\overline {\mathbf {A} }}\right)^{\operatorname {T} }={\overline {\mathbf {A} ^{\operatorname {T} }}}} where A T {\displaystyle \mathbf {A} ^{\operatorname {T} }} denotes the transpose and A ¯ {\displaystyle {\overline {\mathbf {A} }}} denotes the matrix with complex conjugated entries. Other names for the conjugate transpose of a matrix are Hermitian transpose, Hermitian conjugate, adjoint matrix or transjugate. 
The conjugate transpose of a matrix A {\displaystyle \mathbf {A} } can be denoted by any of these symbols: A ∗ {\displaystyle \mathbf {A} ^{*}} , commonly used in linear algebra A H {\displaystyle \mathbf {A} ^{\mathrm {H} }} , commonly used in linear algebra A † {\displaystyle \mathbf {A} ^{\dagger }} (sometimes pronounced as A dagger), commonly used in quantum mechanics A + {\displaystyle \mathbf {A} ^{+}} , although this symbol is more commonly used for the Moore–Penrose pseudoinverse In some contexts, A ∗ {\displaystyle \mathbf {A} ^{*}} denotes the matrix with only complex conjugated entries and no transposition. == Example == Suppose we want to calculate the conjugate transpose of the following matrix A {\displaystyle \mathbf {A} } . A = [ 1 − 2 − i 5 1 + i i 4 − 2 i ] {\displaystyle \mathbf {A} ={\begin{bmatrix}1&-2-i&5\\1+i&i&4-2i\end{bmatrix}}} We first transpose the matrix: A T = [ 1 1 + i − 2 − i i 5 4 − 2 i ] {\displaystyle \mathbf {A} ^{\operatorname {T} }={\begin{bmatrix}1&1+i\\-2-i&i\\5&4-2i\end{bmatrix}}} Then we conjugate every entry of the matrix: A H = [ 1 1 − i − 2 + i − i 5 4 + 2 i ] {\displaystyle \mathbf {A} ^{\mathrm {H} }={\begin{bmatrix}1&1-i\\-2+i&-i\\5&4+2i\end{bmatrix}}} == Basic remarks == A square matrix A {\displaystyle \mathbf {A} } with entries a i j {\displaystyle a_{ij}} is called Hermitian or self-adjoint if A = A H {\displaystyle \mathbf {A} =\mathbf {A} ^{\mathrm {H} }} ; i.e., a i j = a j i ¯ {\displaystyle a_{ij}={\overline {a_{ji}}}} . Skew Hermitian or antihermitian if A = − A H {\displaystyle \mathbf {A} =-\mathbf {A} ^{\mathrm {H} }} ; i.e., a i j = − a j i ¯ {\displaystyle a_{ij}=-{\overline {a_{ji}}}} . Normal if A H A = A A H {\displaystyle \mathbf {A} ^{\mathrm {H} }\mathbf {A} =\mathbf {A} \mathbf {A} ^{\mathrm {H} }} . 
Unitary if A H = A − 1 {\displaystyle \mathbf {A} ^{\mathrm {H} }=\mathbf {A} ^{-1}} , equivalently A A H = I {\displaystyle \mathbf {A} \mathbf {A} ^{\mathrm {H} }={\boldsymbol {I}}} , equivalently A H A = I {\displaystyle \mathbf {A} ^{\mathrm {H} }\mathbf {A} ={\boldsymbol {I}}} . Even if A {\displaystyle \mathbf {A} } is not square, the two matrices A H A {\displaystyle \mathbf {A} ^{\mathrm {H} }\mathbf {A} } and A A H {\displaystyle \mathbf {A} \mathbf {A} ^{\mathrm {H} }} are both Hermitian and in fact positive semi-definite matrices. The conjugate transpose "adjoint" matrix A H {\displaystyle \mathbf {A} ^{\mathrm {H} }} should not be confused with the adjugate, adj ( A ) {\displaystyle \operatorname {adj} (\mathbf {A} )} , which is also sometimes called adjoint. The conjugate transpose can be motivated by noting that complex numbers can be usefully represented by 2 × 2 {\displaystyle 2\times 2} real matrices, obeying matrix addition and multiplication: a + i b ≡ [ a − b b a ] . {\displaystyle a+ib\equiv {\begin{bmatrix}a&-b\\b&a\end{bmatrix}}.} That is, denoting each complex number z {\displaystyle z} by the real 2 × 2 {\displaystyle 2\times 2} matrix of the linear transformation on the Argand diagram (viewed as the real vector space R 2 {\displaystyle \mathbb {R} ^{2}} ), affected by complex z {\displaystyle z} -multiplication on C {\displaystyle \mathbb {C} } . Thus, an m × n {\displaystyle m\times n} matrix of complex numbers could be well represented by a 2 m × 2 n {\displaystyle 2m\times 2n} matrix of real numbers. The conjugate transpose, therefore, arises very naturally as the result of simply transposing such a matrix—when viewed back again as an n × m {\displaystyle n\times m} matrix made up of complex numbers. For an explanation of the notation used here, we begin by representing complex numbers e i θ {\displaystyle e^{i\theta }} as the rotation matrix, that is, e i θ = ( cos θ − sin θ sin θ cos θ ) = cos θ ( 1 0 0 1 ) + sin θ ( 0 − 1 1 0 ) . 
{\displaystyle e^{i\theta }={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}=\cos \theta {\begin{pmatrix}1&0\\0&1\end{pmatrix}}+\sin \theta {\begin{pmatrix}0&-1\\1&0\end{pmatrix}}.} Since e i θ = cos θ + i sin θ {\displaystyle e^{i\theta }=\cos \theta +i\sin \theta } , we are led to the matrix representations of the unit numbers as 1 = ( 1 0 0 1 ) , i = ( 0 − 1 1 0 ) . {\displaystyle 1={\begin{pmatrix}1&0\\0&1\end{pmatrix}},\quad i={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}.} A general complex number z = x + i y {\displaystyle z=x+iy} is then represented as z = ( x − y y x ) . {\displaystyle z={\begin{pmatrix}x&-y\\y&x\end{pmatrix}}.} The complex conjugate operation (that sends a + b i {\displaystyle a+bi} to a − b i {\displaystyle a-bi} for real a , b {\displaystyle a,b} ) is encoded as the matrix transpose. == Properties == ( A + B ) H = A H + B H {\displaystyle (\mathbf {A} +{\boldsymbol {B}})^{\mathrm {H} }=\mathbf {A} ^{\mathrm {H} }+{\boldsymbol {B}}^{\mathrm {H} }} for any two matrices A {\displaystyle \mathbf {A} } and B {\displaystyle {\boldsymbol {B}}} of the same dimensions. ( z A ) H = z ¯ A H {\displaystyle (z\mathbf {A} )^{\mathrm {H} }={\overline {z}}\mathbf {A} ^{\mathrm {H} }} for any complex number z {\displaystyle z} and any m × n {\displaystyle m\times n} matrix A {\displaystyle \mathbf {A} } . ( A B ) H = B H A H {\displaystyle (\mathbf {A} {\boldsymbol {B}})^{\mathrm {H} }={\boldsymbol {B}}^{\mathrm {H} }\mathbf {A} ^{\mathrm {H} }} for any m × n {\displaystyle m\times n} matrix A {\displaystyle \mathbf {A} } and any n × p {\displaystyle n\times p} matrix B {\displaystyle {\boldsymbol {B}}} . Note that the order of the factors is reversed. ( A H ) H = A {\displaystyle \left(\mathbf {A} ^{\mathrm {H} }\right)^{\mathrm {H} }=\mathbf {A} } for any m × n {\displaystyle m\times n} matrix A {\displaystyle \mathbf {A} } , i.e. Hermitian transposition is an involution. 
If A {\displaystyle \mathbf {A} } is a square matrix, then det ( A H ) = det ( A ) ¯ {\displaystyle \det \left(\mathbf {A} ^{\mathrm {H} }\right)={\overline {\det \left(\mathbf {A} \right)}}} where det ( A ) {\displaystyle \operatorname {det} (A)} denotes the determinant of A {\displaystyle \mathbf {A} } . If A {\displaystyle \mathbf {A} } is a square matrix, then tr ( A H ) = tr ( A ) ¯ {\displaystyle \operatorname {tr} \left(\mathbf {A} ^{\mathrm {H} }\right)={\overline {\operatorname {tr} (\mathbf {A} )}}} where tr ( A ) {\displaystyle \operatorname {tr} (A)} denotes the trace of A {\displaystyle \mathbf {A} } . A {\displaystyle \mathbf {A} } is invertible if and only if A H {\displaystyle \mathbf {A} ^{\mathrm {H} }} is invertible, and in that case ( A H ) − 1 = ( A − 1 ) H {\displaystyle \left(\mathbf {A} ^{\mathrm {H} }\right)^{-1}=\left(\mathbf {A} ^{-1}\right)^{\mathrm {H} }} . The eigenvalues of A H {\displaystyle \mathbf {A} ^{\mathrm {H} }} are the complex conjugates of the eigenvalues of A {\displaystyle \mathbf {A} } . ⟨ A x , y ⟩ m = ⟨ x , A H y ⟩ n {\displaystyle \left\langle \mathbf {A} x,y\right\rangle _{m}=\left\langle x,\mathbf {A} ^{\mathrm {H} }y\right\rangle _{n}} for any m × n {\displaystyle m\times n} matrix A {\displaystyle \mathbf {A} } , any vector in x ∈ C n {\displaystyle x\in \mathbb {C} ^{n}} and any vector y ∈ C m {\displaystyle y\in \mathbb {C} ^{m}} . Here, ⟨ ⋅ , ⋅ ⟩ m {\displaystyle \langle \cdot ,\cdot \rangle _{m}} denotes the standard complex inner product on C m {\displaystyle \mathbb {C} ^{m}} , and similarly for ⟨ ⋅ , ⋅ ⟩ n {\displaystyle \langle \cdot ,\cdot \rangle _{n}} . 
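The example above and several of these properties can be checked directly in plain Python, whose built-in complex numbers provide a conjugate() method (a small illustrative sketch, with B an arbitrary test matrix):

```python
def conjugate_transpose(A):
    """Return A^H: transpose A, then conjugate every entry."""
    return [[A[i][j].conjugate() for i in range(len(A))]
            for j in range(len(A[0]))]

def matmul(X, Y):
    """Plain matrix product, for checking the reversal property."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# The matrix from the example section.
A = [[1, -2 - 1j, 5],
     [1 + 1j, 1j, 4 - 2j]]

assert conjugate_transpose(A) == [[1, 1 - 1j],
                                  [-2 + 1j, -1j],
                                  [5, 4 + 2j]]

# (A^H)^H = A (involution), and (AB)^H = B^H A^H (order reversed).
B = [[2, 1j], [0, 3], [1 - 1j, 4]]
assert conjugate_transpose(conjugate_transpose(A)) == A
assert conjugate_transpose(matmul(A, B)) == matmul(conjugate_transpose(B),
                                                   conjugate_transpose(A))
```

Because all entries here are Gaussian integers, every comparison is exact; for floating-point matrices an approximate comparison would be needed instead.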
== Generalizations == The last property given above shows that if one views A {\displaystyle \mathbf {A} } as a linear transformation from Hilbert space C n {\displaystyle \mathbb {C} ^{n}} to C m , {\displaystyle \mathbb {C} ^{m},} then the matrix A H {\displaystyle \mathbf {A} ^{\mathrm {H} }} corresponds to the adjoint operator of A {\displaystyle \mathbf {A} } . The concept of adjoint operators between Hilbert spaces can thus be seen as a generalization of the conjugate transpose of matrices with respect to an orthonormal basis. Another generalization is available: suppose A {\displaystyle A} is a linear map from a complex vector space V {\displaystyle V} to another, W {\displaystyle W} , then the complex conjugate linear map as well as the transposed linear map are defined, and we may thus take the conjugate transpose of A {\displaystyle A} to be the complex conjugate of the transpose of A {\displaystyle A} . It maps the conjugate dual of W {\displaystyle W} to the conjugate dual of V {\displaystyle V} . == See also == Complex dot product Hermitian adjoint Adjugate matrix == References == == External links == "Adjoint matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
|
Wikipedia:Conley index theory#0
|
In dynamical systems theory, Conley index theory, named after Charles Conley, analyzes the topological structure of invariant sets of diffeomorphisms and of smooth flows. It is a far-reaching generalization of the Hopf index theorem that predicts existence of fixed points of a flow inside a planar region in terms of information about its behavior on the boundary. Conley's theory is related to Morse theory, which describes the topological structure of a closed manifold by means of a nondegenerate gradient vector field. It has an enormous range of applications to the study of dynamics, including existence of periodic orbits in Hamiltonian systems and travelling wave solutions for partial differential equations, structure of global attractors for reaction–diffusion equations and delay differential equations, proof of chaotic behavior in dynamical systems, and bifurcation theory. Conley index theory formed the basis for development of Floer homology. == Short description == A key role in the theory is played by the notions of isolating neighborhood N {\displaystyle N} and isolated invariant set S {\displaystyle S} . The Conley index h ( S ) {\displaystyle h(S)} is the homotopy type of a space built from a certain pair ( N 1 , N 2 ) {\displaystyle (N_{1},N_{2})} of compact sets called an index pair for S {\displaystyle S} . Charles Conley showed that index pairs exist and that the index of S {\displaystyle S} is independent of the choice of the index pair. In the special case of the negative gradient flow of a smooth function, the Conley index of a nondegenerate (Morse) critical point of index k {\displaystyle k} is the pointed homotopy type of the k-sphere S^k. A deep theorem due to Conley asserts continuation invariance: Conley index is invariant under certain deformations of the dynamical system. Computation of the index can, therefore, be reduced to the case of the diffeomorphism or a vector field whose invariant sets are well understood. 
If the index is nontrivial then the invariant set S is nonempty. This principle can be amplified to establish existence of fixed points and periodic orbits inside N. == Construction == We build the Conley index from the concept of an index pair. Given an isolated invariant set S {\displaystyle S} in a flow ϕ {\displaystyle \phi } , an index pair for S {\displaystyle S} is a pair of compact sets ( N 1 , N 2 ) {\displaystyle (N_{1},N_{2})} , with N 2 ⊂ N 1 {\displaystyle N_{2}\subset N_{1}} , satisfying S = Inv ( N 1 ∖ N 2 ) {\displaystyle S={\text{Inv}}(N_{1}\setminus N_{2})} and N 1 ∖ N 2 {\displaystyle N_{1}\setminus N_{2}} is a neighborhood of S {\displaystyle S} ; For all x ∈ N 2 {\displaystyle x\in N_{2}} and t > 0 {\displaystyle t>0} , ϕ ( [ 0 , t ] , x ) ⊂ N 1 ⇒ ϕ ( [ 0 , t ] , x ) ⊂ N 2 {\displaystyle \phi ([0,t],x)\subset N_{1}\Rightarrow \phi ([0,t],x)\subset N_{2}} ; For all x ∈ N 1 {\displaystyle x\in N_{1}} and t > 0 {\displaystyle t>0} , ϕ ( t , x ) ∉ N 1 ⇒ ∃ t ′ ∈ [ 0 , t ] {\displaystyle \phi (t,x)\not \in N_{1}\Rightarrow \exists t'\in [0,t]} such that ϕ ( t ′ , x ) ∈ N 2 {\displaystyle \phi (t',x)\in N_{2}} . Conley shows that every isolated invariant set admits an index pair. For an isolated invariant set S {\displaystyle S} , we choose some index pair ( N 1 , N 2 ) {\displaystyle (N_{1},N_{2})} for S {\displaystyle S} and then define the homotopy Conley index of S {\displaystyle S} as h ( S , ϕ ) := [ ( N 1 / N 2 , [ N 2 ] ) ] {\displaystyle h(S,\phi ):=[(N_{1}/N_{2},[N_{2}])]} , the homotopy type of the quotient space ( N 1 / N 2 , [ N 2 ] ) {\displaystyle (N_{1}/N_{2},[N_{2}])} , seen as a topological pointed space. Analogously, the (co)homology Conley index of S {\displaystyle S} is the chain complex C H ∙ ( S , ϕ ) = H ∙ ( N 1 / N 2 , [ N 2 ] ) {\displaystyle CH_{\bullet }(S,\phi )=H_{\bullet }(N_{1}/N_{2},[N_{2}])} . 
We remark that Conley also showed that the Conley index is independent of the choice of an index pair, so that the index is well defined. == Properties == Some of the most important properties of the index are direct consequences of its definition, inheriting properties from homology and homotopy. Some of them include the following: If h ( S ) ≠ 0 {\displaystyle h(S)\neq 0} , then S ≠ ∅ {\displaystyle S\neq \emptyset } ; If S = ∪ i = 1 n M i {\displaystyle S=\cup _{i=1}^{n}M_{i}} , where each M i {\displaystyle M_{i}} is an isolated invariant set, then C H k ( S ) = ⊕ i = 1 n C H k ( M i ) {\displaystyle CH_{k}(S)=\oplus _{i=1}^{n}CH_{k}(M_{i})} ; The Conley index is homotopy invariant. Notice that a Morse set is an isolated invariant set, so that the Conley index is defined for it. == References == Charles Conley, Isolated invariant sets and the Morse index. CBMS Regional Conference Series in Mathematics, 38. American Mathematical Society, Providence, R.I., 1978 ISBN 0-8218-1688-8 Thomas Bartsch (2001) [1994], "Conley index", Encyclopedia of Mathematics, EMS Press John Franks, Michal Misiurewicz, Topological methods in dynamics. Chapter 7 in Handbook of Dynamical Systems, vol 1, part 1, pp 547–598, Elsevier 2002 ISBN 978-0-444-82669-5 Jürgen Jost, Dynamical systems. Examples of complex behaviour. Universitext. Springer-Verlag, Berlin, 2005 ISBN 978-3-540-22908-7 Konstantin Mischaikow, Marian Mrozek, Conley index. Chapter 9 in Handbook of Dynamical Systems, vol 2, pp 393–460, Elsevier 2002 ISBN 978-0-444-50168-4 M. R. Razvan, On Conley’s fundamental theorem of dynamical systems, 2002. == External links == Separation of Topological Singularities (Wolfram Demonstrations Project) == See also == Conley's fundamental theorem of dynamical systems
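As an illustration not drawn from the article itself, the classical saddle computation, found in most introductions to the subject, shows how an index pair produces the index:

```latex
% Hyperbolic saddle \dot x = x,\ \dot y = -y, with isolated invariant set
% S = \{(0,0)\}.  Take the index pair
%   N_1 = [-1,1]^2, \qquad N_2 = \{-1,1\} \times [-1,1],
% N_2 being the exit set: the two vertical edges, through which the flow
% leaves N_1.  Collapsing N_2 to a point turns the square into a pointed
% circle, so
\[
  h(S) \;=\; \bigl[\, N_1 / N_2 \,\bigr] \;\simeq\; S^{1},
\]
% in agreement with the rule h(S) \simeq S^{k} for a nondegenerate (Morse)
% critical point of index k: here k = 1, one expanding direction.
```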
|
Wikipedia:Conley–Zehnder theorem#0
|
In mathematics, the Conley–Zehnder theorem, named after Charles C. Conley and Eduard Zehnder, provides a lower bound for the number of fixed points of Hamiltonian diffeomorphisms of standard symplectic tori in terms of the topology of the underlying tori. The lower bound is one plus the cup-length of the torus (thus 2n+1, where 2n is the dimension of the considered torus), and it can be strengthened to the rank of the homology of the torus (which is 2^{2n}) provided all the fixed points are non-degenerate, this latter condition being generic in the C^1-topology. The theorem was conjectured by Vladimir Arnold, and it was known as the Arnold conjecture on fixed points of symplectomorphisms. Its validity was later extended to more general closed symplectic manifolds by Andreas Floer and several others. == References == Conley, C. C.; Zehnder, E. (1983), "The Birkhoff–Lewis fixed point theorem and a conjecture of V. I. Arnol'd" (PDF), Inventiones Mathematicae, 73 (1): 33–49, Bibcode:1983InMat..73...33C, doi:10.1007/BF01393824, ISSN 0020-9910, MR 0707347, S2CID 3124799, archived from the original on September 27, 2017
|
Wikipedia:Connected Mathematics#0
|
Connected Mathematics is a comprehensive mathematics program intended for U.S. students in grades 6–8. The curriculum design, text materials for students, and supporting resources for teachers were created and have been progressively refined by the Connected Mathematics Project (CMP) at Michigan State University with advice and contributions from many mathematics teachers, curriculum developers, mathematicians, and mathematics education researchers. The current third edition of Connected Mathematics is a major revision of the program to reflect new expectations of the Common Core State Standards for Mathematics and what the authors have learned from over twenty years of field experience by thousands of teachers working with millions of middle grades students. This CMP3 program is now published in paper and electronic form by Pearson Education. == Core principles == The first edition of Connected Mathematics, developed with financial support from the National Science Foundation, was designed to provide instructional materials for middle grades mathematics. It was based on the 1989 Curriculum and Evaluation Standards and the 1991 Professional Standards for Teaching Mathematics from the National Council of Teachers of Mathematics. These standards highlighted four core features of the curriculum: Comprehensive coverage of mathematical concepts and skills across four content strands—number, algebra, geometry and measurement, and probability and statistics. Connections between the concepts and methods of the four major content strands, and between the abstractions of mathematics and their applications in real-world problem contexts. Instructional materials that transform classrooms into dynamic environments where students learn by solving problems and sharing their thinking with others, while teachers encourage and support students to be curious, to ask questions, and to enjoy learning and using mathematics. 
Developing students' understanding of mathematical concepts, principles, procedures, and habits of mind, and fostering the disposition to use mathematical reasoning in making sense of new situations and solving problems. These principles have guided the development and refinement of the Connected Mathematics program for over twenty years. The first edition was published in 1995; a major revision, also supported by National Science Foundation funding, was published in 2006; and the current third edition was published in 2014. In the third edition, the collection of units was expanded to cover Common Core Standards for both grade eight and Algebra I. Each CMP grade level course aims to advance student understanding, skills, and problem-solving in every content strand, with increasing sophistication and challenge over the middle school grades. The problem tasks for students are designed to make connections within mathematics, between mathematics and other subject areas, and/or to real-world settings that appeal to students. Curriculum units consist of 3–5 investigations, each focused on a key mathematical idea; each investigation consists of several major problems that the teacher and students explore in class. Applications/Connections/Extensions problem sets are included for each investigation to help students practice, apply, connect, and extend essential understandings. While engaged in collaborative problem-solving and classroom discourse about mathematics, students are explicitly encouraged to reflect on their use of what the NCTM standards once called mathematical processes and now refer to as mathematical practices—making sense of problems and solving them, reasoning abstractly and quantitatively, constructing arguments and critiquing the reasoning of others, modeling with mathematics, using mathematical tools strategically, seeking and using structure, expressing regularity in repeated reasoning, and communicating ideas and results with precision. 
== Implementation challenges == The introduction of new curriculum content, instructional materials, and teaching methods is challenging in K–12 education. When the proposed changes contrast with long-standing traditional practice, it is common to hear concerns from parents, teachers, and other professionals, as well as from students who have been successful and comfortable in traditional classrooms. In recognition of this innovation challenge, the National Science Foundation complemented its investment in new curriculum materials with substantial investments in professional development for teachers. By funding state and urban systemic initiatives, local systemic change projects, and math-science partnership programs, as well as national centers for standards-based school mathematics curriculum dissemination and implementation, the NSF provided powerful support for the adoption and implementation of the various reform mathematics curricula developed during the standards era. In addition to those programs, for nearly twenty years, CMP has sponsored summer Getting to Know CMP institutes, workshops for leaders of CMP implementation, and an annual User's Conference for the sharing of implementation experiences and insights, all on the campus of Michigan State University. The whole reform curriculum effort has greatly enhanced the field's understanding of what works in that important and challenging process—the clearest message being that significant lasting change takes time, persistent effort, and coordination of work by teachers at all levels in a system. == Research findings == Connected Mathematics has become the most widely used of the middle school curriculum materials developed to implement the NCTM Standards. The effects of its use have been described in expository journal articles and evaluated in mathematics education research projects. 
Many of the research studies are master's or doctoral dissertation research projects focused on specific aspects of the CMP classroom experience and student learning. But there have also been a number of large-scale independent evaluations of the results of the program. In the large-scale controlled research studies the most common (but by no means universal) pattern of results has been better performance by CMP students on measures of conceptual understanding and problem solving and no significant difference between students of CMP and traditional curriculum materials on measures of routine skills and factual knowledge. For example, this pattern is what the LieCal project found from a longitudinal study comparing learning by students in CMP and traditional middle grades curricula: (1) Students did not sacrifice basic mathematical skills if they were taught using a standards-based or reform mathematics curriculum like CMP; (2) African American students experienced greater gains in symbol manipulation when they used a traditional curriculum; (3) the use of either the CMP or a non-CMP curriculum improved the mathematics achievement of all students, including students of color; (4) the use of CMP contributed to significantly higher problem-solving growth for all ethnic groups; and (5) a high level of conceptual emphasis in a classroom improved the students’ ability to represent problem situations. Perhaps the most telling result of all is reported in the 2008 study by James Tarr and colleagues at the University of Missouri. While finding no overall significant effects from use of reform or traditional curriculum materials, the study did discover effects favoring the NSF-funded curricula when those programs were implemented with high or even moderate levels of fidelity to Standards-based learning environments. That is, when the innovative programs are used as designed, they produce positive effects. 
== Historical controversy == Like other curricula designed and developed during the 1990s to implement the NCTM Standards, Connected Math was criticized by supporters of more traditional curricula. Critics made the following claims: Reform curricula like CMP pay too little attention to the development of basic computational skills in number and algebra; Student investigation and discovery of key mathematical concepts and skills might lead to critical gaps and misconceptions in their knowledge. Emphasis on mathematics in real-world contexts might cause students to miss abstractions and generalizations that are the powerful heart of the subject. The lack of explanatory prose in textbooks makes it hard for parents to help their children with homework and puts students with weak note-taking abilities, poor handwriting, slow handwriting, and attention deficits at a distinct disadvantage. Additionally, with limited explanatory written materials, students who miss one or more days of school will struggle to catch up on missed materials. Small-group learning is less efficient than teacher-led direct instructional methods, and the most able and interested students might be held back by having to collaborate with less able and motivated students. The CMP program does not take into account the needs of students with minor learning disabilities or other disabilities who might be integrated into general education classrooms but still need extra help and need associated or modified learning materials. The publishers and creators of CMP have stated that reassuring results from a variety of research projects blunted concerns about basic skill mastery, missing knowledge, and student misconceptions resulting from use of CMP and other reform curricula. However, many teachers and parents remain wary. 
== References == == External links == Connected Mathematics Project http://connectedmath.msu.edu/ Pearson http://www.connectedmathematics3.com Common Core State Standards http://www.corestandards.org/Math
|
Wikipedia:Connectedness locus#0
|
In one-dimensional complex dynamics, the connectedness locus of a parameterized family of one-variable holomorphic functions is a subset of the parameter space which consists of those parameters for which the corresponding Julia set is connected. == Examples == Without doubt, the most famous connectedness locus is the Mandelbrot set, which arises from the family of complex quadratic polynomials : f c ( z ) = z 2 + c {\displaystyle f_{c}(z)=z^{2}+c\,} The connectedness loci of the higher-degree unicritical families, z ↦ z d + c {\displaystyle z\mapsto z^{d}+c\,} (where d ≥ 3 {\displaystyle d\geq 3\,} ) are often called 'Multibrot sets'. For these families, the bifurcation locus is the boundary of the connectedness locus. This is no longer true in settings, such as the full parameter space of cubic polynomials, where there is more than one free critical point. For these families, even maps with disconnected Julia sets may display nontrivial dynamics. Hence here the connectedness locus is generally of less interest. == References == == External links == Epstein, Adam; Yampolsky, Michael (March 1999). "Geography of the cubic connectedness locus: Intertwining surgery". Annales Scientifiques de l'École Normale Supérieure. 32 (2): 151–185. arXiv:math/9608213. doi:10.1016/S0012-9593(99)80013-5. S2CID 18035406.
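For unicritical families, connectedness of the Julia set can be tested numerically: the Julia set of z ↦ z^d + c is connected exactly when the orbit of the critical point 0 remains bounded. A minimal sketch in Python (the function name, iteration cap, and escape radius are illustrative choices, not part of the theory):

```python
def in_connectedness_locus(c, d=2, max_iter=200, radius=2.0):
    """Heuristically test whether c lies in the connectedness locus of
    z -> z**d + c by iterating the critical point 0 and checking for escape.
    For d = 2 this approximates membership in the Mandelbrot set."""
    z = 0
    for _ in range(max_iter):
        z = z**d + c
        if abs(z) > radius:
            return False  # critical orbit escapes: Julia set is disconnected
    return True  # orbit stayed bounded so far: Julia set is likely connected

# c = 0 and c = -1 lie in the Mandelbrot set; c = 1 does not (0 -> 1 -> 2 -> 5 escapes).
```

Note that a finite iteration cap can only certify escape; parameters reported as inside the locus are only "probably inside".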
|
Wikipedia:Conny Palm#0
|
Conrad "Conny" Rudolf Agaton Palm (May 31, 1907 – December 27, 1951) was a Swedish electrical engineer and statistician, known for several contributions to teletraffic engineering and queueing theory. == Education and career == Palm enrolled at the School of Electrical Engineering at the Royal Institute of Technology in Stockholm in 1925, being awarded his M.Sc. (1940) and Ph.D. (1943) on a dissertation entitled Intensitätsschwankungen im Fernsprechverkehr (Intensity Fluctuations in Telephone Traffic). Palm also worked with L. M. Ericsson, cooperating with Christian Jacobæus. He attended Harald Cramér's queueing theory group, where he met William Feller in 1937. Later, Palm served on the Swedish Board for Computing Machinery (Matematikmaskinnämnden), where he led the project that developed the first Swedish computer, the BARK (1947–51), informally referred to as CONIAC (Conny [Palm] Integrator And Calculator). He was also an adjunct professor in telecommunications at the Royal Institute of Technology. Professor Håkan Sterky, who was Palm's thesis advisor, characterised Palm as a bohemian and a brilliant statistician. Palm had started his research before he graduated, which seemed to be due to a lack of interest in some undergraduate courses rather than the level of difficulty of the courses. To apply pressure on him, it was finally agreed that his monthly salary from Ericsson would only be paid out if he had passed one of his remaining undergraduate exams that month. Sterky related how Palm would typically show up a few days before pay day wishing to sit for an oral exam. == Books == Intensity variations in telephone traffic. North-Holland. 1988. ISBN 0-444-70472-8. Book edition of Ph.D. thesis of 1943, translated from German to English by Christian Jacobæus. == See also == Palm calculus Palm–Khintchine theorem == References ==
|
Wikipedia:Conservation form#0
|
Conservation form or Eulerian form refers to an arrangement of an equation or system of equations, usually representing a hyperbolic system, that emphasizes that a property represented is conserved, i.e. a type of continuity equation. The term is usually used in the context of continuum mechanics. == General form == Equations in conservation form take the form ∂ ξ ∂ t + ∇ ⋅ f ( ξ ) = 0 {\displaystyle {\frac {\partial \xi }{\partial t}}+{\boldsymbol {\nabla }}\cdot \mathbf {f} (\xi )=0} for any conserved quantity ξ {\displaystyle \xi } , with a suitable function f {\displaystyle \mathbf {f} } . An equation of this form can be transformed into an integral equation d d t ∫ V ξ d V = − ∮ ∂ V f ( ξ ) ⋅ ν d S {\displaystyle {\frac {d}{dt}}\int _{V}\xi ~dV=-\oint _{\partial V}\mathbf {f} (\xi )\cdot {\boldsymbol {\nu }}~dS} using the divergence theorem. The integral equation states that the change rate of the integral of the quantity ξ {\displaystyle \xi } over an arbitrary control volume V {\displaystyle V} is given by the flux f ( ξ ) {\displaystyle \mathbf {f} (\xi )} through the boundary of the control volume, with ν {\displaystyle {\boldsymbol {\nu }}} being the outer surface normal through the boundary. ξ {\displaystyle \xi } is neither produced nor consumed inside of V {\displaystyle V} and is hence conserved. A typical choice for f {\displaystyle \mathbf {f} } is f ( ξ ) = ξ u {\displaystyle \mathbf {f} (\xi )=\xi \mathbf {u} } , with velocity u {\displaystyle \mathbf {u} } , meaning that the quantity ξ {\displaystyle \xi } flows with a given velocity field. The integral form of such equations is usually the physically more natural formulation, and the differential equation arises from differentiation. Since the integral equation can also have non-differentiable solutions, the equality of both formulations can break down in some cases, leading to weak solutions and severe numerical difficulties in simulations of such equations. 
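The integral form motivates conservative (finite-volume) numerical schemes: each interface flux is added to one cell and subtracted from its neighbour, so the total amount of the conserved quantity changes only through the domain boundary. A minimal sketch in Python for the one-dimensional choice f(ξ) = ξu with constant u > 0 and periodic boundaries (the grid size, velocity, and time step are illustrative):

```python
# First-order upwind finite-volume step for d(xi)/dt + d(u*xi)/dx = 0,
# with constant u > 0 and periodic boundaries. Each interface flux appears
# once with a plus sign and once with a minus sign, so sum(xi) is conserved.
def upwind_step(xi, u, dx, dt):
    n = len(xi)
    fluxes = [u * xi[i] for i in range(n)]  # flux leaving cell i to the right
    return [xi[i] - dt / dx * (fluxes[i] - fluxes[i - 1]) for i in range(n)]

xi = [0.0] * 10
xi[3] = 1.0                                 # a localized bump
for _ in range(20):
    xi = upwind_step(xi, u=1.0, dx=0.1, dt=0.05)
# sum(xi) is still 1.0 (up to rounding): the scheme is conservative
```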
== Example == An example of a set of equations written in conservation form are the Euler equations of fluid flow: ∂ ρ ∂ t + ∇ ⋅ ( ρ u ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0} ∂ ρ u ∂ t + ∇ ⋅ ( ρ u ⊗ u + p I ) = 0 {\displaystyle {\frac {\partial \rho \mathbf {u} }{\partial t}}+\nabla \cdot (\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} )=0} ∂ E ∂ t + ∇ ⋅ ( u ( E + p V ) ) = 0 {\displaystyle {\frac {\partial E}{\partial t}}+\nabla \cdot (\mathbf {u} (E+pV))=0} Each of these represents the conservation of mass, momentum and energy, respectively. == See also == Conservation law Lagrangian and Eulerian specification of the flow field == Further reading == Toro, E.F. (1999). Riemann Solvers and Numerical Methods for Fluid Dynamics. Springer-Verlag. ISBN 3-540-65966-8. Randall J. LeVeque: Finite Volume Methods for Hyperbolic Problems. Cambridge University Press, Cambridge 2002, ISBN 0-521-00924-3 (Cambridge Texts in Applied Mathematics).
|
Wikipedia:Consistent and inconsistent equations#0
|
In mathematics and particularly in algebra, a system of equations (either linear or nonlinear) is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system—that is, when substituted into each of the equations, they make each equation hold true. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations. If a system of equations is inconsistent, then the equations cannot be true together, leading to contradictory information, such as the false statements 2 = 1, or x 3 + y 3 = 5 {\displaystyle x^{3}+y^{3}=5} and x 3 + y 3 = 6 {\displaystyle x^{3}+y^{3}=6} (which together imply 5 = 6). Both types of equation system, inconsistent and consistent, can be any of overdetermined (having more equations than unknowns), underdetermined (having fewer equations than unknowns), or exactly determined. == Simple examples == === Underdetermined and consistent === The system x + y + z = 3 , x + y + 2 z = 4 {\displaystyle {\begin{aligned}x+y+z&=3,\\x+y+2z&=4\end{aligned}}} has an infinite number of solutions, all of them having z = 1 (as can be seen by subtracting the first equation from the second), and all of them therefore having x + y = 2. The nonlinear system x 2 + y 2 + z 2 = 10 , x 2 + y 2 = 5 {\displaystyle {\begin{aligned}x^{2}+y^{2}+z^{2}&=10,\\x^{2}+y^{2}&=5\end{aligned}}} has an infinitude of solutions, all involving z = ± 5 . {\displaystyle z=\pm {\sqrt {5}}.} Since each of these systems has more than one solution, it is an indeterminate system. === Underdetermined and inconsistent === The system x + y + z = 3 , x + y + z = 4 {\displaystyle {\begin{aligned}x+y+z&=3,\\x+y+z&=4\end{aligned}}} has no solutions, as can be seen by subtracting the first equation from the second to obtain the impossible 0 = 1. 
The nonlinear system x 2 + y 2 + z 2 = 17 , x 2 + y 2 + z 2 = 14 {\displaystyle {\begin{aligned}x^{2}+y^{2}+z^{2}&=17,\\x^{2}+y^{2}+z^{2}&=14\end{aligned}}} has no solutions, because if one equation is subtracted from the other we obtain the impossible 0 = 3. === Exactly determined and consistent === The system x + y = 3 , x + 2 y = 5 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=5\end{aligned}}} has exactly one solution: x = 1, y = 2. The nonlinear system x + y = 1 , x 2 + y 2 = 1 {\displaystyle {\begin{aligned}x+y&=1,\\x^{2}+y^{2}&=1\end{aligned}}} has the two solutions (x, y) = (1, 0) and (x, y) = (0, 1), while x 3 + y 3 + z 3 = 10 , x 3 + 2 y 3 + z 3 = 12 , 3 x 3 + 5 y 3 + 3 z 3 = 34 {\displaystyle {\begin{aligned}x^{3}+y^{3}+z^{3}&=10,\\x^{3}+2y^{3}+z^{3}&=12,\\3x^{3}+5y^{3}+3z^{3}&=34\end{aligned}}} has an infinite number of solutions because the third equation is the first equation plus twice the second one and hence contains no independent information; thus any value of z can be chosen and values of x and y can be found to satisfy the first two (and hence the third) equations. === Exactly determined and inconsistent === The system x + y = 3 , 4 x + 4 y = 10 {\displaystyle {\begin{aligned}x+y&=3,\\4x+4y&=10\end{aligned}}} has no solutions; the inconsistency can be seen by multiplying the first equation by 4 and subtracting the second equation to obtain the impossible 0 = 2. Likewise, x 3 + y 3 + z 3 = 10 , x 3 + 2 y 3 + z 3 = 12 , 3 x 3 + 5 y 3 + 3 z 3 = 32 {\displaystyle {\begin{aligned}x^{3}+y^{3}+z^{3}&=10,\\x^{3}+2y^{3}+z^{3}&=12,\\3x^{3}+5y^{3}+3z^{3}&=32\end{aligned}}} is an inconsistent system because the first equation plus twice the second minus the third contains the contradiction 0 = 2. 
=== Overdetermined and consistent === The system x + y = 3 , x + 2 y = 7 , 4 x + 6 y = 20 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=7,\\4x+6y&=20\end{aligned}}} has a solution, x = –1, y = 4, because the first two equations do not contradict each other and the third equation is redundant (since it contains the same information as can be obtained from the first two equations by multiplying each through by 2 and summing them). The system x + 2 y = 7 , 3 x + 6 y = 21 , 7 x + 14 y = 49 {\displaystyle {\begin{aligned}x+2y&=7,\\3x+6y&=21,\\7x+14y&=49\end{aligned}}} has an infinitude of solutions since all three equations give the same information as each other (as can be seen by multiplying through the first equation by either 3 or 7). Any value of y is part of a solution, with the corresponding value of x being 7 – 2y. The nonlinear system x 2 − 1 = 0 , y 2 − 1 = 0 , ( x − 1 ) ( y − 1 ) = 0 {\displaystyle {\begin{aligned}x^{2}-1&=0,\\y^{2}-1&=0,\\(x-1)(y-1)&=0\end{aligned}}} has the three solutions (x, y) = (1, –1), (–1, 1), (1, 1). === Overdetermined and inconsistent === The system x + y = 3 , x + 2 y = 7 , 4 x + 6 y = 21 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=7,\\4x+6y&=21\end{aligned}}} is inconsistent because the last equation contradicts the information embedded in the first two, as seen by multiplying each of the first two through by 2 and summing them. The system x 2 + y 2 = 1 , x 2 + 2 y 2 = 2 , 2 x 2 + 3 y 2 = 4 {\displaystyle {\begin{aligned}x^{2}+y^{2}&=1,\\x^{2}+2y^{2}&=2,\\2x^{2}+3y^{2}&=4\end{aligned}}} is inconsistent because the sum of the first two equations contradicts the third one. == Criteria for consistency == As can be seen from the above examples, consistency versus inconsistency is a different issue from comparing the numbers of equations and unknowns. 
=== Linear systems === A linear system is consistent if and only if its coefficient matrix has the same rank as does its augmented matrix (the coefficient matrix with an extra column added, that column being the column vector of constants). === Nonlinear systems === == References ==
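The rank criterion for linear systems can be checked mechanically. A minimal sketch in plain Python (the helper names are illustrative), computing ranks by Gaussian elimination and applying it to two of the systems above:

```python
def rank(rows, tol=1e-9):
    """Rank of a matrix (given as a list of row lists) via Gaussian elimination."""
    m = [row[:] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot row for this column at or below row r
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_consistent(A, b):
    """Ax = b is consistent iff rank(A) equals the rank of the augmented matrix."""
    return rank(A) == rank([row + [bi] for row, bi in zip(A, b)])

# x + y + z = 3, x + y + 2z = 4 is consistent; x + y + z = 3, x + y + z = 4 is not.
print(is_consistent([[1, 1, 1], [1, 1, 2]], [3, 4]))  # True
print(is_consistent([[1, 1, 1], [1, 1, 1]], [3, 4]))  # False
```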
|
Wikipedia:Constant (mathematics)#0
|
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings: A fixed and well-defined number or other non-changing mathematical object, or the symbol denoting it. The terms mathematical constant or physical constant are sometimes used to distinguish this meaning. A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question. For example, a general quadratic function is commonly written as: a x 2 + b x + c , {\displaystyle ax^{2}+bx+c\,,} where a, b and c are constants (coefficients or parameters), and x a variable—a placeholder for the argument of the function being studied. A more explicit way to denote this function is x ↦ a x 2 + b x + c , {\displaystyle x\mapsto ax^{2}+bx+c\,,} which makes the function-argument status of x (and by extension the constancy of a, b and c) clear. In this example a, b and c are coefficients of the polynomial. Since c occurs in a term that does not involve x, it is called the constant term of the polynomial and can be thought of as the coefficient of x 0 {\displaystyle x^{0}} . More generally, any polynomial term or expression of degree zero (no variable) is a constant. == Constant function == A constant may be used to define a constant function that ignores its arguments and always gives the same value. A constant function of a single variable, such as f ( x ) = 5 {\displaystyle f(x)=5} , has as its graph a horizontal line parallel to the x-axis. Such a function always takes the same value (in this case 5), because the variable does not appear in the expression defining the function. 
== Context-dependence == The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus: d d x 2 x = lim h → 0 2 x + h − 2 x h = lim h → 0 2 x 2 h − 1 h = 2 x lim h → 0 2 h − 1 h since x is constant (i.e. does not depend on h ) = 2 x ⋅ c o n s t a n t , where c o n s t a n t means not depending on x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}2^{x}&=\lim _{h\to 0}{\frac {2^{x+h}-2^{x}}{h}}=\lim _{h\to 0}2^{x}{\frac {2^{h}-1}{h}}\\[8pt]&=2^{x}\lim _{h\to 0}{\frac {2^{h}-1}{h}}&&{\text{since }}x{\text{ is constant (i.e. does not depend on }}h{\text{)}}\\[8pt]&=2^{x}\cdot \mathbf {constant,} &&{\text{where }}\mathbf {constant} {\text{ means not depending on }}x.\end{aligned}}} "Constant" means not depending on some variable; not changing as that variable changes. In the first case above, it means not depending on h; in the second, it means not depending on x. A constant in a narrower context could be regarded as a variable in a broader context. == Notable mathematical constants == Some values occur frequently in mathematics and are conventionally denoted by a specific symbol. These standard symbols and their values are called mathematical constants. Examples include: 0 (zero). 1 (one), the natural number after zero. π (pi), the constant representing the ratio of a circle's circumference to its diameter, approximately equal to 3.141592653589793238462643. e, approximately equal to 2.718281828459045235360287. i, the imaginary unit such that i2 = −1. 2 {\displaystyle {\sqrt {2}}} (square root of 2), the length of the diagonal of a square with unit sides, approximately equal to 1.414213562373095048801688. φ (golden ratio), approximately equal to 1.618033988749894848204586, or algebraically, 1 + 5 2 {\displaystyle 1+{\sqrt {5}} \over 2} . == Constants in calculus == In calculus, constants are treated in several different ways depending on the operation. 
For example, the derivative (rate of change) of a constant function is zero. This is because constants, by definition, do not change, so their derivative is zero. Conversely, when integrating a constant function, the constant is multiplied by the variable of integration. During the evaluation of a limit, a constant remains unchanged. Integration of a function of one variable often involves a constant of integration. This arises because the integral is the inverse (opposite) of the derivative, meaning that the aim of integration is to recover the original function that was differentiated. The derivative of a constant function is zero, as noted above, and the differential operator is a linear operator, so functions that only differ by a constant term have the same derivative. To acknowledge this, a constant of integration is added to an indefinite integral; this ensures that all possible solutions are included. The constant of integration is generally written as 'c', and represents a constant with a fixed but undefined value. === Examples === If f is the constant function such that f ( x ) = 72 {\displaystyle f(x)=72} for every x, then f ′ ( x ) = 0 ∫ f ( x ) d x = 72 x + c lim x → 0 f ( x ) = 72 {\displaystyle {\begin{aligned}f'(x)&=0\\\int f(x)\,dx&=72x+c\\\lim _{x\rightarrow 0}f(x)&=72\end{aligned}}} == See also == Constant (disambiguation) Expression Level set List of mathematical constants Physical constant == References == == External links == Media related to Constants at Wikimedia Commons
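The three behaviours above can be illustrated numerically for the constant function f(x) = 72: a difference quotient of a constant is exactly zero, a Riemann sum over [0, x] approximates 72x, and the value near any point is still 72. A minimal sketch in Python (the step sizes are illustrative):

```python
f = lambda x: 72  # a constant function

# Derivative: the difference quotient of a constant is exactly zero.
h = 1e-6
assert (f(2 + h) - f(2)) / h == 0.0

# Integral from 0 to x: a Riemann sum approaches 72*x (here x = 3).
def riemann(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

print(riemann(f, 0, 3))   # ~216.0, i.e. 72*x at x = 3

# Limit: f(x) is 72 arbitrarily close to 0, since f never changes.
assert f(1e-12) == 72
```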
|
Wikipedia:Constant term#0
|
In mathematics, a constant term (sometimes referred to as a free term) is a term in an algebraic expression that does not contain any variables and therefore is constant. For example, in the quadratic polynomial x 2 + 2 x + 3 , {\displaystyle x^{2}+2x+3,\ } the number 3 is the constant term. After like terms are combined, an algebraic expression will have at most one constant term. Thus, it is common to speak of the quadratic polynomial a x 2 + b x + c , {\displaystyle ax^{2}+bx+c,\ } where x {\displaystyle x} is the variable, as having a constant term of c . {\displaystyle c.} If the constant term is 0, then it will conventionally be omitted when the quadratic is written out. Any polynomial written in standard form has a unique constant term, which can be considered a coefficient of x 0 . {\displaystyle x^{0}.} In particular, the constant term will always be the lowest degree term of the polynomial. This also applies to multivariate polynomials. For example, the polynomial x 2 + 2 x y + y 2 − 2 x + 2 y − 4 {\displaystyle x^{2}+2xy+y^{2}-2x+2y-4\ } has a constant term of −4, which can be considered to be the coefficient of x 0 y 0 , {\displaystyle x^{0}y^{0},} where the variables are eliminated by being exponentiated to 0 (any non-zero number exponentiated to 0 becomes 1). For any polynomial, the constant term can be obtained by substituting 0 for each variable, thus eliminating every variable. The concept of exponentiation to 0 can be applied to power series and other types of series, for example in this power series: a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ , {\displaystyle a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots ,} a 0 {\displaystyle a_{0}} is the constant term. == Constant of integration == The derivative of a constant term is 0, so when a term containing a constant term is differentiated, the constant term vanishes, regardless of its value. 
Therefore the antiderivative is only determined up to an unknown constant term, which is called "the constant of integration" and added in symbolic form (usually denoted as C {\displaystyle C} ). For example, the antiderivative of cos x {\displaystyle \cos x} is sin x {\displaystyle \sin x} , since the derivative of sin x {\displaystyle \sin x} is equal to cos x {\displaystyle \cos x} based on the properties of trigonometric derivatives. However, the integral of cos x {\displaystyle \cos x} is equal to sin x {\displaystyle \sin x} (the antiderivative), plus an arbitrary constant: ∫ cos x d x = sin x + C , {\displaystyle \int \cos x\,\mathrm {d} x=\sin x+C,} because for any constant C {\displaystyle C} , the derivative of the right-hand side of the equation is equal to the left-hand side of the equation. == See also == Constant (mathematics) == References ==
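The rule that the constant term is the value obtained by substituting 0 for every variable is easy to apply mechanically. A minimal sketch in Python (the helper name is illustrative), using the two polynomials discussed above:

```python
def constant_term(poly, nvars):
    """Constant term of a polynomial function: substitute 0 for every variable."""
    return poly(*([0] * nvars))

# x^2 + 2x + 3 has constant term 3.
print(constant_term(lambda x: x**2 + 2*x + 3, 1))  # 3

# x^2 + 2xy + y^2 - 2x + 2y - 4 has constant term -4.
print(constant_term(lambda x, y: x**2 + 2*x*y + y**2 - 2*x + 2*y - 4, 2))  # -4
```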
|
Wikipedia:Constant-recursive sequence#0
|
In mathematics, an infinite sequence of numbers s 0 , s 1 , s 2 , s 3 , … {\displaystyle s_{0},s_{1},s_{2},s_{3},\ldots } is called constant-recursive if it satisfies an equation of the form s n = c 1 s n − 1 + c 2 s n − 2 + ⋯ + c d s n − d , {\displaystyle s_{n}=c_{1}s_{n-1}+c_{2}s_{n-2}+\dots +c_{d}s_{n-d},} for all n ≥ d {\displaystyle n\geq d} , where c i {\displaystyle c_{i}} are constants. The equation is called a linear recurrence relation. The concept is also known as a linear recurrence sequence, linear-recursive sequence, linear-recurrent sequence, or a C-finite sequence. For example, the Fibonacci sequence 0 , 1 , 1 , 2 , 3 , 5 , 8 , 13 , … {\displaystyle 0,1,1,2,3,5,8,13,\ldots } is constant-recursive because it satisfies the linear recurrence F n = F n − 1 + F n − 2 {\displaystyle F_{n}=F_{n-1}+F_{n-2}} : each number in the sequence is the sum of the previous two. Other examples include the power of two sequence 1 , 2 , 4 , 8 , 16 , … {\displaystyle 1,2,4,8,16,\ldots } , where each number is twice the previous number, and the square number sequence 0 , 1 , 4 , 9 , 16 , 25 , … {\displaystyle 0,1,4,9,16,25,\ldots } . All arithmetic progressions, all geometric progressions, and all polynomial sequences are constant-recursive. However, not all sequences are constant-recursive; for example, the factorial sequence 1 , 1 , 2 , 6 , 24 , 120 , … {\displaystyle 1,1,2,6,24,120,\ldots } is not constant-recursive. Constant-recursive sequences are studied in combinatorics and the theory of finite differences. They also arise in algebraic number theory, due to the relation of the sequence to polynomial roots; in the analysis of algorithms, as the running time of simple recursive functions; and in the theory of formal languages, where they count strings up to a given length in a regular language. Constant-recursive sequences are closed under important mathematical operations such as term-wise addition, term-wise multiplication, and Cauchy product. 
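A linear recurrence with constant coefficients can be evaluated directly from its coefficients and initial values. A minimal sketch in Python (the function name is illustrative):

```python
def constant_recursive(coeffs, initial, n):
    """First n terms of the sequence s_k = c_1*s_{k-1} + ... + c_d*s_{k-d},
    given coeffs = [c_1, ..., c_d] and the first d values."""
    s = list(initial)
    while len(s) < n:
        s.append(sum(c * s[-i - 1] for i, c in enumerate(coeffs)))
    return s[:n]

print(constant_recursive([1, 1], [0, 1], 8))  # Fibonacci: [0, 1, 1, 2, 3, 5, 8, 13]
print(constant_recursive([2], [1], 5))        # powers of two: [1, 2, 4, 8, 16]
```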
The Skolem–Mahler–Lech theorem states that the zeros of a constant-recursive sequence have a regularly repeating (eventually periodic) form. The Skolem problem, which asks for an algorithm to determine whether a linear recurrence has at least one zero, is an unsolved problem in mathematics. == Definition == A constant-recursive sequence is any sequence of integers, rational numbers, algebraic numbers, real numbers, or complex numbers s 0 , s 1 , s 2 , s 3 , … {\displaystyle s_{0},s_{1},s_{2},s_{3},\ldots } (written as ( s n ) n = 0 ∞ {\displaystyle (s_{n})_{n=0}^{\infty }} as a shorthand) satisfying a formula of the form s n = c 1 s n − 1 + c 2 s n − 2 + ⋯ + c d s n − d , {\displaystyle s_{n}=c_{1}s_{n-1}+c_{2}s_{n-2}+\dots +c_{d}s_{n-d},} for all n ≥ d , {\displaystyle n\geq d,} for some fixed coefficients c 1 , c 2 , … , c d {\displaystyle c_{1},c_{2},\dots ,c_{d}} ranging over the same domain as the sequence (integers, rational numbers, algebraic numbers, real numbers, or complex numbers). The equation is called a linear recurrence with constant coefficients of order d. The order of the sequence is the smallest positive integer d {\displaystyle d} such that the sequence satisfies a recurrence of order d, or d = 0 {\displaystyle d=0} for the everywhere-zero sequence. The definition above allows eventually-periodic sequences such as 1 , 0 , 0 , 0 , … {\displaystyle 1,0,0,0,\ldots } and 0 , 1 , 0 , 0 , … {\displaystyle 0,1,0,0,\ldots } . Some authors require that c d ≠ 0 {\displaystyle c_{d}\neq 0} , which excludes such sequences. == Examples == === Fibonacci and Lucas sequences === The sequence 0, 1, 1, 2, 3, 5, 8, 13, ... of Fibonacci numbers is constant-recursive of order 2 because it satisfies the recurrence F n = F n − 1 + F n − 2 {\displaystyle F_{n}=F_{n-1}+F_{n-2}} with F 0 = 0 , F 1 = 1 {\displaystyle F_{0}=0,F_{1}=1} . 
For example, F 2 = F 1 + F 0 = 1 + 0 = 1 {\displaystyle F_{2}=F_{1}+F_{0}=1+0=1} and F 6 = F 5 + F 4 = 5 + 3 = 8 {\displaystyle F_{6}=F_{5}+F_{4}=5+3=8} . The sequence 2, 1, 3, 4, 7, 11, ... of Lucas numbers satisfies the same recurrence as the Fibonacci sequence but with initial conditions L 0 = 2 {\displaystyle L_{0}=2} and L 1 = 1 {\displaystyle L_{1}=1} . More generally, every Lucas sequence is constant-recursive of order 2. === Arithmetic progressions === For any a {\displaystyle a} and any r ≠ 0 {\displaystyle r\neq 0} , the arithmetic progression a , a + r , a + 2 r , … {\displaystyle a,a+r,a+2r,\ldots } is constant-recursive of order 2, because it satisfies s n = 2 s n − 1 − s n − 2 {\displaystyle s_{n}=2s_{n-1}-s_{n-2}} . Generalizing this, see polynomial sequences below. === Geometric progressions === For any a ≠ 0 {\displaystyle a\neq 0} and r {\displaystyle r} , the geometric progression a , a r , a r 2 , … {\displaystyle a,ar,ar^{2},\ldots } is constant-recursive of order 1, because it satisfies s n = r s n − 1 {\displaystyle s_{n}=rs_{n-1}} . This includes, for example, the sequence 1, 2, 4, 8, 16, ... as well as the rational number sequence 1 , 1 2 , 1 4 , 1 8 , 1 16 , . . . {\textstyle 1,{\frac {1}{2}},{\frac {1}{4}},{\frac {1}{8}},{\frac {1}{16}},...} . === Eventually periodic sequences === A sequence that is eventually periodic with period length ℓ {\displaystyle \ell } is constant-recursive, since it satisfies s n = s n − ℓ {\displaystyle s_{n}=s_{n-\ell }} for all n ≥ d {\displaystyle n\geq d} , where the order d {\displaystyle d} is the length of the initial segment including the first repeating block. Examples of such sequences are 1, 0, 0, 0, ... (order 1) and 1, 6, 6, 6, ... (order 2). === Polynomial sequences === A sequence defined by a polynomial s n = a 0 + a 1 n + a 2 n 2 + ⋯ + a d n d {\displaystyle s_{n}=a_{0}+a_{1}n+a_{2}n^{2}+\cdots +a_{d}n^{d}} is constant-recursive. 
The sequence satisfies a recurrence of order d + 1 {\displaystyle d+1} (where d {\displaystyle d} is the degree of the polynomial), with coefficients given by the corresponding element of the binomial transform. The first few such equations are s n = 1 ⋅ s n − 1 {\displaystyle s_{n}=1\cdot s_{n-1}} for a degree 0 (that is, constant) polynomial, s n = 2 ⋅ s n − 1 − 1 ⋅ s n − 2 {\displaystyle s_{n}=2\cdot s_{n-1}-1\cdot s_{n-2}} for a degree 1 or less polynomial, s n = 3 ⋅ s n − 1 − 3 ⋅ s n − 2 + 1 ⋅ s n − 3 {\displaystyle s_{n}=3\cdot s_{n-1}-3\cdot s_{n-2}+1\cdot s_{n-3}} for a degree 2 or less polynomial, and s n = 4 ⋅ s n − 1 − 6 ⋅ s n − 2 + 4 ⋅ s n − 3 − 1 ⋅ s n − 4 {\displaystyle s_{n}=4\cdot s_{n-1}-6\cdot s_{n-2}+4\cdot s_{n-3}-1\cdot s_{n-4}} for a degree 3 or less polynomial. A sequence obeying the order-d equation also obeys all higher order equations. These identities may be proved in a number of ways, including via the theory of finite differences. Any sequence of d + 1 {\displaystyle d+1} integer, real, or complex values can be used as initial conditions for a constant-recursive sequence of order d + 1 {\displaystyle d+1} . If the initial conditions lie on a polynomial of degree d − 1 {\displaystyle d-1} or less, then the constant-recursive sequence also obeys a lower order equation. === Enumeration of words in a regular language === Let L {\displaystyle L} be a regular language, and let s n {\displaystyle s_{n}} be the number of words of length n {\displaystyle n} in L {\displaystyle L} . Then ( s n ) n = 0 ∞ {\displaystyle (s_{n})_{n=0}^{\infty }} is constant-recursive. For example, s n = 2 n {\displaystyle s_{n}=2^{n}} for the language of all binary strings, s n = 1 {\displaystyle s_{n}=1} for the language of all unary strings, and s n = F n + 2 {\displaystyle s_{n}=F_{n+2}} for the language of all binary strings that do not have two consecutive ones. 
More generally, any function accepted by a weighted automaton over the unary alphabet Σ = { a } {\displaystyle \Sigma =\{a\}} over the semiring ( R , + , × ) {\displaystyle (\mathbb {R} ,+,\times )} (which is in fact a ring, and even a field) is constant-recursive. === Other examples === The sequences of Jacobsthal numbers, Padovan numbers, Pell numbers, and Perrin numbers are constant-recursive. === Non-examples === The factorial sequence 1 , 1 , 2 , 6 , 24 , 120 , 720 , … {\displaystyle 1,1,2,6,24,120,720,\ldots } is not constant-recursive. More generally, every constant-recursive sequence is asymptotically bounded by an exponential function (see the closed-form characterization below), and the factorial sequence grows faster than this. The Catalan sequence 1 , 1 , 2 , 5 , 14 , 42 , 132 , … {\displaystyle 1,1,2,5,14,42,132,\ldots } is not constant-recursive. This is because the generating function of the Catalan numbers is not a rational function (see the equivalent definitions below). == Equivalent definitions == === In terms of matrices === A sequence ( s n ) n = 0 ∞ {\displaystyle (s_{n})_{n=0}^{\infty }} is constant-recursive of order less than or equal to d {\displaystyle d} if and only if it can be written as s n = u A n v {\displaystyle s_{n}=uA^{n}v} where u {\displaystyle u} is a 1 × d {\displaystyle 1\times d} vector, A {\displaystyle A} is a d × d {\displaystyle d\times d} matrix, and v {\displaystyle v} is a d × 1 {\displaystyle d\times 1} vector, where the elements come from the same domain (integers, rational numbers, algebraic numbers, real numbers, or complex numbers) as the original sequence. 
Specifically, v {\displaystyle v} can be taken to be the first d {\displaystyle d} values of the sequence, A {\displaystyle A} the linear transformation that computes s n + 1 , s n + 2 , … , s n + d {\displaystyle s_{n+1},s_{n+2},\ldots ,s_{n+d}} from s n , s n + 1 , … , s n + d − 1 {\displaystyle s_{n},s_{n+1},\ldots ,s_{n+d-1}} , and u {\displaystyle u} the vector [ 1 , 0 , … , 0 ] {\displaystyle [1,0,\ldots ,0]} , which selects the first entry s n {\displaystyle s_{n}} of A n v {\displaystyle A^{n}v} . === In terms of non-homogeneous linear recurrences === A non-homogeneous linear recurrence is an equation of the form s n = c 1 s n − 1 + c 2 s n − 2 + ⋯ + c d s n − d + c {\displaystyle s_{n}=c_{1}s_{n-1}+c_{2}s_{n-2}+\dots +c_{d}s_{n-d}+c} where c {\displaystyle c} is an additional constant. Any sequence satisfying a non-homogeneous linear recurrence is constant-recursive. This is because subtracting the equation for s n − 1 {\displaystyle s_{n-1}} from the equation for s n {\displaystyle s_{n}} yields a homogeneous recurrence for s n − s n − 1 {\displaystyle s_{n}-s_{n-1}} , from which we can solve for s n {\displaystyle s_{n}} to obtain s n = ( c 1 + 1 ) s n − 1 + ( c 2 − c 1 ) s n − 2 + ⋯ + ( c d − c d − 1 ) s n − d − c d s n − d − 1 . {\displaystyle {\begin{aligned}s_{n}=&(c_{1}+1)s_{n-1}\\&+(c_{2}-c_{1})s_{n-2}+\dots +(c_{d}-c_{d-1})s_{n-d}\\&-c_{d}s_{n-d-1}.\end{aligned}}} === In terms of generating functions === A sequence is constant-recursive precisely when its generating function ∑ n = 0 ∞ s n x n = s 0 + s 1 x 1 + s 2 x 2 + s 3 x 3 + ⋯ {\displaystyle \sum _{n=0}^{\infty }s_{n}x^{n}=s_{0}+s_{1}x^{1}+s_{2}x^{2}+s_{3}x^{3}+\cdots } is a rational function p ( x ) / q ( x ) {\displaystyle p(x)\,/\,q(x)} , where p {\displaystyle p} and q {\displaystyle q} are polynomials and q ( 0 ) = 1 {\displaystyle q(0)=1} . Moreover, the order of the sequence is the minimum d {\displaystyle d} such that it has such a form with deg q ( x ) ≤ d {\displaystyle {\text{deg }}q(x)\leq d} and deg p ( x ) < d {\displaystyle {\text{deg }}p(x)<d} . 
The denominator is the polynomial obtained from the auxiliary polynomial by reversing the order of the coefficients, and the numerator is determined by the initial values of the sequence: ∑ n = 0 ∞ s n x n = b 0 + b 1 x 1 + b 2 x 2 + ⋯ + b d − 1 x d − 1 1 − c 1 x 1 − c 2 x 2 − ⋯ − c d x d , {\displaystyle \sum _{n=0}^{\infty }s_{n}x^{n}={\frac {b_{0}+b_{1}x^{1}+b_{2}x^{2}+\dots +b_{d-1}x^{d-1}}{1-c_{1}x^{1}-c_{2}x^{2}-\dots -c_{d}x^{d}}},} where b n = s n − c 1 s n − 1 − c 2 s n − 2 − ⋯ − c d s n − d . {\displaystyle b_{n}=s_{n}-c_{1}s_{n-1}-c_{2}s_{n-2}-\dots -c_{d}s_{n-d}.} It follows from the above that the denominator q ( x ) {\displaystyle q(x)} must be a polynomial not divisible by x {\displaystyle x} (and in particular nonzero). === In terms of sequence spaces === A sequence ( s n ) n = 0 ∞ {\displaystyle (s_{n})_{n=0}^{\infty }} is constant-recursive if and only if the set of sequences { ( s n + r ) n = 0 ∞ : r ≥ 0 } {\displaystyle \left\{(s_{n+r})_{n=0}^{\infty }:r\geq 0\right\}} is contained in a sequence space (vector space of sequences) whose dimension is finite. That is, ( s n ) n = 0 ∞ {\displaystyle (s_{n})_{n=0}^{\infty }} is contained in a finite-dimensional subspace of C N {\displaystyle \mathbb {C} ^{\mathbb {N} }} closed under the left-shift operator. This characterization is because the order- d {\displaystyle d} linear recurrence relation can be understood as a proof of linear dependence between the sequences ( s n + r ) n = 0 ∞ {\displaystyle (s_{n+r})_{n=0}^{\infty }} for r = 0 , … , d {\displaystyle r=0,\ldots ,d} . An extension of this argument shows that the order of the sequence is equal to the dimension of the sequence space generated by ( s n + r ) n = 0 ∞ {\displaystyle (s_{n+r})_{n=0}^{\infty }} for all r {\displaystyle r} . 
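None of the code below appears in the article; it is a minimal Python sketch of the matrix characterization s_n = u A^n v above, instantiated for the Fibonacci sequence (order d = 2), with the state vector ordered here as (s_n, s_{n+1}).

```python
# Minimal sketch (not from the article): the matrix characterization
# s_n = u * A^n * v, instantiated for the Fibonacci sequence (d = 2).
# Convention in this sketch: the state vector is (s_n, s_{n+1}).

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    """A^n by binary exponentiation."""
    result = [[1, 0], [0, 1]]  # 2x2 identity
    while n > 0:
        if n & 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        n >>= 1
    return result

A = [[0, 1], [1, 1]]  # maps (s_n, s_{n+1}) to (s_{n+1}, s_{n+2})
v = (0, 1)            # initial values (F_0, F_1)
u = (1, 0)            # reads off the first entry of A^n v

def fib_matrix(n):
    """F_n computed as u * (A^n v)."""
    An = mat_pow(A, n)
    w = (An[0][0] * v[0] + An[0][1] * v[1],
         An[1][0] * v[0] + An[1][1] * v[1])
    return u[0] * w[0] + u[1] * w[1]

# Cross-check against the defining recurrence F_n = F_{n-1} + F_{n-2}.
fibs = [0, 1]
for _ in range(18):
    fibs.append(fibs[-1] + fibs[-2])
assert [fib_matrix(n) for n in range(20)] == fibs
```

With the state vector written in the opposite order, the same identity holds with u picking off the last entry instead of the first; the choice is purely a matter of convention.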
== Closed-form characterization == Constant-recursive sequences admit the following unique closed-form characterization using exponential polynomials: every constant-recursive sequence can be written in the form s n = z n + k 1 ( n ) r 1 n + k 2 ( n ) r 2 n + ⋯ + k e ( n ) r e n , {\displaystyle s_{n}=z_{n}+k_{1}(n)r_{1}^{n}+k_{2}(n)r_{2}^{n}+\cdots +k_{e}(n)r_{e}^{n},} for all n ≥ 0 {\displaystyle n\geq 0} , where The term z n {\displaystyle z_{n}} is a sequence which is zero for all n ≥ d {\displaystyle n\geq d} (where d {\displaystyle d} is the order of the sequence); The terms k 1 ( n ) , k 2 ( n ) , … , k e ( n ) {\displaystyle k_{1}(n),k_{2}(n),\ldots ,k_{e}(n)} are complex polynomials; and The terms r 1 , r 2 , … , r e {\displaystyle r_{1},r_{2},\ldots ,r_{e}} are distinct complex constants. This characterization is exact: every sequence of complex numbers that can be written in the above form is constant-recursive. For example, the Fibonacci number F n {\displaystyle F_{n}} is written in this form using Binet's formula: F n = 1 5 φ n − 1 5 ψ n , {\displaystyle F_{n}={\frac {1}{\sqrt {5}}}\varphi ^{n}-{\frac {1}{\sqrt {5}}}\psi ^{n},} where φ = ( 1 + 5 ) / 2 ≈ 1.61803 … {\displaystyle \varphi =(1+{\sqrt {5}})\,/\,2\approx 1.61803\ldots } is the golden ratio and ψ = − 1 / φ {\displaystyle \psi =-1\,/\,\varphi } . These are the roots of the equation x 2 − x − 1 = 0 {\displaystyle x^{2}-x-1=0} . In this case, e = 2 {\displaystyle e=2} , z n = 0 {\displaystyle z_{n}=0} for all n {\displaystyle n} , k 1 ( n ) = 1 / 5 {\displaystyle k_{1}(n)=1\,/\,{\sqrt {5}}} and k 2 ( n ) = − 1 / 5 {\displaystyle k_{2}(n)=-1\,/\,{\sqrt {5}}} are both constant polynomials, r 1 = φ {\displaystyle r_{1}=\varphi } , and r 2 = ψ {\displaystyle r_{2}=\psi } . The term z n {\displaystyle z_{n}} is only needed when c d = 0 {\displaystyle c_{d}=0} , in which case it corrects for the fact that some initial values may be exceptions to the general recurrence; if c d ≠ 0 {\displaystyle c_{d}\neq 0} , it can be taken to be zero.
In particular, z n = 0 {\displaystyle z_{n}=0} for all n ≥ d {\displaystyle n\geq d} . The complex numbers r 1 , … , r d {\displaystyle r_{1},\ldots ,r_{d}} are the roots of the characteristic polynomial of the recurrence: x d − c 1 x d − 1 − ⋯ − c d − 1 x − c d {\displaystyle x^{d}-c_{1}x^{d-1}-\dots -c_{d-1}x-c_{d}} whose coefficients are the same as those of the recurrence. We call r 1 , … , r d {\displaystyle r_{1},\ldots ,r_{d}} the characteristic roots of the recurrence. If the sequence consists of integers or rational numbers, the roots will be algebraic numbers. If the d {\displaystyle d} roots r 1 , r 2 , … , r d {\displaystyle r_{1},r_{2},\dots ,r_{d}} are all distinct, then the polynomials k i ( n ) {\displaystyle k_{i}(n)} are all constants, which can be determined from the initial values of the sequence. If the roots of the characteristic polynomial are not distinct, and r i {\displaystyle r_{i}} is a root of multiplicity m {\displaystyle m} , then k i ( n ) {\displaystyle k_{i}(n)} in the formula has degree m − 1 {\displaystyle m-1} . For instance, if the characteristic polynomial factors as ( x − r ) 3 {\displaystyle (x-r)^{3}} , with the same root r occurring three times, then the n {\displaystyle n} th term is of the form s n = ( a + b n + c n 2 ) r n . {\displaystyle s_{n}=(a+bn+cn^{2})r^{n}.} == Closure properties == === Examples === The sum of two constant-recursive sequences is also constant-recursive. For example, the sum of s n = 2 n {\displaystyle s_{n}=2^{n}} and t n = n {\displaystyle t_{n}=n} is u n = 2 n + n {\displaystyle u_{n}=2^{n}+n} ( 1 , 3 , 6 , 11 , 20 , … {\displaystyle 1,3,6,11,20,\ldots } ), which satisfies the recurrence u n = 4 u n − 1 − 5 u n − 2 + 2 u n − 3 {\displaystyle u_{n}=4u_{n-1}-5u_{n-2}+2u_{n-3}} . The new recurrence can be found by adding the generating functions for each sequence. Similarly, the product of two constant-recursive sequences is constant-recursive.
For example, the product of s n = 2 n {\displaystyle s_{n}=2^{n}} and t n = n {\displaystyle t_{n}=n} is u n = n ⋅ 2 n {\displaystyle u_{n}=n\cdot 2^{n}} ( 0 , 2 , 8 , 24 , 64 , … {\displaystyle 0,2,8,24,64,\ldots } ), which satisfies the recurrence u n = 4 u n − 1 − 4 u n − 2 {\displaystyle u_{n}=4u_{n-1}-4u_{n-2}} . The left-shift sequence u n = s n + 1 {\displaystyle u_{n}=s_{n+1}} and the right-shift sequence u n = s n − 1 {\displaystyle u_{n}=s_{n-1}} (with u 0 = 0 {\displaystyle u_{0}=0} ) are constant-recursive because they satisfy the same recurrence relation. For example, because s n = 2 n {\displaystyle s_{n}=2^{n}} is constant-recursive, so is u n = 2 n + 1 {\displaystyle u_{n}=2^{n+1}} . === List of operations === In general, constant-recursive sequences are closed under the following operations, where s = ( s n ) n ∈ N , t = ( t n ) n ∈ N {\displaystyle s=(s_{n})_{n\in \mathbb {N} },t=(t_{n})_{n\in \mathbb {N} }} denote constant-recursive sequences, f ( x ) , g ( x ) {\displaystyle f(x),g(x)} are their generating functions, and d , e {\displaystyle d,e} are their orders, respectively. These operations include the term-wise sum s + t {\displaystyle s+t} , the term-wise product s t {\displaystyle st} , the Cauchy product, whose n {\displaystyle n} th term is ∑ i = 0 n s i t n − i {\displaystyle \sum _{i=0}^{n}s_{i}t_{n-i}} , and the Cauchy inverse of s {\displaystyle s} (the sequence whose Cauchy product with s {\displaystyle s} is 1 , 0 , 0 , … {\displaystyle 1,0,0,\ldots } ). The closure under term-wise addition and multiplication follows from the closed-form characterization in terms of exponential polynomials. The closure under Cauchy product follows from the generating function characterization. The requirement s 0 = 1 {\displaystyle s_{0}=1} for Cauchy inverse is necessary for the case of integer sequences, but can be replaced by s 0 ≠ 0 {\displaystyle s_{0}\neq 0} if the sequence is over any field (rational, algebraic, real, or complex numbers). == Behavior == === Zeros === Despite satisfying a simple local formula, a constant-recursive sequence can exhibit complicated global behavior. Define a zero of a constant-recursive sequence to be a nonnegative integer n {\displaystyle n} such that s n = 0 {\displaystyle s_{n}=0} .
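As a small illustration (not from the article) of how the zero set of a constant-recursive sequence can be a periodic set, consider the order-2 sequence s_n = 1 + (−1)^n:

```python
# Illustration (not from the article): the zero set of the order-2
# sequence s_n = 1 + (-1)^n, which satisfies s_n = s_{n-2} with
# s_0 = 2, s_1 = 0. Its zeros are exactly the odd indices, an
# eventually periodic set.

def sequence(num_terms):
    s = [2, 0]  # initial values s_0, s_1
    while len(s) < num_terms:
        s.append(s[-2])  # recurrence s_n = s_{n-2}
    return s[:num_terms]

terms = sequence(50)
zeros = [n for n, value in enumerate(terms) if value == 0]
assert zeros == list(range(1, 50, 2))  # the odd n below 50
```

The theorem discussed next guarantees that the zero set of any constant-recursive sequence over a field of characteristic zero is eventually periodic in this way.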
The Skolem–Mahler–Lech theorem states that the zeros of the sequence are eventually repeating: there exist constants M {\displaystyle M} and N {\displaystyle N} such that for all n > M {\displaystyle n>M} , s n = 0 {\displaystyle s_{n}=0} if and only if s n + N = 0 {\displaystyle s_{n+N}=0} . This result holds for a constant-recursive sequence over the complex numbers, or more generally, over any field of characteristic zero. === Decision problems === The pattern of zeros in a constant-recursive sequence can also be investigated from the perspective of computability theory. To do so, the sequence s n {\displaystyle s_{n}} must be given a finite description; this can be done if the sequence is over the integers or rational numbers, or even over the algebraic numbers. Given such an encoding for sequences s n {\displaystyle s_{n}} , the following problems can be studied: existence of a zero (the Skolem problem: is s n = 0 {\displaystyle s_{n}=0} for some n {\displaystyle n} ?), infinitely many zeros (is s n = 0 {\displaystyle s_{n}=0} for infinitely many n {\displaystyle n} ?), positivity (is s n > 0 {\displaystyle s_{n}>0} for all n {\displaystyle n} ?), and eventual positivity (is s n > 0 {\displaystyle s_{n}>0} for all sufficiently large n {\displaystyle n} ?). Because the square of a constant-recursive sequence s n 2 {\displaystyle s_{n}^{2}} is still constant-recursive (see closure properties), the existence-of-a-zero problem reduces to positivity, and infinitely-many-zeros reduces to eventual positivity. Other problems also reduce to those above: for example, whether s n = c {\displaystyle s_{n}=c} for some n {\displaystyle n} reduces to existence-of-a-zero for the sequence s n − c {\displaystyle s_{n}-c} . As a second example, for sequences in the real numbers, weak positivity (is s n ≥ 0 {\displaystyle s_{n}\geq 0} for all n {\displaystyle n} ?) reduces to positivity of the sequence − s n {\displaystyle -s_{n}} (because the answer must be negated, this is a Turing reduction). The Skolem–Mahler–Lech theorem would provide answers to some of these questions, except that its proof is non-constructive.
It states that for all n > M {\displaystyle n>M} , the zeros are repeating; however, the value of M {\displaystyle M} is not known to be computable, so this does not lead to a solution to the existence-of-a-zero problem. On the other hand, the exact pattern which repeats after n > M {\displaystyle n>M} is computable. This is why the infinitely-many-zeros problem is decidable: just determine if the infinitely-repeating pattern is empty. Decidability results are known when the order of a sequence is restricted to be small. For example, the Skolem problem is decidable for algebraic sequences of order up to 4. It is also known to be decidable for reversible integer sequences up to order 7, that is, sequences that may be continued backwards in the integers. Decidability results are also known under the assumption of certain unproven conjectures in number theory. For example, decidability is known for rational sequences of order up to 5 subject to the Skolem conjecture (also known as the exponential local-global principle). Decidability is also known for all simple rational sequences (those with simple characteristic polynomial) subject to the Skolem conjecture and the weak p-adic Schanuel conjecture. === Degeneracy === Let r 1 , … , r d {\displaystyle r_{1},\ldots ,r_{d}} be the characteristic roots of a constant-recursive sequence s {\displaystyle s} . We say that the sequence is degenerate if any ratio r i / r j {\displaystyle r_{i}/r_{j}} is a root of unity, for i ≠ j {\displaystyle i\neq j} .
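The definition of degeneracy can be checked numerically; the following sketch (not from the article) does so for the sequence s_n = −s_{n−2}, whose characteristic polynomial x^2 + 1 has roots i and −i:

```python
# Illustration (not from the article): degeneracy of the sequence
# s_n = -s_{n-2}, whose characteristic polynomial x^2 + 1 has the
# roots i and -i. The ratio of the two roots is -1, a second root
# of unity, so the sequence is degenerate.

roots = [1j, -1j]  # characteristic roots of x^2 + 1

def is_root_of_unity(z, max_order=12, tol=1e-9):
    """Numerically test whether z^k = 1 for some 1 <= k <= max_order."""
    return any(abs(z ** k - 1) < tol for k in range(1, max_order + 1))

ratios = [r / s for r in roots for s in roots if r != s]
assert all(is_root_of_unity(q) for q in ratios)  # hence degenerate
```

Concretely, with initial values 1, 0 this recurrence generates 1, 0, −1, 0, 1, 0, …, whose sign pattern reflects the root-of-unity ratio.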
It is often easier to study non-degenerate sequences; in a certain sense, one can reduce to this case using the following theorem: if s {\displaystyle s} has order d {\displaystyle d} and is contained in a number field K {\displaystyle K} of degree k {\displaystyle k} over Q {\displaystyle \mathbb {Q} } , then there is a constant M ( k , d ) ≤ { exp ( 2 d ( 3 log d ) 1 / 2 ) if k = 1 , 2 k d + 1 if k ≥ 2 {\displaystyle M(k,d)\leq {\begin{cases}\exp(2d(3\log d)^{1/2})&{\text{if }}k=1,\\2^{kd+1}&{\text{if }}k\geq 2\end{cases}}} such that for some M ≤ M ( k , d ) {\displaystyle M\leq M(k,d)} each subsequence s M n + ℓ {\displaystyle s_{Mn+\ell }} is either identically zero or non-degenerate. == Generalizations == A D-finite or holonomic sequence is a natural generalization where the coefficients of the recurrence are allowed to be polynomial functions of n {\displaystyle n} rather than constants. A k {\displaystyle k} -regular sequence satisfies linear recurrences with constant coefficients, but the recurrences take a different form. Rather than s n {\displaystyle s_{n}} being a linear combination of s m {\displaystyle s_{m}} for some integers m {\displaystyle m} that are close to n {\displaystyle n} , each term s n {\displaystyle s_{n}} in a k {\displaystyle k} -regular sequence is a linear combination of s m {\displaystyle s_{m}} for some integers m {\displaystyle m} whose base- k {\displaystyle k} representations are close to that of n {\displaystyle n} . Constant-recursive sequences can be thought of as 1 {\displaystyle 1} -regular sequences, where the base-1 representation of n {\displaystyle n} consists of n {\displaystyle n} copies of the digit 1 {\displaystyle 1} . == Notes == == References == == External links == "OEIS Index Rec". OEIS index to a few thousand examples of linear recurrences, sorted by order (number of terms) and signature (vector of values of the constant coefficients)
Wikipedia:Constantin Carathéodory#0
Constantin Carathéodory (Greek: Κωνσταντίνος Καραθεοδωρή, romanized: Konstantinos Karatheodori; 13 September 1873 – 2 February 1950) was a Greek mathematician who spent most of his professional career in Germany. He made significant contributions to real and complex analysis, the calculus of variations, and measure theory. He also created an axiomatic formulation of thermodynamics. Carathéodory is considered one of the greatest mathematicians of his era and the most renowned Greek mathematician since antiquity. == Origins == Constantin Carathéodory was born in 1873 in Berlin to Greek parents and grew up in Brussels. His father Stephanos, a lawyer, served as the Ottoman ambassador to Belgium, St. Petersburg and Berlin. His mother, Despina, née Petrokokkinos, was from the island of Chios. The Carathéodory family, originally from Bosna, was well established and respected in Constantinople, and its members held many important governmental positions. His grandfather, the Ottoman Greek physician Constantinos Caratheodory, was the personal doctor to Sultan Abdülmecit I. The Carathéodory family spent 1874–75 in Constantinople, where Constantin's paternal grandfather lived, while his father Stephanos was on leave. Then in 1875 they went to Brussels when Stephanos was appointed there as Ottoman Ambassador. In Brussels, Constantin's younger sister Julia was born. The year 1879 was a tragic one for the family since Constantin's paternal grandfather died in that year, but much more tragically, Constantin's mother Despina died of pneumonia in Cannes. Constantin's maternal grandmother took on the task of bringing up Constantin and Julia in his father's home in Belgium. They employed a German maid who taught the children to speak German. Constantin was already bilingual in French and Greek by this time. Constantin began his formal schooling at a private school in Vanderstock in 1881. 
He left after two years and then spent time with his father on a visit to Berlin, and also spent the winters of 1883–84 and 1884–85 on the Italian Riviera. Back in Brussels in 1885 he attended a grammar school for a year where he first began to become interested in mathematics. In 1886, he entered the high school Athénée Royal d'Ixelles and studied there until his graduation in 1891. Twice during his time at this school Constantin won a prize as the best mathematics student in Belgium. At this stage Carathéodory began training as a military engineer. He attended the École Militaire de Belgique from October 1891 to May 1895 and he also studied at the École d'Application from 1893 to 1896. In 1897 a war broke out between the Ottoman Empire and Greece. This put Carathéodory in a difficult position since he sided with the Greeks, yet his father served the government of the Ottoman Empire. Since he was a trained engineer he was offered a job in the British colonial service. This job took him to Egypt where he worked on the construction of the Assiut dam until April 1900. During periods when construction work had to stop due to floods, he studied mathematics from some textbooks he had with him, such as Jordan's Cours d'Analyse and Salmon's text on the analytic geometry of conic sections. He also visited the Cheops pyramid and made measurements which he wrote up and published in 1901. He also published a book on Egypt in the same year which contained a wealth of information on the history and geography of the country. == Studies and university career == Carathéodory studied engineering in Belgium at the Royal Military Academy, where he was considered a charismatic and brilliant student. === University career === 1900 Studies at University of Berlin. 1902 Completed graduation at University of Göttingen (1904 Ph.D., 1905 Habilitation) 1908 Dozent at Bonn 1909 Ordinary Professor at Hannover Technical High School. 1910 Ordinary Professor at Breslau Technical High School. 
1913 Professor following Klein at University of Göttingen. 1919 Professor at University of Berlin 1919 Elected to Prussian Academy of Science. 1920 University Dean at Ionian University of Smyrna (later, University of the Aegean). 1922 Professor at University of Athens. 1922 Professor at Athens Polytechnic. 1924 Professor following Lindemann at University of Munich. 1938 Retirement from professorship. Continued working at Bavarian Academy of Science === Doctoral students === Carathéodory had about 20 doctoral students among these being Hans Rademacher, known for his work on analysis and number theory, and Paul Finsler known for his creation of Finsler space. === Academic contacts in Germany === Carathéodory had numerous contacts in Germany. They included such famous names as: Hermann Minkowski, David Hilbert, Felix Klein, Albert Einstein, Edmund Landau, Hermann Amandus Schwarz, and Lipót Fejér. During the difficult period of World War II, his close associates at the Bavarian Academy of Sciences were Perron and Tietze. Einstein, then a member of the Prussian Academy of Sciences in Berlin, was working on his general theory of relativity when he contacted Carathéodory for clarifications on the Hamilton-Jacobi equation and canonical transformations. He wanted to see a satisfactory derivation of the former and the origins of the latter. Einstein told Carathéodory his derivation was "beautiful" and recommended its publication in the Annalen der Physik. Einstein employed the former in a 1917 paper titled Zum Quantensatz von Sommerfeld und Epstein (On the Quantum Theorem of Sommerfeld and Epstein). Carathéodory explained some fundamental details of the canonical transformations and referred Einstein to E.T. Whittaker's Analytical Dynamics. Einstein was trying to solve the problem of "closed time-lines" or the geodesics corresponding to the closed trajectory of light and free particles in a static universe, which he introduced in 1917. 
Landau and Schwarz stimulated his interest in the study of complex analysis. === Academic contacts in Greece === While in Germany, Carathéodory retained numerous links with the Greek academic world, details of which can be found in Georgiadou's book. He was directly involved with the reorganization of Greek universities. An especially close friend and colleague in Athens was Nicolaos Kritikos, who had attended his lectures at Göttingen, later going with him to Smyrna, then becoming professor at Athens Polytechnic. Kritikos and Carathéodory helped the Greek topologist Christos Papakyriakopoulos take a doctorate in topology at Athens University in 1943 under very difficult circumstances. While teaching at Athens University, Carathéodory had Evangelos Stamatis as an undergraduate student, who subsequently achieved considerable distinction as a scholar of ancient Greek mathematical classics. == Works == === Calculus of variations === In his doctoral dissertation, Carathéodory showed how to extend solutions to discontinuous cases and studied isoperimetric problems. Previously, between the mid-1700s and the mid-1800s, Leonhard Euler, Adrien-Marie Legendre, and Carl Gustav Jacob Jacobi were able to establish necessary but insufficient conditions for the existence of a strong relative minimum. In 1879, Karl Weierstrass added a fourth condition which does guarantee that such a minimum exists. Carathéodory constructed his method for deriving sufficient conditions based on the use of the Hamilton–Jacobi equation to construct a field of extremals. The ideas are closely related to light propagation in optics. The method became known as Carathéodory's method of equivalent variational problems or the royal road to the calculus of variations. A key advantage of Carathéodory's work on this topic is that it illuminates the relation between the calculus of variations and partial differential equations.
It allows for quick and elegant derivations of conditions of sufficiency in the calculus of variations and leads directly to the Euler-Lagrange equation and the Weierstrass condition. He published his Variationsrechnung und Partielle Differentialgleichungen Erster Ordnung (Calculus of Variations and First-order Partial Differential Equations) in 1935. More recently, Carathéodory's work on the calculus of variations and the Hamilton-Jacobi equation has been taken up in the theory of optimal control and dynamic programming. === Convex geometry === Carathéodory's theorem in convex geometry states that if a point x {\displaystyle x} of R d {\displaystyle \mathbb {R} ^{d}} lies in the convex hull of a set P {\displaystyle P} , then x {\displaystyle x} can be written as the convex combination of at most d + 1 {\displaystyle d+1} points in P {\displaystyle P} . Namely, there is a subset P ′ {\displaystyle P'} of P {\displaystyle P} consisting of d + 1 {\displaystyle d+1} or fewer points such that x {\displaystyle x} lies in the convex hull of P ′ {\displaystyle P'} . Equivalently, x {\displaystyle x} lies in an r {\displaystyle r} -simplex with vertices in P {\displaystyle P} , where r ≤ d {\displaystyle r\leq d} . The smallest r {\displaystyle r} that makes the last statement valid for each x {\displaystyle x} in the convex hull of P {\displaystyle P} is defined as the Carathéodory number of P {\displaystyle P} . Depending on the properties of P {\displaystyle P} , upper bounds lower than the one provided by Carathéodory's theorem can be obtained. He is credited with the authorship of the Carathéodory conjecture claiming that a closed convex surface admits at least two umbilic points. The conjecture was proven in 2024 by Brendan Guilfoyle and Wilhelm Klingenberg. === Real analysis === He proved an existence theorem for the solution to ordinary differential equations under mild regularity conditions.
Another theorem of his on the derivative of a function at a point could be used to prove the Chain Rule and the formula for the derivative of inverse functions. === Complex analysis === He greatly extended the theory of conformal transformation proving his theorem about the extension of conformal mapping to the boundary of Jordan domains. In studying boundary correspondence he originated the theory of prime ends. He exhibited an elementary proof of the Schwarz lemma. Carathéodory was also interested in the theory of functions of multiple complex variables. In his investigations on this subject he sought analogs of classical results from the single-variable case. He proved that a ball in C 2 {\displaystyle \mathbb {C} ^{2}} is not holomorphically equivalent to the bidisc. === Theory of measure === He is credited with the Carathéodory extension theorem which is fundamental to modern measure theory. Later Carathéodory extended the theory from sets to Boolean algebras. === Thermodynamics === Thermodynamics had been a subject dear to Carathéodory since his time in Belgium. In 1909, he published a pioneering work "Investigations on the Foundations of Thermodynamics" in which he formulated the second law of thermodynamics axiomatically, that is, without the use of Carnot engines and refrigerators and only by mathematical reasoning. This is yet another version of the second law, alongside the statements of Clausius, and of Kelvin and Planck. Carathéodory's version attracted the attention of some of the top physicists of the time, including Max Planck, Max Born, and Arnold Sommerfeld. According to Bailyn's survey of thermodynamics, Carathéodory's approach is called "mechanical," rather than "thermodynamic." Max Born acclaimed this "first axiomatically rigid foundation of thermodynamics" and he expressed his enthusiasm in his letters to Einstein. 
However, Max Planck had some misgivings: while he was impressed by Carathéodory's mathematical prowess, he did not accept that this was a fundamental formulation, given the statistical nature of the second law. In his theory he simplified the basic concepts; for instance, heat is not an essential concept but a derived one. He formulated the axiomatic principle of irreversibility in thermodynamics, stating that inaccessibility of states is related to the existence of entropy, where temperature is the integration function. The second law of thermodynamics was expressed via the following axiom: "In the neighbourhood of any initial state, there are states which cannot be approached arbitrarily close through adiabatic changes of state." In this connection he coined the term adiabatic accessibility. === Optics === Carathéodory's work in optics is closely related to his method in the calculus of variations. In 1926 he gave a strict and general proof that no system of lenses and mirrors can avoid aberration, except for the trivial case of plane mirrors. In his later work he gave the theory of the Schmidt telescope. In his Geometrische Optik (1937), Carathéodory demonstrated the equivalence of Huygens' principle and Fermat's principle starting from the former using Cauchy's theory of characteristics. He argued that an important advantage of his approach was that it covers the integral invariants of Henri Poincaré and Élie Cartan and completes the Malus law. He explained that in his investigations in optics, Pierre de Fermat conceived a minimum principle similar to that enunciated by Hero of Alexandria to study reflection. === Historical === During the Second World War Carathéodory edited two volumes of Euler's Complete Works dealing with the Calculus of Variations, which were submitted for publication in 1946.
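As a numerical illustration (not part of the article) of Carathéodory's convex-geometry theorem described in the Works section above, the sketch below finds, for a point in the convex hull of a planar set P, at most d + 1 = 3 points of P whose convex hull already contains it; all function names are illustrative.

```python
# Sketch of Caratheodory's theorem in the plane (d = 2): a point in
# the convex hull of a finite set P can be written as a convex
# combination of at most d + 1 = 3 points of P. Illustrative only.
from itertools import combinations

def barycentric(p, a, b, c):
    """Barycentric coordinates of p with respect to triangle (a, b, c)."""
    det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
    if det == 0:
        return None  # degenerate (collinear) triangle
    l1 = ((b[0]-p[0])*(c[1]-p[1]) - (c[0]-p[0])*(b[1]-p[1])) / det
    l2 = ((c[0]-p[0])*(a[1]-p[1]) - (a[0]-p[0])*(c[1]-p[1])) / det
    return (l1, l2, 1 - l1 - l2)

def caratheodory_triangle(p, points):
    """Find at most 3 points of `points` whose convex hull contains p."""
    for tri in combinations(points, 3):
        coords = barycentric(p, *tri)
        if coords and all(l >= -1e-12 for l in coords):
            return tri, coords
    return None

P = [(0, 0), (4, 0), (0, 4), (4, 4)]   # a square
p = (1, 2)                              # a point inside the hull of P
tri, coords = caratheodory_triangle(p, P)
assert sum(coords) == 1 and all(l >= 0 for l in coords)
# p is recovered as the convex combination sum(l_i * v_i):
x = sum(l * v[0] for l, v in zip(coords, tri))
y = sum(l * v[1] for l, v in zip(coords, tri))
assert (x, y) == (1.0, 2.0)
```

Here the point (1, 2), given inside the hull of four points, is expressed using only three of them, as the theorem guarantees for d = 2.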
== The University of Smyrna == At the time, Athens was the only major educational centre in the wider area and had limited capacity to sufficiently satisfy the growing educational needs of the eastern part of the Aegean Sea and the Balkans. Carathéodory, who was a professor at the University of Berlin at the time, proposed the establishment of a new university - the difficulties regarding the establishment of a Greek university in Constantinople led him to consider three other cities: Thessaloniki, Chios and Smyrna. At the invitation of the Greek Prime Minister Eleftherios Venizelos, he submitted a plan on 20 October 1919 for the creation of a new university at Smyrna in Asia Minor, to be named Ionian University of Smyrna. In 1920 Carathéodory was appointed dean of the university and took a major part in establishing the institution, touring Europe to buy books and equipment. The university however never actually admitted students, due to the War in Asia Minor which ended in the Great Fire of Smyrna. Carathéodory managed to save books from the library and was only rescued at the last moment by a journalist who took him by rowboat to the battleship Naxos which was standing by. Carathéodory brought to Athens some of the university library and stayed there, teaching at the university and technical school until 1924. In 1924 Carathéodory was appointed professor of mathematics at the University of Munich, and held this position until retirement in 1938. He later worked at the Bavarian Academy of Sciences until his death in 1950. The new Greek university in the broader area of the Southeast Mediterranean region, as originally envisioned by Carathéodory, finally materialised with the establishment of the Aristotle University of Thessaloniki in 1925. == Linguistic and oratorical talents == Carathéodory excelled at languages, much like many members of his family. 
Greek and French were his first languages, and he mastered German with such perfection, that his writings composed in the German language are stylistic masterworks. Carathéodory also spoke and wrote English, Italian, Turkish, and the ancient languages without any effort. Such an impressive linguistic arsenal enabled him to communicate and exchange ideas directly with other mathematicians during his numerous travels, and greatly extended his fields of knowledge. Much more than that, Carathéodory was a treasured conversation partner for his fellow professors in the Munich Department of Philosophy. The well-respected German philologist and professor of ancient languages, Kurt von Fritz, praised Carathéodory on the grounds that from him one could learn an endless amount about the old and new Greece, the old Greek language, and Hellenic mathematics. Von Fritz conducted numerous philosophical discussions with Carathéodory. The mathematician sent his son Stephanos and daughter Despina to a German high school, but they also obtained daily additional instruction in Greek language and culture from a Greek priest, and at home he allowed them to speak Greek only. Carathéodory was a talented public speaker, and was often invited to give speeches. In 1936, it was he who handed out the first ever Fields Medals at the meeting of the International Congress of Mathematicians in Oslo, Norway. == Legacy == In 2002, in recognition of his achievements, the University of Munich named one of the largest lecture rooms in the mathematical institute the Constantin-Carathéodory Lecture Hall. In the town of Nea Vyssa, Caratheodory's ancestral home, a unique family museum is to be found. The museum is located in the central square of the town near to its church, and includes a number of Karatheodory's personal items, as well as letters he exchanged with Albert Einstein. More information is provided at the original website of the club, http://www.s-karatheodoris.gr. 
At the same time, Greek authorities had long since intended to create a museum honoring Karatheodoris in Komotini, a major town of the northeastern Greek region, more than 200 km away from his home town above. On 21 March 2009, the "Karatheodoris" Museum (Καραθεοδωρής) opened its gates to the public in Komotini. The coordinator of the museum, Athanasios Lipordezis (Αθανάσιος Λιπορδέζης), has noted that the museum provides a home for original manuscripts of the mathematician running to about 10,000 pages, including correspondence with the German mathematician Arthur Rosenthal for the algebraization of measure. At the showcase, visitors are also able to view the books " Gesammelte mathematische Schriften Band 1,2,3,4 ", "Mass und ihre Algebraiserung", " Reelle Functionen Band 1", " Zahlen/Punktionen Funktionen ", and a number of others. Handwritten letters by Carathéodory to Albert Einstein and Hellmuth Kneser, as well as photographs of the Carathéodory family, are on display. Efforts to furnish the museum with more exhibits are ongoing. == Publications == === Journal articles === A complete list of Carathéodory's journal article publications can be found in his Collected Works(Ges. Math. Schr.). Notable publications are: Über die kanonischen Veränderlichen in der Variationsrechnung der mehrfachen Integrale Über das Schwarzsche Lemma bei analytischen Funktionen von zwei komplexen Veränderlichen Über die diskontinuierlichen Lösungen in der Variationsrechnung. Diss. Göttingen Univ. 1904; Ges. Math. Schr. I 3–79. Über die starken Maxima und Minima bei einfachen Integralen. Habilitationsschrift Göttingen 1905; Math. Annalen 62 1906 449–503; Ges. Math. Schr. I 80–142. Untersuchungen über die Grundlagen der Thermodynamik, Math. Ann. 67 (1909) pp. 355–386; Ges. Math. Schr. II 131–166. Über das lineare Mass von Punktmengen – eine Verallgemeinerung des Längenbegriffs., Gött. Nachr. (1914) 404–406; Ges. Math. Schr. IV 249–275. 
Elementarer Beweis für den Fundamentalsatz der konformen Abbildungen. Schwarzsche Festschrift, Berlin 1914; Ges. Math. Schr. IV 249–275. Zur Axiomatik der speziellen Relativitätstheorie. Sitzb. Preuss. Akad. Wiss. (1924) 12–27; Ges. Math. Schr. II 353–373. Variationsrechnung in Frank P. & von Mises (eds): Die Differential- und Integralgleichungen der Mechanik und Physik, Braunschweig 1930 (Vieweg); New York 1961 (Dover) 227–279; Ges. Math. Schr. I 312–370. Entwurf für eine Algebraisierung des Integralbegriffs, Sitzber. Bayer. Akad. Wiss. (1938) 27–69; Ges. Math. Schr. IV 302–342. === Books === Carathéodory, Constantin (1918), Vorlesungen über reelle Funktionen (3rd ed.), Leipzig: Teubner, ISBN 978-0-8284-0038-1, MR 0225940. Reprinted 1968 (Chelsea) Conformal Representation, Cambridge 1932 (Cambridge Tracts in Mathematics and Physics) Geometrische Optik, Berlin, 1937 Elementare Theorie des Spiegelteleskops von B. Schmidt (Elementary Theory of B. Schmidt's Reflecting Telescope), Leipzig: Teubner, 1940, 36 pp.; Ges. math. Schr. II 234–279 Funktionentheorie I, II, Basel 1950, 1961 (Birkhäuser). English translation: Theory of Functions of a Complex Variable, 2 vols, New York, Chelsea Publishing Company, 3rd ed. 1958 Mass und Integral und ihre Algebraisierung, Basel 1956. English translation, Measure and Integral and Their Algebraisation, New York, Chelsea Publishing Company, 1963 Variationsrechnung und partielle Differentialgleichungen erster Ordnung, Leipzig, 1935. English translation next reference Calculus of Variations and Partial Differential Equations of the First Order, 2 vols. vol. I 1965, vol. II 1967 Holden-Day. Gesammelte mathematische Schriften München 1954–7 (Beck) I–V. 
== See also == Domain (mathematical analysis) Nemytskii operator Herbert Callen, who also sought an axiomatic formulation of thermodynamics == Notes == == References == === Books === Maria Georgiadou, Constantin Carathéodory: Mathematics and Politics in Turbulent Times, Berlin-Heidelberg: Springer Verlag, 2004. ISBN 3-540-44258-8. Themistocles M. Rassias (editor) (1991) Constantin Caratheodory: An International Tribute, Teaneck, NJ: World Scientific Publishing Co., ISBN 981-02-0544-9. Nicolaos K. Artemiadis; translated by Nikolaos E. Sofronidis [2000] (2004), History of Mathematics: From a Mathematician's Vantage Point, Rhode Island, USA: American Mathematical Society, pp. 270–4, 281, ISBN 0-8218-3403-7. Constantin Carathéodory in his...origins. International Congress at Vissa-Orestiada, Greece, 1–4 September 2000. Proceedings: T. Vougiouklis (ed.), Hadronic Press, Palm Harbor FL 2001. === Biographical articles === C. Carathéodory, Autobiographische Notizen, (in German) Wiener Akad. Wiss. 1954–57, vol. V, pp. 389–408. Reprinted in Carathéodory's Collected Writings vol. V. English translation in A. Shields, Carathéodory and conformal mapping, The Mathematical Intelligencer 10 (1) (1988), 18–22. O. Perron, Obituary: Constantin Carathéodory, Jahresberichte der Deutschen Mathematiker Vereinigung 55 (1952), 39–51. N. Sakellariou, Obituary: Constantin Carathéodory (Greek), Bull. Soc. Math. Grèce 26 (1952), 1–13. H. Tietze, Obituary: Constantin Carathéodory, Arch. Math. 2 (1950), 241–245. H. Behnke, Carathéodorys Leben und Wirken, in A. Panayotopolos (ed.), Proceedings of C. Carathéodory International Symposium, September 1973, Athens (Athens, 1974), 17–33. 
Bulirsch R., Hardt M., (2000): Constantin Carathéodory: Life and Work, International Congress: "Constantin Carathéodory", 1–4 September 2000, Vissa, Orestiada, Greece === Encyclopaedias and reference works === Chambers Biographical Dictionary (1997), Constantine Carathéodory, 6th ed., Edinburgh: Chambers Harrap Publishers Ltd, pp. 270–1, ISBN 0-550-10051-2 (also available online). The New Encyclopædia Britannica (1992), Constantine Carathéodory, 15th ed., vol. 2, USA: The University of Chicago, Encyclopædia Britannica, Inc., p. 842, ISBN 0-85229-553-7. New edition Online entry H. Boerner, Biography of Carathéodory in Dictionary of Scientific Biography (New York 1970–1990). === Conferences === C. Carathéodory International Symposium, Athens, Greece September 1973. Proceedings edited by A. Panayiotopoulos (Greek Mathematical Society) 1975. Online Conference on Advances in Convex Analysis and Global Optimization (Honoring the memory of C. Carathéodory) June 5–9, 2000, Pythagorion, Samos, Greece. Online. International Congress: Carathéodory in his ... origins, September 1–4, 2000, Vissa Orestiada, Greece. Proceedings edited by Thomas Vougiouklis (Democritus University of Thrace), Hadronic Press FL USA, 2001. ISBN 1-57485-053-9. == External links == Media related to Constantin Caratheodory at Wikimedia Commons O'Connor, John J.; Robertson, Edmund F., "Constantin Carathéodory", MacTutor History of Mathematics Archive, University of St Andrews (in Greek) Web site dedicated to Carathéodory (in Greek) club www.s-karatheodoris.gr Constantin Carathéodory at the Mathematics Genealogy Project
|
Wikipedia:Constantin Climescu#0
|
Constantin Climescu (30 November 1844 – 6 August 1926) was a Moldavian, later Romanian mathematician and politician. Born in Bacău, he attended the princely academy in Iași, followed by the sciences faculty of Iași University. He then left for the École Normale Supérieure in Paris, and in 1870 took his undergraduate degree from the University of Paris in mathematics and physical sciences. He was a professor of analytic geometry and spherical trigonometry at Iași University from 1871 to 1909, served as dean of the sciences faculty from 1880 to 1901, and as rector of the university from 1901 to 1907. Meanwhile, from 1884 to 1896, he taught at the upper normal school of Iași, and was among the founders of the periodical Recreații Științifice in 1883; as its chief contributor, he wrote articles on arithmetic, elementary and analytical geometry, algebra and mathematical analysis. He also belonged to the editorial board of Gazeta Matematică, where he wrote on the historiography of mathematics. He wrote several textbooks that were widely used at the time, on algebra (1887), rational-number arithmetic (1890), elementary geometry (1891) and analytic geometry (1898); the last volume was the second of its type to appear in Romania. He was elected a corresponding member of the Romanian Academy in 1892. A member of the National Liberal Party, he was elected to the Assembly of Deputies for Bacău in 1884, and represented Iași in the Senate from 1889 to 1910. He was an officer of the Order of the Star of Romania, and a commander of the Order of the Crown. He left the university upon reaching the retirement age in 1909; he died in 1926 and was buried in Eternitatea cemetery. == Notes ==
|
Wikipedia:Constantin Corduneanu#0
|
Constantin Corduneanu (23 April 1969, Iași – 15 April 2024, Târgu Mureș) was a Romanian freestyle wrestler who competed in the 1992 Summer Olympics and in the 1996 Summer Olympics. == Biography == Born into a poor family with eleven children, Corduneanu started wrestling at the Nicolina Sports Club in his hometown, Iași. During his active wrestling career he was a member of Steaua Bucharest and Dinamo Brașov, where he was a non-commissioned officer. Corduneanu retired from wrestling in 2002, but before that he had become the leader of a Mafia-type criminal organization in Iași. In 2018, he was indicted along with Dragos Cosmin Gradinariu, alias Bombardieru, for the "establishment of an organized criminal group, leading and financing illegal drug activities, and illegal trafficking of high-risk drugs", but was found not guilty by Romania's Supreme Court in 2013. When he married his second wife, Alina Bogdan, he changed his family name to Bogdan, but reverted to the old family name after they divorced. In 2015, a song named Mafia was dedicated to him and his brother, Adrian Corduneanu (also known as Beleaua); it is performed by Dani Mocanu, a well-known manele singer. Corduneanu died on 15 April 2024, at the age of 54. == References == == External links == Constantin Corduneanu at the International Wrestling Database Constantin Corduneanu at Olympedia
|
Wikipedia:Constantin Simirad#0
|
Constantin Simirad (13 May 1941 – 28 March 2021) was a Romanian politician and academic. == Biography == Simirad was born on 13 May 1941 in Coțușca, Botoșani County. He graduated from Iași National College and the University of Iași. He began teaching secondary school in Dorohoi from 1965 to 1968. In 1968, he became an assistant professor of mathematics at the Gheorghe Asachi Technical University of Iași and subsequently taught at the Faculty of Mathematics at the University of Oran. He earned his doctorate in mathematics in 1979. Simirad entered politics in 1991 with the Partidul Alianța Civică (PAC). The following year, he was elected Mayor of Iași, running as a member of the Romanian Democratic Convention. He was re-elected in 1996 with 71% of the vote. He left PAC in 1998 and founded the Partidul Moldovenilor, which he led until 2002. That year, the party merged with the Social Democratic Party (PSD), of which he became vice-president. On 28 November 2003, he left Iași and was appointed Ambassador of Romania to Cuba by President Ion Iliescu. Upon his return to Romania in 2006, he retired. On 28 March 2008, the PSD nominated him to run for President of the Iași County Council. He was elected on 1 June 2008 with 36.5% of the vote. However, he was expelled from the PSD in September 2009 after voicing his support for President Traian Băsescu. He then joined the National Union for the Progress of Romania (UNPR). In 2012, he was defeated in his re-election bid for President of the County Council alongside his party's mayoral candidate Tudor Ciuhodaru. Constantin Simirad died of COVID-19 in Iași on 28 March 2021, at the age of 79. He is buried at Eternitatea Cemetery. == Publications == Audiențe în aer liber (2000) Bordura de ipsos (2001) În umbra zeilor (2001) Simfonia vieții (2003) Arca lui Noe (2003) Izgoniții din rai (2005) == References ==
|
Wikipedia:Constructible polygon#0
|
In mathematics, a constructible polygon is a regular polygon that can be constructed with compass and straightedge. For example, a regular pentagon is constructible with compass and straightedge while a regular heptagon is not. There are infinitely many constructible polygons, but only 31 with an odd number of sides are known. == Conditions for constructibility == Some regular polygons are easy to construct with compass and straightedge; others are not. The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides,: p. xi and they knew how to construct a regular polygon with double the number of sides of a given regular polygon.: pp. 49–50 This led to the question being posed: is it possible to construct all regular polygons with compass and straightedge? If not, which n-gons (that is, polygons with n edges) are constructible and which are not? Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons. Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem: A regular n-gon can be constructed with compass and straightedge if and only if n is the product of a power of 2 and any number of distinct (unequal) Fermat primes. Here, a power of 2 is a number of the form 2 m {\displaystyle 2^{m}} , where m ≥ 0 is an integer. A Fermat prime is a prime number of the form 2 ( 2 m ) + 1 {\displaystyle 2^{(2^{m})}+1} , where m ≥ 0 is an integer. The number of Fermat primes involved can be 0, in which case n is a power of 2. 
In order to reduce a geometric problem to a problem of pure number theory, the proof uses the fact that a regular n-gon is constructible if and only if the cosine cos ( 2 π / n ) {\displaystyle \cos(2\pi /n)} is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots. Equivalently, a regular n-gon is constructible if any root of the nth cyclotomic polynomial is constructible. === Detailed results by Gauss's theory === Restating the Gauss–Wantzel theorem: A regular n-gon is constructible with straightedge and compass if and only if n = 2kp1p2...pt where k and t are non-negative integers, and the pi's (when t > 0) are distinct Fermat primes. The five known Fermat primes are: F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537 (sequence A019434 in the OEIS). Since there are 31 nonempty subsets of the five known Fermat primes, there are 31 known constructible polygons with an odd number of sides. The next twenty-eight Fermat numbers, F5 through F32, are known to be composite. Thus a regular n-gon is constructible if n = 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272, 320, 340, 384, 408, 480, 510, 512, 514, 544, 640, 680, 768, 771, 816, 960, 1020, 1024, 1028, 1088, 1280, 1285, 1360, 1536, 1542, 1632, 1920, 2040, 2048, ... (sequence A003401 in the OEIS), while a regular n-gon is not constructible with compass and straightedge if n = 7, 9, 11, 13, 14, 18, 19, 21, 22, 23, 25, 26, 27, 28, 29, 31, 33, 35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 81, 82, 83, 84, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 97, 98, 99, 100, 101, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 121, 122, 123, 124, 125, 126, 127, ... (sequence A004169 in the OEIS). 
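The Gauss–Wantzel criterion above is easy to check mechanically. A minimal Python sketch (the function name is ours; it relies on only the five known Fermat primes, so it classifies correctly only as far as current knowledge goes), validated against the opening terms of the constructible sequence listed above:

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # F0..F4, the only Fermat primes known

def is_constructible(n):
    # Gauss-Wantzel: n >= 3 is constructible iff n is a power of 2 times
    # a product of *distinct* Fermat primes (here: the known ones only).
    if n < 3:
        return False
    while n % 2 == 0:          # strip the power-of-2 part
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:     # a repeated Fermat prime is not allowed
                return False
    return n == 1

assert [n for n in range(3, 35) if is_constructible(n)] == \
    [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34]
```

Dividing out each Fermat prime at most once enforces the distinctness requirement of the theorem.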
=== Connection to Pascal's triangle === Since there are five known Fermat primes, we know of 31 numbers that are products of distinct Fermat primes, and hence 31 constructible odd-sided regular polygons. These are 3, 5, 15, 17, 51, 85, 255, 257, 771, 1285, 3855, 4369, 13107, 21845, 65535, 65537, 196611, 327685, 983055, 1114129, 3342387, 5570645, 16711935, 16843009, 50529027, 84215045, 252645135, 286331153, 858993459, 1431655765, 4294967295 (sequence A045544 in the OEIS). As John Conway commented in The Book of Numbers, these numbers, when written in binary, are equal to the first 32 rows of the modulo-2 Pascal's triangle, minus the top row, which corresponds to a monogon. (Because of this, the 1s in such a list form an approximation to the Sierpiński triangle.) This pattern breaks down after this, as the next Fermat number is composite (4294967297 = 641 × 6700417), so the following rows do not correspond to constructible polygons. It is unknown whether any more Fermat primes exist, and it is therefore unknown how many odd-sided constructible regular polygons exist. In general, if there are q Fermat primes, then there are 2^q − 1 odd-sided regular constructible polygons. == General theory == In the light of later work on Galois theory, the principles of these proofs have been clarified. It is straightforward to show from analytic geometry that constructible lengths must come from base lengths by the solution of some sequence of quadratic equations. In terms of field theory, such lengths must be contained in a field extension generated by a tower of quadratic extensions. It follows that a field generated by constructions will always have degree over the base field that is a power of two. In the specific case of a regular n-gon, the question reduces to the question of constructing a length cos 2π/n , which is a trigonometric number and hence an algebraic number. 
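Conway's observation can be checked directly: reading row k of Pascal's triangle mod 2 as a binary numeral reproduces the product of the Fermat primes selected by the bits of k. A short sketch in Python (helper names are ours), using `math.comb` for binomial coefficients:

```python
from math import comb

FERMAT_PRIMES = [3, 5, 17, 257, 65537]  # F0..F4

# The 31 products of distinct known Fermat primes, one per nonempty subset;
# encoding the subset in the bits of k lists them in increasing order.
products = []
for k in range(1, 32):
    p = 1
    for j in range(5):
        if k >> j & 1:
            p *= FERMAT_PRIMES[j]
    products.append(p)

def pascal_row_mod2(k):
    # Row k of Pascal's triangle mod 2, read as a binary numeral.
    return int("".join(str(comb(k, i) % 2) for i in range(k + 1)), 2)

# Conway's observation: rows 1..31 reproduce exactly the list above.
assert products == [pascal_row_mod2(k) for k in range(1, 32)]
assert products[:6] == [3, 5, 15, 17, 51, 85]
```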
This number lies in the n-th cyclotomic field — and in fact in its real subfield, which is a totally real field and a rational vector space of dimension ½ φ(n), where φ(n) is Euler's totient function. Wantzel's result comes down to a calculation showing that φ(n) is a power of 2 precisely in the cases specified. As for the construction of Gauss, when the Galois group is a 2-group it follows that it has a sequence of subgroups of orders 1, 2, 4, 8, ... that are nested, each in the next (a composition series, in group theory terminology), something simple to prove by induction in this case of an abelian group. Therefore, there are subfields nested inside the cyclotomic field, each of degree 2 over the one before. Generators for each such field can be written down by Gaussian period theory. For example, for n = 17 there is a period that is a sum of eight roots of unity, one that is a sum of four roots of unity, and one that is the sum of two, which is cos 2π/17 . Each of those is a root of a quadratic equation in terms of the one before. Moreover, these equations have real rather than complex roots, so in principle can be solved by geometric construction: this is because the work all goes on inside a totally real field. In this way the result of Gauss can be understood in current terms; for actual calculation of the equations to be solved, the periods can be squared and compared with the 'lower' periods, in a quite feasible algorithm. == Compass and straightedge constructions == Compass and straightedge constructions are known for all known constructible polygons. If n = pq with p = 2 or p and q coprime, an n-gon can be constructed from a p-gon and a q-gon. If p = 2, draw a q-gon and bisect one of its central angles. From this, a 2q-gon can be constructed. If p > 2, inscribe a p-gon and a q-gon in the same circle in such a way that they share a vertex. Because p and q are coprime, there exist integers a and b such that ap + bq = 1. Then 2aπ/q + 2bπ/p = 2π/pq. 
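This angle identity can be verified with the extended Euclidean algorithm, which produces the required a and b. A sketch in Python, with p = 3, q = 5 chosen purely as an example:

```python
from fractions import Fraction
from math import gcd

def ext_gcd(p, q):
    # Extended Euclid: returns (a, b) with a*p + b*q == gcd(p, q).
    if q == 0:
        return 1, 0
    a, b = ext_gcd(q, p % q)
    return b, a - (p // q) * b

p, q = 3, 5                      # example coprime pair
assert gcd(p, q) == 1
a, b = ext_gcd(p, q)
assert a * p + b * q == 1
# Measuring angles in units of 2*pi: a/q + b/p == 1/(p*q), i.e.
# 2*a*pi/q + 2*b*pi/p == 2*pi/(p*q), the central angle of the pq-gon.
assert Fraction(a, q) + Fraction(b, p) == Fraction(1, p * q)
```

Exact rational arithmetic avoids any floating-point comparison of angles.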
From this, a pq-gon can be constructed. Thus one only has to find a compass and straightedge construction for n-gons where n is a Fermat prime. The construction for an equilateral triangle is simple and has been known since antiquity; see Equilateral triangle. Constructions for the regular pentagon were described both by Euclid (Elements, ca. 300 BC), and by Ptolemy (Almagest, ca. 150 AD). Although Gauss proved that the regular 17-gon is constructible, he did not actually show how to do it. The first construction is due to Erchinger, a few years after Gauss's work. The first explicit constructions of a regular 257-gon were given by Magnus Georg Paucker (1822) and Friedrich Julius Richelot (1832). A construction for a regular 65537-gon was first given by Johann Gustav Hermes (1894). The construction is very complex; Hermes spent 10 years completing the 200-page manuscript. === Gallery === From left to right, constructions of a 15-gon, 17-gon, 257-gon and 65537-gon. Only the first stage of the 65537-gon construction is shown; the constructions of the 15-gon, 17-gon, and 257-gon are given completely. == Other constructions == The concept of constructibility as discussed in this article applies specifically to compass and straightedge constructions. More constructions become possible if other tools are allowed. The so-called neusis constructions, for example, make use of a marked ruler. The constructions are a mathematical idealization and are assumed to be done exactly. A regular polygon with n sides can be constructed with ruler, compass, and angle trisector if and only if n = 2 r 3 s p 1 p 2 ⋯ p k , {\displaystyle n=2^{r}3^{s}p_{1}p_{2}\cdots p_{k},} where r, s, k ≥ 0 and where the pi are distinct Pierpont primes greater than 3 (primes of the form 2 t 3 u + 1 ) . {\displaystyle 2^{t}3^{u}+1).} : Thm. 2 These polygons are exactly the regular polygons that can be constructed with conic sections, and the regular polygons that can be constructed with paper folding. 
The first numbers of sides of these polygons are: 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 24, 26, 27, 28, 30, 32, 34, 35, 36, 37, 38, 39, 40, 42, 45, 48, 51, 52, 54, 56, 57, 60, 63, 64, 65, 68, 70, 72, 73, 74, 76, 78, 80, 81, 84, 85, 90, 91, 95, 96, 97, 102, 104, 105, 108, 109, 111, 112, 114, 117, 119, 120, 126, 128, 130, 133, 135, 136, 140, 144, 146, 148, 152, 153, 156, 160, 162, 163, 168, 170, 171, 180, 182, 185, 189, 190, 192, 193, 194, 195, 204, 208, 210, 216, 218, 219, 221, 222, 224, 228, 234, 238, 240, 243, 247, 252, 255, 256, 257, 259, 260, 266, 270, 272, 273, 280, 285, 288, 291, 292, 296, ... (sequence A122254 in the OEIS) == See also == Polygon Carlyle circle == References == == External links == Duane W. DeTemple (1991). "Carlyle Circles and the Lemoine Simplicity of Polygonal Constructions". The American Mathematical Monthly. 98 (2): 97–108. doi:10.2307/2323939. JSTOR 2323939. MR 1089454. Christian Gottlieb (1999). "The Simple and Straightforward Construction of the Regular 257-gon". Mathematical Intelligencer. 21 (1): 31–37. doi:10.1007/BF03024829. MR 1665155. S2CID 123567824. Regular Polygon Formulas, Ask Dr. Math FAQ. Carl Schick: Weiche Primzahlen und das 257-Eck : eine analytische Lösung des 257-Ecks. Zürich : C. Schick, 2008. ISBN 978-3-9522917-1-9. 65537-gon, exact construction for the 1st side, using the Quadratrix of Hippias and GeoGebra as additional aids, with brief description (German)
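Like the Gauss–Wantzel condition, the trisector criterion can be tested mechanically. A Python sketch (function names are ours; plain trial division, so suitable only for small n), checked against the opening terms of the sequence above:

```python
def only_2_3(m):
    # True if m's only prime factors are 2 and 3.
    for d in (2, 3):
        while m % d == 0:
            m //= d
    return m == 1

def trisector_constructible(n):
    # n = 2^r 3^s p1...pk with the pi distinct Pierpont primes > 3,
    # i.e. primes p such that p - 1 has no prime factor other than 2 and 3.
    for d in (2, 3):               # strip the 2^r 3^s part
        while n % d == 0:
            n //= d
    d = 5
    while d * d <= n:
        if n % d == 0:             # d is necessarily prime here
            n //= d
            if n % d == 0 or not only_2_3(d - 1):  # repeated or non-Pierpont
                return False
        else:
            d += 2
    return n == 1 or only_2_3(n - 1)

first = [n for n in range(3, 31) if trisector_constructible(n)]
assert first == [3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17,
                 18, 19, 20, 21, 24, 26, 27, 28, 30]
```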
|
Wikipedia:Constructive analysis#0
|
In mathematics, constructive analysis is mathematical analysis done according to some principles of constructive mathematics. == Introduction == The name of the subject contrasts with classical analysis, which in this context means analysis done according to the more common principles of classical mathematics. However, there are various schools of thought and many different formalizations of constructive analysis. Whether classical or constructive, any such framework of analysis axiomatizes the real number line by some means: a collection extending the rationals, with an apartness relation definable from an asymmetric order structure. A positivity predicate, here denoted x > 0 {\displaystyle x>0} , takes center stage; it governs an equality-to-zero x ≅ 0 {\displaystyle x\cong 0} . The members of the collection are generally just called the real numbers. While this term is thus overloaded in the subject, all the frameworks share a broad common core of results that are also theorems of classical analysis. Constructive frameworks for its formulation are extensions of Heyting arithmetic by types including N N {\displaystyle {\mathbb {N} }^{\mathbb {N} }} , constructive second-order arithmetic, or strong enough topos-, type- or constructive set theories such as C Z F {\displaystyle {\mathsf {CZF}}} , a constructive counterpart of Z F {\displaystyle {\mathsf {ZF}}} . Of course, a direct axiomatization may be studied as well. == Logical preliminaries == The base logic of constructive analysis is intuitionistic logic, which means that the principle of excluded middle P E M {\displaystyle {\mathrm {PEM} }} is not automatically assumed for every proposition. If a proposition ¬ ¬ ∃ x . θ ( x ) {\displaystyle \neg \neg \exists x.\theta (x)} is provable, this exactly means that the non-existence claim ¬ ∃ x . θ ( x ) {\displaystyle \neg \exists x.\theta (x)} being provable would be absurd, and so the latter cannot also be provable in a consistent theory. 
The double-negated existence claim is a logically negative statement and is implied by, but generally not equivalent to, the existence claim itself. Many of the intricacies of constructive analysis can be framed in terms of the weakness of propositions of the logically negative form ¬ ¬ ϕ {\displaystyle \neg \neg \phi } , which is generally weaker than ϕ {\displaystyle \phi } . In turn, also an implication ( ∃ x . θ ( x ) ) → ¬ ∀ x . ¬ θ ( x ) {\displaystyle {\big (}\exists x.\theta (x){\big )}\to \neg \forall x.\neg \theta (x)} cannot, in general, be reversed. While a constructive theory proves fewer theorems than its classical counterpart in its classical presentation, it may exhibit attractive meta-logical properties. For example, if a theory T {\displaystyle {\mathsf {T}}} exhibits the disjunction property, then if it proves a disjunction T ⊢ ϕ ∨ ψ {\displaystyle {\mathsf {T}}\vdash \phi \lor \psi } then also T ⊢ ϕ {\displaystyle {\mathsf {T}}\vdash \phi } or T ⊢ ψ {\displaystyle {\mathsf {T}}\vdash \psi } . Already in classical arithmetic, this is violated for the most basic propositions about sequences of numbers, as demonstrated next. === Undecidable predicates === A common strategy of formalization of real numbers is in terms of sequences of rationals, Q N {\displaystyle {\mathbb {Q} }^{\mathbb {N} }} and so we draw motivation and examples in terms of those. So to define terms, consider a decidable predicate on the naturals, which in the constructive vernacular means ∀ n . ( Q ( n ) ∨ ¬ Q ( n ) ) {\displaystyle \forall n.{\big (}Q(n)\lor \neg Q(n){\big )}} is provable, and let χ Q : N → { 0 , 1 } {\displaystyle \chi _{Q}\colon {\mathbb {N} }\to \{0,1\}} be the characteristic function defined to equal 0 {\displaystyle 0} exactly where Q {\displaystyle Q} is true. 
The associated sequence q n := ∑ k = 0 n χ Q ( k ) / 2 k + 1 {\displaystyle q_{n}\,:=\,{\textstyle \sum }_{k=0}^{n}\chi _{Q}(k)/2^{k+1}} is monotone, with values non-strictly growing between the bounds 0 {\displaystyle 0} and 1 {\displaystyle 1} . Here, for the sake of demonstration, defining an extensional equality to the zero sequence ( q ≅ e x t 0 ) := ∀ n . q n = 0 {\displaystyle (q\cong _{\mathrm {ext} }0)\,:=\,\forall n.q_{n}=0} , it follows that q ≅ e x t 0 ↔ ∀ n . Q ( n ) {\displaystyle q\cong _{\mathrm {ext} }0\leftrightarrow \forall n.Q(n)} . Note that the symbol " 0 {\displaystyle 0} " is used in several contexts here. For any theory capturing arithmetic, there are many yet undecided and even provably independent such statements ∀ n . Q ( n ) {\displaystyle \forall n.Q(n)} . Two Π 1 0 {\displaystyle \Pi _{1}^{0}} -examples are the Goldbach conjecture and the Rosser sentence of a theory. Consider any theory T {\displaystyle {\mathsf {T}}} with quantifiers ranging over primitive recursive, rational-valued sequences. Already minimal logic proves the non-contradiction claim for any proposition, and that the negation of excluded middle for any given proposition would be absurd. This also means there is no consistent theory (even if anti-classical) rejecting the excluded middle disjunction for any given proposition. Indeed, it holds that T ⊢ ∀ ( x ∈ Q N ) . ¬ ¬ ( ( x ≅ e x t 0 ) ∨ ¬ ( x ≅ e x t 0 ) ) {\displaystyle {\mathsf {T}}\,\,\,\vdash \,\,\,\forall (x\in {\mathbb {Q} }^{\mathbb {N} }).\,\neg \neg {\big (}(x\cong _{\mathrm {ext} }0)\lor \neg (x\cong _{\mathrm {ext} }0){\big )}} This theorem is logically equivalent to the non-existence claim of a sequence for which the excluded middle disjunction about equality-to-zero would be disprovable. No sequence with that disjunction being rejected can be exhibited. Assume the theories at hand are consistent and arithmetically sound. 
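As a concrete illustration of this encoding, the partial sums q_n can be computed exactly in rational arithmetic. The predicate below, a Goldbach-style instance, is our hypothetical choice purely for demonstration; deciding whether the resulting q is extensionally zero would amount to deciding the universally quantified statement:

```python
from fractions import Fraction

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def Q(n):
    # Decidable predicate (our choice): the even number 2n + 4 is a sum
    # of two primes -- an instance of the Goldbach property.
    e = 2 * n + 4
    return any(is_prime(p) and is_prime(e - p) for p in range(2, e // 2 + 1))

def chi(n):
    # characteristic function: 0 exactly where Q holds, as in the text
    return 0 if Q(n) else 1

def q(n):
    # q_n = sum_{k=0}^{n} chi(k) / 2^(k+1): rational, monotone, in [0, 1]
    return sum(Fraction(chi(k), 2 ** (k + 1)) for k in range(n + 1))

# Every computed q_n equals 0, confirming Q at finitely many indices --
# which decides nothing about the full universally quantified claim.
assert all(q(n) == 0 for n in range(40))
```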
Now Gödel's theorems mean that there is an explicit sequence g ∈ Q N {\displaystyle g\in {\mathbb {Q} }^{\mathbb {N} }} such that, for any fixed precision, T {\displaystyle {\mathsf {T}}} proves the zero-sequence to be a good approximation to g {\displaystyle g} , but it can also meta-logically be established that T ⊬ ( g ≅ e x t 0 ) {\displaystyle {\mathsf {T}}\,\nvdash \,(g\cong _{\mathrm {ext} }0)} as well as T ⊬ ¬ ( g ≅ e x t 0 ) {\displaystyle {\mathsf {T}}\,\nvdash \,\neg (g\cong _{\mathrm {ext} }0)} . Here this proposition g ≅ e x t 0 {\displaystyle g\cong _{\mathrm {ext} }0} again amounts to the proposition of universally quantified form. Trivially T + P E M ⊢ ∀ ( x ∈ Q N ) . ( x ≅ e x t 0 ) ∨ ¬ ( x ≅ e x t 0 ) {\displaystyle {\mathsf {T}}+{\mathrm {PEM} }\,\,\,\vdash \,\,\,\forall (x\in {\mathbb {Q} }^{\mathbb {N} }).\,(x\cong _{\mathrm {ext} }0)\lor \neg (x\cong _{\mathrm {ext} }0)} even if these disjunction claims here do not carry any information. In the absence of further axioms breaking the meta-logical properties, constructive entailment instead generally reflects provability. Taboo statements that ought not be decidable (if the aim is to respect the provability interpretation of constructive claims) can be designed for definitions of a custom equivalence " ≅ {\displaystyle \cong } " in formalizations below as well. For implications of disjunctions of yet not proven or disproven propositions, one speaks of weak Brouwerian counterexamples. === Order vs. disjunctions === The theory of the real closed field may be axiomatized such that all the non-logical axioms are in accordance with constructive principles. This concerns a commutative ring with postulates for a positivity predicate x > 0 {\displaystyle x>0} , with a positive unit and non-positive zero, i.e., 1 > 0 {\displaystyle 1>0} and ¬ ( 0 > 0 ) {\displaystyle \neg (0>0)} . 
In any such ring, one may define y > x := ( y − x > 0 ) {\displaystyle y>x\,:=\,(y-x>0)} , which constitutes a strict total order in its constructive formulation (also called linear order or, to be explicit about the context, a pseudo-order). As is usual, x < 0 {\displaystyle x<0} is defined as 0 > x {\displaystyle 0>x} . This first-order theory is relevant as the structures discussed below are models thereof. However, this section does not concern aspects akin to topology, and relevant arithmetic substructures are not definable therein. As explained, various predicates will fail to be decidable in a constructive formulation, such as those formed from order-theoretical relations. This includes " ≅ {\displaystyle \cong } ", which will be rendered equivalent to a negation. Crucial disjunctions are now discussed explicitly. ==== Trichotomy ==== In intuitionistic logic, the disjunctive syllogism in the form ( ϕ ∨ ψ ) → ( ¬ ϕ → ψ ) {\displaystyle (\phi \lor \psi )\to (\neg \phi \to \psi )} generally only goes in the → {\displaystyle \to } -direction. In a pseudo-order, one has ¬ ( x > 0 ∨ 0 > x ) → x ≅ 0 {\displaystyle \neg (x>0\lor 0>x)\to x\cong 0} and indeed at most one of the three can hold at once. But the stronger, logically positive law of trichotomy disjunction does not hold in general, i.e. it is not provable that for all reals, ( x > 0 ∨ 0 > x ) ∨ x ≅ 0 {\displaystyle (x>0\lor 0>x)\lor x\cong 0} See analytical L P O {\displaystyle {\mathrm {LPO} }} . Other disjunctions are however implied based on other positivity results, e.g. ( x + y > 0 ) → ( x > 0 ∨ y > 0 ) {\displaystyle (x+y>0)\to (x>0\lor y>0)} . Likewise, the asymmetric order in the theory ought to fulfill the weak linearity property ( y > x ) → ( y > t ∨ t > x ) {\displaystyle (y>x)\to (y>t\lor t>x)} for all t {\displaystyle t} , related to locatedness of the reals. 
The theory shall validate further axioms concerning the relation between the positivity predicate x > 0 {\displaystyle x>0} and the algebraic operations including multiplicative inversion, as well as the intermediate value theorem for polynomials. In this theory, between any two separated numbers, other numbers exist. ==== Apartness ==== In the context of analysis, the auxiliary logically positive predicate x # y := ( x > y ∨ y > x ) {\displaystyle x\#y\,:=\,(x>y\lor y>x)} may be independently defined and constitutes an apartness relation. With it, the substitute of the principles above gives tightness ¬ ( x # 0 ) ↔ ( x ≅ 0 ) {\displaystyle \neg (x\#0)\leftrightarrow (x\cong 0)} Thus, apartness can also function as a definition of " ≅ {\displaystyle \cong } ", rendering it a negation. All negations are stable in intuitionistic logic, and therefore ¬ ¬ ( x ≅ y ) ↔ ( x ≅ y ) {\displaystyle \neg \neg (x\cong y)\leftrightarrow (x\cong y)} The elusive trichotomy disjunction itself then reads ( x # 0 ) ∨ ¬ ( x # 0 ) {\displaystyle (x\#0)\lor \neg (x\#0)} Importantly, a proof of the disjunction x # y {\displaystyle x\#y} carries positive information, in both senses of the word. Via ( ϕ → ¬ ψ ) ↔ ( ψ → ¬ ϕ ) {\displaystyle (\phi \to \neg \psi )\leftrightarrow (\psi \to \neg \phi )} it also follows that x # 0 → ¬ ( x ≅ 0 ) {\displaystyle x\#0\to \neg (x\cong 0)} . In words: A demonstration that a number is somehow apart from zero is also a demonstration that this number is non-zero. But constructively it does not follow that the doubly negative statement ¬ ( x ≅ 0 ) {\displaystyle \neg (x\cong 0)} would imply x # 0 {\displaystyle x\#0} . Consequently, many classically equivalent statements bifurcate into distinct statements. 
For example, for a fixed polynomial p ∈ R [ x ] {\displaystyle p\in {\mathbb {R} }[x]} and fixed k ∈ N {\displaystyle k\in {\mathbb {N} }} , the statement that the k {\displaystyle k} 'th coefficient a k {\displaystyle a_{k}} of p {\displaystyle p} is apart from zero is stronger than the mere statement that it is non-zero. A demonstration of the former explicates how a k {\displaystyle a_{k}} and zero are related, with respect to the ordering predicate on the reals, while a demonstration of the latter shows how negation of such conditions would lead to a contradiction. In turn, there is then also a strong and a looser notion of, e.g., being a third-order polynomial. So the excluded middle for x # 0 {\displaystyle x\#0} is a priori stronger than that for x ≅ 0 {\displaystyle x\cong 0} . However, see the discussion of possible further axiomatic principles regarding the strength of " ≅ {\displaystyle \cong } " below. ==== Non-strict partial order ==== Lastly, the relation 0 ≥ x {\displaystyle 0\geq x} may be defined by or proven equivalent to the logically negative statement ¬ ( x > 0 ) {\displaystyle \neg (x>0)} , and then x ≤ 0 {\displaystyle x\leq 0} is defined as 0 ≥ x {\displaystyle 0\geq x} . Decidability of positivity may thus be expressed as x > 0 ∨ 0 ≥ x {\displaystyle x>0\lor 0\geq x} , which as noted will not be provable in general. But neither will the totality disjunction x ≥ 0 ∨ 0 ≥ x {\displaystyle x\geq 0\lor 0\geq x} , see also the analytical L L P O {\displaystyle {\mathrm {LLPO} }} . By a valid De Morgan's law, the conjunction of such statements is also rendered a negation of apartness, and so ( x ≥ y ∧ y ≥ x ) ↔ ( x ≅ y ) {\displaystyle (x\geq y\land y\geq x)\leftrightarrow (x\cong y)} The disjunction x > y ∨ x ≅ y {\displaystyle x>y\lor x\cong y} implies x ≥ y {\displaystyle x\geq y} , but the converse is likewise not provable in general.
In a constructive real closed field, the relation " ≥ {\displaystyle \geq } " is a negation and is not equivalent to the disjunction in general. ==== Variations ==== Demanding good order properties as above but strong completeness properties at the same time implies P E M {\displaystyle {\mathrm {PEM} }} . Notably, the MacNeille completion has better completeness properties as a collection, but a more intricate theory of its order relation and, in turn, worse locatedness properties. While less commonly employed, also this construction simplifies to the classical real numbers when assuming P E M {\displaystyle {\mathrm {PEM} }} . === Invertibility === In the commutative ring of real numbers, a provably non-invertible element equals zero. This and the most basic locality structure is abstracted in the theory of Heyting fields. == Formalization == === Rational sequences === A common approach is to identify the real numbers with non-volatile sequences in Q N {\displaystyle {\mathbb {Q} }^{\mathbb {N} }} . The constant sequences correspond to rational numbers. Algebraic operations such as addition and multiplication can be defined component-wise, together with a systematic reindexing for speedup. The definition in terms of sequences furthermore enables the definition of a strict order " > {\displaystyle >} " fulfilling the desired axioms. Other relations discussed above may then be defined in terms of it. In particular, any number x {\displaystyle x} apart from 0 {\displaystyle 0} , i.e. x # 0 {\displaystyle x\#0} , has an index beyond which all its entries are invertible. Various implications between the relations, as well as between sequences with various properties, may then be proven. ==== Moduli ==== As the maximum on a finite set of rationals is decidable, an absolute value map on the reals may be defined, and Cauchy convergence and limits of sequences of reals can be defined as usual.
A modulus of convergence is often employed in the constructive study of Cauchy sequences of reals, meaning the association of any ε > 0 {\displaystyle \varepsilon >0} to an appropriate index (beyond which the sequences are closer than ε {\displaystyle \varepsilon } ) is required in the form of an explicit, strictly increasing function ε ↦ N ( ε ) {\displaystyle \varepsilon \mapsto N(\varepsilon )} . Such a modulus may be considered for a sequence of reals, but it may also be considered for all the reals themselves, in which case one is really dealing with a sequence of pairs. ==== Bounds and suprema ==== Such a model then enables the definition of more set-theoretic notions. For any subset of reals, one may speak of an upper bound b {\displaystyle b} , negatively characterized using x ≤ b {\displaystyle x\leq b} . One may speak of least upper bounds with respect to " ≤ {\displaystyle \leq } ". A supremum is an upper bound given through a sequence of reals, positively characterized using " < {\displaystyle <} ". If a subset with an upper bound is well-behaved with respect to " < {\displaystyle <} " (discussed below), it has a supremum. ==== Bishop's formalization ==== One formalization of constructive analysis, modeling the order properties described above, proves theorems for sequences of rationals x {\displaystyle x} fulfilling the regularity condition | x n − x m | ≤ 1 n + 1 m {\displaystyle |x_{n}-x_{m}|\leq {\tfrac {1}{n}}+{\tfrac {1}{m}}} . An alternative is using the tighter 2 − n {\displaystyle 2^{-n}} instead of 1 n {\displaystyle {\tfrac {1}{n}}} , and in the latter case non-zero indices ought to be used. No two of the rational entries in a regular sequence are more than 3 2 {\displaystyle {\tfrac {3}{2}}} apart and so one may compute natural numbers exceeding any real. For the regular sequences, one defines the logically positive loose positivity property as x > 0 := ∃ n .
x n > 1 n {\displaystyle x>0\,:=\,\exists n.x_{n}>{\tfrac {1}{n}}} , where the relation on the right hand side is in terms of rational numbers. Formally, a positive real in this language is a regular sequence together with a natural witnessing positivity. Further, x ≅ y := ∀ n . | x n − y n | ≤ 2 n {\displaystyle x\cong y\,:=\,\forall n.|x_{n}-y_{n}|\leq {\tfrac {2}{n}}} , which is logically equivalent to the negation ¬ ∃ n . | x n − y n | > 2 n {\displaystyle \neg \exists n.|x_{n}-y_{n}|>{\tfrac {2}{n}}} . This is provably transitive and in turn an equivalence relation. Via this predicate, the regular sequences in the band | x n | ≤ 2 n {\displaystyle |x_{n}|\leq {\tfrac {2}{n}}} are deemed equivalent to the zero sequence. Such definitions are compatible with classical investigations, and variations thereof were well studied before. One has y > x {\displaystyle y>x} as ( y − x ) > 0 {\displaystyle (y-x)>0} . Also, x ≥ 0 {\displaystyle x\geq 0} may be defined from a numerical non-negativity property, as x n ≥ − 1 n {\displaystyle x_{n}\geq -{\tfrac {1}{n}}} for all n {\displaystyle n} , but then shown to be equivalent to the logical negation of the former. ==== Variations ==== The above definition of x ≅ y {\displaystyle x\cong y} uses a common bound 2 n {\displaystyle {\tfrac {2}{n}}} . Other formalizations directly take as definition that for any fixed bound 2 N {\displaystyle {\tfrac {2}{N}}} , the numbers x {\displaystyle x} and y {\displaystyle y} must eventually be forever at least as close. Exponentially falling bounds 2 − n {\displaystyle 2^{-n}} are also used, say in a real number condition ∀ n . | x n − x n + 1 | < 2 − n {\displaystyle \forall n.|x_{n}-x_{n+1}|<2^{-n}} , and likewise for the equality of two such reals. Also the sequences of rationals may be required to carry a modulus of convergence. Positivity properties may be defined as being eventually forever apart by some rational.
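A minimal executable sketch of this formalization in Python, using exact rationals: addition reindexes by 2n so that the sum of two regular sequences is again regular, and loose positivity is the search for an index n with x_n > 1/n. The helper names are illustrative, not from the source:

```python
from fractions import Fraction

# Bishop-regular sequence: |x_n - x_m| <= 1/n + 1/m (indices n >= 1).
def const(q):
    q = Fraction(q)
    return lambda n: q            # constant sequences embed the rationals

def add(x, y):
    # (x + y)_n := x_{2n} + y_{2n}: each summand now contributes error
    # at most 1/(2n) + 1/(2m), so the sum again satisfies 1/n + 1/m.
    return lambda n: x(2 * n) + y(2 * n)

def positive_witness(x, max_n):
    # Loose positivity x > 0: exhibit n with x_n > 1/n (a rational fact).
    for n in range(1, max_n + 1):
        if x(n) > Fraction(1, n):
            return n
    return None
```

The reindexing in `add` is the computational face of the "systematic reindexing" mentioned above: regularity is preserved without any limit argument.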
Function choice in N N {\displaystyle {\mathbb {N} }^{\mathbb {N} }} or stronger principles aid such frameworks. ==== Coding ==== It is worth noting that sequences in Q N {\displaystyle {\mathbb {Q} }^{\mathbb {N} }} can be coded rather compactly, as they each may be mapped to a unique subclass of N {\displaystyle {\mathbb {N} }} . A sequence of rationals i ↦ n i d i ( − 1 ) s i {\displaystyle i\mapsto {\tfrac {n_{i}}{d_{i}}}(-1)^{s_{i}}} may be encoded as a set of quadruples ⟨ i , n i , d i , s i ⟩ ∈ N 4 {\displaystyle \langle i,n_{i},d_{i},s_{i}\rangle \in {\mathbb {N} }^{4}} . In turn, this can be encoded as unique naturals 2 i ⋅ 3 n i ⋅ 5 d i ⋅ 7 s i {\displaystyle 2^{i}\cdot 3^{n_{i}}\cdot 5^{d_{i}}\cdot 7^{s_{i}}} using the fundamental theorem of arithmetic. There are more economical pairing functions as well, and extensions encoding tags or metadata. For an example using this encoding, the sequence i ↦ ∑ k = 0 i 1 k ! {\displaystyle i\mapsto {\textstyle \sum }_{k=0}^{i}{\tfrac {1}{k!}}} , or 1 , 2 , 5 2 , 8 3 , … {\displaystyle 1,2,{\tfrac {5}{2}},{\tfrac {8}{3}},\dots } , may be used to compute Euler's number and with the above coding it maps to the subclass { 15 , 90 , 24300 , 6561000 , … } {\displaystyle \{15,90,24300,6561000,\dots \}} of N {\displaystyle {\mathbb {N} }} . While this example, an explicit sequence of sums, is a total recursive function to begin with, the encoding also means these objects are in scope of the quantifiers in second-order arithmetic. === Set theory === ==== Cauchy reals ==== In some frameworks of analysis, the name real numbers is given to such well-behaved sequences of rationals, and relations such as x ≅ y {\displaystyle x\cong y} are called the equality of real numbers. Note, however, that there are properties which can distinguish between two ≅ {\displaystyle \cong } -related reals.
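The prime-power coding of rational sequences described above is directly executable; a short sketch assuming Python's Fraction for the rational terms (function names are illustrative):

```python
from fractions import Fraction

# Encode the i-th term (n/d) * (-1)^s of a rational sequence as the
# natural number 2^i * 3^n * 5^d * 7^s; decoding is unique by the
# fundamental theorem of arithmetic.
def encode(i, q):
    q = Fraction(q)
    s = 1 if q < 0 else 0
    return 2**i * 3**abs(q.numerator) * 5**q.denominator * 7**s

def decode(code):
    exps = []
    for p in (2, 3, 5, 7):
        e = 0
        while code % p == 0:
            code //= p
            e += 1
        exps.append(e)
    i, n, d, s = exps
    return i, Fraction(n, d) * (-1) ** s
```

For the example sequence 1, 2, 5/2, 8/3 this reproduces the codes 15, 90, 24300, 6561000 quoted above.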
In contrast, in a set theory that models the naturals N {\displaystyle {\mathbb {N} }} and validates the existence of even classically uncountable function spaces (and certainly say C Z F {\displaystyle {\mathsf {CZF}}} or even Z F C {\displaystyle {\mathsf {ZFC}}} ), the numbers equivalent with respect to " ≅ {\displaystyle \cong } " in Q N {\displaystyle {\mathbb {Q} }^{\mathbb {N} }} may be collected into sets, each of which is then called a Cauchy real number. In that language, regular rational sequences are degraded to mere representatives of a Cauchy real. Equality of those reals is then given by the equality of sets, which is governed by the set-theoretical axiom of extensionality. An upshot is that the set theory will prove properties for the reals, i.e. for this class of sets, expressed using the logical equality. Constructive reals in the presence of appropriate choice axioms will be Cauchy-complete but not automatically order-complete. ==== Dedekind reals ==== In this context it may also be possible to model a theory of real numbers in terms of Dedekind cuts of Q {\displaystyle {\mathbb {Q} }} . At least when assuming P E M {\displaystyle {\mathrm {PEM} }} or dependent choice, these structures are isomorphic. ==== Interval arithmetic ==== Another approach is to define a real number as a certain subset of Q × Q {\displaystyle {\mathbb {Q} }\times {\mathbb {Q} }} , holding pairs representing inhabited, pairwise intersecting intervals. ==== Uncountability ==== Recall that the preorder on cardinals " ≤ {\displaystyle \leq } " in set theory is primarily defined as the existence of an injection. As a result, the constructive theory of cardinal order can diverge substantially from the classical one. Here, sets like Q N {\displaystyle {\mathbb {Q} }^{\mathbb {N} }} or some models of the reals can be taken to be subcountable.
That said, Cantor's diagonal construction proving uncountability of powersets like P N {\displaystyle {\mathcal {P}}{\mathbb {N} }} and plain function spaces like Q N {\displaystyle {\mathbb {Q} }^{\mathbb {N} }} is intuitionistically valid. Assuming P E M {\displaystyle {\mathrm {PEM} }} or alternatively the countable choice axiom, models of R {\displaystyle {\mathbb {R} }} are uncountable even over a constructive framework. One variant of the diagonal construction relevant for the present context may be formulated as follows, proven using countable choice and for reals as sequences of rationals: For any pair of reals a < b {\displaystyle a<b} and any sequence of reals x n {\displaystyle x_{n}} , there exists a real z {\displaystyle z} with a < z < b {\displaystyle a<z<b} and ∀ ( n ∈ N ) . x n # z {\displaystyle \forall (n\in {\mathbb {N} }).x_{n}\,\#\,z} . Formulations of the reals aided by explicit moduli permit separate treatments. According to Kanamori, "a historical misrepresentation has been perpetuated that associates diagonalization with non-constructivity" and a constructive component of the diagonal argument already appeared in Cantor's work. === Category and type theory === All these considerations may also be undertaken in a topos or appropriate dependent type theory. == Principles == For practical mathematics, the axiom of dependent choice is adopted in various schools. Markov's principle is adopted in the Russian school of recursive mathematics. This principle strengthens the impact of a proven negation of strict equality. A so-called analytical form of it grants ¬ ( x ≤ 0 ) → x > 0 {\displaystyle \neg (x\leq 0)\to x>0} or ¬ ( x ≅ 0 ) → x # 0 {\displaystyle \neg (x\cong 0)\to x\#0} . Weaker forms may be formulated. The Brouwerian school reasons in terms of spreads and adopts the classically valid bar induction.
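Returning to the diagonal variant stated above: the construction of z is effective when each x_k can be approximated to any precision. A hedged sketch via interval trisection, keeping at each step a third of the current interval that the finite-precision approximation of x_k provably misses; all names are illustrative:

```python
from fractions import Fraction

def avoid(xs, a, b, steps):
    """Shrink [a, b] so its points stay apart from x_0, ..., x_{steps-1}.

    Each xs[k] is an approximator: xs[k](n) is a rational within 1/n of
    the k-th real. At each step the interval is split in thirds and a
    third at rational distance >= w/4 from x_k is kept, witnessing
    apartness; the returned midpoint approximates the diagonal real z.
    """
    l, r = Fraction(a), Fraction(b)
    for k in range(steps):
        w = (r - l) / 3                  # width of one third
        q = xs[k](int(4 / w) + 1)        # rational within w/4 of x_k
        if q < l + 3 * w / 2:            # then x_k < l + 7w/4 < l + 2w
            l, r = r - w, r              # keep the right third
        else:                            # then x_k > l + 5w/4 > l + w
            l, r = l, l + w              # keep the left third
    return (l + r) / 2
```

Only finitely many rational comparisons occur per step, mirroring how the constructive statement demands a witnessed gap x_n # z rather than mere inequality.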
=== Anti-classical schools === Through the optional adoption of further consistent axioms, the negation of decidability may be provable. For example, equality-to-zero is rejected to be decidable when adopting Brouwerian continuity principles or Church's thesis in recursive mathematics. The weak continuity principle as well as C T 0 {\displaystyle {\mathrm {CT} _{0}}} even refute x ≥ 0 ∨ 0 ≥ x {\displaystyle x\geq 0\lor 0\geq x} . The existence of a Specker sequence is proven from C T 0 {\displaystyle {\mathrm {CT} _{0}}} . Such phenomena also occur in realizability topoi. Notably, the two anti-classical schools are incompatible with one another. This article discusses principles compatible with the classical theory, and the use of choice is made explicit. == Theorems == Many classical theorems can only be proven in a formulation that is logically equivalent over classical logic. Generally speaking, theorem formulation in constructive analysis mirrors the classical theory most closely in separable spaces. Some theorems can only be formulated in terms of approximations. === The intermediate value theorem === For a simple example, consider the intermediate value theorem (IVT). In classical analysis, IVT implies that, given any continuous function f from a closed interval [a,b] to the real line R, if f(a) is negative while f(b) is positive, then there exists a real number c in the interval such that f(c) is exactly zero. In constructive analysis, this does not hold, because the constructive interpretation of existential quantification ("there exists") requires one to be able to construct the real number c (in the sense that it can be approximated to any desired precision by a rational number). But if f hovers near zero during a stretch along its domain, then this cannot necessarily be done. However, constructive analysis provides several alternative formulations of IVT, all of which are equivalent to the usual form in classical analysis, but not in constructive analysis.
For example, under the same conditions on f as in the classical theorem, given any natural number n (no matter how large), there exists (that is, we can construct) a real number cn in the interval such that the absolute value of f(cn) is less than 1/n. That is, we can get as close to zero as we like, even if we can't construct a c that gives us exactly zero. Alternatively, we can keep the same conclusion as in the classical IVT—a single c such that f(c) is exactly zero—while strengthening the conditions on f. We require that f be locally non-zero, meaning that given any point x in the interval [a,b] and any natural number m, there exists (we can construct) a real number y in the interval such that |y - x| < 1/m and |f(y)| > 0. In this case, the desired number c can be constructed. This is a complicated condition, but there are several other conditions that imply it and that are commonly met; for example, every analytic function is locally non-zero (assuming that it already satisfies f(a) < 0 and f(b) > 0). For another way to view this example, notice that according to classical logic, if the locally non-zero condition fails, then it must fail at some specific point x; and then f(x) will equal 0, so that IVT is valid automatically. Thus in classical analysis, which uses classical logic, in order to prove the full IVT, it is sufficient to prove the constructive version. From this perspective, the full IVT fails in constructive analysis simply because constructive analysis does not accept classical logic. Conversely, one may argue that the true meaning of IVT, even in classical mathematics, is the constructive version involving the locally non-zero condition, with the full IVT following by "pure logic" afterwards. Some logicians, while accepting that classical mathematics is correct, still believe that the constructive approach gives a better insight into the true meaning of theorems, in much the same way.
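The first constructive version above is algorithmic: bisection yields, for any tolerance, a point where |f| is small, without ever claiming an exact zero. A sketch under the assumption that f is continuous, maps rationals to rationals, and satisfies f(a) < 0 < f(b); the function name is illustrative:

```python
from fractions import Fraction

def approx_root(f, a, b, eps):
    """Return c in [a, b] with |f(c)| < eps, by bisection.

    This realizes the approximate IVT: we get as close to zero as
    desired, even when an exact zero need not be constructible.
    Assumes f(a) < 0 < f(b) with f continuous and evaluable on Q.
    """
    a, b = Fraction(a), Fraction(b)
    while True:
        c = (a + b) / 2
        fc = f(c)
        if abs(fc) < eps:
            return c
        if fc < 0:
            a = c            # sign change persists on [c, b]
        else:
            b = c            # sign change persists on [a, c]
```

For instance, `approx_root(lambda x: x*x - 2, 1, 2, Fraction(1, 1000))` approximates √2 with exact rational arithmetic throughout; every comparison performed is a decidable comparison of rationals, matching the constructive reading of the theorem.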
=== The least-upper-bound principle and compact sets === Another difference between classical and constructive analysis is that constructive analysis does not prove the least-upper-bound principle, i.e. that any subset of the real line R would have a least upper bound (or supremum), possibly infinite. However, as with the intermediate value theorem, an alternative version survives; in constructive analysis, any located subset of the real line has a supremum. (Here a subset S of R is located if, whenever x < y are real numbers, either there exists an element s of S such that x < s, or y is an upper bound of S.) Again, this is classically equivalent to the full least upper bound principle, since every set is located in classical mathematics. And again, while the definition of located set is complicated, nevertheless it is satisfied by many commonly studied sets, including all intervals and all compact sets. Closely related to this, in constructive mathematics, fewer characterisations of compact spaces are constructively valid—or from another point of view, there are several different concepts that are classically equivalent but not constructively equivalent. Indeed, if the interval [a,b] were sequentially compact in constructive analysis, then the classical IVT would follow from the first constructive version in the example; one could find c as a cluster point of the infinite sequence (cn)n∈N. == See also == Computable analysis Constructive nonstandard analysis Heyting field Indecomposability (constructive mathematics) Pseudo-order == References == == Further reading == Bishop, Errett (1967). Foundations of Constructive Analysis. ISBN 4-87187-714-0. Bridger, Mark (2007). Real Analysis: A Constructive Approach. Hoboken: Wiley. ISBN 0-471-79230-6.